
Re-mapping the Higher Education Development Community


Introduction

We’re trying to update our map of what we might call the UK Higher Education Development Community – by which we mean those national professions and organisations with a substantial and explicit focus on improving higher education in the UK.
Obviously our definition of the UK Higher Education Development Community isn’t very precise. I’m not sure it can be. To make things manageable, we’ve generally not included commercial providers, or university-based development units (even those that work outside their institutions), or unions or employer associations. Perhaps another day, for a broader study. And we accept that many boundaries and interfaces are a little fuzzy.
Thank you to the many colleagues on the SEDA jiscmail who have already added suggestions to the (much shorter) list posted recently.
We’d welcome your help. Who’s still missing?
If you’re not sure whether it’s a Development Association or not, let me know about it anyway. If you can let me have its web address, so much the better. And please let me know if you spot any errors here.
We’ll keep updating this. A future version will add a line or two about each organisation.
We hope it will be useful to know who else is out there doing development. We also hope this list may serve as a tool to facilitate cooperation, among developers and their associations.
Thank you
 
David Baume
david@davidbaume.com

 

The current list

 
1. Standing Conference on Academic Practice (SCAP)
2. Heads of Educational Development Group (HEDG)
3. Jisc
4. Association for Learning Development in HE (ALDinHE)
   • ALDinHE, the Association for Learning Development in Higher Education, is the membership association for staff who work as Learning Developers, or who have a role which involves supporting student learning.
   • We have an annual conference, regular regional symposia, a journal (the Journal of Learning Development in Higher Education), a Jiscmail list, research grants, a website of free teaching and learning resources (LearnHigher), a CPD route towards HEA Fellowship, and a professional recognition scheme.
   • The Jiscmail list is lively and active, used for answering queries, sharing practice and resources, and highlighting opportunities. ACLD (ALDinHE-Certified Learning Developer) is a new professional recognition scheme to develop the professional status of learning development. ALDinHE is a founder member of the International Consortium of Academic Language and Learning Developers (ICALLD).
   • Social media – @aldinhe_LH
   • Contact – info@aldinhe.ac.uk
   • Founded – 2003, with the first LDHEN Symposium at London Metropolitan University
5. Association for Researcher Development (Vitae)
6. Association for Learning Technology (ALT)
7. Higher Education Academy (HEA)
8. Centre for Recording Achievement (CRA)
9. Higher Education Funding Council for England (HEFCE)
10. Scottish Higher Education Developers (SHED)
11. Global Forum for English for Academic Purposes Professionals (BALEAP)
12. Leadership Foundation for Higher Education (LFHE)
13. Society of College, National and University Libraries (SCONUL)
14. Network for Excellence in Mathematics and Statistics Support (SIGMA)
15. Quality Assurance Agency (QAA)
16. All Ireland Society for Higher Education (AISHE)
17. Staff and Educational Development Association (SEDA)
18. Staff Development Forum (SDF)
19. The Library and Information Association (CILIP)
20. Society for Research into Higher Education (SRHE)
21. UK Council for Graduate Education (UKCGE)
22. Association of National Teaching Fellows (ANTF)
23. National Union of Students (NUS)
24. Scottish Funding Council (SFC)
25. Student Participation in Quality Scotland (sparqs)
26. Heads of eLearning Forum (HeLF)
27. The Economics Network
28. Higher Education Funding Council for Wales (HEFCW)
29. Department of Education (Northern Ireland)
30. Association of Colleges (AoC)
31. Enhancement Themes Scotland
32. Collab Group
33. Colleges Wales (Colegau Cymru)
34. UK Advising and Tutoring (UKAT)
35. Association of Graduate Careers Advisory Services (AGCAS)
36. Researching, Advancing and Inspiring Student Engagement (RAISE)
37. Equality Challenge Unit
38. Principal Fellows of the Higher Education Academy (PFHEA)
39. GuildHE
40. Million+
41. Universities UK (UUK)
42. University Alliance (UA)
43. The Russell Group
44. Mixed Economy Group (MEG)
45. Council for Higher Education in Art and Design (CHEAD)
46. WonkHE
47. Universities Scotland
48. The Association of University Administrators (AUA)
49. National Forum for the Enhancement of Teaching and Learning in Higher Education – Ireland
50. The Cathedrals Group
51. Writing Developers
52. Universities and Colleges Information Systems Association (UCISA)
53. Association for Authentic, Experiential, and Evidence-Based Learning
54. National Association of Disability Practitioners

We may need to talk

Start:

Can you:

  • Think faster than you can type?
  • Talk faster than you can type?

If that’s you, and you haven’t tried voice recognition, or not for a while, then maybe, now, you should.

(Please don’t worry about the odd occasion when you talk faster than you are thinking. We all do it. In my case, usually after wine.)

Here and now:

I am dictating this. Into my iPhone. About as fast as I am thinking it. (For some reason I find that voice recognition is faster and more accurate in iOS than in macOS Sierra. But any account of the relative capabilities of different software is likely to be overturned by the next iteration of that software. I suggest starting with what you have.)

Seeing my spoken words appear on the screen in front of me is still, for me, a small kind of magic.

I was a fan of the idea of voice recognition before it was, frankly, much good. My efforts with early versions of Dragon were – well, disappointing. Training to my voice was slow; the quality of the voice capture was often poor; and it took its time transcribing. (A colleague claimed that he trained his Dragon installation by reading it Alice in Wonderland. Which explains a lot about his writing style. And, occasionally, his content.) I believe Dragon is a lot better now. As I am sure are the others – it’s a jungle out there, and evolution is rapid and brutal, but great for the customer.

Nowadays: no training. Straight in. Fast, or fast enough – almost as fast as I speak, certainly as fast as I can speak clearly. Accurate enough most of the time to mean that, even allowing for the necessary editing, voice recognition is quicker than my (not very good) typing.

A caution:

Voice recognition, certainly the iOS version, tries very hard indeed to make sense of what it hears. Sometimes, this takes it a long way away from what you meant and said. The main practical implication for me is – review often. Because, if you get a few paragraphs ahead before you check, you simply may not be able to reconstruct what you originally meant / said. And that precious pearl may be lost forever.

Learning:

But two things will happen over (quite a short period of) time.

  1. The system will get to know you and your speech better, thereby becoming more accurate. This is fairly obvious.
  2. More interestingly, you will make the curious accommodations required to speak written English. I’ll say more about this.

Writing and speaking:

Crudely: writing and speaking are different kinds of language, different forms of expression.

Here’s a short test you might want to do. If you have access to voice recognition – and you almost certainly have, either on your smart phone or via Google – try it now.

  1. Turn on voice recognition and then talk about something you’re interested in for a couple of minutes, as if you were talking to a friend or colleague.
  2. Write / type for a couple of minutes on the same topic.
  3. Compare the transcript of your speaking with the words that you typed/wrote.

What do you notice?

Obviously, I don’t know. And the comparison isn’t simple:

  • You may have been so conscious of the fact that your spoken words were being transcribed that you spoke something much closer to written language than your more normal speech.
  • Or your writing for a colleague may have been much less formal than if you had been writing for a wider audience.

But it may be that:

  • Your transcribed speech took more words to say roughly the same thing than did your writing / typing;
  • Your transcribed speech was less formal, more conversational, than your written / typed text;
  • Your transcribed speech may have conformed less well to grammatical conventions, particularly to clear and conventional sentence structure and breaks between ideas or sentences;
  • Unlike most people, who generally talk mostly in phrases, you may already talk in complete sentences. Or even paragraphs. Or even chapters. Or even books. If you are one of these lucky, talented people, then voice recognition will be largely unproblematic for you, and you will simply become much more productive. I envy you.

Of course we all have a range of forms of both spoken and written expression. We can speak formally or informally, we can write formally or less formally. Audience and context make a big difference to how we speak and write.

But the fact remains that, for voice recognition to give you all the potential advantages of speed, you do need to learn to speak in something like the way or style in which you write, or want to write. And, as I am discovering as I dictate this paragraph, this is hard work, and requires fierce concentration – because, when you are using voice recognition, you are writing, in the specific sense of getting words onto the screen, faster perhaps than you could if you were typing.

But speaking into voice recognition slowly becomes easier. And then you will begin to see typing as it is – another form of technology-impeded human action. Like driving a car with manual gear change – “stick shift”, as our American friends call it. Or indeed driving a car at all in the age of the Google car and Tesla Autopilot. Or repeatedly typing (or speaking) the same information into form after form after form. Or – insert your own technological bête noire here.

Voices:

I over-simplified earlier. I don’t think it’s as simple as learning to speak in the same kind of written English that you used to produce. I write Tweets, emails, blogs, articles and book chapters through the use of voice recognition software. Here are some things I have noticed:

  • My style has become slightly less formal as I have made the transition to voice.
  • My sentences initially became longer – sometimes, I felt, too long. At first I used to solve this problem in the editing, whereas now (as is not demonstrated in the current sentence!) I have taught myself to speak in short sentences again. When appropriate. Or when I remember.
  • On a good day my writing is a little more vivid – I’m less likely to censor short flights of imaginative expression in speech than I am in writing. Of course, if I don’t like these flights, or don’t think them appropriate to the intended outlet or audience, I can always cut them out. And I do. Sometimes with a tear. Following Faulkner’s advice, endorsed by Stephen King, to “kill all your darlings.”

But the gain in speed I achieve (when I dictated the words “I achieve” just now they were transcribed  as “hi cheese”, which briefly entertained me) through the use of voice recognition is so great that the necessary additional editing time still leaves me ahead.

Quality:

What about the quality?

Quality, I feel, is as much a consequence of:

  • The research and thought and planning that go on before the thoughts are expressed either in speech or in writing, and
  • The process of editing

as it is a consequence of the process of committing thought to screen.

Although … some of my better quality ideas make it to the screen because I  capture them in speech, whereas I might well have lost them by the time my fingers caught up with my occasionally fleeting thoughts. This capture of what would otherwise have been lost just now happened, in the previous sentence. (The rather informal starting of a new paragraph with “Although” was undertaken as I dictated it, because I was conscious that the sentence was becoming rather long, but I didn’t want to lose the train of thought as I struggled to find an editing solution. Anyway, I want in this post to show how, now, for me, speech becomes writing.)

Environment:

I prefer to use voice recognition when I am alone. I share an open plan office with my partner, and I still feel a little embarrassed when talking to a machine while Carole is in the room, although she assures me that, as someone familiar with open plan office working, she is bothered by it not at all. But that doesn’t seem to stop it bothering me!

End:

Try voice recognition. Academics exhibit greater strangenesses than speaking, slightly after the manner of a BBC radio announcer from the 1930s, into a telephone or a computer. Pretend you are that rude person on the train, who shares (often) his enunciated thoughts with the carriage.

The learning curve for fluency with voice recognition is long but gentle. The benefits, including speed, start to show very early on.

And you may discover, as you proceed, that the computer isn’t the only one learning to recognise your speech. You may also become better able to recognise, appreciate, enjoy and improve your own various voices.

Let me know.

Three pillars of professionalism in academic development

Paper (OK, workshop) at ICED / HELTASA Conference, Cape Town, 23 November 2016

David Baume PhD SFSEDA SFHEA

Background

The concept of professionalism, both for those who teach in higher education and for academic developers, remains problematic and contested. For a recent account see Bostock, S., & Baume, D. (2016). Professions and professionalism in teaching and development. In D. Baume & C. Popovic (Eds.), Advancing practice in academic development (pp. 32–51). London: Routledge.

But academic developers are oriented towards finding solutions, or at least to finding and implementing productive ways forward. A sense of ‘forward’ for academic development – “We suggest an overall purpose for academic development – to lead and support the improvement of student learning.” – forms the first sentence of D. Baume & C. Popovic (Eds.), op. cit.

To offer productive ways forward, this paper suggests three pillars of professionalism in academic development:

  1. Being scholarly;
  2. Being effective; and
  3. Enacting principles or values.

Taken together, these three pillars can give developers some confidence in their professionalism, when, as will continue to happen, our legitimacy is challenged. Of course the three pillars need to be implemented reflectively, critically and humanely.

Pillar 1                    Being scholarly

A recent model of scholarship (Baume and Popovic, op. cit., p. 5) suggests three (overlapping and progressing) ways to be scholarly:

  1. Being reflective, critical and analytic;
  2. Using ideas from the literature; and
  3. Contributing to the literature.

Participants in the Southern Africa Universities Learning and Teaching (SAULT) Forum in Windhoek, Namibia in February 2016 reported three main reasons to be scholarly:

  1. To remain current;
  2. To gain new ideas to apply to practice; and
  3. To gain and maintain the respect of colleagues and clients.

Participants said that, currently, for them, being scholarly mainly meant writing for publication. However, their accounts of being scholarly in the future pulled together the three kinds of scholarship described in the Baume and Popovic model. The model thus seemed to provide a useful tool, both for analysis and for planning. (D. Baume (2017). Scholarship in Action. Innovations in Education and Teaching International, 54(2).)

Question 1: In what particular ways can you become still more scholarly in your practice?

Pillar 2                    Being effective

Like professionalism and scholarship, the concept of effectiveness is sometimes contested. For some it sounds like managerialism. For more on defining and showing effectiveness see Stefani, L., & Baume, D. (2016a). “Is it working?” Outcomes, monitoring and evaluation. In D. Baume & C. Popovic (Eds.), op. cit.

We developers talk about what we do – our actions. We talk about what we make – our outputs. But surely the point of doing and making is to achieve outcomes, to make things better, to make specific things better, in specific and determinable, sometimes measurable, ways? And to do so in ways that embed, preserve and hopefully enhance professional relationships, scholarship, and values?

Question 2: For a specific project, what are you trying to achieve? What are your intended outcomes? How will you know you have achieved them? If necessary, change the outcomes until you can see how to achieve them and how to evaluate their achievement.

Pillar 3                    Enacting principles or values

Many professional standards include statements of underpinning values or principles. Those of the UK Staff and Educational Development Association, for example, include:

  1. Developing understanding of how people learn;
  2. Practising in ways that are scholarly, professional and ethical; and
  3. Valuing diversity and promoting inclusivity. (http://www.seda.ac.uk/core-mission-values)

The hard and vital step, it turns out, is not writing and agreeing such statements, but implementing them.

Question 3: Pick a value (related to education) that you believe in. How well, how far, do you and your institution implement it? What factors aid and impede its implementation? What can you do to implement it more, better?

 

 

Biography

I am an independent international higher education consultant, researcher and writer. My most recent full-time post was with the UK Open University, where with colleagues I wrote courses on teaching in higher education.

I was founding Chair of the UK Staff and Educational Development Association (SEDA); cofounder of the UK Heads of Educational Development Group (HEDG); a founding council member of the International Consortium for Education Development (ICED); and a founding editor of the ICED journal, the International Journal for Academic Development (IJAD). I am the ICED representative on the Southern Africa Universities Learning and Teaching (SAULT) forum.

I have co-edited four books on academic development in higher education, and published over 60 papers and articles. My contributions to academic development nationally and internationally have been recognised by awards from SEDA and ICED.

 

David Baume PhD SFSEDA SFHEA, david@davidbaume.com, @David_Baume, www.davidbaume.com

“Is it working?” Monitoring and Evaluation in Academic Development

Workshop at ICED / HELTASA Conference, Cape Town, 22 November 2016

David Baume PhD SFSEDA SFHEA

Introduction and Rationale

There is a growing demand for accountability in Higher Education. Funders want to know that resources have been both properly and effectively applied. This requirement extends to academic development. The workshop will help participants to demonstrate the effectiveness of their work.

Workshop outcomes

By the end of this workshop you will have:

  1. Determined, at least in outline, the intended outcomes of a development venture for which you have some responsibility. That is, you will have clarified (in negotiation with stakeholders) what the venture is intended to achieve. It could be a workshop, a programme, a development project, writing a policy or strategy – almost anything. Whatever it is, what particular things will make the situation to be improved better?
  2. Planned, at least in outline, how you will monitor progress towards the intended outcomes. This may include the use of intermediate outcomes and waypoints, and adjusting plans and activities (and maybe also intended outcomes) as required.
  3. Planned, again at least in outline, how you will evaluate the success of your venture in achieving its intended outcomes, and draw conclusions for future practice.

Notice that these are not learning outcomes, not statements of what you will be able to do. They are just outcomes – things you will have achieved.

Outcomes and evaluation – the big picture

Here, in summary, is a simple and powerful process for writing and checking intended outcomes, and then for monitoring and evaluating their attainment:

  1. Identify – preferably in negotiation with the other stakeholders – intended outcomes of the project. What is the development project or venture intended to achieve? What exactly do you want to improve? What do you mean by improved?
  2. How will you find out if the project or venture has been successful? What shows success?
  3. Check. If it’s hard to write an evaluation plan, then revisit and rework the outcomes, and then the indicators of success, until you can see how to monitor and evaluate their attainment.
  4. Plan and schedule how you will run the project to attain the goals. It’s generally better to do a project with people than for people. If it’s a big project, you may need to set interim goals, waypoints, which you can monitor.
  5. Run the project!
  6. Keep on asking “Is it working?” “Are we clearly moving towards our goals?” “Do we need to adjust – our methods? Maybe, even, sometimes, adjust our goals?”
  7. Towards the end, start to evaluate and report, with evidence, on whether the outcomes were achieved; how well they were achieved; what wasn’t achieved; any unexpected outcomes; what should be done next; and, perhaps above all, what has been learned.

Workshop shape and activities

You will see that the activities are directly linked to the outcomes.

Here is the overall shape of the workshop around each activity. There may be some variations:

  1. I’ll say a little about each activity in turn.
  2. Then I’ll ask you to do the activity, or at any rate to start it.
  3. And then I’ll ask you to discuss it with a neighbour.
  4. I’ll ask you to share some of your answers.
  5. Sometimes, I’ll have a public conversation with you about your answer.

Activity 1               Read the script below, and the comments that follow it.

Activity 2               Choose a development project or venture of some kind.

It should be real. You should have some responsibility for it. And you should be at or near the start.

Activity 3               For this chosen venture, decide the intended outcomes, what the project is intended to achieve and to improve.

Activity 4               Plan how you will know if it has been successful. If that proves difficult, change the outcomes until you can see how to evaluate their attainment.

Activity 5               Pretend that the project is over. Draft a realistic evaluation report. What does that tell you about the intended outcomes, and about how you should have run the project?

 

A (hypothetical) conversation between the Head of Department X and an Academic Developer

HODX Thanks for coming. Look, what it is, the students are complaining about feedback. Could you run a few workshops to help us sort this out?
AD What in particular is bothering students?
HODX Well, they say sometimes the feedback is late. Sometimes they can’t understand it, can’t use it.
AD Okay. A couple of issues here. On the first one, late feedback. What is the department policy on turnaround time for student work?
HODX We haven’t really got a policy on it. Staff don’t like being tied down. We really should get a policy. But the staff are very busy …
AD I understand. Happy to help you work out a policy when you’re ready. But in the short term, we could do some surveys, even interviews. Ask the students how soon they’d like the feedback …
HODX They’ll say they want it tomorrow! No chance!
AD Well, let’s find out. There may be ways to do feedback a lot faster. But yes, we could ask staff how soon they think it is feasible to turn work around. We will get to some sort of compromise. The outcome we want here is …
HODX … a lot fewer student complaints about the late feedback!
AD That would be a good start. Second issue – students don’t understand the feedback. What do you think is going on there?
HODX (Pause) I think the problem may be, staff know the material they are teaching so well that their feedback is a bit – concise? I remember what you said at that teaching workshop last year – staff can forget what it’s like not to understand.
AD That’s possible. Well, we could ask students what would make feedback more comprehensible. Get their views back to the staff. Then, run a session for staff on making feedback more comprehensible, based on what the students say, and adding in a few ideas from the literature. Develop some guidelines. Then, after a few months, we could find out …
HODX … if students are finding the feedback more understandable.
AD We could do that.
HODX You don’t sound sure.
AD (Pause) There may be a bigger issue here. Not just “Do students understand the feedback?” but – you hinted at it – “Are students using the feedback to help them decide what to keep on doing right and what to do differently next time?” After all, that’s the point of feedback, isn’t it?
HODX Interesting. How would you tackle that?
AD The usual. Student surveys. Student interviews. Actually, we could try something a bit newer. We could facilitate some conversation between students and staff about this, dig a bit deeper. That could be very useful. If staff would do it?
HODX I’m sure I could persuade a couple of them! What are we trying to achieve here?
AD I guess – students making good use of feedback to inform their future studies? Of course, it’s not just about the kind of feedback staff give. It’s also about the pattern of assignments, maybe even the shape of the whole course.
HODX Whoa! Where did that come from?
AD If the next assignment is completely different, or if the timing is wrong, or students aren’t helped to use the feedback, then maybe some students aren’t using feedback because it’s just not possible for them to use it? Issues like that.
HODX You always want to change the world, don’t you! Let’s stick with speeding up feedback and making it more useful for now. We can get to redesigning the course at the major review time in, what is it, two years.
AD Always happy to help.

Notice in this conversation:

  • The use of the idea of outcomes and evaluation, by both the HOD and the AD.
  • The AD working to identify what is going on, digging a bit deeper.
  • A reference to using the literature.
  • The AD seeding ideas for possible future work.
  • Negotiations about what is feasible now.
  • A good working relationship, with mutual respect.

Reference

Stefani, L., & Baume, D. (2016). “Is it working?” Outcomes, monitoring and evaluation. In D. Baume & C. Popovic (Eds.), Advancing Practice in Academic Development (pp. 157-173). London: Routledge.

 

 

David Baume PhD SFSEDA SFHEA, david@davidbaume.com, @David_Baume, www.davidbaume.com

 

Hypothetical Case Study on Clarifying Goals: ‘Enhancing the status of teaching’

A university policy aim might be to enhance the status of teaching. This is a laudable aim; but vague.

A non-rhetorical question to begin with: How would you identify the status of teaching in your university, and track its changes over time? Let’s try to sharpen the aim.

We might try to achieve a more rigorous definition. We could negotiate university meanings – university meanings, not the meanings; we are developers, not writers of dictionaries – for terms including enhance, status, even teaching …

Alternatively, we could take a more direct approach and ask the question – what would indicate an ‘enhanced status of teaching’? We could decide, again using ideas from the literature, and/or we could ask within the university. We could devise and implement a survey to identify the current status of teaching. This would rapidly reveal some of the many meanings of the status of teaching.

Possible indicators of status accorded to teaching:

  • A formal teaching awards scheme – a plausible indicator that an institution is seeking to enhance the status of teaching. Beyond this, the number of scheme applications and awards each year, the rewards given to and more broadly the fuss made of award winners; these are all further indicators of an institution taking seriously the enhancement of the status of teaching.
  • Promotions criteria that include teaching – indicating that teaching is being valued.
  • (Also – are the criteria widely believed to be being used?)
  • Teaching ability being emphasized in recruitment advertisements, and taken seriously in selection processes, would be a further positive sign …

What these and other possible approaches have in common is that they provide encouragement and support, at institutional and/or local level, for staff to teach well, to improve their teaching, and to enhance the status of teaching.

Of course most of these processes could be implemented well or badly, strongly or feebly. All could be respected or not by academic staff, managers and students. All could be subverted or diminished by other policies and strategies which value, or are perceived as valuing, other kinds of activity – most obviously research or administration – more highly than teaching. Nonetheless, a university implementing such measures, and putting some effort into evaluating their effectiveness, could make a decent claim to be committed to enhancing the status, and also the quality, of teaching.

Analysing these, seeking a core, seeking context-specific (for example, discipline-specific) local variants, and feeding in any research-based accounts, all start to give an account of the status of teaching with which we can work. Our account should enable us to identify enhancement over a baseline. In enhancement work, it’s good to know which way is up.

Academic developers have many possible roles here. They can help universities, schools and departments to identify possible good academic practices that are broadly compatible with the norms and values of the institution, accepting also that one of the more challenging roles of the developers is sometimes to help the institution to shift its norms and values. Developers can make productive connections across the institution …

At some point a developer will also want to ask – ‘Why do we seek to enhance the status of teaching?’

Condensed and adapted from Stefani and Baume (op. cit.)

Co-operation in Development – Summary (David Baume)

The survey

Nine responses were received to a survey on cooperation in development, from academic / educational development units.

Main development functions of units

The most frequently mentioned development functions can be grouped as:

  1. Staff development, including training teachers for a qualification, accrediting teachers, CPD, and supporting staff and faculties;
  2. Educational development, including improving learning, teaching and assessment, and curriculum development;
  3. Institutional support, including policy and strategy development and projects;
  4. Student development and support;
  5. Learning technology development, implementation and evaluation; and
  6. Other functions – postgraduate support, research into teaching and learning, horizon scanning, and QA.

Beyond what once might have been classic academic development functions – perhaps 1, 2, 3 and some of 6 – we now see also 4 and 5.

Other development functions elsewhere in the institution

These include HR, learning technology, student development, student services (including overseas students), faculty committees and research.

Frequency of contacts between the academic development unit and these other development functions:

Frequency of contact   Annually or less   A few times each year   Most or every month   Most or every week
Number                 1                  13                      8                     5

Nature of contacts between the academic development unit and these other development functions:

Nature of contact   Formal   Informal   Both
Number              3        4          13

Nature of contact   Policy / strategy   Operational   Both
Number              3                   2             10

The largest scores are, for frequency of contact, a few times each year (13) and most or every month (8); and, for nature of contact, ‘both’ in each case – that is, all four types!

Comments on what above all supports effective co-operation on development in your institution

Personal relationships and communicating (3 mentions each); the encouragement of leadership (2); and the alignment of strategy (1). Also mentioned are ‘seeing the person face-to-face’, ‘our small campus culture’, goodwill, energy, focus and effective resourcing.

Comments on what above all impedes effective co-operation on development in your institution

Structural factors (5 mentions), where the response was amplified, included organizational hierarchies, constant restructuring, silo working, geography and a lack of specific leads for specific functions. Communications factors (3) included ‘having EVERYTHING online via email’ and the lack of time. One respondent reported the absence of goodwill, energy, focus and leadership as the main inhibitors of cooperation in development.

Comments on nature and frequency of co-operation with other units

These four longer comments from respondents suggest some of the complexity and benefits around inter-unit cooperation:

  1. As a sole operator in learning and teaching facilitation across the university, I establish currency in my role by ‘supporting’ (not teaching, as [I do not have] an academic role) the PGCert provision. For a period, my role was positioned in the [learning and development function] of our Human Resources department, but I was [recently] re-located, as much of the ‘development’ I facilitated did not naturally align with colleagues’ specialisms in [organisational development]. I was therefore moved to [a unit concerned with research], from where I run an annual programme of CPD in learning and teaching and facilitate the university’s CPD for professional recognition. Being in research, I also support programmes around researcher development, particularly in terms of teaching/supporting learning, graduate teaching schemes, etc.
  2. Our university is small and we often work with [the learner support function] to share data, review the best approaches to take for all students, and progress agendas with senior colleagues. Good personal relationships between these teams mean we routinely pick up the phone and use each other as sounding boards for work of common interest.
  3. [Relations are] good, and they are very helpful, now that [another unit has] realised we have been doing serious education research and they keep us in the loop and support us to bid for funding opportunities. The university has recently appointed a [senior post for] Education and we are hopeful they will enable the different bits to coalesce.
  4. At the formal level, these committees (depending on the Chair!) can be procedural – so it is possible to ask critical friend questions – but the real value is informal, getting into conversations with colleagues about developmental work. In many respects the people who are key on these committees are ones our team has known since they were on the PGCert and these relationships have built up over time.

‘Churn’, both in staffing and in structures, was also a theme, along with task-specific rather than general co-operation.

Overview and possible implications

Co-operation between development units / functions in higher educational institutions is valuable, difficult, and mostly attainable. Good personal communications and relationships aid co-operation; structural factors and poor communications impede cooperation. Developers may wish to consider (1) working to establish good personal / professional relations, perhaps initially around specific, rather than big picture, co-operations; (2) more broadly being prepared to work across structural boundaries in pursuit of institutional goals and priorities; and (3) assuming that, whatever the current actual or perceived structural obstacles and political difficulties, no-one actually wants to prevent us from doing good stuff.

Co-operation in development

Notes for a survey and session at SEDA November 2016 Conference 

The importance of collaboration in learning and development has long been stressed. Working with and developing learning communities is a SEDA value. Walsh and Kahn (2009) provide theoretical underpinnings for, and vivid examples of, collaboration. Baume and Popovic (2016) contains many accounts of the importance of collaboration. The authors describe “the increased blurring of and collaboration between development functions” (p. 293). At greater length, we say:

Neighbours

“Not all problems, opportunities or possible sites for action in higher education fall tidily under the heading of teaching, learning, assessment, course design, educational development, staff/faculty development, student development, advice and guidance, personal tutoring, language development, numeracy development, learning technology, management, researcher development, research supervision development, administration, support for students with specific learning difficulties, international education, support for students from overseas, equality of opportunity, graduate careers education and advice, employability, community links, open and distance learning, learning resources, estate planning, designing and equipping teaching and learning spaces, learning analytics, organizational development, library and information services, etc. … This suggests, if it were not already obvious, the great need for the various university development functions, including but not limited to those above, to cooperate.”

This is all very well. But organisational and political pressures can militate against collaboration. We all believe in cooperation. The issue which this discussion paper session will tackle is – how can we make it happen, in the real world of current higher education?

Baume, D., & Popovic, C. (Eds.). (2016). Advancing Practice in Academic Development. London: Routledge.

Walsh, L., & Kahn, P. H. (2009). Collaborative working in higher education: The social academy. New York: Routledge.

 

Bloom and Course Design – Disaster Strikes!

Introduction

This follows https://davidbaume.com/2015/09/28/learning-and-knowledge-bloomin-obvious/ . I’d start there, if I were you.

Course design – bottom up?

So, Bloom’s taxonomy as a tool for analysing knowledge has problems. But disaster strikes when the taxonomy is used as a tool for course design. (I am here criticising uses of Bloom, not the taxonomy itself.)

When you’re designing a course, the temptation is hard to resist. Bloom offers us a taxonomy, a hierarchy, of knowledge. Each higher level builds on those below. Obviously a building must start with a foundation, perhaps built in a hole in the ground. So, to design a course, start at the lower levels, on a solid foundation of knowledge, of things known, of facts and theories. Then build on up.

First, teach students the facts. Then teach them to understand the facts. Then to apply the understood facts. Then to analyse, and finally to evaluate and synthesise, in whichever order. As easy and as logical as Lego. Just another brick in the wall.

Trouble is, it’s nonsense. It’s not how people learn. It may or may not be a useful description of levels of knowledge. As a way to plan learning, it’s a disaster.

Do the thought experiment for a course or subject you know. What are a few basic bits of knowledge in your subject? Imagine teaching students to remember these, then either to recall them in an open-response question, or to recognise them among – wrong facts, I suppose – in a multiple-choice question. No context for the knowledge, no account of why it is important, or what it might mean. No higher level stuff. Just the facts. Learned and remembered.

OK, now let’s move on to understanding these facts, to explaining them, expressing them in different ways. Done.

Now, let’s learn to apply this knowledge and understanding, to use it to address questions and problems. And then, having applied it, to analyse it. And finally, let’s critically evaluate and synthesize these facts that we have learned, and then learned to understand and to apply.

This is ludicrous.

I have been taught this way. It really was ludicrous, and of course quite ineffective. Even as we were being taught the basic facts, we were trying to make sense of them, each in our own way. Some of us were trying to understand the facts, to reformulate them in terms that made better sense to us, that linked to – whether supporting or contradicting – things we already knew, and maybe even partly understood. Some of us sought to understand the facts by trying to apply them, although we were not always clear to what kinds of situations or problems the facts could be applied. Some of us tried to critique or evaluate the facts, although again we had nowhere to stand to do so, and were certainly not encouraged to try. We were being lectured to, with no opportunity to ask questions, to explore or to discuss, to make our own sense. We all did some memorising. But it all felt rather pointless.

I suspect I may even have taught this way a few times, in my early days. To those students, I apologise.

What has gone wrong here?

How long have you got?

A dangerous metaphor

A metaphor has led us astray. A diagram has led us astray. The metaphor of foundation and building, and the pyramid diagram in which Bloom’s taxonomy is often presented, simply do not work for learning. Why not?

A building, whether of steel or concrete or Lego, doesn’t start with a foundation. It doesn’t even start with a plan. It starts with an idea, maybe even with a vision, with a need, with a specification of some form. Only when this has been agreed can we develop a plan. Quite late in the process, concrete is poured, steel is erected, and bricks are laid – or clicked. These ‘basics’ and ‘foundation’ metaphors just don’t work.

The basics?

Let’s come at this from another angle, and maybe try to rescue Bloom, to find good ways of using the taxonomy in course design.

It’s appealing to suggest that we should start from the basics. What are they? What are the basics of your subject? Go on, think about it for a minute.

What are the basics of your subject? Are they: Facts? Theories? Principles? Purposes? Problems? Values? Ways of thinking? Ways of acting? Something else? Some complex combination of all of these and others?

For what it’s worth, and as an illustration, after thinking for a long time about my own discipline / profession / area of work, academic development, I got to the view that its basis takes the form of a purpose or goal: to improve student learning. Of course academic development has many intermediate purposes, and many methods and theories and ways of thinking and working and the rest. But at its heart, I feel, is a purpose. “Improving student learning”. Which, as I write, I realise is a subset of a larger goal – to make the world a better place. Which is hopefully a goal of most if not all professional and scholarly activity. (Add quotation marks to any of these words and phrases as you need.)

But, whatever the basics of your discipline, where do they lie on Bloom’s taxonomy?

Probably more towards the top than the bottom.

I’m not against knowledge and comprehension and application. They have their place. But, I would suggest, they are important mainly in how they are used at the higher levels. I’ll probably give this more detail in a future post. Knowledge and comprehension and application may indeed be required for the attainment of higher-level goals. But this absolutely doesn’t mean that the pursuit of high-level goals has to start at the bottom.

This is the mistake that is often made in the use of Bloom’s taxonomy in course design. The relationships between the different levels of the taxonomy are used to deduce, completely wrongly, a sequence of teaching. The structure of knowledge doesn’t determine the best way of learning. We know a bit about learning – the importance of a wish or need to know, learning as a process of making rather than simply absorbing sense, learning as an active business incorporating above all action and reflection, reflection often being aided by feedback. We need to apply what we know about learning to whatever it is that is to be learned. That’s the only safe way to produce a good course design, good learning activities, and hence good student learning.

Conclusion

So – Bloom’s taxonomy may have some limited uses as a tool for analysis. But I can’t find a good use for it in course design. Can you? I’d love to hear.

Next time, moving away from Bloom, I’ll suggest some more fruitful bases for course design and the planning of teaching.

Sources

Bloom, B. S. (1956). The Taxonomy of Educational Objectives: Handbook 1 (1st ed.). London: Longman Higher Education.

Bloom, B. S. (2000). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Complete ed.) (L. W. Anderson & D. R. Krathwohl, Eds.). New York: Longman Publishing Group.

Learning and Knowledge – Bloomin’ Obvious?

Introduction

This argument will grow over several posts. I’m developing the argument as part of the process of developing and writing a book, which I currently think will be about learning in higher education. Wish me luck!

Learning and Knowledge

It can be hard to talk about learning. For example – what is being learned? Among other things, subjects, of course. Knowledge.

Ah, knowledge. Or to give it its technical name – stuff.

When we educators talk about knowledge, we usually mean much more than just stuff. By knowledge we can also mean ways of thinking, ways of acting, ways of being; forms of meaning, principles, theories, values, and much besides. The sorts of ambitious and demanding knowledge that we rate highly.

But the word knowledge can still pull us down. Whatever higher-level things we want knowledge to mean, knowledge also, seemingly inexorably, seems to end up meaning, well, stuff. Things. Objects.

I’m not sure how this happens. Suggestions welcome. But it happens.

Why do I call it stuff? Because of what we do with it. Learners learn it, and teachers teach it. And this in turn can bring down the whole educational show.

How?

On a bad day, of which there are many, learning (stuff) becomes memorizing (stuff). Knowing (stuff) becomes having memorised (stuff). Teaching (stuff) becomes telling (stuff). Assessing stuff becomes finding out whether people know stuff, which tends to mean checking either if they can recall stuff (through open-response questions) or if they can recognise right stuff among wrong stuff (multiple-choice questions). Stuff. The downward pull of knowledge as stuff is strong. Not irresistible, but strong.

What to do?

We might choose to be explicit about the full range of what we mean by knowledge. Or we might want to stop talking about knowledge for a while, and talk instead about the full range of desired (and assessed) types and outcomes of learning. Or we might simply decide to stop treating knowledge like stuff. Stop telling it. Stop seeing if they’ve remembered it. Instead, teach across this full range, support and expect and assess learning across the whole range. Concentrate on the higher levels. Knowledge, alas, can drag you down.

But how to describe this range, these levels?

Bloomin’ Obvious?

Levels of learning and knowledge

This is the problem that Bloom is addressing in his taxonomies (Bloom, 1956 and 2000) – how to classify in some usable way the multiple types and levels of learning that we might see, expect, hope for or teach towards. The taxonomies were originally devised within a behaviourist educational paradigm. This paradigm saw teaching as providing prompts and stimuli which would provoke appropriate responses, learning and evidence of learning, which was then rewarded. The paradigm worked for rats and pigeons – why not for students?

Bloom produced classifications, taxonomies, for the cognitive, affective and psychomotor domains. That for the cognitive domain, considered here, has endured longest. It still features, sometimes in its updated (2000) version, in courses to train university teachers. It offers a classification of types of educational objectives or, as we now say, learning outcomes – of things people can do with their rational brains.

Bloom – levels by version:

Version   1             2               3             4           5            6
1956      Knowledge     Comprehension   Application   Analysis    Synthesis    Evaluation
2000      Remembering   Understanding   Applying      Analysing   Evaluating   Creating

The taxonomy has some use as a tool for analysis, particularly at level 1, knowledge / remembering. It’s good if we can be honest with ourselves and our learners, and say “You need to know / remember (which may mean recall or recognise) this.” But in a connected world, with well-indexed and sometimes authoritative knowledge only a skilful click or swipe or two away, our students may ask – “Why do we need to know?” This is a conversation well worth having. I often have it with myself, as a future post will show.

But the taxonomy crumbles at higher levels.

Bloom and Assessment

Crucially – when we ask students to show that they understand, or ask them to apply or analyse or evaluate or create, it is hard to be sure that they are not simply remembering a previous act of understanding or application or analysis or evaluation or creation. This is a giant hole in the security and integrity of assessment, and in the fiction that assessment faithfully assesses higher-level abilities.

How does this hole occur?

Most tutors prepare their students for examinations (let’s stick with exams for now, although the argument broadly works for other forms of assessment). They typically set or suggest broadly similar assignments, essays, questions and topics, and perhaps offer feedback; or suggest what particular content the exam may address, and perhaps what issues, arguments and approaches may be preferred.

If the examination question is sufficiently different from the pre-assessment assignments, the student may actually need to work at higher levels. They may need to adapt or apply a method or argument to a sufficiently different setting or content. But that may be as high as we get. “Do something completely original?” In an examination? There might be riots.

Often, we simply cannot know the true nature of the assessment task. The apparent assessment task, even accompanied by the intended learning outcomes and assessment criteria, does not confidently tell us what level of the taxonomy a student must work at to tackle the question satisfactorily. To judge this we should need to see most if not all of a student’s previous work and learning and the feedback they received ….

Anyway, the levels are just not as clear as they appear. I have not seen a study of how reliably or how validly academics classify assessment tasks or learning outcomes against Bloom. I’d love to see such a study – it must have been done. But I have seen disagreements among academics about the highest level required by a task. (Most assessment tasks require at least some of the lower levels.)

Conclusion

So, Bloom has serious weaknesses as a tool for analysing knowledge. But a future post will see the much bigger horrors that can occur when Bloom is used as a basis for course design.

References

Bloom, B. S. (1956). The Taxonomy of Educational Objectives: Handbook 1 (1st ed.). London: Longman Higher Education.

Bloom, B. S. (2000). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Complete ed.) (L. W. Anderson & D. R. Krathwohl, Eds.). New York: Longman Publishing Group.

The Proposed Teaching Excellence Framework (TEF) – Some Thoughts on Value Added as a Metric

The purpose of teaching is learning. We measure learning mostly through qualifications. It’s easy to compare a student’s qualifications at entry with their qualifications at exit, if we want to. All we need is a common measurement scale that embraces entry and exit qualifications, differentiating between different grades where applicable. 

Maybe something like this:

A-level – grades A* to E, 100 down to 50 points 

(A-level points for a student would of course be summed across the subjects they took.)

CertHE – 600 points

DipHE / FD – 800 points

First Degree – 1st class down to pass

These points are not the same as credit points, which describe number of hours of study rather than qualifications attained.

We might also include higher taught degrees – the same logic applies, I think. 

The actual numbers are arbitrary, although they are ordered – higher always means higher. But this arbitrariness doesn’t matter, as long as the entry and exit qualification scales don’t overlap. All we will be doing is subtracting points at entry from points at exit – that is, identifying the value added for each student. These differences, these measures of value added per student, will then be averaged by institution, school, programme – whoever’s value added we are currently interested in. And, once calculated, they can be compared. We are only interested in difference.

There’s a choice to be made about how we deal with completion rates. Is completion rate a measure of teaching quality, or a measure of how life affects students? Or both? I’m inclined to suggest that we only compute value added for those who complete, which may mean achieving an intermediate qualification, typically CertHE or DipHE. As always with metrics, we need to be explicit about what we are proposing. There is not always a right answer.
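To make the arithmetic concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not a proposal: the A-level points are read off the “100 down to 50” scale in steps of ten, and the degree-class exit points are invented for the example, since no figures are given above. Only completers are counted, as just suggested.

```python
# A minimal sketch of the value-added arithmetic described above.
# All point values are illustrative assumptions - "the actual numbers
# are arbitrary, although they are ordered" - and the degree-class
# points in particular are invented for this example.

from statistics import mean

# Entry scale: A-level grades A* down to E, 100 down to 50 points.
A_LEVEL_POINTS = {"A*": 100, "A": 90, "B": 80, "C": 70, "D": 60, "E": 50}

# Exit scale: must not overlap the entry scale. CertHE and DipHE / FD
# values are from the post; the degree-class values are hypothetical.
EXIT_POINTS = {
    "CertHE": 600,
    "DipHE": 800, "FD": 800,
    "Pass": 900, "Third": 950, "2:2": 1000, "2:1": 1050, "First": 1100,
}

def entry_points(grades):
    """A-level points, summed across the subjects a student took."""
    return sum(A_LEVEL_POINTS[g] for g in grades)

def mean_value_added(cohort):
    """Average (exit - entry) points, computed only for completers,
    i.e. students with an exit qualification."""
    gains = [EXIT_POINTS[s["exit"]] - entry_points(s["entry"])
             for s in cohort if s["exit"] is not None]
    return mean(gains) if gains else None

# An illustrative cohort for one institution, school or programme.
cohort = [
    {"entry": ["B", "C", "C"], "exit": "2:1"},    # 220 in, 1050 out: +830
    {"entry": ["A", "A", "B"], "exit": "First"},  # 260 in, 1100 out: +840
    {"entry": ["C", "D"], "exit": None},          # non-completer: excluded
]

print(mean_value_added(cohort))  # 835: the mean of +830 and +840
```

Whatever the eventual tariff, the computation itself stays this simple: a subtraction per student, then an average per institution, school or programme.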

This approach to value added only uses existing data – no new data need be collected. Which is good.

Of course, what have historically been called ‘recruiting’ universities would score higher on value added than ‘selecting’ universities. But the resultant debate would flush out views about the value of degrees from different institutions. These views may or may not survive sunlight. But these views are probably best expressed and tested.

Anyway, value added is only one element of the framework. But it’s a tough one to argue against.

The problems with measuring value-added are not technical. They are political. But let’s have the debate!

This post was prompted by:

“No convincing metrics have been offered so far on how to measure added value without involving time consuming pre-and post tests.” in http://sally-brown.net/wp-content/plugins/download-monitor/download.php?id=238  from Sally Brown and Ruth Pickford, from here, and the subsequent LTHE Tweetchat, Storified here.

Tech in its place, part one

What expectations should we have of technology (hereafter, tech) in the work of higher education? What is the proper place of tech? What should developers do about tech?

Good tech

Good tech has at least these three related qualities. It just works. It does, or it can, make things better. And after a while it becomes almost invisible, almost unproblematic.

Creative disruption, then improvement, then a further step into cyborgia

With a new tech – whether it is new to the world, new to the institution or discipline, or new to the group or individual – there is an initial period of excitement and learning, sometimes accompanied by fear. During this period we discover the range of things that we can do with the new tech that we couldn’t previously do, or could do only with some difficulty, or less well, without it. We explore which of these new possibles we want to, and should, and can, use. Well managed and supported, this can be a time of creative and productive disruption.

And then the tech almost vanishes into us and our organisations, becomes embedded in our practice and in our thinking. We have become in one more sense tech-enhanced humans and organisations, as we did when we first wore glasses or contact lenses, or rode a bicycle, or drove, or travelled by aeroplane, or built a building. We have advanced a little further into the condition of cyborgia.

The ‘almost’ is important. Hopefully we are still at least a little conscious, when for example we telephone, of what we are doing. For example, as we make the call, hopefully we are sensitive to the risk of intrusion, aware of the context of the person we are calling, prepared at least a little for whatever the call may bring us both. But the fact that we can often speak to someone without visiting them, through a system of vast and invisible complexity, is now, in the moment, for most people, relatively unproblematic.

How does this relate to the technologies of our work? What expectations do we have of the tech in use? Here are a few of my answers. You might find it useful to spend a few seconds noting what (else) you expect from the tech-supported people with whom you work.

We expect …

  • We expect to be able to compose and then send a message, perhaps with attachments in various media, to (very very nearly) everyone we know professionally, and maybe also personally. We are confident it will be in their inbox within minutes. We expect to send this message without having to remember their contact details, just their name. No envelope or stamp required.
  • Building on this, we expect to be able to communicate with similar ease with defined groups and subsets of the people with whom we work.
  • We expect to be able to find, within seconds, contact information for someone we don’t yet know.
  • We expect to find at least a half-way useful answer, or at least a starting point to an answer, to an increasing number of questions, of growing complexity, by typing the question into a search engine.
  • We expect ourselves, and those with whom we communicate, to write in language that is grammatically correct and correctly spelt, at least according to the views of our software provider.
  • We expect those with whom we work to be able to locate and make critical, intelligent, appropriate use of (a) information at which we point them and (b) information of particular interest and use to them which they find for themselves; and then we expect them to make and share connections and relationships between information from these two kinds of sources.
  • Beyond literacy, beyond competence, beyond capability, we might expect or hope for a degree of fluency. Fluency in working with words and numbers and images and ideas appropriate to our disciplines and our professional and personal life. Fluency also in using the technologies through which these various elements of academic and professional work and life are more and more often created and manipulated and communicated and read and studied and used.

We shall expect …

You might find it even more interesting to cast an eye 50 years and more into the future, and begin to consider what expectations are reasonable for the current students within our universities, throughout their working and personal lives. Maybe I shall have a go at this in a future post, remembering that, as Niels Bohr said, “Prediction is very difficult, especially about the future.” Note that he did not say ‘impossible’.

We won’t be able to teach our students at university all the necessary skills for their next 50 or more years. All we can do is help them become able, keen and confident to learn, of course selectively and critically, whatever new tech they want to or need to learn. Because the great majority of the tech will continue to become easier to learn and use. Whatever we may think about markets, the market should at least achieve this.

Note that these current and future expectations cannot neatly be separated out into our expectations of the tech and our expectations for individuals, although some of the expectations may be tech-led and others more people-led. They are all expectations of individuals using the tech.

Next

In the next post I shall test these expectations against our current experiences and realities of using the tech, and explore the places of tech in higher education.