
Open Web Examination – OWE

Some thought experiments.


Revisit one of your recent examination papers. Make just one change to the rubric – “Students will have full access to the World Wide Web during this examination.” (Assume, at least for now, that they will still handwrite their answer.)

How would the nature and quality of students’ answers be different from the actual answers they wrote? Why?


You might want to narrow things down a little. You might want to ban the use of email and other forms of communication, within / outside the examination room. You might or might not want to let students access their Dropbox folder, or the University VLE. Your call. (Why did you make the choices you made?)

What differences would this additional item of rubric make? What would be gained, what would be lost, what would be changed?


Use the results of thought experiment 1 to plan the next examination, in the same subject, with the same new rubric – “Students will have full access to the World Wide Web during this examination.” What difference will that make to the questions you set? What would be a good Open Web version of last year’s questions? Or would you need to set radically different questions? What questions? Why?


Changing the exam without telling the students at the start of the course would obviously be wrong. So, assume that you have decided to set an Open Web Examination in this subject next time. First of all, whom if anyone would you have to persuade? What objections might they raise? How would you respond to these objections?  What concerns might you have? What are your responses to these concerns?


Assume that you have decided to set an Open Web Examination in this subject next time, and have obtained any necessary permissions.

How will the change to an Open Web Examination affect:

  • The design of the course?
  • The intended learning outcomes?
  • The assessment criteria?
  • The way you teach the course?
  • The kinds of learning activities, both individual and collective, that you ask students to undertake?
  • The kinds of feedback they receive on their work?
  • The information and advice you give them on finding and sharing and using information?

In each case, again, why?


Jump forward a few years. Assume that the University has been using Open Web Examinations for the last few years, certainly all the time your current cohort of students has been at the University. Imagine announcing to them that you are going to change to closed book examinations. What reactions would you anticipate? What questions would you expect to receive about the change? How would you answer these questions?


Imagine that, instead of handwriting their answers, students can word process their answers onto the same machine they are using to access the Web. Again, how do you think colleagues and students will react? Again, what changes would this make to the kinds of questions you ask, and to the way that you plan and run the course and teach the students? (One interesting side-effect would be to make plagiarism detection easier. It is difficult to run a hand-written examination script through Turnitin.)


You might want to go even further. Imagine students can communicate with each other, inside and outside the examination room. What differences would that make?


These thought experiments, these scenarios, obviously clarify differences between the conditions under which students are required to demonstrate some of their graduateness and the conditions under which, in the real world (including Universities), academic and professional work is done, with access to libraries, the web, email, interchange, conversation, feedback. Articulating these differences may encourage us to revisit conventional examination conditions, and check their continued appropriateness as part of certification for life and practice in the 21st century.

The scenarios draw attention to some of the capabilities that we do not assess – in the examination, at least; we may examine them elsewhere.

They suggest to me that closed book examinations show a sustained belief in the importance of memory, even in an age where we increasingly outsource our memory, though hopefully not our critical faculties. Thinking about Open Web Examination may encourage us to think hard about relations between knowing and doing in an age where both knowledge and action are changing faster than ever.

In a little more detail – Open Web Examination invites us to focus more closely on what students can do with, and to, what they know. It also invites us to focus on how they do what they do: For example, as we fervently hope, in ways that are critical, reflective, scholarly, principled, and values- and research-informed. In the good senses of both words, to act both academically and professionally.

Open Web Examination – OWE – also, Open World Examination.

What would it take for you to bring these OWE thought experiments to life? Again – what would be gained, lost, changed?



On Difficulty

What makes something – a subject, a topic, an idea, an ability, anything – difficult?

I found it easier to think about difficulty in relation to abilities, things we do. Is a 360° backflip on a BMX bike difficult? Well, I would find it so. I can’t speak for you. But I can probably speak for somebody who had already done a 720° backflip. They would probably say “No, a 360° is not so difficult.”

So, the difficulty of a particular ability is at least partly in the mind of the practitioner. And that difficulty is great or small at least partly in relation to what they already know they can do.

Staying with ability, and shifting from an individual to a societal view, another indicator of difficulty may be the smallness of the proportion of the population that can do it. The rarer, perhaps, the harder.

Staying with ability – we’ll get onto difficulty in relation to subjects, topics and ideas, I promise – other popular or intuitive dimensions of difficulty may be complexity and scale. I can play any particular chord from Tchaikovsky’s first piano concerto – give me a minute – but not all three movements. I can climb onto a small rock, but not climb Everest.

So, as elements or indicators of difficulty in practical tasks, I can offer:

1 Relation to my current capabilities: “It’s hard because I can’t do it (yet)”, which is “I haven’t done it yet” with the possible addition of “and I can’t immediately see how I would do it.”

2 The proportion of the population that can do it: “It’s difficult because lots of people can’t do it.” (Of course there are lots of things that most people don’t do mainly because they would never dream of or want to do them, but we’ll ignore these for now.)

Implicit in both the BMX and the Tchaikovsky examples:

3 The time taken to learn the ability: “It’s hard because it will obviously take a long time to learn to do it” and

4 The effort taken to learn the ability: “… a lot of work … “.

And then:

5 The complexity of the ability, the number of different elements and relationships in play: “Good grief but that looks complicated” and, related but not the same,

6 The scale – which may be size, duration, whatever applies. “Gosh but that looks enormous.” The Tchaikovsky example illustrates this well.

These are six possible dimensions of difficulty of abilities. What others do you see?

OK, at this point in the writing process, I admit I am surprised. If you’d asked me before I started what I was going to conclude, I’d have predicted that I would just say “Difficulty is purely, or mainly – I’m not sure which – subjective.” I enjoy writing when it gets me to an unexpected conclusion. The unexpected conclusion I’ve got to so far is that it is possible to talk about more-or-less objective factors which may contribute to difficulty.

But there is a sense in which my initial reaction was right, although of course badly incomplete. Difficulty is a feeling. Time or effort or complexity or rarity, any of the six dimensions of difficulty teased out above, may or may not bring with them a feeling of difficulty. For example, to the persistent or the individualistic, the large time investment required or the fact that the vast majority of people can’t do it may not be problems, may not be sources of difficulty. They may simply be facts. Or they may even attract, appeal.

As promised: what makes a subject, a topic, an idea, difficult?

I think, all of the above, as explored briefly here:

1 “I don’t understand it” and, perhaps, for a more persistent learner, “I can’t immediately see how I would come to understand it.”

2 Maybe “Very few people understand that.” or “Everybody says it’s really hard to understand”, a norm-referenced or crowd-sourced account of difficulty.

3 and 4, perceived or actual time and effort required, are very similar for ideas and for abilities. In education, time and effort may be reflected in the length of the course, or by popular accounts of how hard the course is, or by entry requirements.

5, Complexity, in both senses described here – the number of elements and the number of relations among them – is captured by Biggs in his SOLO taxonomy.

6, Scale is also captured by SOLO, and may be made manifest by the length of the reading list or the size of the textbook.

These close links between difficulties in abilities and difficulties in subjects, topics or ideas should not be a surprise. The many and complex relationships between abilities and knowledge are topics for many other blog posts. For now, I’ll simply note that, in education, knowledge usually embraces ability, which may be the ability to construct, critique, use and do many other things to and with the knowledge. Learning outcomes, on a good day, provide a bridge between knowledge and ability.

Beyond the six suggested above, there are further possible elements of difficulty in academic subjects, topics or ideas, if we work with learning outcomes rather than with knowledge as slabs of content.

7 Bloom offers us one way to conceptualise the difficulty of tasks. He considers a range of things we can do with and to knowledge, from recalling it, making sense of it and applying it to analysing it, evaluating it, and synthesising new knowledge. The higher levels of Bloom’s taxonomy are widely treated as corresponding to higher levels of difficulty, with for example lower level courses being confined to the lower levels of Bloom. I have serious difficulties with this, as I explore in other posts. But nonetheless the relating of Bloom level to difficulty is widely practised.

8 If we put Bloom and SOLO together, or rather set them orthogonal to each other, we can develop a useful account of difficulty as a composite function of level and complexity.
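As a purely illustrative sketch of that orthogonal combination – the level labels come from the two taxonomies, but the numeric scoring rule (a simple product) is my own assumption, not part of either model:

```python
# Illustrative sketch only: treat Bloom level and SOLO level as two
# orthogonal axes, and difficulty as a composite of the two.
# The multiplicative scoring rule is an assumption for illustration,
# not a claim from either taxonomy.

BLOOM = ["remember", "understand", "apply", "analyse", "evaluate", "create"]
SOLO = ["prestructural", "unistructural", "multistructural",
        "relational", "extended abstract"]

def composite_difficulty(bloom_level: str, solo_level: str) -> int:
    """Rough difficulty score: higher on either axis means harder."""
    b = BLOOM.index(bloom_level) + 1   # 1..6 (what we do with knowledge)
    s = SOLO.index(solo_level) + 1     # 1..5 (structural complexity)
    return b * s                       # composite of level and complexity

# Recalling a single fact scores low; creating new knowledge from many
# interrelated elements scores high.
print(composite_difficulty("remember", "unistructural"))    # 2
print(composite_difficulty("create", "extended abstract"))  # 30
```

Any monotonic combination would serve the argument equally well; the point is only that the two axes vary independently, so a task can be high on one and low on the other.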

9 Wickedness is another dimension of difficulty, in the widely used sense of wicked problems. Wicked problems are not so much difficult to solve as insoluble, certainly incapable of an optimum solution. We may take solvability as another dimension of difficulty, applied of course to problems rather than just to content. To what extent, how far, can an exact or at least a satisfactory solution be obtained? The less solvable, perhaps, the more difficult.

10 If I can do arithmetic on small numbers, I can probably, with some care and effort, become able to do the same arithmetic on larger numbers. I am simply extending my capabilities.
By contrast if I can confidently do arithmetic and I am suddenly faced with differential and integral calculus, it may take me a long time to become competent and confident with them. They may feel to be much more difficult than doing arithmetic with larger numbers.

Why the difference in difficulty?

Doing arithmetic with larger numbers is still doing arithmetic. It’s more difficult, but it’s the same kind of activity. Differential and integral calculus are, by contrast, something else entirely. We have to change our view of what constitutes mathematics, adding calculus to arithmetic, and thereby add new ways of thinking, making sense of and then using new notations and concepts.

In Piaget’s language, calculus requires us to accommodate, to change, our view of what comprises mathematics. By contrast we can simply assimilate arithmetic with larger numbers into what we already knew about arithmetic. Rogers tells us that we are much more likely to resist new ideas and concepts that threaten our sense of self, our current worldview. Changing and extending paradigms, worldviews, is likely to be much more difficult than working within and extending current paradigms. This gives a further plausible account of difficulty.

11 Meyer and Land’s accounts of threshold concepts – ideas which among other qualities require a (usually irreversible) change in world view, and are therefore sometimes troublesome, difficult – usefully add to the work of Piaget and Rogers.

“What do you mean by difficult?”, or “Why do you say this is difficult?”, may be useful questions to ask learners, including of course ourselves. As the technology becomes better and better able to enact simple abilities, ideas, concepts, subjects, questions and tasks, then we are going to be left with the more difficult ones. (To which I reply, yippee, bring ’em on!)

Analysing the nature and sources of difficulty, as I have started to do here, doesn’t magically enable us to deal with these more difficult tasks. But it will give us valuable clues, precursors for planning and acting.

1 I am very grateful to Gerard Long, Head of Accounting and Finance at Waterford Institute of Technology, for a conversation earlier today which prompted this post.
2 The rather narrow broadband here in Jos, Nigeria, and the relatively small screen of the iPhone, together deny me ready access to references and sources. I shall insert these when back home, but not immediately.

As always – tell me what you think.

Re-mapping the Higher Education Development Community




We’re trying to update our map of what we might call the UK Higher Education Development Community – by which we mean those national professions and organisations with a substantial and explicit focus on improving higher education in the UK.
Obviously our definition of the UK Higher Education Development Community isn’t very precise. I’m not sure it can be. To make things manageable, we’ve generally not included commercial providers, or university-based development units (even those that work outside their institutions), or unions or employer associations. Perhaps another day, for a broader study. Accepting that many boundaries/interfaces are a little fuzzy.
Thank you to the many colleagues on the SEDA jiscmail who have already added suggestions to the (much shorter) list posted recently.
We’d welcome your help. Who’s still missing?
If you’re not sure whether it’s a Development Association or not, let me know about it anyway. If you can let me have its web address, so much the better. And please let me know if you spot any errors here.
We’ll keep updating this. A future version will add a line or two about each organisation.
We hope it will be useful to know who else is out there doing development. We also hope this list may serve as a tool to facilitate cooperation, among developers and their associations.
Thank you
David Baume


The current list

Standing Conference on Academic Practice (SCAP)  
  Heads of Educational Development Group (HEDG)  
  Association for Learning Development in HE (ALDinHE)

  1. ALDinHE, the Association for Learning Development in Higher Education, is the membership association for staff who work as Learning Developers, or who have a role which involves supporting student learning.
  2. We have an annual conference, regular regional symposia, a journal (the Journal of Learning Development in Higher Education), a Jiscmail list, research grants, a website of free teaching and learning resources (LearnHigher), a CPD route towards HEA Fellowship, a professional recognition scheme
  3. The Jiscmail list is lively and active, used for answering queries, sharing practice and resources, and highlighting opportunities; ACLD (ALDinHE-Certified Learning Developer) is a new professional recognition scheme to develop the professional status of learning development; founder member of the International Consortium of Academic Language and Learning Developers (ICALLD)
  4. Social media – @aldinhe_LH
  5. Contact –
  6. Founded – 2003, with the first LDHEN Symposium at London Metropolitan University
  5  Association for Researcher Development (Vitae)  
  Association for Learning Technology (ALT)  
  Higher Education Academy (HEA)  
  Centre for Recording Achievement (CRA)  
  Higher Education Funding Council for England (HEFCE)  
  10 Scottish Higher Education Developers (SHED)  
  11 Global Forum for English for Academic Purposes Professionals (BALEAP)  
  12 Leadership Foundation for Higher Education (LFHE)  
  13 Society of College, National and University Libraries (SCONUL)  
  14 Network For Excellence In Mathematics and Statistics Support (SIGMA)  
  15 Quality Assurance Agency (QAA)  
  16 All Ireland Society for Higher Education (AISHE)  
  17 Staff and Educational Development Association (SEDA)       
  18 Staff Development Forum (SDF)  
  19  The Library and Information Association (CILIP)  
  20 Society for Research in Higher Education (SRHE)  
  21 UK Council for Graduate Education (UKCGE)  
  22 Association of National Teaching Fellows (ANTF)  
  23 National Union of Students (NUS)  
  24 Scottish Funding Council (SFC)  
  25 Student Participation in Quality Scotland (sparqs)  
  26 Heads of ELearning Forum (HeLF)  
  27 The Economics Network  
  28 Higher Education Funding Council for Wales (HEFCW)  
  29 Department of Education (Northern Ireland)  
  30 Association of Colleges  (AoC)  
  31 Enhancement Themes Scotland  
  32 Collab Group  
  33 Colleges Wales (Colegau Cymru)  
  34 UK Advising and Tutoring (UKAT)  
  35 Association of Graduate Careers Advisory Services (AGCAS)  
  36 Researching, Advancing and Inspiring Student Engagement (RAISE)  
  37 Equality Challenge Unit  
  38 Principal Fellows of the Higher Education Academy (PFHEA)  
  39 GuildHE  
  40 Million+  
  41 UniversitiesUK (UUK)  
  42 University Alliance (UA)  
  43 The Russell Group  
  44 Mixed Economy Group (MEG)  
  45 Council for Higher Education in Art and Design (CHEAD)  
  46 WonkHE  
  47 Universities Scotland  
  48 The Association of University Administrators (AUA)  
  49 National Forum for the Enhancement of Teaching and Learning in Higher Education – Ireland

50 The Cathedrals Group

51 Writing Developers

52 Universities and Colleges Information Systems Association (UCISA)

54 Association for Authentic, Experiential, and Evidence-Based Learning

55 National Association of Disability Practitioners


We may need to talk


Can you:

  • Think faster than you can type?
  • Talk faster than you can type?

If that’s you, and you haven’t tried voice recognition, or not for a while, then maybe, now, you should.

(Please don’t worry about the odd occasion when you talk faster than you are thinking. We all do it. In my case, usually after wine.)

Here and now:

I am dictating this. Into my iPhone. About as fast as I am thinking it. (For some reason I find that voice recognition is faster and more accurate in iOS than in macOS Sierra. But any account of the relative capabilities of different software is likely to be overturned by the next iteration of that software. I suggest, start with what you have.)

Seeing my spoken words appear on the screen in front of me is still, for me, a small kind of magic.

I was a fan of the idea of voice recognition before it was, frankly, much good. My efforts with early versions of Dragon were – well, disappointing. Training to my voice was slow; the quality of the voice capture was often poor; and it took its time transcribing. (A colleague claimed that he trained his Dragon installation by reading it Alice in Wonderland. Which explains a lot about his writing style. And, occasionally, his content.) I believe Dragon is a lot better now. As I am sure are the others – it’s a jungle out there, and evolution is rapid and brutal, but great for the customer.

Nowadays: no training. Straight in. Fast, or fast enough – almost as fast as I speak, certainly as fast as I can speak clearly. Accurate enough most of the time to mean that, even allowing for the necessary editing, voice recognition is quicker than my (not very good) typing.

A caution:

Voice recognition, certainly the iOS version, tries very hard indeed to make sense of what it hears. Sometimes, this takes it a long way away from what you meant and said. The main practical implication for me is – review often. Because, if you get a few paragraphs ahead before you check, you simply may not be able to reconstruct what you originally meant / said. And that precious pearl may be lost forever.


But two things will happen over (quite a short period of) time.

  1. The system will get to know you and your speech better, thereby becoming more accurate. This is fairly obvious.
  2. More interestingly, you will make the curious accommodations required to speak written English. I’ll say more about this.

Writing and speaking:

Crudely: writing and speaking are different kinds of language, different forms of expression.

Here’s a short test you might want to do. If you have access to voice recognition – and you almost certainly have, either on your smart phone or via Google – try it now.

  1. Turn on voice recognition and then talk about something you’re interested in for a couple of minutes, as if you were talking to a friend or colleague.
  2. Write / type for a couple of minutes on the same topic.
  3. Compare the transcript of your speaking with the words that you typed/wrote.

What do you notice?

Obviously, I don’t know. And the comparison isn’t simple:

  • You may have been so conscious of the fact that your spoken words were being transcribed that you spoke something much closer to written language than your more normal speech.
  • Or your writing for a colleague may have been much less formal than if you had been writing for a wider audience.

But it may be that:

  • Your transcribed speech took more words to say roughly the same thing than did your writing / typing;
  • Your transcribed speech was less formal, more conversational, than your written / typed text;
  • In particular your transcribed speech may have conformed less well to grammatical conventions, in particular to clear and conventional sentence structure and breaks between ideas or sentences.
  • Unlike most people, who generally talk mostly in phrases, you may already talk in complete sentences. Or even paragraphs. Or even chapters. Or even books. If you are one of these lucky, talented people, then voice recognition will be largely unproblematic for you, and you will simply become much more productive. I envy you.

Of course we all have a range of forms of both spoken and written expression. We can speak formally or informally, we can write formally or less formally. Audience and context make a big difference to how we speak and write.

But the fact remains that, for voice recognition to give you all the potential advantages of speed, you do need to learn to speak in something like the way or style in which you write, or want to write. And, as I am discovering as I dictate this paragraph, this is hard work, and requires fierce concentration – because, when you are using voice recognition, you are writing, in the specific sense of getting words onto the screen, faster perhaps than you could if you were typing.

But speaking into voice recognition slowly becomes easier. And then you will begin to see typing as it is – another form of technology-impeded human action. Like driving a car with manual gear change – “stick shift”, as our American friends call it. Or indeed driving a car at all in the age of the Google car and Tesla Autopilot. Or repeatedly typing (or speaking) the same information into form after form after form. Or – insert your own technological bête noire here.


I over-simplified earlier. I don’t think it’s as simple as learning to speak in the same kind of written English that you used to produce. I write Tweets, emails, blogs, articles and book chapters through the use of voice recognition software. Here are some things I have noticed:

  • My style has become slightly less formal as I have made the transition to voice.
  • My sentences initially became longer, sometimes I felt too long – at first I used to solve this problem in the editing, whereas now (as is not demonstrated in the current sentence!) I have taught myself to speak in short sentences again. When appropriate. Or when I remember.
  • On a good day my writing is a little more vivid – I’m less likely to censor short flights of imaginative expression in speech than I am in writing. Of course, if I don’t like these flights, or don’t think them appropriate to the intended outlet or audience, I can always cut them out. And I do. Sometimes with a tear. Following Faulkner’s advice, endorsed by Stephen King, to “kill all your darlings.”

But the gain in speed I achieve (when I dictated the words “I achieve” just now they were transcribed as “hi cheese”, which briefly entertained me) through the use of voice recognition is so great that the necessary additional editing time still leaves me ahead.


What about the quality?

Quality, I feel, is as much a consequence of:

  • The research and thought and planning that go on before the thoughts are expressed either in speech or in writing, and
  • The process of editing

as it is a consequence of the process of committing thought to screen.

Although … some of my better quality ideas make it to the screen because I capture them in speech, whereas I might well have lost them by the time my fingers caught up with my occasionally fleeting thoughts. This capture of what would otherwise have been lost just now happened, in the previous sentence. (The rather informal starting of a new paragraph with “Although” was undertaken as I dictated it, because I was conscious that the sentence was becoming rather long, but I didn’t want to lose the train of thought as I struggled to find an editing solution. Anyway, I want in this post to show how, now, for me, speech becomes writing.)


I prefer to use voice recognition when I am alone. I share an open plan office with my partner, and I still feel a little embarrassed when talking to a machine while Carole is in the room, although she assures me that, as someone familiar with open plan office working, she is bothered by it not at all. But that doesn’t seem to stop it bothering me!


Try voice recognition. Academics exhibit greater strangenesses than speaking, slightly after the manner of a BBC radio announcer from the 1930s, into a telephone or a computer. Pretend you are that rude person on the train, who shares (often) his enunciated thoughts with the carriage.

The learning curve for fluency with voice recognition is long but gentle. The benefits, including speed, start to show very early on.

And you may discover, as you proceed, that the computer isn’t the only one learning to recognise your speech. You may also become better able to recognise, appreciate, enjoy and improve your own various voices.

Let me know.

Three pillars of professionalism in academic development

Paper (OK, workshop) at ICED / HELTASA Conference, Cape Town, 23 November 2016



The concept of professionalism, both for those who teach in higher education and for academic developers, remains problematic and contested. For a recent account see Bostock, S., & Baume, D. (2016). Professions and professionalism in teaching and development. In D. Baume & C. Popovic (Eds.), Advancing practice in academic development (pp. 32–51). United Kingdom: Routledge.

But academic developers are oriented towards finding solutions, or at least to finding and implementing productive ways forward. A sense of ‘forward’ for academic development – “We suggest an overall purpose for academic development – to lead and support the improvement of student learning.” – forms the first sentence of D. Baume & C. Popovic (Eds.), op. cit.

To offer productive ways forward, this paper suggests three pillars of professionalism in academic development:

  1. Being scholarly;
  2. Being effective; and
  3. Enacting principles or values.

Taken together, these three pillars can give developers some confidence in their professionalism, when, as will continue to happen, our legitimacy is challenged. Of course the three pillars need to be implemented reflectively, critically and humanely.

Pillar 1                    Being scholarly

A recent model of scholarship (Baume and Popovic, op. cit., p. 5) suggests three (overlapping and progressing) ways to be scholarly:

  1. Being reflective, critical and analytic;
  2. Using ideas from the literature; and
  3. Contributing to the literature.

Participants in the Southern Africa Universities Learning and Teaching (SAULT) Forum in Windhoek, Namibia in February 2016 reported three main reasons to be scholarly:

  1. To remain current;
  2. To gain new ideas to apply to practice; and
  3. To gain and maintain the respect of colleagues and clients.

Participants said that, currently, for them, being scholarly mainly meant writing for publication. However, their accounts of being scholarly in the future pulled together the three kinds of scholarship described in the Baume and Popovic model. The model thus seemed to provide a useful tool, both for analysis and for planning. (D. Baume (2017). Scholarship in Action. Innovations in Education and Teaching International, 54(2).)

Question 1: In what particular ways can you become still more scholarly in your practice?

Pillar 2                    Being effective

Like professionalism and scholarship, the concept of effectiveness is sometimes contested. For some it sounds like managerialism. For more on defining and showing effectiveness see Stefani, L., & Baume, D. (2016a). “Is it working?” Outcomes, monitoring and evaluation. In D. Baume & C. Popovic (Eds.), op. cit.

We developers talk about what we do – our actions. We talk about what we make – our outputs. But surely the point of doing and making is to achieve outcomes, to make things better, to make specific things better, in specific and determinable, sometimes measurable, ways? And to do so in ways that embed, preserve and hopefully enhance professional relationships, scholarship, and values?

Question 2: For a specific project, what are you trying to achieve? What are your intended outcomes? How will you know you have achieved them? If necessary, change the outcomes until you can see how to achieve them and how to evaluate their achievement.

Pillar 3                    Enacting principles or values

Many professional standards include statements of underpinning values or principles. Those of the UK Staff and Educational Development Association, for example, include:

  1. Developing understanding of how people learn;
  2. Practising in ways that are scholarly, professional and ethical; and
  3. Valuing diversity and promoting inclusivity.

The hard and vital step, it turns out, is not writing and agreeing such statements, but implementing them.

Question 3: Pick a value (related to education) that you believe in. How well, how far, do you and your institution implement it? What factors aid and impede its implementation? What can you do to implement it more, better?




I am an independent international higher education consultant, researcher and writer. My most recent full-time post was with the UK Open University, where with colleagues I wrote courses on teaching in higher education.

I was founding Chair of the UK Staff and Educational Development Association (SEDA); cofounder of the UK Heads of Educational Development Group (HEDG); a founding council member of the International Consortium for Educational Development (ICED); and a founding editor of the ICED journal, the International Journal for Academic Development (IJAD). I am the ICED representative on the Southern Africa Universities Learning and Teaching (SAULT) forum.

I have co-edited four books on academic development in higher education, and published over 60 papers and articles. My contributions to academic development nationally and internationally have been recognised by awards from SEDA and ICED.


David Baume PhD SFSEDA SFHEA, @David_Baume,

“Is it working?” Monitoring and Evaluation in Academic Development

Workshop at ICED / HELTASA Conference, Cape Town, 22 November 2016


Introduction and Rationale

There is a growing demand for accountability in Higher Education. Funders want to know that resources have been both properly and effectively applied. This requirement extends to academic development. The workshop will help participants to demonstrate the effectiveness of their work.

Workshop outcomes

By the end of this workshop you will have:

  1. Determined, at least in outline, the intended outcomes of a development venture for which you have some responsibility. That is, you will have clarified (in negotiation with stakeholders) what the venture is intended to achieve. It could be a workshop, a programme, a development project, writing a policy or strategy – almost anything. Whatever it is, what particular things will make the situation to be improved better?
  2. Planned, at least in outline, how you will monitor progress towards the intended outcomes. This may include the use of intermediate outcomes and waypoints, and adjusting plans and activities (and maybe also intended outcomes) as required.
  3. Planned, again at least in outline, how you will evaluate the success of your venture in achieving its intended outcomes, and draw conclusions for future practice.

Notice that these are not learning outcomes, not statements of what you will be able to do. They are just outcomes – things you will have achieved.

Outcomes and evaluation – the big picture

Here, in summary, is a simple and powerful process for writing and checking intended outcomes, and then for monitoring and evaluating their attainment:

  1. Identify – preferably in negotiation with the other stakeholders – intended outcomes of the project. What is the development project or venture intended to achieve? What exactly do you want to improve? What do you mean by improved?
  2. How will you find out if the project or venture has been successful? What shows success?
  3. Check. If it’s hard to write an evaluation plan, then revisit and rework the outcomes, and then the indicators of success, until you can see how to monitor and evaluate their attainment.
  4. Plan and schedule how you will run the project to attain the goals. It’s generally better to do a project with people than for people. If it’s a big project, you may need to set interim goals, waypoints, which you can monitor.
  5. Run the project!
  6. Keep on asking “Is it working?” “Are we clearly moving towards our goals?” “Do we need to adjust – our methods? Maybe, even, sometimes, adjust our goals?”
  7. Towards the end, start to evaluate and report; with evidence; on whether the outcomes were achieved; how well they were achieved; what wasn’t achieved; any unexpected outcomes; what should be done next; and, perhaps above all, what has been learned?

Workshop shape and activities

You will see that the activities are directly linked to the outcomes.

Here is the overall shape of the workshop around each activity. There may be some variations:

  1. I’ll say a little about each activity in turn.
  2. Then I’ll ask you to do the activity, or at any rate to start it.
  3. And then I’ll ask you to discuss it with a neighbour.
  4. I’ll ask you to share some of your answers.
  5. Sometimes, I’ll have a public conversation with you about your answer.

Activity 1               Read the script below, and the comments that follow it.

Activity 2               Choose a development project or venture of some kind.

It should be real. You should have some responsibility for it. And you should be at or near the start.

Activity 3               For this chosen venture, decide the intended outcomes, what the project is intended to achieve and to improve.

Activity 4               Plan how you will know if it has been successful. If that proves difficult, change the outcomes until you can see how to evaluate their attainment.

Activity 5               Pretend that the project is over. Draft a realistic evaluation report. What does that tell you about the intended outcomes, and about how you should have run the project?


A (hypothetical) conversation between the Head of Department X and an Academic Developer

HODX Thanks for coming. Look, what it is, the students are complaining about feedback. Could you run a few workshops to help us sort this out?
AD What in particular is bothering students?
HODX Well, they say sometimes the feedback is late. Sometimes they can’t understand it, can’t use it.
AD Okay. A couple of issues here. On the first one, late feedback. What is the department policy on turnaround time for student work?
HODX We haven’t really got a policy on it. Staff don’t like being tied down. We really should get a policy. But the staff are very busy …
AD I understand. Happy to help you work out a policy when you’re ready. But in the short term, we could do some surveys, even interviews. Ask the students how soon they’d like the feedback …
HODX They’ll say they want it tomorrow! No chance!
AD Well, let’s find out. There may be ways to do feedback a lot faster. But yes, we could ask staff how soon they think it is feasible to turn work around. We’ll get to some sort of compromise. The outcome we want here is …
HODX … a lot fewer student complaints about the late feedback!
AD That would be a good start. Second issue – students don’t understand the feedback. What do you think is going on there?
HODX (Pause) I think the problem may be, staff know the material they are teaching so well that their feedback is a bit – concise? I remember what you said at that teaching workshop last year – staff can forget what it’s like not to understand.
AD That’s possible. Well, we could ask students what would make feedback more comprehensible. Get their views back to the staff. Then, run a session for staff on making feedback more comprehensible, based on what the students say, and adding in a few ideas from the literature. Develop some guidelines. Then, after a few months, we could find out …
HODX … if students are finding the feedback more understandable.
AD We could do that.
HODX You don’t sound sure.
AD (Pause) There may be a bigger issue here. Not just “Do students understand the feedback?”, but, as you hinted, “Are students using the feedback to help them decide what to keep on doing right and what to do differently next time?” After all, that’s the point of feedback, isn’t it?
HODX Interesting. How would you tackle that?
AD The usual. Student surveys. Student interviews. Actually, we could try something a bit newer. We could facilitate some conversation between students and staff about this, dig a bit deeper. That could be very useful. If staff would do it?
HODX I’m sure I could persuade a couple of them! What are we trying to achieve here?
AD I guess – students making good use of feedback to inform their future studies?

AD Of course, it’s not just about the kind of feedback staff give. It’s also about the pattern of assignments, maybe even the shape of the whole course.

HODX Whoa! Where did that come from?
AD If the next assignment is completely different, or if the timing is wrong, or students aren’t helped to use the feedback, then maybe some students aren’t using feedback because it’s just not possible for them to use it? Issues like that.
HODX You always want to change the world, don’t you! Let’s stick with speeding up feedback and making it more useful for now. We can get to redesigning the course at the major review time in, what is it, two years.
AD Always happy to help.

Notice in this conversation:

  • The use of the idea of outcomes and evaluation, by both the HOD and the AD.
  • The AD working to identify what is going on, digging a bit deeper.
  • A reference to using the literature.
  • The AD seeding ideas for possible future work.
  • Negotiations about what is feasible now.
  • A good working relationship, with mutual respect.


Stefani, L., & Baume, D. (2016). ‘Is it working?’ Outcomes, monitoring and evaluation. In Baume, D., & Popovic, C. (Eds.), Advancing Practice in Academic Development (pp. 157-173). London: Routledge.



David Baume PhD SFSEDA SFHEA, @David_Baume,


Hypothetical Case Study on Clarifying Goals: ‘Enhancing the status of teaching’

A university policy aim might be to enhance the status of teaching. This is a laudable aim; but vague.

A non-rhetorical question to begin with: How would you identify the status of teaching in your university, and track its changes over time? Let’s try to sharpen the aim.

We might try to achieve a more rigorous definition. We could negotiate university meanings – university meanings, not the meanings; we are developers, not writers of dictionaries – for terms including enhance, status, even teaching …

Alternatively, we could take a more direct approach and ask the question – What would indicate ‘an enhanced status of teaching’? We could decide, again using ideas from the literature, and/or we could ask within the university. We could devise and implement a survey to identify the current status of teaching. This would rapidly reveal some of the many meanings of the status of teaching.

Possible indicators of status accorded to teaching:

  • A formal teaching awards scheme – a plausible indicator that an institution is seeking to enhance the status of teaching. Beyond this, the number of scheme applications and awards each year, the rewards given to, and more broadly the fuss made of, award winners; these are all further indicators of an institution taking seriously the enhancement of the status of teaching.
  • Promotions criteria that include teaching – indicating that teaching is being valued.
  • (Also – are the criteria widely believed to be being used?)
  • Teaching ability being emphasized in recruitment advertisements, and taken seriously in selection processes, would be a further positive sign …

What these and other possible approaches have in common is that they provide encouragement, at institutional and/or local level, for staff to teach well, to improve their teaching, and to enhance the status of teaching.

Of course most of these processes could be implemented well or badly, strongly or feebly. All could be respected or not by academic staff, managers and students. All could be subverted or diminished by other policies and strategies which value, or are perceived as valuing, other kinds of activity – most obviously research or administration – more highly than teaching. Nonetheless, a university implementing such measures, and putting some effort into evaluating their effectiveness, could make a decent claim to be committed to enhancing the status, and also the quality, of teaching.

Analysing these, seeking a core, seeking context-specific (for example, discipline-specific) local variants, and feeding in any research-based accounts, all start to give an account of the status of teaching with which we can work. Our account should enable us to identify enhancement over a baseline. In enhancement work, it’s good to know which way is up.

Academic developers have many possible roles here. They can help universities, schools and departments to identify possible good academic practices that are broadly compatible with the norms and values of the institution, accepting also that one of the more challenging roles of the developers is sometimes to help the institution to shift its norms and values. Developers can make productive connections across the institution …

At some point a developer will also want to ask – ‘Why do we seek to enhance the status of teaching?’

Condensed and adapted from Stefani and Baume (op cit.)

Co-operation in Development – Summary David Baume

The survey

Nine responses were received to a survey on cooperation in development, from academic / educational development units.

Main development functions of units

The most frequently mentioned development functions can be grouped as:

  1. Staff development, including training teachers for a qualification, accrediting teachers, CPD, and supporting staff and faculties;
  2. Educational development, including improving learning, teaching and assessment, and curriculum development;
  3. Institutional support, including policy and strategy development and projects;
  4. Student development and support;
  5. Learning technology development, implementation and evaluation; and
  6. Other functions – postgraduate support, research into teaching and learning, horizon scanning, and QA.

Beyond what once might have been the classic academic development functions (perhaps 1, 2, 3 and some of 6), we now see also 4 and 5.

Other development functions elsewhere in the institution

These include HR, learning technology, student development, student services (including overseas students), faculty committees and research.

Frequency of contacts between the academic development unit and these other development functions:

Frequency of contact (number of units):

  • Annually or less – 1
  • A few times each year – 13
  • Most or every month – 8
  • Most or every week – 5

Nature of contacts between the academic development unit and these other development functions:

Nature of contact (number of units):

  • Formal – 3; Informal – 4; Both – 13
  • Policy / strategy – 3; Operational – 2; Both – 10

The largest scores for frequency of contact are ‘a few times each year’ and ‘most or every month’; for nature of contact, the largest scores are ‘both’ in each pairing – that is, contact of all four types!

Comments on what above all supports effective co-operation on development in your institution

Personal relationships and communication (3 mentions each); the encouragement of leadership (2); and the alignment of strategy (1). Also mentioned are ‘seeing the person face-to-face’, ‘our small campus culture’, goodwill, energy, focus and effective resourcing.

Comments on what above all impedes effective co-operation on development in your institution

Structural factors (5 mentions), where the response was amplified, included organizational hierarchies, constant restructuring, silo working, geography and a lack of specific leads for specific functions. Communications factors (3) included ‘having EVERYTHING online via email’ and the lack of time. One respondent reported the absence of goodwill, energy, focus and leadership as the main inhibitors of cooperation in development.

Comments on nature and frequency of co-operation with other units

These four longer comments from respondents suggest some of the complexity and benefits around inter-unit cooperation:

  1. As a sole operator in learning and teaching facilitation across the university, I establish currency in my role by ‘supporting’ (not teaching, as [I do not have] an academic role) the PGCert provision. For a period, my role was positioned in the [learning and development function] of our Human Resources department, but I was [recently] re-located, as much of the ‘development’ I facilitated did not naturally align with colleagues’ specialisms in [organisational development]. I was therefore moved to [a unit concerned with research], from where I run an annual programme of CPD in learning and teaching and facilitate the university’s CPD for professional recognition. Being in research, I also support programmes around researcher development, particularly in terms of teaching/supporting learning, graduate teaching schemes, etc.
  2. Our university is small and we often work with [the learner support function] to share data, review the best approaches to take for all students, and progress agendas with senior colleagues. Good personal relationships between these teams mean we routinely pick up the phone and use each other as sounding boards for work of common interest.
  3. [Relations are] good, and they are very helpful, now that [another unit has] realised we have been doing serious education research and they keep us in the loop and support us to bid for funding opportunities. The university has recently appointed a [senior post for] Education and we are hopeful they will enable the different bits to coalesce.
  4. At the formal level, these committees (depending on the Chair!) can be procedural – so it is possible to ask critical friend questions – but the real value is informal, getting into conversations with colleagues about developmental work. In many respects the people who are key on these committees are ones our team has known since they were on the PGCert and these relationships have built up over time.

‘Churn’, both in staffing and in structures, was also a theme, along with task-specific rather than general co-operation.

Overview and possible implications

Co-operation between development units / functions in higher education institutions is valuable, difficult, and mostly attainable. Good personal communications and relationships aid co-operation; structural factors and poor communications impede it. Developers may wish to consider (1) working to establish good personal / professional relations, perhaps initially around specific, rather than big picture, co-operations; (2) more broadly being prepared to work across structural boundaries in pursuit of institutional goals and priorities; and (3) assuming that, whatever the current actual or perceived structural obstacles and political difficulties, no-one actually wants to prevent us from doing good stuff.