
“Is it working?” Monitoring and Evaluation in Academic Development

November 16, 2016

Workshop at ICED / HELTASA Conference, Cape Town, 22 November 2016

David Baume PhD SFSEDA SFHEA

Introduction and Rationale

There is a growing demand for accountability in Higher Education. Funders want to know that resources have been both properly and effectively applied. This requirement extends to academic development. The workshop will help participants to demonstrate the effectiveness of their work.

Workshop outcomes

By the end of this workshop you will have:

  1. Determined, at least in outline, the intended outcomes of a development venture for which you have some responsibility. That is, you will have clarified (in negotiation with stakeholders) what the venture is intended to achieve. It could be a workshop, a programme, a development project, writing a policy or strategy – almost anything. Whatever it is, what particular changes will make the situation better?
  2. Planned, at least in outline, how you will monitor progress towards the intended outcomes. This may include the use of intermediate outcomes and waypoints, and adjusting plans and activities (and maybe also intended outcomes) as required.
  3. Planned, again at least in outline, how you will evaluate the success of your venture in achieving its intended outcomes, and draw conclusions for future practice.

Notice that these are not learning outcomes, not statements of what you will be able to do. They are just outcomes – things you will have achieved.

Outcomes and evaluation – the big picture

Here, in summary, is a simple and powerful process for writing and checking intended outcomes, and then for monitoring and evaluating their attainment:

  1. Identify – preferably in negotiation with the other stakeholders – intended outcomes of the project. What is the development project or venture intended to achieve? What exactly do you want to improve? What do you mean by improved?
  2. How will you find out if the project or venture has been successful? What shows success?
  3. Check. If it’s hard to write an evaluation plan, then revisit and rework the outcomes, and then the indicators of success, until you can see how to monitor and evaluate their attainment.
  4. Plan and schedule how you will run the project to attain the goals. It’s generally better to do a project with people than for people. If it’s a big project, you may need to set interim goals, waypoints, which you can monitor.
  5. Run the project!
  6. Keep on asking “Is it working?” “Are we clearly moving towards our goals?” “Do we need to adjust – our methods? Maybe, even, sometimes, adjust our goals?”
  7. Towards the end, start to evaluate and report, with evidence, on whether the outcomes were achieved; how well they were achieved; what wasn’t achieved; any unexpected outcomes; what should be done next; and, perhaps above all, what has been learned.

Workshop shape and activities

You will see that the activities are directly linked to the outcomes.

Here is the overall shape of the workshop around each activity. There may be some variations:

  1. I’ll say a little about each activity in turn.
  2. Then I’ll ask you to do the activity, or at any rate to start it.
  3. And then I’ll ask you to discuss it with a neighbour.
  4. I’ll ask you to share some of your answers.
  5. Sometimes, I’ll have a public conversation with you about your answer.

Activity 1 – Read the script below, and the comments that follow it.

Activity 2 – Choose a development project or venture of some kind. It should be real. You should have some responsibility for it. And you should be at or near the start.

Activity 3 – For this chosen venture, decide the intended outcomes – what the project is intended to achieve and to improve.

Activity 4 – Plan how you will know if it has been successful. If that proves difficult, change the outcomes until you can see how to evaluate their attainment.

Activity 5 – Pretend that the project is over. Draft a realistic evaluation report. What does that tell you about the intended outcomes, and about how you should have run the project?

 

A (hypothetical) conversation between the Head of Department X and an Academic Developer

HODX Thanks for coming. Look, what it is, the students are complaining about feedback. Could you run a few workshops to help us sort this out?
AD What in particular is bothering students?
HODX Well, they say sometimes the feedback is late. Sometimes they can’t understand it, can’t use it.
AD Okay. A couple of issues here. On the first one, late feedback. What is the department policy on turnaround time for student work?
HODX We haven’t really got a policy on it. Staff don’t like being tied down. We really should get a policy. But the staff are very busy …
AD I understand. Happy to help you work out a policy when you’re ready. But in the short term, we could do some surveys, even interviews. Ask the students how soon they’d like the feedback …
HODX They’ll say they want it tomorrow! No chance!
AD Well, let’s find out. There may be ways to do feedback a lot faster. But yes, we could ask staff how soon they think it is feasible to turn work around. We would get to some sort of compromise. The outcome we want here is …
HODX … a lot fewer student complaints about the late feedback!
AD That would be a good start. Second issue – students don’t understand the feedback. What do you think is going on there?
HODX (Pause) I think the problem may be, staff know the material they are teaching so well that their feedback is a bit – concise? I remember what you said at that teaching workshop last year – staff can forget what it’s like not to understand.
AD That’s possible. Well, we could ask students what would make feedback more comprehensible. Get their views back to the staff. Then, run a session for staff on making feedback more comprehensible, based on what the students say, and adding in a few ideas from the literature. Develop some guidelines. Then, after a few months, we could find out …
HODX … if students are finding the feedback more understandable.
AD We could do that.
HODX You don’t sound sure.
AD (Pause) There may be a bigger issue here. Not just “Do students understand the feedback?”, but, as you hinted, “Are students using the feedback to help them decide what to keep on doing right and what to do differently next time?” After all, that’s the point of feedback, isn’t it?
HODX Interesting. How would you tackle that?
AD The usual. Student surveys. Student interviews. Actually, we could try something a bit newer. We could facilitate some conversation between students and staff about this, dig a bit deeper. That could be very useful. If staff would do it?
HODX I’m sure I could persuade a couple of them! What are we trying to achieve here?
AD I guess – students making good use of feedback to inform their future studies? Of course, it’s not just about the kind of feedback staff give. It’s also about the pattern of assignments, maybe even the shape of the whole course.

HODX Whoa! Where did that come from?
AD If the next assignment is completely different, or if the timing is wrong, or students aren’t helped to use the feedback, then maybe some students aren’t using feedback because it’s just not possible for them to use it? Issues like that.
HODX You always want to change the world, don’t you! Let’s stick with speeding up feedback and making it more useful for now. We can get to redesigning the course at the major review time in, what is it, two years.
AD Always happy to help.

Notice in this conversation:

  • The use of the idea of outcomes and evaluation, by both the HOD and the AD.
  • The AD working to identify what is going on, digging a bit deeper.
  • A reference to using the literature.
  • The AD seeding ideas for possible future work.
  • Negotiations about what is feasible now.
  • A good working relationship, with mutual respect.

Reference

Stefani, L., & Baume, D. (2016). ‘Is it working?’ Outcomes, monitoring and evaluation. In D. Baume & C. Popovic (Eds.), Advancing Practice in Academic Development (pp. 157–173). London: Routledge.

 

 

David Baume PhD SFSEDA SFHEA, david@davidbaume.com, @David_Baume, www.davidbaume.com

 

Hypothetical Case Study on Clarifying Goals: ‘Enhancing the status of teaching’

A university policy aim might be to enhance the status of teaching. This is a laudable aim, but a vague one.

A non-rhetorical question to begin with: How would you identify the status of teaching in your university, and track its changes over time? Let’s try to sharpen the aim.

We might try to achieve a more rigorous definition. We could negotiate university meanings – university meanings, not the meanings; we are developers, not writers of dictionaries – for terms including enhance, status, even teaching …

Alternatively, we could take a more direct approach and ask the question – what would indicate an enhanced status of teaching? We could decide, again drawing on ideas from the literature, and/or we could ask within the university. We could devise and implement a survey to identify the current status of teaching. This would rapidly reveal some of the many meanings of the status of teaching.

Possible indicators of status accorded to teaching:

  • A formal teaching awards scheme – a plausible indicator that an institution is seeking to enhance the status of teaching. Beyond this, the number of scheme applications and awards each year, the rewards given to, and more broadly the fuss made of, award winners – these are all further indicators of an institution taking seriously the enhancement of the status of teaching.
  • Promotions criteria that include teaching – indicating that teaching is being valued. (Also – are the criteria widely believed actually to be used?)
  • Teaching ability being emphasized in recruitment advertisements, and taken seriously in selection processes, would be a further positive sign …

What these and other possible approaches have in common is that they provide support and incentives, at institutional and/or local level, for staff to teach well, to improve their teaching, and to enhance the status of teaching.

Of course most of these processes could be implemented well or badly, strongly or feebly. All could be respected or not by academic staff, managers and students. All could be subverted or diminished by other policies and strategies which value, or are perceived as valuing, other kinds of activity – most obviously research or administration – more highly than teaching. Nonetheless, a university implementing such measures, and putting some effort into evaluating their effectiveness, could make a decent claim to be committed to enhancing the status, and also the quality, of teaching.

Analysing these, seeking a core, seeking context-specific (for example, discipline-specific) local variants, and feeding in any research-based accounts, all start to give an account of the status of teaching with which we can work. Our account should enable us to identify enhancement over a baseline. In enhancement work, it’s good to know which way is up.

Academic developers have many possible roles here. They can help universities, schools and departments to identify possible good academic practices that are broadly compatible with the norms and values of the institution, accepting also that one of the more challenging roles of the developers is sometimes to help the institution to shift its norms and values. Developers can make productive connections across the institution …

At some point a developer will also want to ask – ‘Why do we seek to enhance the status of teaching?’

Condensed and adapted from Stefani and Baume (op. cit.)
