
Three pillars of professionalism in academic development

Paper to be presented at ICED / HELTASA Conference, Cape Town, 23 November 2016

David Baume PhD SFSEDA SFHEA

Background

The concept of professionalism, both for those who teach in higher education and for academic developers, remains problematic and contested. For a recent account see Bostock, S., & Baume, D. (2016). Professions and professionalism in teaching and development. In D. Baume & C. Popovic (Eds.), Advancing practice in academic development (pp. 32–51). London: Routledge.

But academic developers are oriented towards finding solutions, or at least to finding and implementing productive ways forward. A sense of ‘forward’ for academic development – “We suggest an overall purpose for academic development – to lead and support the improvement of student learning.” – forms the first sentence of D. Baume & C. Popovic (Eds.), op. cit.

To offer productive ways forward, this paper suggests three pillars of professionalism in academic development:

  1. Being scholarly;
  2. Being effective; and
  3. Enacting principles or values.

Taken together, these three pillars can give developers some confidence in their professionalism, when, as will continue to happen, our legitimacy is challenged. Of course the three pillars need to be implemented reflectively, critically and humanely.

Pillar 1                    Being scholarly

A recent model of scholarship (Baume and Popovic, op. cit., p. 5) suggests three (overlapping and progressing) ways to be scholarly:

  1. Being reflective, critical and analytic;
  2. Using ideas from the literature; and
  3. Contributing to the literature.

Participants in the Southern Africa Universities Learning and Teaching (SAULT) Forum in Windhoek, Namibia in February 2016 reported three main reasons to be scholarly:

  1. To remain current;
  2. To gain new ideas to apply to practice; and
  3. To gain and maintain the respect of colleagues and clients.

Participants said that, currently, for them, being scholarly mainly meant writing for publication. However, their accounts of being scholarly in the future pulled together the three kinds of scholarship described in the Baume and Popovic model. The model thus seemed to provide a useful tool, both for analysis and for planning. (Baume, D. (2017). Scholarship in Action. Innovations in Education and Teaching International, 54(2).)

Question 1: In what particular ways can you become still more scholarly in your practice?

Pillar 2                    Being effective

Like professionalism and scholarship, the concept of effectiveness is sometimes contested. For some it sounds like managerialism. For more on defining and showing effectiveness see Stefani, L., & Baume, D. (2016a). “Is it working?” Outcomes, monitoring and evaluation. In D. Baume & C. Popovic (Eds.), op. cit.

We developers talk about what we do – our actions. We talk about what we make – our outputs. But surely the point of doing and making is to achieve outcomes, to make things better, to make specific things better, in specific and determinable, sometimes measurable, ways? And to do so in ways that embed, preserve and hopefully enhance professional relationships, scholarship, and values?

Question 2: For a specific project, what are you trying to achieve? What are your intended outcomes? How will you know you have achieved them? If necessary, change the outcomes until you can see how to achieve them and how to evaluate their achievement.

Pillar 3                    Enacting principles or values

Many professional standards include statements of underpinning values or principles. Those of the UK Staff and Educational Development Association, for example, include:

  1. Developing understanding of how people learn;
  2. Practising in ways that are scholarly, professional and ethical; and
  3. Valuing diversity and promoting inclusivity. (http://www.seda.ac.uk/core-mission-values)

The hard and vital step, it turns out, is not writing and agreeing such statements, but implementing them.

Question 3: Pick a value (related to education) that you believe in. How well, how far, do you and your institution implement it? What factors aid and impede its implementation? What can you do to implement it more, better?

 

 

Biography

I am an independent international higher education consultant, researcher and writer. My most recent full-time post was with the UK Open University, where with colleagues I wrote courses on teaching in higher education.

I was founding Chair of the UK Staff and Educational Development Association (SEDA); cofounder of the UK Heads of Educational Development Group (HEDG); a founding council member of the International Consortium for Education Development (ICED); and a founding editor of the ICED journal, the International Journal for Academic Development (IJAD). I am the ICED representative on the Southern Africa Universities Learning and Teaching (SAULT) forum.

I have co-edited four books on academic development in higher education, and published over 60 papers and articles. My contributions to academic development nationally and internationally have been recognised by awards from SEDA and ICED.

 

David Baume PhD SFSEDA SFHEA, david@davidbaume.com, @David_Baume, www.davidbaume.com

“Is it working?” Monitoring and Evaluation in Academic Development

Workshop at ICED / HELTASA Conference, Cape Town, 22 November 2016

David Baume PhD SFSEDA SFHEA

Introduction and Rationale

There is a growing demand for accountability in Higher Education. Funders want to know that resources have been both properly and effectively applied. This requirement extends to academic development. The workshop will help participants to demonstrate the effectiveness of their work.

Workshop outcomes

By the end of this workshop you will have:

  1. Determined, at least in outline, the intended outcomes of a development venture for which you have some responsibility. That is, you will have clarified (in negotiation with stakeholders) what the venture is intended to achieve. It could be a workshop, a programme, a development project, writing a policy or strategy – almost anything. Whatever it is, what particular things will make the situation to be improved better?
  2. Planned, at least in outline, how you will monitor progress towards the intended outcomes. This may include the use of intermediate outcomes and waypoints, and adjusting plans and activities (and maybe also intended outcomes) as required.
  3. Planned, again at least in outline, how you will evaluate the success of your venture in achieving its intended outcomes, and draw conclusions for future practice.

Notice that these are not learning outcomes, not statements of what you will be able to do. They are just outcomes – things you will have achieved.

Outcomes and evaluation – the big picture

Here, in summary, is a simple and powerful process for writing and checking intended outcomes, and then for monitoring and evaluating their attainment:

  1. Identify – preferably in negotiation with the other stakeholders – intended outcomes of the project. What is the development project or venture intended to achieve? What exactly do you want to improve? What do you mean by improved?
  2. How will you find out if the project or venture has been successful? What shows success?
  3. Check. If it’s hard to write an evaluation plan, then revisit and rework the outcomes, then indicators of success, until you can see how to monitor and evaluate their attainment.
  4. Plan and schedule how you will run the project to attain the goals. It’s generally better to do a project with people than for people. If it’s a big project, you may need to set interim goals, waypoints, which you can monitor.
  5. Run the project!
  6. Keep on asking “Is it working?” “Are we clearly moving towards our goals?” “Do we need to adjust – our methods? Maybe, even, sometimes, adjust our goals?”
  7. Towards the end, start to evaluate and report; with evidence; on whether the outcomes were achieved; how well they were achieved; what wasn’t achieved; any unexpected outcomes; what should be done next; and, perhaps above all, what has been learned?

Workshop shape and activities

You will see that the activities are directly linked to the outcomes.

Here is the overall shape of the workshop around each activity. There may be some variations:

  1. I’ll say a little about each activity in turn.
  2. Then I’ll ask you to do the activity, or at any rate to start it.
  3. And then I’ll ask you to discuss it with a neighbour.
  4. I’ll ask you to share some of your answers.
  5. Sometimes, I’ll have a public conversation with you about your answer.

Activity 1               Read the script below, and the comments that follow it.

Activity 2               Choose a development project or venture of some kind.

It should be real. You should have some responsibility for it. And you should be at or near the start.

Activity 3               For this chosen venture, decide the intended outcomes, what the project is intended to achieve and to improve.

Activity 4               Plan how you will know if it has been successful. If that proves difficult, change the outcomes until you can see how to evaluate their attainment.

Activity 5               Pretend that the project is over. Draft a realistic evaluation report. What does that tell you about the intended outcomes, and about how you should have run the project?

 

A (hypothetical) conversation between the Head of Department X and an Academic Developer

HODX Thanks for coming. Look, what it is, the students are complaining about feedback. Could you run a few workshops to help us sort this out?
AD What in particular is bothering students?
HODX Well, they say sometimes the feedback is late. Sometimes they can’t understand it, can’t use it.
AD Okay. A couple of issues here. On the first one, late feedback. What is the department policy on turnaround time for student work?
HODX We haven’t really got a policy on it. Staff don’t like being tied down. We really should get a policy. But the staff are very busy …
AD I understand. Happy to help you work out a policy when you’re ready. But in the short term, we could do some surveys, even interviews. Ask the students how soon they’d like the feedback …
HODX They’ll say they want it tomorrow! No chance!
AD Well, let’s find out. There may be ways to do feedback a lot faster. But yes, we could ask staff how soon they think it is feasible to turn work around. We will get to some sort of compromise. The outcome we want here is …
HODX … a lot fewer student complaints about the late feedback!
AD That would be a good start. Second issue – students don’t understand the feedback. What do you think is going on there?
HODX (Pause) I think the problem may be, staff know the material they are teaching so well that their feedback is a bit – concise? I remember what you said at that teaching workshop last year – staff can forget what it’s like not to understand.
AD That’s possible. Well, we could ask students what would make feedback more comprehensible. Get their views back to the staff. Then, run a session for staff on making feedback more comprehensible, based on what the students say, and adding in a few ideas from the literature. Develop some guidelines. Then, after a few months, we could find out …
HODX … if students are finding the feedback more understandable.
AD We could do that.
HODX You don’t sound sure.
AD (Pause) There may be a bigger issue here. Not just “Do students understand the feedback?”, but; you hinted at it; “Are students using the feedback to help them decide what to keep on doing right and what to do differently next time?” After all, that’s the point of feedback, isn’t it?
HODX Interesting. How would you tackle that?
AD The usual. Student surveys. Student interviews. Actually, we could try something a bit newer. We could facilitate some conversation between students and staff about this, dig a bit deeper. That could be very useful. If staff would do it?
HODX I’m sure I could persuade a couple of them! What are we trying to achieve here?
AD I guess – students making good use of feedback to inform their future studies?

Of course, it’s not just about the kind of feedback staff give. It’s also about the pattern of assignments, maybe even the shape of the whole course.

HODX Whoa! Where did that come from?
AD If the next assignment is completely different, or if the timing is wrong, or students aren’t helped to use the feedback, then maybe some students aren’t using feedback because it’s just not possible for them to use it? Issues like that.
HODX You always want to change the world, don’t you! Let’s stick with speeding up feedback and making it more useful for now. We can get to redesigning the course at the major review time in, what is it, two years.
AD Always happy to help.

Notice in this conversation:

  • The use of the idea of outcomes and evaluation, by both the HOD and the AD.
  • The AD working to identify what is going on, digging a bit deeper.
  • A reference to using the literature.
  • The AD seeding ideas for possible future work.
  • Negotiations about what is feasible now.
  • A good working relationship, with mutual respect.

Reference

Stefani, L., & Baume, D. (2016). “Is it working?” Outcomes, monitoring and evaluation. In D. Baume & C. Popovic (Eds.), Advancing Practice in Academic Development (pp. 157–173). London: Routledge.

 

 

David Baume PhD SFSEDA SFHEA, david@davidbaume.com, @David_Baume, www.davidbaume.com

 

Hypothetical Case Study on Clarifying Goals: ‘Enhancing the status of teaching’

A university policy aim might be to enhance the status of teaching. This is a laudable aim; but vague.

A non-rhetorical question to begin with: How would you identify the status of teaching in your university, and track its changes over time? Let’s try to sharpen the aim.

We might try to achieve a more rigorous definition. We could negotiate university meanings – university meanings, not the meanings; we are developers, not writers of dictionaries – for terms including enhance, status, even teaching …

Alternatively, we could take a more direct approach and ask the question – What would indicate an enhanced status of teaching? We could decide, again using ideas from the literature, and/or we could ask within the university. We could devise and implement a survey to identify the current status of teaching. This would rapidly reveal some of the many meanings of the status of teaching.

Possible indicators of status accorded to teaching:

  • A formal teaching awards scheme – a plausible indicator that an institution is seeking to enhance the status of teaching. Beyond this, the number of scheme applications and awards each year, the rewards given to and more broadly the fuss made of award winners; these are all further indicators of an institution taking seriously the enhancement of the status of teaching.
  • Promotions criteria that include teaching – indicating that teaching is being valued.
  • (Also – are the criteria widely believed to be being used?)
  • Teaching ability being emphasized in recruitment advertisements, and taken seriously in selection processes, would be a further positive sign …

What these and other possible approaches have in common is that they provide encouragement and support, at institutional and/or local level, for staff to teach well, to improve their teaching, and to enhance the status of teaching.

Of course most of these processes could be implemented well or badly, strongly or feebly. All could be respected or not by academic staff, managers and students. All could be subverted or diminished by other policies and strategies which value, or are perceived as valuing, other kinds of activity – most obviously research or administration – more highly than teaching. Nonetheless, a university implementing such measures, and putting some effort into evaluating their effectiveness, could make a decent claim to be committed to enhancing the status, and also the quality, of teaching.

Analysing these, seeking a core, seeking context-specific (for example, discipline-specific) local variants, and feeding in any research-based accounts, all start to give an account of the status of teaching with which we can work. Our account should enable us to identify enhancement over a baseline. In enhancement work, it’s good to know which way is up.

Academic developers have many possible roles here. They can help universities, schools and departments to identify possible good academic practices that are broadly compatible with the norms and values of the institution, accepting also that one of the more challenging roles of the developers is sometimes to help the institution to shift its norms and values. Developers can make productive connections across the institution …

At some point a developer will also want to ask – ‘Why do we seek to enhance the status of teaching?’

Condensed and adapted from Stefani and Baume (op. cit.)

Co-operation in Development – Summary

David Baume

The survey

Nine responses were received to a survey on cooperation in development, from academic / educational development units.

Main development functions of units

The most frequently mentioned development functions can be grouped as:

  1. Staff development, including training teachers for a qualification, accrediting teachers, CPD, and supporting staff and faculties;
  2. Educational development, including improving learning, teaching and assessment, and curriculum development;
  3. Institutional support, including policy and strategy development and projects;
  4. Student development and support;
  5. Learning technology development, implementation and evaluation; and
  6. Other functions – postgraduate support, research into teaching and learning, horizon scanning, and QA.

Beyond what once might have been classic academic development functions; perhaps 1, 2, 3 and some of 6; we now see also 4 and 5.

Other development functions elsewhere in the institution

These include HR, learning technology, student development, student services (including overseas students), faculty committees and research.

Frequency of contacts between the academic development unit and these other development functions:

  • Annually or less: 1
  • A few times each year: 13
  • Most or every month: 8
  • Most or every week: 5

Nature of contacts between the academic development unit and these other development functions:

  • Formal: 3; Informal: 4; Both: 13
  • Policy / strategy: 3; Operational: 2; Both: 10

The largest scores for frequency of contact are ‘a few times each year’ and ‘most or every month’; for nature of contact, ‘both’ scores highest in each pair (formal / informal, and policy / strategy / operational).

Comments on what above all supports effective co-operation on development in your institution

Personal relationships and communicating (3 mentions each); the encouragement of leadership (2); and the alignment of strategy (1). Also mentioned are ‘seeing the person face-to-face’, ‘our small campus culture’, goodwill, energy, focus and effective resourcing.

Comments on what above all impedes effective co-operation on development in your institution

Structural factors (5 mentions), where the response was amplified, included organizational hierarchies, constant restructuring, silo working, geography and a lack of specific leads for specific functions. Communications factors (3) included ‘having EVERYTHING online via email’ and the lack of time. One respondent reported the absence of goodwill, energy, focus and leadership as the main inhibitors of cooperation in development.

Comments on nature and frequency of co-operation with other units

These four longer comments from respondents suggest some of the complexity and benefits around inter-unit cooperation:

  1. As a sole operator in learning and teaching facilitation across the university, I establish currency in my role, by ‘supporting’ (not teaching, as [I do not have] an academic role) the PGCert provision. For a period, my role was positioned in the [learning and development function] of our Human Resources department, but I was [recently] re-located, as much of the ‘development’ I facilitated did not naturally align with colleagues’ specialisms in [organisational development]. I was therefore moved to [a unit concerned with research], from where I run an annual programme of CPD in learning and teaching and facilitate the university’s CPD for professional recognition. Being in research, I also support programmes around researcher development, particularly in terms of teaching/supporting learning, graduate teaching schemes, etc.
  2. Our university is small and we often work with [the learner support function] to share data, review the best approaches to take for all students, and progress agendas with senior colleagues. Good personal relationships between these teams mean we routinely pick up the phone and use each other as sounding boards for work of common interest.
  3. [Relations are] good, and they are very helpful, now that [another unit has] realised we have been doing serious education research and they keep us in the loop and support us to bid for funding opportunities. The university has recently appointed a [senior post for] Education and we are hopeful they will enable the different bits to coalesce.
  4. At the formal level, these committees (depending on the Chair!) can be procedural – so it is possible to ask critical friend questions – but the real value is informal, getting into conversations with colleagues about developmental work. In many respects the people who are key on these committees are ones our team has known since they were on the PGCert and these relationships have built up over time.

‘Churn’, both in staffing and in structures, was also a theme, along with task-specific rather than general co-operation.

Overview and possible implications

Co-operation between development units / functions in higher educational institutions is valuable, difficult, and mostly attainable. Good personal communications and relationships aid co-operation; structural factors and poor communications impede cooperation. Developers may wish to consider (1) working to establish good personal / professional relations, perhaps initially around specific, rather than big picture, co-operations; (2) more broadly being prepared to work across structural boundaries in pursuit of institutional goals and priorities; and (3) assuming that, whatever the current actual or perceived structural obstacles and political difficulties, no-one actually wants to prevent us from doing good stuff.

Co-operation in development

Notes for a survey and session at SEDA November 2016 Conference 

The importance of collaboration in learning and development has long been stressed. Working with and developing learning communities is a SEDA value. Walsh and Kahn (2009) provide theoretical underpinnings for and vivid examples of collaboration. Baume and Popovic (2016) contains many accounts of the importance of collaboration. The authors describe “the increased blurring of and collaboration between development functions” (p. 293). At greater length, we say:

Neighbours

“Not all problems, opportunities or possible sites for action in higher education fall tidily under the heading of teaching, learning, assessment, course design, educational development, staff/faculty development, student development, advice and guidance, personal tutoring, language development, numeracy development, learning technology, management, researcher development, research supervision development, administration, support for students with specific learning difficulties, international education, support for students from overseas, equality of opportunity, graduate careers education and advice, employability, community links, open and distance learning, learning resources, estate planning, designing and equipping teaching and learning spaces, learning analytics, organizational development, library and information services, etc. … This suggests, if it were not already obvious, the great need for the various university development functions, including but not limited to those above, to cooperate.”

This is all very well. But organisational and political pressures can militate against collaboration. We all believe in cooperation. The issue which this discussion paper session will tackle is – how can we make it happen, in the real world of current higher education?

Baume, D., & Popovic, C. (Eds.). (2016). Advancing Practice in Academic Development. London: Routledge.

Walsh, L., & Kahn, P. H. (2009). Collaborative working in higher education: The social academy. New York: Routledge.

 

Bloom and Course Design – Disaster Strikes!

Introduction

This follows https://davidbaume.com/2015/09/28/learning-and-knowledge-bloomin-obvious/ . I’d start there, if I were you.

Course design – bottom up?

So, Bloom’s taxonomy as a tool for analysing knowledge has problems. But disaster strikes when the taxonomy is used as a tool for course design. (I am here criticising uses of Bloom, not the taxonomy itself.)

When you’re designing a course, the temptation is hard to resist. Bloom offers us a taxonomy, a hierarchy, of knowledge. Each higher level builds on those below. Obviously a building must start with a foundation, perhaps built in a hole in the ground. So, to design a course, start at the lower levels, on a solid foundation of knowledge, of things known, of facts and theories. Then build on up.

First, teach students the facts. Then teach them to understand the facts. Then to apply the understood facts. Then to analyse, and finally to evaluate and synthesise, in whichever order. As easy and as logical as Lego. Just another brick in the wall.

Trouble is, it’s nonsense. It’s not how people learn. It may or may not be a useful description of levels of knowledge. As a way to plan learning, it’s a disaster.

Do the thought experiment for a course or subject you know. What are a few basic bits of knowledge in your subject? Imagine teaching students to remember these, then either to recall them in an open-response question, or to recognise them among – wrong facts, I suppose – in a multiple-choice question. No context for the knowledge, no account of why it is important, or what it might mean. No higher level stuff. Just the facts. Learned and remembered.

OK, now let’s move on to understanding these facts, to explaining them, expressing them in different ways. Done.

Now, let’s learn to apply this knowledge and understanding, to use it to address questions and problems. And then, having applied it, to analyse it. And finally, let’s critically evaluate and synthesize these facts that we have learned, and then learned to understand and to apply.

This is ludicrous.

I have been taught this way. It really was ludicrous, and of course quite ineffective. Even as we were being taught the basic facts, we were trying to make sense of them, each in our way. Some of us were trying to understand the facts, to reformulate them in terms that made better sense to us, that linked to, whether supporting or contradicting, things we already knew, and maybe even partly understood. Some of us sought to understand the facts by trying to apply them, although we were not always clear to what kinds of situations or problems the facts could be applied. Some of us tried to critique or evaluate the facts, although again we had nowhere to stand to critique or evaluate, and were certainly not encouraged to do so. We were being lectured to, with no opportunity to ask questions, to explore or to discuss, to make our own sense. We all did some memorising. But it all felt rather pointless.

I suspect I may even have taught this way a few times, in my early days. To those students, I apologise.

What has gone wrong here?

How long have you got?

A dangerous metaphor

A metaphor has led us astray. A diagram has led us astray. The metaphor of foundation and building, and the pyramid diagram in which Bloom’s taxonomy is often presented, simply do not work for learning. Why not?

A building, whether of steel, concrete or Lego, doesn’t start with a foundation. It doesn’t even start with a plan. It starts with an idea, maybe even with a vision, with a need, with a specification of some form. Only when this has been agreed can we develop a plan. Quite late in the process, concrete is poured, steel is erected, and bricks are laid – or clicked. These ‘basics’ and ‘foundation’ metaphors just don’t work.

The basics?

Let’s come at this from another angle, and maybe try to rescue Bloom, to find good ways of using the taxonomy in course design.

It’s appealing to suggest that we should start from the basics. What are they? What are the basics of your subject? Go on, think about it for a minute.

What are the basics of your subject? Are they: Facts? Theories? Principles? Purposes? Problems? Values? Ways of thinking? Ways of acting? Something else? Some complex combination of all of these and others?

For what it’s worth, and as an illustration, after thinking for a long time about my own discipline / profession / area of work, academic development, I got to the view that its basis takes the form of a purpose or goal: to improve student learning. Of course academic development has many intermediate purposes, and many methods and theories and ways of thinking and working and the rest. But at its heart, I feel, is a purpose. “Improving student learning”. Which, as I write, I realise is a subset of a larger goal – to make the world a better place. Which is hopefully a goal of most if not all professional and scholarly activity. (Add quotation marks to any of these words and phrases as you need.)

But, whatever the basics of your discipline, where do they lie on Bloom’s taxonomy?

Probably more towards the top than the bottom.

I’m not against knowledge and comprehension and application. They have their place. But, I would suggest, they are important mainly in how they are used at the higher levels. I’ll probably give this more detail in a future post. Knowledge and comprehension and application may indeed be required for the attainment of higher-level goals. But this absolutely doesn’t mean that the pursuit of high-level goals has to start at the bottom.

This is the mistake that is often made in the use of Bloom’s taxonomy in course design. The relationships between the different levels of the taxonomy are used to deduce, completely wrongly, a sequence of teaching. The structure of knowledge doesn’t determine the best way of learning. We know a bit about learning – the importance of a wish or need to know, learning as a process of making rather than simply absorbing sense, learning as an active business incorporating above all action and reflection, reflection often being aided by feedback. We need to apply what we know about learning to whatever it is that is to be learned. That’s the only safe way to produce a good course design, good learning activities, and hence good student learning.

Conclusion

So – Bloom’s taxonomy may have some limited uses as a tool for analysis. But I can’t find a good use for it in course design. Can you? I’d love to hear.

Next time, moving away from Bloom, I’ll suggest some more fruitful bases for course design and the planning of teaching.

Sources

Bloom, B. S. (1956). The Taxonomy of Educational Objectives: Handbook 1 (1st ed.). London: Longman Higher Education.

Bloom, B. S. (2000). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Complete ed.) (L. W. Anderson & D. R. Krathwohl, Eds.). New York: Longman Publishing Group.

Learning and Knowledge – Bloomin’ Obvious?

Introduction

This argument will grow over several posts. I’m developing the argument as part of the process of developing and writing a book, which I currently think will be about learning in higher education. Wish me luck!

Learning and Knowledge

It can be hard to talk about learning. For example – what is being learned? Among other things, subjects, of course. Knowledge.

Ah, knowledge. Or to give it its technical name – stuff.

When we educators talk about knowledge, we usually mean much more than just stuff. By knowledge we can also mean ways of thinking, ways of acting, ways of being; forms of meaning, principles, theories, values, and much besides. The sorts of ambitious and demanding knowledge that we rate highly.

But the word knowledge can still pull us down. Whatever higher-level things we want knowledge to mean, knowledge also, seemingly inexorably, seems to end up meaning, well, stuff. Things. Objects.

I’m not sure how this happens. Suggestions welcome. But it happens.

Why do I call it stuff? Because of what we do with it. Learners learn it, and teachers teach it. And this in turn can bring down the whole educational show.

How?

On a bad day, of which there are many, learning (stuff) becomes memorizing (stuff). Knowing (stuff) becomes having memorised (stuff). Teaching (stuff) becomes telling (stuff). Assessing stuff becomes finding out whether people know stuff, which tends to mean checking, either if they can recall stuff (through open response questions) or if they can recognise right stuff among wrong stuff (multiple choice questions.) Stuff. The downward pull of knowledge as stuff is strong. Not irresistible, but strong.

What to do?

We might choose to be explicit about the full range of what we mean by knowledge. Or we might want to stop talking about knowledge for a while, and talk instead about the full range of desired (and assessed) types and outcomes of learning. Or we might simply decide to stop treating knowledge like stuff. Stop telling it. Stop seeing if they’ve remembered it. Instead, teach across this full range, support and expect and assess learning across the whole range. Concentrate on the higher levels. Knowledge, alas, can drag you down.

But how to describe this range, these levels?

Bloomin’ Obvious?

Levels of learning and knowledge

This is the problem that Bloom is addressing in his taxonomies (Bloom, 1956 and 2000) – how to classify in some usable way the multiple types and levels of learning that we might see, expect, hope for or teach towards. The taxonomies were originally devised within a behaviourist educational paradigm. This paradigm saw teaching as providing prompts and stimuli which would provoke appropriate responses, learning and evidence of learning, which was then rewarded. The paradigm worked for rats and pigeons – why not for students?

Bloom produced classifications, taxonomies, for the cognitive, affective and psychomotor domains. That for the cognitive domain, considered here, has endured longest. It still features, sometimes in its updated (2000) version, in courses to train university teachers. It offers a classification of types of educational objectives or, as we now say, learning outcomes, of things people can do with their rational brains.

Bloom’s taxonomy – levels by version

Version   Level 1       Level 2         Level 3       Level 4      Level 5      Level 6
1956      Knowledge     Comprehension   Application   Analysis     Synthesis    Evaluation
2000      Remembering   Understanding   Applying      Analysing    Evaluating   Creating

The taxonomy has some use as a tool for analysis, particularly at level 1, knowledge / remembering. It’s good if we can be honest with ourselves and our learners, and say “You need to know / remember (which may mean recall or recognise) this.” But in a connected world, with well-indexed and sometimes authoritative knowledge only a skilful click or swipe or two away, our students may ask – “Why do we need to know?” This is a conversation well worth having. I often have it with myself, as a future post will show.

But the taxonomy crumbles at higher levels.

Bloom and Assessment

Crucially – when we ask students to show that they understand, or ask them to apply or analyse or evaluate or create, it is hard to be sure that they are not simply remembering a previous act of understanding or application or analysis or evaluation or creation. This is a giant hole in the security and integrity of assessment, and in the fiction that assessment faithfully assesses higher-level abilities.

How does this hole occur?

Most tutors prepare their students for examinations (let’s stick with exams for now, although the argument broadly works for other forms of assessment). Tutors typically set or suggest broadly similar assignments, essays, questions and topics, and perhaps offer feedback; or suggest what particular content the exam may address, and perhaps what issues, arguments and approaches may be preferred.

If the examination question is sufficiently different from the pre-assessment assignments, the student may actually need to work at higher levels. They may need to adapt or apply a method or argument to a slightly different setting or content. But that may be as high as we get. “Do something completely original?” In an examination? There might be riots.

Often, we simply cannot know the true nature of the assessment task. The apparent assessment task, even accompanied by the intended learning outcomes and assessment criteria, does not confidently tell us what level of the taxonomy a student must work at to tackle the question satisfactorily. To judge this we should need to see most if not all of a student’s previous work and learning and the feedback they received ….

Anyway, the levels are just not as clear as they appear. I have not seen a study of how reliably or how validly academics classify assessment tasks or learning outcomes against Bloom. I’d love to see such a study – it must have been done. But I have seen disagreements among academics about the highest level required by a task. (Most assessment tasks require at least some of the lower levels.)

Conclusion

So, Bloom has serious weaknesses as a tool for analysing knowledge. But a future post will see the much bigger horrors that can occur when Bloom is used as a basis for course design.

References

Bloom, B. S. (1956). The Taxonomy of Educational Objectives: Handbook 1 (1st ed.). London: Longman Higher Education.

Bloom, B. S. (2000). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Complete ed.) (L. W. Anderson & D. R. Krathwohl, Eds.). New York: Longman Publishing Group.

The Proposed Teaching Excellence Framework (TEF) – Some Thoughts on Value Added as a Metric

The purpose of teaching is learning. We measure learning mostly through qualifications. It’s easy to compare a student’s qualifications at entry with their qualifications at exit, if we want to. All we need is a common measurement scale that embraces entry and exit qualifications, differentiating between different grades where applicable. 

Maybe something like this:

A-level – grades A* to E, 100 down to 50 points 

(A-level points for a student would of course be summed across the subjects they took.)

Cert HE – 600 points

DipHE / FD  – 800 points

First Degree, 1st class down to pass

These points are not the same as credit points, which describe the number of hours of study rather than the qualifications attained.

We might also include higher taught degrees – the same logic applies, I think. 

The actual numbers are arbitrary, although they are ordered – higher always means higher. But this arbitrariness doesn’t matter, as long as the entry and exit qualification scales don’t overlap. All we will be doing is subtracting points at entry from points at exit – that is, identifying the value added for each student. These differences, these measures of value added per student, will then be averaged by institutions, schools, programmes, whoever’s value added we are currently interested in. And, once calculated, they can be compared. We are only interested in difference.

There’s a choice to be made about how we deal with completion rates. Is completion rate a measure of teaching quality, or a measure of how life affects students? Or both? I’m inclined to suggest that we only compute value added for those who complete, which may mean achieving an intermediate qualification, typically CertHE or DipHE. As always with metrics, we need to be explicit about what we are proposing. There is not always a right answer.
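As a quick illustration of the arithmetic, here is a minimal sketch in Python. It uses the points values suggested above (A* to E from 100 down to 50 per subject; CertHE 600; DipHE / FD 800); the intermediate grade values, the student records and the function names are placeholders of mine, not an official scale; and, following the choice just described, value added is computed only for students who complete.

# Minimal sketch of the value-added calculation described above.
# The intermediate A-level grade values (90-60), the student records and
# the function names are illustrative assumptions, not an official scale.

ENTRY_POINTS = {"A*": 100, "A": 90, "B": 80, "C": 70, "D": 60, "E": 50}  # per A-level subject
EXIT_POINTS = {"CertHE": 600, "DipHE": 800, "FD": 800}                   # exit qualifications

def entry_score(a_level_grades):
    # Sum A-level points across the subjects a student took.
    return sum(ENTRY_POINTS[grade] for grade in a_level_grades)

def value_added(student):
    # Points at exit minus points at entry, for one student.
    return EXIT_POINTS[student["exit_qualification"]] - entry_score(student["a_levels"])

def mean_value_added(students):
    # Average value added across completing students only, for whichever
    # unit (institution, school, programme) we are currently interested in.
    completers = [s for s in students if s["completed"]]
    return sum(value_added(s) for s in completers) / len(completers)

students = [
    {"a_levels": ["A", "B", "B"], "exit_qualification": "DipHE", "completed": True},   # 800 - 250 = 550
    {"a_levels": ["C", "C"], "exit_qualification": "CertHE", "completed": True},       # 600 - 140 = 460
    {"a_levels": ["B", "C"], "exit_qualification": "CertHE", "completed": False},      # excluded: did not complete
]

print(mean_value_added(students))  # (550 + 460) / 2 = 505.0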

This approach to value added only uses existing data – no new data need be collected. Which is good.

Of course, what have historically been called ‘recruiting’ universities would score higher on value added than ‘selecting’ universities. But the resultant debate would flush out views about the value of degrees from different institutions. These views may or may not survive sunlight. But these views are probably best expressed and tested.

Anyway, value added is only one element of the framework. But it’s a tough one to argue against.

The problems with measuring value-added are not technical. They are political. But let’s have the debate!

This post was prompted by:

“No convincing metrics have been offered so far on how to measure added value without involving time consuming pre-and post tests.” in http://sally-brown.net/wp-content/plugins/download-monitor/download.php?id=238  from Sally Brown and Ruth Pickford, from here, and the subsequent LTHE Tweetchat, Storified here.