- Think faster than you can type?
- Talk faster than you can type?
If that’s you, and you haven’t tried voice recognition, or not for a while, then maybe, now, you should.
(Please don’t worry about the odd occasion when you talk faster than you are thinking. We all do it. In my case, usually after wine.)
Here and now:
I am dictating this. Into my iPhone. About as fast as I am thinking it. (For some reason I find that voice recognition is faster and more accurate in iOS than in macOS Sierra. But any account of the relative capabilities of different softwares is likely to be overturned by the next iteration of the software. I suggest, start with what you have.)
Seeing my spoken words appear on the screen in front of me is still, for me, a small kind of magic.
I was a fan of the idea of voice recognition before it was, frankly, much good. My efforts with early versions of Dragon were – well, disappointing. Training to my voice was slow; the quality of the voice capture was often poor; and it took its time transcribing. (A colleague claimed that he trained his Dragon installation by reading it Alice in Wonderland. Which explains a lot about his writing style. And, occasionally, his content.) I believe Dragon is a lot better now. As I am sure are the others – it’s a jungle out there, and evolution is rapid and brutal, but great for the customer.
Nowadays: no training. Straight in. Fast, or fast enough – almost as fast as I speak, certainly as fast as I can speak clearly. Accurate enough most of the time to mean that, even allowing for the necessary editing, voice recognition is quicker than my (not very good) typing.
Voice recognition, certainly the iOS version, tries very hard indeed to make sense of what it hears. Sometimes, this takes it a long way away from what you meant and said. The main practical implication for me is – review often. Because, if you get a few paragraphs ahead before you check, you simply may not be able to reconstruct what you originally meant / said. And that precious pearl may be lost forever.
But two things will happen over (quite a short period of) time.
- The system will get to know you and your speech better, thereby becoming more accurate. This is fairly obvious.
- More interesting, you will make the curious accommodations required to speak written English. I’ll say more about this.
Writing and speaking:
Crudely: writing and speaking are different kinds of language, different forms of expression.
Here’s a short test you might want to do. If you have access to voice recognition – and you almost certainly have, either on your smart phone or via Google – try it now.
- Turn on voice recognition and then talk about something you’re interested in for a couple of minutes, as if you were talking to a friend or colleague.
- Write / type for a couple of minutes on the same topic.
- Compare the transcript of your speaking with the words that you typed/wrote.
What do you notice?
Obviously, I don’t know. And the comparison isn’t simple:
- You may have been so conscious of the fact that your spoken words were being transcribed that you spoke something much closer to written language than your more normal speech.
- Or your writing for a colleague may have been much less formal than if you had been writing for a wider audience.
But it may be that:
- Your transcribed speech took more words to say roughly the same thing than did your writing / typing;
- Your transcribed speech was less formal, more conversational, than your written / typed text;
- In particular, your transcribed speech may have conformed less well to grammatical conventions, especially to clear and conventional sentence structure and breaks between ideas or sentences.
- Unlike most people, who generally talk mostly in phrases, you may already talk in complete sentences. Or even paragraphs. Or even chapters. Or even books. If you are one of these lucky, talented people, then voice recognition will be largely unproblematic for you, and you will simply become much more productive. I envy you.
Of course we all have a range of forms of both spoken and written expression. We can speak formally or informally, we can write formally or less formally. Audience and context make a big difference to how we speak and write.
But the fact remains that, for voice recognition to give you all the potential advantages of speed, you do need to learn to speak in something like the way or style in which you write, or want to write. And, as I am discovering as I dictate this paragraph, this is hard work, and requires fierce concentration – because, when you are using voice recognition, you are writing; in the specific sense of getting words onto the screen; faster perhaps than you could if you were typing.
But speaking into voice recognition slowly becomes easier. And then you will begin to see typing as it is – another form of technology-impeded human action. Like driving a car with manual gear change – “stick shift”, as our American friends call it. Or indeed driving a car at all in the age of the Google car and Tesla autopilot. Or repeatedly typing (or speaking) the same information into form after form after form. Or – insert your own technological bête noire here.
I over-simplified earlier. I don’t think it’s as simple as learning to speak in the same kind of written English that you used to produce. I write Tweets, emails, blogs, articles and book chapters through the use of voice recognition software. Here are some things I have noticed:
- My style has become slightly less formal as I have made the transition to voice.
- My sentences initially became longer, sometimes I felt too long – at first I used to solve this problem in the editing, whereas now (as is not demonstrated in the current sentence!) I have taught myself to speak in short sentences again. When appropriate. Or when I remember.
- On a good day my writing is a little more vivid – I’m less likely to censor short flights of imaginative expression in speech than I am in writing. Of course, if I don’t like these flights, or don’t think them appropriate to the intended outlet or audience, I can always cut them out. And I do. Sometimes with a tear. Following Faulkner’s advice, endorsed by Stephen King, to “kill all your darlings.”
But the gain in speed I achieve (when I dictated the words “I achieve” just now they were transcribed as “hi cheese”, which briefly entertained me) through the use of voice recognition is so great that the necessary additional editing time still leaves me ahead.
What about the quality?
Quality, I feel, is as much a consequence of:
- The research and thought and planning that go on before the thoughts are expressed either in speech or in writing, and
- The process of editing
as it is a consequence of the process of committing thought to screen.
Although … some of my better quality ideas make it to the screen because I capture them in speech, whereas I might well have lost them by the time my fingers caught up with my occasionally fleeting thoughts. This capture of what would otherwise have been lost just now happened, in the previous sentence. (The rather informal starting of a new paragraph with “Although” was undertaken as I dictated it, because I was conscious that the sentence was becoming rather long, but I didn’t want to lose the train of thought as I struggled to find an editing solution. Anyway, I want in this post to show how, now, for me, speech becomes writing.)
I prefer to use voice recognition when I am alone. I share an open plan office with my partner, and I still feel a little embarrassed when talking to a machine while Carole is in the room, although she assures me that, as someone familiar with open plan office working, she is bothered by it not at all. But that doesn’t seem to stop it bothering me!
Try voice recognition. Academics exhibit greater strangenesses than speaking, slightly after the manner of a BBC radio announcer from the 1930s, into a telephone or a computer. Pretend you are that rude person on the train, who shares (often) his enunciated thoughts with the carriage.
The learning curve for fluency with voice recognition is long but gentle. The benefits, including speed, start to show very early on.
And you may discover, as you proceed, that the computer isn’t the only one learning to recognise your speech. You may also become better able to recognise, appreciate, enjoy and improve your own various voices.
Let me know.
Paper (OK, workshop) at ICED / HELTASA Conference, Cape Town, 23 November 2016
David Baume PhD SFSEDA SFHEA
The concept of professionalism, both for those who teach in higher education and for academic developers, remains problematic and contested. For a recent account see Bostock, S., & Baume, D. (2016). Professions and professionalism in teaching and development. In D. Baume & C. Popovic (Eds.), Advancing practice in academic development (pp. 32–51). United Kingdom: Routledge.
But academic developers are oriented towards finding solutions, or at least to finding and implementing productive ways forward. A sense of ‘forward’ for academic development – “We suggest an overall purpose for academic development – to lead and support the improvement of student learning.” – forms the first sentence of D. Baume & C. Popovic (Eds.), op. cit.
To offer productive ways forward, this paper suggests three pillars of professionalism in academic development:
- Being scholarly;
- Being effective; and
- Enacting principles or values.
Taken together, these three pillars can give developers some confidence in their professionalism, when, as will continue to happen, our legitimacy is challenged. Of course the three pillars need to be implemented reflectively, critically and humanely.
Pillar 1 Being scholarly
A recent model of scholarship (Baume and Popovic, op. cit., p. 5) suggests three (overlapping and progressing) ways to be scholarly:
- Being reflective, critical and analytic;
- Using ideas from the literature; and
- Contributing to the literature.
Participants in the Southern Africa Universities Learning and Teaching (SAULT) Forum in Windhoek, Namibia in February 2016 reported three main reasons to be scholarly:
- To remain current;
- To gain new ideas to apply to practice; and
- To gain and maintain the respect of colleagues and clients.
Participants said that, currently, for them, being scholarly mainly meant writing for publication. However, their accounts of being scholarly in the future pulled together the three kinds of scholarship described in the Baume and Popovic model. The model thus seemed to provide a useful tool, both for analysis and for planning. (D. Baume (2017). Scholarship in Action. Innovations in Education and Teaching International, 54(2))
Question 1: In what particular ways can you become still more scholarly in your practice?
Pillar 2 Being effective
Like professionalism and scholarship, the concept of effectiveness is sometimes contested. For some it sounds like managerialism. For more on defining and showing effectiveness see Stefani, L., & Baume, D. (2016a). “Is it working?” Outcomes, monitoring and evaluation. In D. Baume & C. Popovic (Eds.), op. cit.
We developers talk about what we do – our actions. We talk about what we make – our outputs. But surely the point of doing and making is to achieve outcomes, to make things better, to make specific things better, in specific and determinable, sometimes measurable, ways? And to do so in ways that embed, preserve and hopefully enhance professional relationships, scholarship, and values?
Question 2: For a specific project, what are you trying to achieve? What are your intended outcomes? How will you know you have achieved them? If necessary, change the outcomes until you can see how to achieve them and how to evaluate their achievement.
Pillar 3 Enacting principles or values
Many professional standards include statements of underpinning values or principles. Those of the UK Staff and Educational Development Association, for example, include:
- Developing understanding of how people learn;
- Practising in ways that are scholarly, professional and ethical; and
- Valuing diversity and promoting inclusivity. (http://www.seda.ac.uk/core-mission-values)
The hard and vital step, it turns out, is not writing and agreeing such statements, but implementing them.
Question 3: Pick a value (related to education) that you believe in. How well, how far, do you and your institution implement it? What factors aid and impede its implementation? What can you do to implement it more, better?
I am an Independent international higher education consultant, researcher and writer. My most recent full-time post was with the UK Open University, where with colleagues I wrote courses on teaching in higher education.
I was founding Chair of the UK Staff and Educational Development Association (SEDA); cofounder of the UK Heads of Educational Development Group (HEDG); a founding council member of the International Consortium for Education Development (ICED); and a founding editor of the ICED journal, the International Journal for Academic Development (IJAD). I am the ICED representative on the Southern Africa Universities Learning and Teaching (SAULT) forum.
I have co-edited four books on academic development in higher education, and published over 60 papers and articles. My contributions to academic development nationally and internationally have been recognised by awards from SEDA and ICED.
Workshop at ICED / HELTASA Conference, Cape Town, 22 November 2016
David Baume PhD SFSEDA SFHEA
Introduction and Rationale
There is a growing demand for accountability in Higher Education. Funders want to know that resources have been both properly and effectively applied. This requirement extends to academic development. The workshop will help participants to demonstrate the effectiveness of their work.
By the end of this workshop you will have:
- Determined, at least in outline, the intended outcomes of a development venture for which you have some responsibility. That is, you will have clarified (in negotiation with stakeholders) what the venture is intended to achieve. It could be a workshop, a programme, a development project, writing a policy or strategy – almost anything. Whatever it is, what particular things will make the situation to be improved better?
- Planned, at least in outline, how you will monitor progress towards the intended outcomes. This may include the use of intermediate outcomes and waypoints, and adjusting plans and activities (and maybe also intended outcomes) as required.
- Planned, again at least in outline, how you will evaluate the success of your venture in achieving its intended outcomes, and draw conclusions for future practice.
Notice that these are not learning outcomes, not statements of what you will be able to do. They are just outcomes – things you will have achieved.
Outcomes and evaluation – the big picture
Here, in summary, is a simple and powerful process for writing and checking intended outcomes, and then for monitoring and evaluating their attainment:
- Identify – preferably in negotiation with the other stakeholders – intended outcomes of the project. What is the development project or venture intended to achieve? What exactly do you want to improve? What do you mean by improved?
- How will you find out if the project or venture has been successful? What shows success?
- Check. If it’s hard to write an evaluation plan, then revisit and rework the outcomes, and the indicators of success, until you can see how to monitor and evaluate their attainment.
- Plan and schedule how you will run the project to attain the goals. It’s generally better to do a project with people than for people. If it’s a big project, you may need to set interim goals, waypoints, which you can monitor.
- Run the project!
- Keep on asking “Is it working?” “Are we clearly moving towards our goals?” “Do we need to adjust – our methods? Maybe, even, sometimes, adjust our goals?”
- Towards the end, start to evaluate and report; with evidence; on whether the outcomes were achieved; how well they were achieved; what wasn’t achieved; any unexpected outcomes; what should be done next; and, perhaps above all, what has been learned?
Workshop shape and activities
You will see that the activities are directly linked to the outcomes.
Here is the overall shape of the workshop around each activity. There may be some variations:
- I’ll say a little about each activity in turn.
- Then I’ll ask you to do the activity, or at any rate to start it.
- And then I’ll ask you to discuss it with a neighbour.
- I’ll ask you to share some of your answers.
- Sometimes, I’ll have a public conversation with you about your answer.
Activity 1 Read the script below, and the comments that follow it.
Activity 2 Choose a development project or venture of some kind.
It should be real. You should have some responsibility for it. And you should be at or near the start.
Activity 3 For this chosen venture, decide the intended outcomes, what the project is intended to achieve and to improve.
Activity 4 Plan how you will know if it has been successful. If that proves difficult, change the outcomes until you can see how to evaluate their attainment.
Activity 5 Pretend that the project is over. Draft a realistic evaluation report. What does that tell you about the intended outcomes, and about how you should have run the project?
A (hypothetical) conversation between the Head of Department X and an Academic Developer
HODX: Thanks for coming. Look, what it is, the students are complaining about feedback. Could you run a few workshops to help us sort this out?
AD: What in particular is bothering students?
HODX: Well, they say sometimes the feedback is late. Sometimes they can’t understand it, can’t use it.
AD: Okay. A couple of issues here. On the first one, late feedback. What is the department policy on turnaround time for student work?
HODX: We haven’t really got a policy on it. Staff don’t like being tied down. We really should get a policy. But the staff are very busy …
AD: I understand. Happy to help you work out a policy when you’re ready. But in the short term, we could do some surveys, even interviews. Ask the students how soon they’d like the feedback …
HODX: They’ll say they want it tomorrow! No chance!
AD: Well, let’s find out. There may be ways to do feedback a lot faster. But yes, we could ask staff how soon they think it is feasible to turn work around. We will get to some sort of compromise. The outcome we want here is …
HODX: … a lot fewer student complaints about late feedback!
AD: That would be a good start. Second issue – students don’t understand the feedback. What do you think is going on there?
HODX: (Pause) I think the problem may be, staff know the material they are teaching so well that their feedback is a bit – concise? I remember what you said at that teaching workshop last year – staff can forget what it’s like not to understand.
AD: That’s possible. Well, we could ask students what would make feedback more comprehensible. Get their views back to the staff. Then, run a session for staff on making feedback more comprehensible, based on what the students say, and adding in a few ideas from the literature. Develop some guidelines. Then, after a few months, we could find out …
HODX: … if students are finding the feedback more understandable.
AD: We could do that.
HODX: You don’t sound sure.
AD: (Pause) There may be a bigger issue here. Not just “Do students understand the feedback?”, but; you hinted at it; “Are students using the feedback to help them decide what to keep on doing right and what to do differently next time?” After all, that’s the point of feedback, isn’t it?
HODX: Interesting. How would you tackle that?
AD: The usual. Student surveys. Student interviews. Actually, we could try something a bit newer. We could facilitate some conversation between students and staff about this, dig a bit deeper. That could be very useful. If staff would do it?
HODX: I’m sure I could persuade a couple of them! What are we trying to achieve here?
AD: I guess – students making good use of feedback to inform their future studies? Of course, it’s not just about the kind of feedback staff give. It’s also about the pattern of assignments, maybe even the shape of the whole course.
HODX: Whoa! Where did that come from?
AD: If the next assignment is completely different, or if the timing is wrong, or students aren’t helped to use the feedback, then maybe some students aren’t using feedback because it’s just not possible for them to use it? Issues like that.
HODX: You always want to change the world, don’t you! Let’s stick with speeding up feedback and making it more useful for now. We can get to redesigning the course at the major review time in, what is it, two years.
AD: Always happy to help.
Notice in this conversation:
- The use of the idea of outcomes and evaluation, by both the HOD and the AD.
- The AD working to identify what is going on, digging a bit deeper.
- A reference to using the literature.
- The AD seeding ideas for possible future work.
- Negotiations about what is feasible now.
- A good working relationship, with mutual respect.
Stefani, L., & Baume, D. (2016). “Is it working?” Outcomes, monitoring and evaluation. In D. Baume & C. Popovic (Eds.), Advancing Practice in Academic Development (pp. 157–173). London: Routledge.
Hypothetical Case Study on Clarifying Goals: ‘Enhancing the status of teaching’
A university policy aim might be to enhance the status of teaching. This is a laudable aim; but vague.
A non-rhetorical question to begin with: How would you identify the status of teaching in your university, and track its changes over time? Let’s try to sharpen the aim.
We might try to achieve a more rigorous definition. We could negotiate university meanings – university meanings, not the meanings; we are developers, not writers of dictionaries – for terms including enhance, status, even teaching …
Alternatively, we could take a more direct approach and ask the question – what would indicate an enhanced status of teaching? We could decide, again using ideas from the literature, and/or we could ask within the university. We could devise and implement a survey to identify the current status of teaching. This would rapidly reveal some of the many meanings of the status of teaching.
Possible indicators of status accorded to teaching:
- A formal teaching awards scheme – a plausible indicator that an institution is seeking to enhance the status of teaching. Beyond this, the number of scheme applications and awards each year, the rewards given to and more broadly the fuss made of award winners; these are all further indicators of an institution taking seriously the enhancement of the status of teaching.
- Promotions criteria that include teaching – indicating that teaching is being valued.
- (Also – are the criteria widely believed to be being used?)
- Teaching ability being emphasized in recruitment advertisements, and taken seriously in selection processes, would be a further positive sign …
What these and other possible approaches have in common is that they provide support and incentives, at institutional and/or local level, for staff to teach well, to improve their teaching, and to enhance the status of teaching.
Of course most of these processes could be implemented well or badly, strongly or feebly. All could be respected or not by academic staff, managers and students. All could be subverted or diminished by other policies and strategies which value, or are perceived as valuing, other kinds of activity – most obviously research or administration – more highly than teaching. Nonetheless, a university implementing such measures, and putting some effort into evaluating their effectiveness, could make a decent claim to be committed to enhancing the status, and also the quality, of teaching.
Analysing these, seeking a core, seeking context-specific (for example, discipline-specific) local variants, and feeding in any research-based accounts, all start to give an account of the status of teaching with which we can work. Our account should enable us to identify enhancement over a baseline. In enhancement work, it’s good to know which way is up.
Academic developers have many possible roles here. They can help universities, schools and departments to identify possible good academic practices that are broadly compatible with the norms and values of the institution, accepting also that one of the more challenging roles of the developers is sometimes to help the institution to shift its norms and values. Developers can make productive connections across the institution …
At some point a developer will also want to ask – ‘Why do we seek to enhance the status of teaching?’
Condensed and adapted from Stefani and Baume (op. cit.)
Nine responses were received to a survey on cooperation in development, from academic / educational development units.
Main development functions of units
The most frequently mentioned development functions can be grouped as:
- A. Staff development, including training teachers for a qualification, accrediting teachers, CPD, and supporting staff and faculties;
- B. Educational development, including improving learning, teaching and assessment, and curriculum development;
- C. Institutional support, including policy and strategy development and projects;
- D. Student development and support;
- E. Learning technology development, implementation and evaluation; and
- F. Other functions – postgraduate support, research into teaching and learning, horizon scanning, and QA.
Beyond what once might have been classic academic development functions; perhaps A, B, C and some of F; we now see also D and E.
Other development functions elsewhere in the institution
These include HR, learning technology, student development, student services (including overseas students), faculty committees and research.
Frequency of contacts between the academic development unit and these other development functions:
| Frequency of contact | Annually or less | A few times each year | Most or every month | Most or every week |
Nature of contacts between the academic development unit and these other development functions:
| Nature of contact | Formal | Informal | Both | Policy / strategy | Operational | Both |
The largest scores for frequency of contact – a few times each year / most or every month – and type of contact – all four types! – are in bold.
Comments on what above all supports effective co-operation on development in your institution
Personal relationships and communicating (3 mentions each); the encouragement of leadership (2); and the alignment of strategy (1). Also mentioned are ‘seeing the person face-to-face’, ‘our small campus culture’, goodwill, energy, focus and effective resourcing.
Comments on what above all impedes effective co-operation on development in your institution
Structural factors (5 mentions), where the response was amplified, included organizational hierarchies, constant restructuring, silo working, geography and a lack of specific leads for specific functions. Communications factors (3) included ‘having EVERYTHING online via email’ and the lack of time. One respondent reported the absence of goodwill, energy, focus and leadership as the main inhibitors of cooperation in development.
Comments on nature and frequency of co-operation with other units
These four longer comments from respondents suggest some of the complexity and benefits around inter-unit cooperation:
- As a sole operator in learning and teaching facilitation across the university, I establish currency in my role, by ‘supporting’ (not teaching, as [I do not have] an academic role) the PGCert provision. For a period, my role was positioned in the [learning and development function] of our Human Resources department, but I was [recently] re-located, as much of the ‘development’ I facilitated did not naturally align with colleagues’ specialisms in [organisational development]. I was therefore moved to [a unit concerned with research], from where I run an annual programme of CPD in learning and teaching and facilitate the university’s CPD for professional recognition. Being in research, I also support programmes around researcher development, particularly in terms of teaching/supporting learning, graduate teaching schemes, etc.
- Our university is small and we often work with [the learner support function] to share data, review the best approaches to take for all students, and progress agendas with senior colleagues. Good personal relationships between these teams mean we routinely pick up the phone and use each other as sounding boards for work of common interest.
- [Relations are] good, and they are very helpful, now that [another unit has] realised we have been doing serious education research and they keep us in the loop and support us to bid for funding opportunities. The university has recently appointed a [senior post for] Education and we are hopeful they will enable the different bits to coalesce.
- At the formal level, these committees (depending on the Chair!) can be procedural – so it is possible to ask critical friend questions – but the real value is informal, getting into conversations with colleagues about developmental work. In many respects the people who are key on these committees are ones our team has known since they were on the PGCert and these relationships have built up over time.
‘Churn’, both in staffing and in structures, was also a theme, along with task-specific rather than general co-operation.
Overview and possible implications
Co-operation between development units / functions in higher educational institutions is valuable, difficult, and mostly attainable. Good personal communications and relationships aid co-operation; structural factors and poor communications impede cooperation. Developers may wish to consider (1) working to establish good personal / professional relations, perhaps initially around specific, rather than big picture, co-operations; (2) more broadly being prepared to work across structural boundaries in pursuit of institutional goals and priorities; and (3) assuming that, whatever the current actual or perceived structural obstacles and political difficulties, no-one actually wants to prevent us from doing good stuff.
Notes for a survey and session at SEDA November 2016 Conference
The importance of collaboration in learning and development has long been stressed. Working with and developing learning communities is a SEDA value. Kahn and Walsh (2009) provide theoretical underpinnings for and vivid examples of collaboration. Baume and Popovic (2016) contains many accounts of the importance of collaboration. The authors describe “the increased blurring of and collaboration between development functions;” (p 293). At greater length, we say:
“Not all problems, opportunities or possible sites for action in higher education fall tidily under the heading of teaching, learning, assessment, course design, educational development, staff/faculty development, student development, advice and guidance, personal tutoring, language development, numeracy development, learning technology, management, researcher development, research supervision development, administration, support for students with specific learning difficulties, international education, support for students from overseas, equality of opportunity, graduate careers education and advice, employability, community links, open and distance learning, learning resources, estate planning, designing and equipping teaching and learning spaces, learning analytics, organizational development, library and information services, etc. … This suggests, if it were not already obvious, the great need for the various university development functions, including but not limited to those above, to cooperate.”
This is all very well. But organisational and political pressures can militate against collaboration. We all believe in cooperation. The issue which this discussion paper session will tackle is – how can we make it happen, in the real world of current higher education?
Baume, D., & Popovic, C. (Eds.). (2016). Advancing Practice in Academic Development. London: Routledge.
Walsh, L., & Kahn, P. H. (2009). Collaborative working in higher education: The social academy. New York: Routledge.
This follows https://davidbaume.com/2015/09/28/learning-and-knowledge-bloomin-obvious/ . I’d start there, if I were you.
Course design – bottom up?
So, Bloom’s taxonomy as a tool for analysing knowledge has problems. But disaster strikes when the taxonomy is used as a tool for course design. (I am here criticising uses of Bloom, not the taxonomy itself.)
When you’re designing a course, the temptation is hard to resist. Bloom offers us a taxonomy, a hierarchy, of knowledge. Each higher level builds on those below. Obviously a building must start with a foundation, perhaps built in a hole in the ground. So, to design a course, start at the lower levels, on a solid foundation of knowledge, of things known, of facts and theories. Then build on up.
First, teach students the facts. Then teach them to understand the facts. Then to apply the understood facts. Then to analyse, and finally to evaluate and synthesise, in whichever order. As easy and as logical as Lego. Just another brick in the wall.
Trouble is, it’s nonsense. It’s not how people learn. It may or may not be a useful description of levels of knowledge. As a way to plan learning, it’s a disaster.
Do the thought experiment for a course or subject you know. What are a few basic bits of knowledge in your subject? Imagine teaching students to remember these, then either to recall them in an open-response question, or to recognise them among – wrong facts, I suppose – in a multiple-choice question. No context for the knowledge, no account of why it is important, or what it might mean. No higher level stuff. Just the facts. Learned and remembered.
OK, now let’s move on to understanding these facts, to explaining them, expressing them in different ways. Done.
Now, let’s learn to apply this knowledge and understanding, to use it to address questions and problems. And then, having applied it, to analyse it. And finally, let’s critically evaluate and synthesize these facts that we have learned, and then learned to understand and to apply.
This is ludicrous.
I have been taught this way. It really was ludicrous, and of course quite ineffective. Even as we were being taught the basic facts, we were trying to make sense of them, each in our own way. Some of us were trying to understand the facts, to reformulate them in terms that made better sense to us, that linked to, whether supporting or contradicting, things we already knew, and maybe even partly understood. Some of us sought to understand the facts by trying to apply them, although we were not always clear to what kinds of situations or problems the facts could be applied. Some of us tried to critique the facts, although again we had nowhere to stand to critique or evaluate, and were certainly not encouraged to do so. We were being lectured to, with no opportunity to ask questions, to explore or to discuss, to make our own sense. We all did some memorising. But it all felt rather pointless.
I suspect I may even have taught this way a few times, in my early days. To those students, I apologise.
What has gone wrong here?
How long have you got?
A dangerous metaphor
A metaphor has led us astray. A diagram has led us astray. The metaphor of foundation and building, and the pyramid diagram in which Bloom’s taxonomy is often presented, simply do not work for learning. Why not?
A building, whether of steel or concrete or Lego, doesn’t start with a foundation. It doesn’t even start with a plan. It starts with an idea, maybe even with a vision, with a need, with a specification of some form. Only when this has been agreed can we develop a plan. Quite late in the process, concrete is poured, steel is erected, and bricks are laid – or clicked. These ‘basics’ and ‘foundation’ metaphors just don’t work.
Let’s come at this from another angle, and maybe try to rescue Bloom, to find good ways of using the taxonomy in course design.
It’s appealing to suggest that we should start from the basics. What are they? What are the basics of your subject? Go on, think about it for a minute.
What are the basics of your subject? Are they: Facts? Theories? Principles? Purposes? Problems? Values? Ways of thinking? Ways of acting? Something else? Some complex combination of all of these and others?
For what it’s worth, and as an illustration, after thinking for a long time about my own discipline / profession / area of work, academic development, I came to the view that its basis takes the form of a purpose or goal: to improve student learning. Of course academic development has many intermediate purposes, and many methods and theories and ways of thinking and working and the rest. But at its heart, I feel, is a purpose. “Improving student learning”. Which, as I write, I realise is a subset of a larger goal – to make the world a better place. Which is hopefully a goal of most if not all professional and scholarly activity. (Add quotation marks to any of these words and phrases as you need.)
But, whatever the basics of your discipline, where do they lie on Bloom’s taxonomy?
Probably more towards the top than the bottom.
I’m not against knowledge and comprehension and application. They have their place. But, I would suggest, they are important mainly in how they are used at the higher levels. I’ll probably give this more detail in a future post. Knowledge and comprehension and application may indeed be required for the attainment of higher-level goals. But this absolutely doesn’t mean that the pursuit of high-level goals has to start at the bottom.
This is the mistake that is often made in the use of Bloom’s taxonomy in course design. The relationships between the different levels of the taxonomy are used to deduce, completely wrongly, a sequence of teaching. The structure of knowledge doesn’t determine the best way of learning. We know a bit about learning – the importance of a wish or need to know, learning as a process of making rather than simply absorbing sense, learning as an active business incorporating above all action and reflection, reflection often being aided by feedback. We need to apply what we know about learning to whatever it is that is to be learned. That’s the only safe way to produce a good course design, good learning activities, and hence good student learning.
So – Bloom’s taxonomy may have some limited uses as a tool for analysis. But I can’t find a good use for it in course design. Can you? I’d love to hear.
Next time, moving away from Bloom, I’ll suggest some more fruitful bases for course design and the planning of teaching.
Bloom, B. S. (1956). The Taxonomy of Educational Objectives: Handbook 1 (1st ed.). London: Longman Higher Education.
Bloom, B. S. (2000). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Complete Edition). (L. W. Anderson & D. R. Krathwohl, Eds.). New York: Longman Publishing Group.
This argument will grow over several posts. I’m developing the argument as part of the process of developing and writing a book, which I currently think will be about learning in higher education. Wish me luck!
Learning and Knowledge
It can be hard to talk about learning. For example – what is being learned? Among other things, subjects, of course. Knowledge.
Ah, knowledge. Or to give it its technical name – stuff.
When we educators talk about knowledge, we usually mean much more than just stuff. By knowledge we can also mean ways of thinking, ways of acting, ways of being; forms of meaning, principles, theories, values, and much besides. The sorts of ambitious and demanding knowledge that we rate highly.
But the word knowledge can still pull us down. Whatever higher-level things we want knowledge to mean, knowledge also, seemingly inexorably, seems to end up meaning, well, stuff. Things. Objects.
I’m not sure how this happens. Suggestions welcome. But it happens.
Why do I call it stuff? Because of what we do with it. Learners learn it, and teachers teach it. And this in turn can bring down the whole educational show.
On a bad day, of which there are many, learning (stuff) becomes memorising (stuff). Knowing (stuff) becomes having memorised (stuff). Teaching (stuff) becomes telling (stuff). Assessing stuff becomes finding out whether people know stuff, which tends to mean checking either if they can recall stuff (through open-response questions) or if they can recognise right stuff among wrong stuff (multiple-choice questions). Stuff. The downward pull of knowledge as stuff is strong. Not irresistible, but strong.
What to do?
We might choose to be explicit about the full range of what we mean by knowledge. Or we might want to stop talking about knowledge for a while, and talk instead about the full range of desired (and assessed) types and outcomes of learning. Or we might simply decide to stop treating knowledge like stuff. Stop telling it. Stop seeing if they’ve remembered it. Instead, teach across this full range, support and expect and assess learning across the whole range. Concentrate on the higher levels. Knowledge, alas, can drag you down.
But how to describe this range, these levels?
Levels of learning and knowledge
This is the problem that Bloom is addressing in his taxonomies (Bloom, 1956 and 2000) – how to classify in some usable way the multiple types and levels of learning that we might see, expect, hope for or teach towards. The taxonomies were originally devised within a behaviourist educational paradigm. This paradigm saw teaching as providing prompts and stimuli which would provoke appropriate responses, learning and evidence of learning, which was then rewarded. The paradigm worked for rats and pigeons – why not for students?
Bloom produced classifications, taxonomies, for the cognitive, affective and psychomotor domains. That for the cognitive domain, considered here, has endured longest. It still features, sometimes in its updated (2000) version, in courses to train university teachers. It offers a classification of types of educational objectives or, as we now say, learning outcomes – of things people can do with their rational brains.
(They’re in the table in the draft, honestly! They just don’t show in the table when posted. Drat.)
The taxonomy has some use as a tool for analysis, particularly at level 1, knowledge / remembering. It’s good if we can be honest with ourselves and our learners, and say “You need to know / remember (which may mean recall or recognise) this.” But in a connected world, with well-indexed and sometimes authoritative knowledge only a skilful click or swipe or two away, our students may ask – “Why do we need to know?” This is a conversation well worth having. I often have it with myself, as a future post will show.
But the taxonomy crumbles at higher levels.
Bloom and Assessment
Crucially – when we ask students to show that they understand, or ask them to apply or analyse or evaluate or create, it is hard to be sure that they are not simply remembering a previous act of understanding or application or analysis or evaluation or creation. This is a giant hole in the security and integrity of assessment, and in the fiction that assessment faithfully assesses higher-level abilities.
How does this hole occur?
Most tutors prepare their students for examinations (let’s stick with exams for now, although the argument broadly works for other forms of assessment). Tutors typically set or suggest broadly similar assignments, essays, questions and topics, and perhaps offer feedback; or suggest what particular content the exam may address, and perhaps what issues, arguments and approaches may be preferred.
If the examination question is sufficiently different from the pre-assessment assignments, the student may actually need to work at higher levels. They may need to adapt or apply a method or argument to a slightly different setting or content. But that may be as high as we get. “Do something completely original?” In an examination? There might be riots.
Often, we simply cannot know the true nature of the assessment task. The apparent assessment task, even accompanied by the intended learning outcomes and assessment criteria, does not confidently tell us at what level of the taxonomy a student must work to tackle the question satisfactorily. To judge this we would need to see most if not all of a student’s previous work and learning, and the feedback they received …
Anyway, the levels are just not as clear as they appear. I have not seen a study of how reliably or how validly academics classify assessment tasks or learning outcomes against Bloom. I’d love to see such a study – it must have been done. But I have seen disagreements among academics about the highest level required by a task. (Most assessment tasks require at least some of the lower levels.)
So, Bloom has serious weaknesses as a tool for analysing knowledge. But a future post will see the much bigger horrors that can occur when Bloom is used as a basis for course design.
Bloom, B. S. (1956). The Taxonomy of Educational Objectives: Handbook 1 (1st ed.). London: Longman Higher Education.
Bloom, B. S. (2000). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Complete Edition). (L. W. Anderson & D. R. Krathwohl, Eds.). New York: Longman Publishing Group.