
Bloom and Course Design – Disaster Strikes!

Introduction

This follows https://davidbaume.com/2015/09/28/learning-and-knowledge-bloomin-obvious/. I’d start there, if I were you.

Course design – bottom up?

So, Bloom’s taxonomy as a tool for analysing knowledge has problems. But disaster strikes when the taxonomy is used as a tool for course design. (I am here criticising uses of Bloom, not the taxonomy itself.)

When you’re designing a course, the temptation is hard to resist. Bloom offers us a taxonomy, a hierarchy, of knowledge. Each higher level builds on those below. Obviously a building must start with a foundation, perhaps built in a hole in the ground. So, to design a course, start at the lower levels, on a solid foundation of knowledge, of things known, of facts and theories. Then build on up.

First, teach students the facts. Then teach them to understand the facts. Then to apply the understood facts. Then to analyse, and finally to evaluate and synthesise, in whichever order. As easy and as logical as Lego. Just another brick in the wall.

Trouble is, it’s nonsense. It’s not how people learn. It may or may not be a useful description of levels of knowledge. As a way to plan learning, it’s a disaster.

Do the thought experiment for a course or subject you know. What are a few basic bits of knowledge in your subject? Imagine teaching students to remember these, then either to recall them in an open-response question, or to recognise them among – wrong facts, I suppose – in a multiple-choice question. No context for the knowledge, no account of why it is important, or what it might mean. No higher level stuff. Just the facts. Learned and remembered.

OK, now let’s move on to understanding these facts, to explaining them, expressing them in different ways. Done.

Now, let’s learn to apply this knowledge and understanding, to use it to address questions and problems. And then, having applied it, to analyse it. And finally, let’s critically evaluate and synthesize these facts that we have learned, and then learned to understand and to apply.

This is ludicrous.

I have been taught this way. It really was ludicrous, and of course quite ineffective. Even as we were being taught the basic facts, we were trying to make sense of them, each in our own way. Some of us were trying to understand the facts, to reformulate them in terms that made better sense to us, that linked to, whether supporting or contradicting, things we already knew, and maybe even partly understood. Some of us sought to understand the facts by trying to apply them, although we were not always clear to what kinds of situations or problems the facts could be applied. Some of us tried to critique the facts, although again we had nowhere to stand to critique or evaluate, and were certainly not encouraged to do so. We were being lectured to, with no opportunity to ask questions, to explore or to discuss, to make our own sense. We all did some memorising. But it all felt rather pointless.

I suspect I may even have taught this way a few times, in my early days. To those students, I apologise.

What has gone wrong here?

How long have you got?

A dangerous metaphor

A metaphor has led us astray. A diagram has led us astray. The metaphor of foundation and building, and the pyramid diagram in which Bloom’s taxonomy is often presented, simply do not work for learning. Why not?

A building, whether of steel or concrete or Lego, doesn’t start with a foundation. It doesn’t even start with a plan. It starts with an idea, maybe even with a vision, with a need, with a specification of some form. Only when this has been agreed can we develop a plan. Quite late in the process, concrete is poured, steel is erected, and bricks are laid – or clicked. These ‘basics’ and ‘foundation’ metaphors just don’t work.

The basics?

Let’s come at this from another angle, and maybe try to rescue Bloom, to find good ways of using the taxonomy in course design.

It’s appealing to suggest that we should start from the basics. What are they? What are the basics of your subject? Go on, think about it for a minute.

What are the basics of your subject? Are they: Facts? Theories? Principles? Purposes? Problems? Values? Ways of thinking? Ways of acting? Something else? Some complex combination of all of these and others?

For what it’s worth, and as an illustration, after thinking for a long time about my own discipline / profession / area of work, academic development, I got to the view that its basis takes the form of a purpose or goal: to improve student learning. Of course academic development has many intermediate purposes, and many methods and theories and ways of thinking and working and the rest. But at its heart, I feel, is a purpose. “Improving student learning”. Which, as I write, I realise is a subset of a larger goal – to make the world a better place. Which is hopefully a goal of most if not all professional and scholarly activity. (Add quotation marks to any of these words and phrases as you need.)

But, whatever the basics of your discipline, where do they lie on Bloom’s taxonomy?

Probably more towards the top than the bottom.

I’m not against knowledge and comprehension and application. They have their place. But, I would suggest, they are important mainly in how they are used at the higher levels. I’ll probably give this more detail in a future post. Knowledge and comprehension and application may indeed be required for the attainment of higher-level goals. But this absolutely doesn’t mean that the pursuit of higher-level goals has to start at the bottom.

This is the mistake that is often made in the use of Bloom’s taxonomy in course design. The relationships between the different levels of the taxonomy are used to deduce, completely wrongly, a sequence of teaching. The structure of knowledge doesn’t determine the best way of learning. We know a bit about learning – the importance of a wish or need to know, learning as a process of making rather than simply absorbing sense, learning as an active business incorporating above all action and reflection, reflection often being aided by feedback. We need to apply what we know about learning to whatever it is that is to be learned. That’s the only safe way to produce a good course design, good learning activities, and hence good student learning.

Conclusion

So – Bloom’s taxonomy may have some limited uses as a tool for analysis. But I can’t find a good use for it in course design. Can you? I’d love to hear.

Next time, moving away from Bloom, I’ll suggest some more fruitful bases for course design and the planning of teaching.

Sources

Bloom, B. S. (1956). The Taxonomy of Educational Objectives: Handbook 1 (1st ed.). London: Longman Higher Education.

Bloom, B. S. (2000). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Complete ed.). (L. W. Anderson & D. R. Krathwohl, Eds.). New York: Longman Publishing Group.

Learning and Knowledge – Bloomin’ Obvious?

Introduction

This argument will grow over several posts. I’m developing the argument as part of the process of developing and writing a book, which I currently think will be about learning in higher education. Wish me luck!

Learning and Knowledge

It can be hard to talk about learning. For example – what is being learned? Among other things, subjects, of course. Knowledge.

Ah, knowledge. Or to give it its technical name – stuff.

When we educators talk about knowledge, we usually mean much more than just stuff. By knowledge we can also mean ways of thinking, ways of acting, ways of being; forms of meaning, principles, theories, values, and much besides. The sorts of ambitious and demanding knowledge that we rate highly.

But the word knowledge can still pull us down. Whatever higher-level things we want knowledge to mean, knowledge also, seemingly inexorably, seems to end up meaning, well, stuff. Things. Objects.

I’m not sure how this happens. Suggestions welcome. But it happens.

Why do I call it stuff? Because of what we do with it. Learners learn it, and teachers teach it. And this in turn can bring down the whole educational show.

How?

On a bad day, of which there are many, learning (stuff) becomes memorising (stuff). Knowing (stuff) becomes having memorised (stuff). Teaching (stuff) becomes telling (stuff). Assessing stuff becomes finding out whether people know stuff, which tends to mean checking either if they can recall stuff (through open-response questions) or if they can recognise right stuff among wrong stuff (multiple-choice questions). Stuff. The downward pull of knowledge as stuff is strong. Not irresistible, but strong.

What to do?

We might choose to be explicit about the full range of what we mean by knowledge. Or we might want to stop talking about knowledge for a while, and talk instead about the full range of desired (and assessed) types and outcomes of learning. Or we might simply decide to stop treating knowledge like stuff. Stop telling it. Stop seeing if they’ve remembered it. Instead, teach across this full range, support and expect and assess learning across the whole range. Concentrate on the higher levels. Knowledge, alas, can drag you down.

But how to describe this range, these levels?

Bloomin’ Obvious?

Levels of learning and knowledge

This is the problem that Bloom is addressing in his taxonomies (Bloom, 1956 and 2000) – how to classify in some usable way the multiple types and levels of learning that we might see, expect, hope for or teach towards. The taxonomies were originally devised within a behaviourist educational paradigm. This paradigm saw teaching as providing prompts and stimuli which would provoke appropriate responses, learning and evidence of learning, which was then rewarded. The paradigm worked for rats and pigeons – why not for students?

Bloom produced classifications, taxonomies, for the cognitive, affective and psychomotor domains. That for the cognitive domain, considered here, has endured longest. It still features, sometimes in its updated (2000) version, in courses to train university teachers. It offers a classification of types of educational objectives or, as we now say, learning outcomes – of things people can do with their rational brains.

Bloom’s taxonomy, level by level:

Level   1956 version    2000 version
1       Knowledge       Remembering
2       Comprehension   Understanding
3       Application     Applying
4       Analysis        Analysing
5       Synthesis       Evaluating
6       Evaluation      Creating

The taxonomy has some use as a tool for analysis, particularly at level 1, knowledge / remembering. It’s good if we can be honest with ourselves and our learners, and say “You need to know / remember (which may mean recall or recognise) this.” But in a connected world, with well-indexed and sometimes authoritative knowledge only a skilful click or swipe or two away, our students may ask – “Why do we need to know?” This is a conversation well worth having. I often have it with myself, as a future post will show.

But the taxonomy crumbles at higher levels.

Bloom and Assessment

Crucially – when we ask students to show that they understand, or ask them to apply or analyse or evaluate or create, it is hard to be sure that they are not simply remembering a previous act of understanding or application or analysis or evaluation or creation. This is a giant hole in the security and integrity of assessment, and in the fiction that assessment faithfully assesses higher-level abilities.

How does this hole occur?

Most tutors prepare their students for examinations (let’s stick with exams for now, although the argument broadly works for other forms of assessment). Tutors typically set or suggest broadly similar assignments, essays, questions and topics, and perhaps offer feedback; or suggest what particular content the exam may address, and perhaps what issues, arguments and approaches may be preferred.

If the examination question is sufficiently different from the pre-assessment assignments, the student may actually need to work at higher levels. They may need to adapt or apply a method or argument to a slightly different setting or content. But that may be as high as we get. “Do something completely original?” In an examination? There might be riots.

Often, we simply cannot know the true nature of the assessment task. The apparent assessment task, even accompanied by the intended learning outcomes and assessment criteria, does not confidently tell us what level of the taxonomy a student must work at to tackle the question satisfactorily. To judge this we should need to see most if not all of a student’s previous work and learning, and the feedback they received ….

Anyway, the levels are just not as clear as they appear. I have not seen a study of how reliably or how validly academics classify assessment tasks or learning outcomes against Bloom. I’d love to see such a study – it must have been done. But I have seen disagreements among academics about the highest level required by a task. (Most assessment tasks require at least some of the lower levels.)

Conclusion

So, Bloom has serious weaknesses as a tool for analysing knowledge. But a future post will see the much bigger horrors that can occur when Bloom is used as a basis for course design.

References

Bloom, B. S. (1956). The Taxonomy of Educational Objectives: Handbook 1 (1st ed.). London: Longman Higher Education.

Bloom, B. S. (2000). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Complete ed.). (L. W. Anderson & D. R. Krathwohl, Eds.). New York: Longman Publishing Group.

The Proposed Teaching Excellence Framework (TEF) – Some Thoughts on Value Added as a Metric

The purpose of teaching is learning. We measure learning mostly through qualifications. It’s easy to compare a student’s qualifications at entry with their qualifications at exit, if we want to. All we need is a common measurement scale that embraces entry and exit qualifications, differentiating between different grades where applicable. 

Maybe something like this:

A-level – grades A* to E, 100 down to 50 points 

(A-level points for a student would of course be summed across the subjects they took.)

Cert HE – 600 points

DipHE / FD  – 800 points

First Degree, 1st class down to pass

These points are not the same as credit points, which describe number of hours of study rather than qualifications attained.

We might also include higher taught degrees – the same logic applies, I think. 

The actual numbers are arbitrary, although they are ordered – higher always means higher. But this arbitrariness doesn’t matter, as long as the entry and exit qualification scales don’t overlap. All we will be doing is subtracting points at entry from points at exit – that is, identifying the value added for each student. These differences, these measures of value added per student, will then be averaged by institution, School, programme – whoever’s value added we are currently interested in. And, once calculated, they can be compared. We are only interested in difference.

There’s a choice to be made about how we deal with completion rates. Is completion rate a measure of teaching quality, or a measure of how life affects students? Or both? I’m inclined to suggest that we only compute value added for those who complete, which may mean achieving an intermediate qualification, typically CertHE or DipHE. As always with metrics, we need to be explicit about what we are proposing. There is not always a right answer.
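For the calculationally minded, here is a minimal sketch of the proposed subtract-then-average logic, in Python. The points follow the scale suggested above; the data layout, the sample students and the first-degree omission are illustrative assumptions, not a specification.

    # A sketch of the value-added calculation, assuming the points scale above.
    # First-degree exit points are omitted, since no numbers were assigned above.

    A_LEVEL_POINTS = {"A*": 100, "A": 90, "B": 80, "C": 70, "D": 60, "E": 50}
    EXIT_POINTS = {"CertHE": 600, "DipHE": 800, "FD": 800}

    def entry_points(grades):
        """Sum A-level points across the subjects a student took."""
        return sum(A_LEVEL_POINTS[g] for g in grades)

    def mean_value_added(students):
        """Average (exit points - entry points), computed only for completers."""
        diffs = [EXIT_POINTS[s["exit"]] - entry_points(s["grades"])
                 for s in students if s["completed"]]
        return sum(diffs) / len(diffs) if diffs else None

    # A hypothetical cohort - entirely made-up data, for illustration only.
    cohort = [
        {"grades": ["B", "C", "C"], "exit": "DipHE", "completed": True},   # 800 - 220 = 580
        {"grades": ["A", "A", "B"], "exit": "CertHE", "completed": True},  # 600 - 260 = 340
        {"grades": ["D", "E"], "exit": "CertHE", "completed": False},      # excluded
    ]
    print(mean_value_added(cohort))  # 460.0

The absolute figure means nothing on its own. As above, we are only interested in difference – in comparing the averaged figure across institutions, Schools or programmes.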

This approach to value added only uses existing data – no new data need be collected. Which is good.

Of course, what have historically been called ‘recruiting’ universities would score higher on value added than ‘selecting’ universities. But the resultant debate would flush out views about the value of degrees from different institutions. These views may or may not survive sunlight. But they are probably best expressed and tested.

Anyway, value added is only one element of the framework. But it’s a tough one to argue against.

The problems with measuring value-added are not technical. They are political. But let’s have the debate!

This post was prompted by:

“No convincing metrics have been offered so far on how to measure added value without involving time consuming pre- and post-tests.” in http://sally-brown.net/wp-content/plugins/download-monitor/download.php?id=238 from Sally Brown and Ruth Pickford, and the subsequent LTHE Tweetchat, Storified here.

Tech in its place, part one

What expectations should we have of technology (hereafter, tech) in the work of higher education? What is the proper place of tech? What should developers do about tech?

Good tech

Good tech has at least these three related qualities. It just works. It does, or it can, make things better. And after a while it becomes almost invisible, almost unproblematic.

Creative disruption, then improvement, then a further step into cyborgia

With a new tech – whether it is new to the world, new to the institution or discipline, or new to the group or individual – there is an initial period of excitement and learning, sometimes accompanied by fear. During this period we discover the range of things that we can do with the new tech that we couldn’t previously do, or could do only with some difficulty, or less well, without it. We explore which of these new possibles we want to, and should, and can, use. Well managed and supported, this can be a time of creative and productive disruption.

And then the tech almost vanishes into us and our organisations, becomes embedded in our practice and in our thinking. We have become in one more sense tech-enhanced humans and organisations, as we did when we first wore glasses or contact lenses, or rode a bicycle, or drove, or travelled by aeroplane, or built a building. We have advanced a little further into the condition of cyborgia.

The ‘almost’ is important. Hopefully we are still at least a little conscious, when for example we telephone, of what we are doing. As we make the call, hopefully we are sensitive to the risk of intrusion, aware of the context of the person we are calling, prepared at least a little for whatever the call may bring us both. But the fact that we can often speak to someone without visiting them, through a system of vast and invisible complexity, is now, in the moment, for most people, relatively unproblematic.

How does this relate to the technologies of our work? What expectations do we have of the tech in use? Here are a few of my answers. You might find it useful to spend a few seconds noting what (else) you expect from the tech-supported people with whom you work.

We expect …

  • We expect to be able to compose and then send a message, perhaps with attachments in various media, to (very, very nearly) everyone we know professionally, and maybe also personally. We are confident it will be in their inbox within minutes. We expect to send this message without having to remember their contact details, just their name. No envelope or stamp required.
  • Building on this, we expect to be able to communicate with similar ease with defined groups and subsets of the people with whom we work.
  • We expect to be able to find, within seconds, contact information for someone we don’t yet know.
  • We expect to find at least a half-way useful answer, or at least a starting point to an answer, to an increasing number of questions, of growing complexity, by typing the question into a search engine.
  • We expect ourselves, and those with whom we communicate, to write in language that is grammatically correct and correctly spelt, at least according to the views of our software provider.
  • We expect those with whom we work to be able to locate and make critical, intelligent, appropriate use of (a) information at which we point them and (b) information of particular interest and use to them which they find for themselves; and then we expect them to make and share connections and relationships between information from these two kinds of sources.
  • Beyond literacy, beyond competence, beyond capability, we might expect or hope for a degree of fluency. Fluency in working with words and numbers and images and ideas appropriate to our disciplines and our professional and personal life. Fluency also in using the technologies through which these various elements of academic and professional work and life are more and more often created and manipulated and communicated and read and studied and used.

We shall expect …

You might find it even more interesting to cast an eye 50 years and more into the future, and begin to consider what expectations are reasonable for the current students within our universities, throughout their working and personal lives. Maybe I shall have a go at this in a future post, remembering that, as Niels Bohr said, “Prediction is very difficult, especially about the future.” Note that he did not say ‘impossible’.

We won’t be able to teach our students at University all the necessary skills for their next 50 or more years. All we can do is help them become able, keen and confident to learn; of course selectively and critically; whatever new tech they want to / need to learn. Because the great majority of the tech will continue to become easier to learn and use. Whatever we may think about markets, the market should at least achieve this.

Note that these current and future expectations cannot neatly be separated out into our expectations of the tech and our expectations for individuals, although some of the expectations may be tech-led and others more people-led. They are all expectations of individuals using the tech.

Next

In the next post I shall test these expectations against our current experiences and realities of using the tech, and explore the places of tech in higher education.

Originality, Part Five – Originality and Knowledge

Background

In previous posts in this series I have explored relationships between originality, education and learning, and ways in which originality can be developed. If you’re starting here, welcome, and you may find it useful to at least skim these previous posts. In the last of this series, for now at least, I shall explore the big one, the relationship between originality and knowledge.

Why do I call this the big one?

On knowledge

The academic world reveres knowledge. Research is valued as the production of knowledge. Teaching is often described (and also experienced) as the transmission or handing on of knowledge. Expertise involves (not exclusively) having knowledge. Experts are people who know a lot. This academic view and valuing of knowledge is reflected in the popular domain, where quizzes mostly value knowledge, much less often valuing the ability to reason, solve problems, or make connections, seen in exceptions such as The Krypton Factor and Brain of Britain.

How does originality relate to this very high value placed on knowledge?

Originality and the development of new knowledge

One way is through the role of originality in the development of new knowledge. This is an often mysterious, hidden, hard-to-describe process, even for those who develop such new knowledge. Even when the process is described, it is sometimes described very vividly, as in Kekulé’s account (see http://tinyurl.com/Kekule) of realising that a possible structure for benzene could be a six-carbon-atom ring rather than a string – an idea that came to him through a vivid daydream of a snake eating its own tail.

Do such accounts offer help for those who would create new knowledge? I think so. Such stories, perhaps with the hero narrative played down a little, suggest the value of letting imagination run free, allowing wild images to form and then checking what implications the images may have for the problem at hand.

We find an important link here between originality and knowledge, through a scientific method in which hypotheses, models, explanations can be developed through any process at all, then tested rigorously for their predictive or explanatory power.

Creating as well as testing hypotheses

It may be that current education places a little too much emphasis on the rigorous testing of hypotheses, and not enough on generating the hypotheses in the first place. This imbalance may in turn draw a picture of science and technology, and perhaps other disciplines involving some element of critical analysis – hopefully, then, most disciplines – as more procedural, more knowledge-stuffed, and less welcoming of originality, than they actually are. A route here to making many disciplines more attractive to a wider range of students; perhaps, also, to making them more fun, and maybe even more productive?

This does not mean a lowering of standards. Only ideas that survive tough tests will become accepted as valued knowledge. The academy is safe.

Originality, valued as the development of hypotheses for testing, can also bring to life the sometimes empty rhetoric of constructivist approaches to learning, by being explicit about what is being constructed – hypotheses – and saying how these hypotheses will be used.

I realise, or hope, that there are vast differences between different disciplines in these respects.

Perils of over-emphasising knowledge

I sometimes fear that over-emphasis on knowledge; whether propositional (know what), procedural (know how), or conceptual / theoretical (know why); may tend to drive out originality. But before that: there is a hierarchy of valuings of knowledge. The language of education shows clearly how propositional and conceptual / theoretical knowledge are valued over know-how. The UK Minister of Education has made this utterly explicit very recently – http://tinyurl.com/GoveKnow. Know-how is usually referred to as skill, and generally has lower status than knowledge. (Events can re-balance our view of this. As an eye surgeon replaced my somewhat cloudy lens with a shiny new plastic one earlier this week I was hugely more concerned with her skill than her knowledge, much though I also value the latter. Actually I was unconscious at the time, but you know what I mean.)

A race to the bottom

Knowledge on the page or the screen looks so certain, does it not? The first, natural, thing for a learner to do with knowledge on a page seems to be to try to learn it. Teachers, valuing what they know, have a corresponding tendency to teach it. The players having variously taught it and learned it, the next obvious thing is to assess it, to find out if it has been learned. Propositional knowledge consists mainly of – well, propositions. Conceptual / theoretical knowledge similarly consists of concepts and theories. And all of these tend to be taught and learned as stuff. The pathology of this is relatively easy to explain. Learning becomes memorising. Memorised knowledge is relatively easy to assess, however ambitious we are. And the sheer quantity of knowledge out there, sifted through the quality-assuring processes of refereeing and review, is enough to fill and over-fill any course we could design. Obviously, we must teach more. Because there is more to know. This is a kind of race to the bottom, not because knowledge is unimportant, but because, increasingly, it isn’t enough.

Our concerns about originality

Also, I suspect that we are ambivalent about originality. I suggested in earlier posts typologies of originality, from local and (on a separate axis) perhaps not world-changing originality (“I had that idea, though it may well be flawed, and others may well have had it before.”) to both global and world-changing originality (a version of E = mc² in 1905). Thinking about originality may push us to reflect critically on the nature and extent of our own originality, reflection which we may or may not find encouraging.

And anyway originality is hard to assess, is it not? Particularly if we are assessing local originality, where there may be an inverse relationship between knowledge and originality – the less I know, the more locally original ideas I may have.

The normal academic instinct, I think, at this point, is to let knowledge trump originality, to say “You should have known that.” rather than “Well done for having that idea.”

I feel, on balance, that this generally is an unhelpful stance for a teacher to take. Why?

A changing relationship between knowledge and originality

Knowledge is becoming much more readily accessible. The machines have replaced much manual work. They are now replacing more and more brain work, progressively leaving the more difficult and more rewarding work to us. The relationship between Moore’s Law of progress in the power of computers and their ability to do some of the difficult stuff we do (such as, of course, being original) may or may not be linear. But there will be some positive correlation, now and into the future.

But, however this plays out, I’m pretty sure that originality in graduates and academics will continue to become a more and more important and valued ability. Of course our graduates will still know a lot. But their knowledge will increasingly be a side-product of their ability to be critically original, working with and shaping the technology, and accessing and using the knowledge, selectively and critically, when they need it.

Originality, Part Four – Becoming critically original

If you’re starting here, I suggest you scan the three previous posts on originality.

Becoming critically original

Of course, originality needs to become an increasingly critical originality. The particular critiques, and more broadly the critical approaches, will need to be developed by lecturers, by students alone and with their peers, and in conversation between students and lecturers.

The lecturer’s skill lies in getting the nature and weight and progression of their responses to student work right. Not treading on students’ dreams, nor dishonestly flattering under the badge of being sensitive, but rather steadily demonstrating and practising and discussing an increasingly critical and informed approach to work and study, which includes being explicit about the rationale for their critical comments.

Tempting clichés about steel and fires will be resisted. But the students need to test their original ideas against a growing range of the literature, against the established corpus of knowledge and (depending on the subject they are studying) perhaps also against their own experience and data.

This need not be a discouraging experience. The students will come to enjoy and value both the critical and the creative parts of critical originality. They will find the unexpected satisfaction which can follow from laying aside (perhaps with a sigh) an idea they have developed which is not supported by further reading and evidence. And they will find delight in, from time to time, confirming that their new idea has some strength and validity; has some explanatory, even predictive, power; and deserves to be taken further forward. Also, they will learn not to be discouraged when they find someone has got there before them. Local originality is not a failure of global originality. Rather it is a step on the long road that may lead to global originality.

And so on through the career of an academic.

Becoming a professor would not be the only happy ending to this story. Being critically original is a capability and an approach to work that is valued within and well beyond the University.

But if such critical originality is to be a goal of education, as well as an aspiration, we need to take it seriously, to be explicit about it, and to explain and illustrate in our own work what it can mean. We need to give students opportunities to develop their critical originality, and to receive feedback on their attainment of it. Students’ critical originality needs to be developed within the discipline being studied, although students may welcome the chance to apply the approach to other areas.

And we need to assess it in clearly valid ways. Generically, that might involve students undertaking some work that is at least locally original; critiquing the work; and identifying and making a reasoned case for the nature and extent of its originality. This will play out differently in different subjects.

But first we need to be clear what originality and critical originality can mean, within and beyond the subject. As I have attempted to do here. I’d love to know if any of this helps.

Closing comment

The author claims this post to be locally original. He is conscious that he has read about this topic and related topics over the years. He is therefore confident that the post uses ideas previously read and mostly forgotten. He has also chosen to omit ideas that might have been relevant. He makes no claim to global originality. He hopes some of the ideas will be useful – utility is not the same as originality. He feels better for thus having made the status of this post clear.

In the next and possibly for now final post on originality I shall explore relationships between originality and knowledge.

Originality, Part Three – Becoming original

If you’re starting here, I suggest you scan the two previous posts on originality.

Becoming original

So how do we help our students become appropriately original?

The account in the previous post may suggest one way. Teach them more and more content; teach them to engage with the content, to critique and use it. And, perhaps, a few of them will become professors.

Of course, something is missing from this account. Originality does not automatically follow from the accumulation of knowledge, even from persistent active engagement with knowledge. Indeed, accumulation may on a bad day bury a flickering originality under the weight of content. Originality; alongside other academic, disciplinary and professional qualities; also needs to be encouraged and supported and rewarded and valued. From day one.

Helping people to become original

How do we help people become original? From the start of their studies:

  1. We explicitly value originality.
  2. We talk with (not to) our students about what originality means, and why it matters, in the particular disciplines they are studying.
  3. We disentangle local from global originality, perhaps using some of the ideas from earlier posts.
  4. We make originality into a learning outcome for their programmes of study – “Students will be able to go beyond what they have been taught and read, and come up with ideas, suggestions, explanations, possibly even theories and models which are (in the sense used in these blogs) at least locally original.”
  5. Or we make originality one of the criteria against which their work will be assessed.
  6. We encourage students to critique their own work, with reference to, among other qualities, its originality.
  7. We reward originality, with attention and then with marks and grades.
  8. We provide many opportunities and much encouragement for students to develop and demonstrate originality.
  9. Then, throughout the course of their studies, we encourage them along the spectrum from local towards more global originality, in part by teaching them how to engage with the wider literature of the subject, and in part by helping them become more (and justifiably) confident in their originality.

I need to say more about this last point. There is a widespread view of the process of learning. It is rarely made explicit, but it is often clearly visible in the structure of our courses, our teaching, our assessment. This view says that, first, we learn the content. Then, as a later step, we learn to critique it, apply it, be original in it.

I don’t think this view is accurate, for reasons I shall probably return to in another post. But, for now, I’ll say two things about the relationship between engaging with the wider literature of the subject and being and becoming original:

  1. Good course design and good teaching encourage students to see the literature, not as tablets of stone, but as an evolving set of more or less original ideas and understandings, each building on and then going beyond some previous work. Originality should be one lens through which we read, study and make sense of the literature. We can do this by encouraging our students (and ourselves) to analyse the links and relationships between papers, to identify the particular originality of a publication and how it relates to its predecessors. This will help us and our students to see the structure of the discipline or profession – perhaps structure is a bit static; better, perhaps, to see how the discipline or profession moves, develops.
  2. Once this more active approach to the literature becomes habit, our students can seek out and review the literature by asking how it relates to their own recent and perhaps at least locally original work. This inverts the normal relationship between student work and the literature. It puts the student’s work, and in particular the possible originality of the student’s work, centre stage. It asks students to explore how the literature supports or refutes the student’s ideas. This requires a student to take their own work, and their own local originality, seriously. It helps them to act, and thereby to see themselves, as a scholar, as a proto-member of the discipline or profession, rather than as a dependent and naive supplicant.

I said in my first post on originality that I wasn’t going to talk about the quality of the newly originated idea. But quality is obviously important here. So in the next post I’ll give attention to a powerful route to increasing the quality of ideas. I’ll talk about critical originality.
