Assessment, the beginning or the end?

I’ve been reading a lot about assessment recently, particularly formative assessment, and it has made me reconsider some of the focus of all the curriculum work since Curriculum became a thing. I have been, and still am, a convinced proponent of a knowledge rich curriculum. I believe that we should be teaching students powerful knowledge which can make their worlds richer, open doors, and enable them to take part in those ‘great conversations’. But I’m beginning to wonder whether, in all our curriculum planning, we’ve focused so much on what we want to teach students, and how we should teach them, that we’ve forgotten to think enough about how we will know whether they are learning.

This is not surprising. It’s relatively straightforward to make a list of what we think students should know; it is much harder to ascertain whether they have understood it, let alone learned it. Learning and thinking happen in the invisible worlds of our students’ minds. We cannot see what they are thinking, or know what learning has taken place. Whenever we ask a question or engage in discussion with a student, set them an essay to write, some questions to answer, or a test to complete, it’s important to remember that we are observing a performance. Whether or not we can make valid inferences about learning from this depends on the design of the task, the conditions in which it was completed, when it was completed, how tired the student is, what other things were vying for their attention at the time, and myriad other factors. We cannot assume that what we have taught has been learned. Nor can we assume that because a student can ‘perform’ (i.e. do what we have asked of them) they have really learned.

For an in-depth discussion of this distinction between learning and performance, read Soderstrom and Bjork’s (2015) review, in which they summarise learning and performance as follows:

“The primary goal of instruction should be to facilitate long-term learning—that is, to create relatively permanent changes in comprehension, understanding, and skills of the types that will support long-term retention and transfer. During the instruction or training process, however, what we can observe and measure is performance, which is often an unreliable index of whether the relatively long-term changes that constitute learning have taken place.”

Although we cannot observe learning, cognitive science has provided us with models which help us to understand how students learn, and can support us in facilitating learning. Willingham’s simple model of memory is a well known example of this, and is integral to some of the cognitive principles of learning which he sets out in ‘Why don’t students like school?’.

Willingham’s Simple Model of Memory (From Why don’t students like school?)
  • Cognitive Principle: Memory is the residue of thought

We know our students need to think about things in order to learn them, so we plan tasks which will make them think hard and help them to strengthen the knowledge they already have, connect new knowledge with old, and build links between different areas of knowledge.

  • Cognitive Principle: Factual knowledge precedes skill

We know that knowledge is important and invest a lot of time in retrieval practice (which often focuses on factual knowledge) to strengthen long term memory, aware that this will help to reduce cognitive load as we teach more and more complex ideas. Often this knowledge is set out in a knowledge organiser, a booklet, or core questions and answers which specify what students need to know. 

  • Cognitive Principle: We understand new things in the context of things we already know

The process of learning involves activating existing schemas, and integrating new knowledge into them, or building new links within them. We take this into account when teaching, through processes such as recapping relevant prior learning before teaching a new related concept, and planning learning sequences so that knowledge builds on what has come before.

  • Cognitive Principle: Proficiency requires practice

We know that practice is important so we include plenty of opportunities for students to do this, both when new concepts are introduced, and later on, as we revisit them in different contexts.

I wonder whether, in the focus on setting out knowledge and doing lots of retrieval practice and practice questions, we have to some extent lost clarity about what we really want students to know and be able to do. What do I mean? Have we been valuing performance so much – looking for ‘correctness’ in retrieval practice or in the answer to a question – that we have forgotten to really probe what students are thinking, and whether they have truly understood what we are teaching?

You might think that ‘correctness’ is what we want. Surely correct answers are evidence of learning? But this may well not be the case. Remember, any time a student answers a question, or completes a piece of work, we are looking at their performance. If we are asking them to answer some questions about what we’ve just been teaching them, then this cannot be an assessment of their learning (learning is a relatively permanent change in long-term memory, impossible to achieve in the course of a lesson); at best it can reveal whether they have understood something or not. But even that is not straightforward.

For example, I might be teaching a lesson about states of matter. I question students following my explanation, using mini whiteboards to get responses from across the whole class, and address any errors at that point. I then get students to do some practice questions in their books, and before they leave the lesson I use an exit ticket to see what they are able to do. None of these can determine whether learning has taken place. I’m not arguing that these things don’t have value, they are vital facets of the instructional and learning processes, and they would almost certainly yield valuable information if we had time to check all the work that students have completed, every lesson. But they’re a hopeless means of assessing learning, and can be an inefficient way of checking for understanding, if all they tell us is that a student can or cannot do something correctly at that moment.

The vital questions we have to be asking ourselves over the course of a lesson are: “What are my students thinking?”, “Have they understood?”, and “If not, why not?”. After all, if they are thinking incorrectly, or have a poor understanding today, it’s very unlikely that they will have learned whatever we’re teaching in a month or a year’s time, unless we know what they’re thinking and can help them to correct it. We can be seeking answers to these questions in every instructional interaction we have with students, but developing and using diagnostic questions focused on what we want students to know and be able to do, and planning how we will use the information we get from them, will give us a much better handle on what’s actually going on in their minds.

Millar’s (2016) paper really got me thinking about this. It sets out a sequence for planning a curriculum in science (as a science specialist, I can’t comment on how this would transfer to other subjects):

From Millar (2016)

Thinking about curriculum development that I have done and read about, there has been so much excellent work done on the first four stages of this, but I wonder whether the fifth step may have been somewhat neglected. In my own department’s curriculum work I can identify what we have put in place with regard to stages 1-4:

Stage 1: A Curriculum Map – Main topics, and the concepts within them, are identified.
Stage 2: Narrative – The narrative is set out through the booklets which we use for teaching.
Stage 3: Learning Intentions – Also specified in the booklets, and in core questions and answers for each topic.
Stage 4: Evidence of Learning Statements – We don’t have a formal list of these, but they are set out through the questions, tasks and activities in the booklets.

Stage 5 is more elusive, but the arrows on the right of the diagram illustrate how powerful this stage is in clarifying what has come before. For example, it’s one thing to know that ‘students should be able to calculate resultant forces from force diagrams’ (a learning intention), or even to have a set of questions which we expect students to be able to answer (which to some extent clarifies the learning intention – e.g. are students expected to be able to calculate resultant forces in a line, or at a range of angles?). But how much more powerful it would be to have a question or two ready to be used within or at the end of a lesson, focused on what we want students to be able to do, which gave the teacher really clear information about what students were thinking about resultant forces. A question targeting common misconceptions would allow the teacher to rapidly make decisions about next steps in response to students’ answers.
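To make the resultant forces example concrete (a minimal worked illustration of my own, not taken from any curriculum materials): for two forces acting on an object along the same line, the magnitudes add when the forces act in the same direction and subtract when they act in opposite directions.

```latex
% Two forces acting along the same line:
\[
F_{\text{resultant}} = F_1 + F_2 \quad \text{(same direction)}, \qquad
F_{\text{resultant}} = \lvert F_1 - F_2 \rvert \quad \text{(opposite directions)}.
\]
```

So forces of, say, 10 N and 8 N acting in opposite directions give a resultant of only 2 N: bigger individual forces do not always produce a bigger resultant, which is exactly the kind of misconception a well-designed diagnostic question can surface.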

For example (taken from Best Evidence Science Teaching resources):

This question could be used as an exit ticket and would allow the teacher to rapidly check whether students have answered correctly, to probe their understanding of when to add or subtract the forces, and to see whether students hold the misconception that bigger forces always result in a bigger resultant force. In addition to making the learning intentions clear and checking students’ understanding, this type of question is also really helpful for novice teachers, developing a better understanding of common patterns of incorrect thinking which they might encounter, and preparing them for how they might address these in the classroom.

Do we need to integrate assessment more fully into our understanding and development of a curriculum? Curriculum and pedagogy seem to have become very closely linked (through the rise of the knowledge rich curriculum and the influence of cognitive science on classroom practice), with assessment often coming as an afterthought, and too frequently distorted and made unhelpful by a focus on exam questions (see a previous blog for a discussion of this). Should assessment primarily be the end, the destination, of our curriculum plans (as high stakes exams so often are), or a starting point, to show us where we need to go next? Should our curriculum plans include more questions like the one about forces above? Questions which exemplify how we will check understanding, and set out the common misconceptions which might arise, so that we can plan instruction accordingly. Questions which will open a window into the thinking of our students and help us to move them forward. None of this is new: diagnostic questions have been around for years and there are banks of them available to use (at least in science). Rather, it’s a shift in focus during teaching from ‘correctness’ (can they do this?) to ‘making thinking visible’ (have they really understood this?), and in the emphasis of our planning: what exactly do students need to know, be able to do, and understand, and how can this be assessed in a way which facilitates responsive teaching?

We still need curricula in which the knowledge is “specified in detail”, “taught to be remembered”, and “sequenced and mapped deliberately and coherently” (Sherrington, 2018) but we also need to know how we’re going to check that students have truly understood so that we’re always building on firm foundations, and making this a priority will also clarify the knowledge which was our starting point.

References

Best Evidence Science Teaching resources: https://www.stem.org.uk/best-evidence-science-teaching

Millar, R. (2016). Using assessment to drive the development of teaching-learning sequences.
In J. Lavonen, K. Juuti, J. Lampiselkä, A. Uitto & K. Hahl (Eds.), Electronic Proceedings of the ESERA 2015 Conference. Science education research: Engaging learners for a sustainable future, Part 11 (co-ed. J. Dolin & P. Kind) (pp. 1631–1642). Helsinki, Finland: University of Helsinki. Available to download from: http://www.esera.org/publications/esera-conference-proceedings/science-education-research-esera-2015/

Sherrington, T. (2018). What is a knowledge rich curriculum? https://teacherhead.com/2018/06/06/what-is-a-knowledge-rich-curriculum-principle-and-practice/

Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science, 10(2), 176–199.

Willingham, D. T. (2009). Why don’t students like school? Jossey-Bass.
