Assessment

I write at the end of a long and very interesting professional development day. One of the HS focuses for the day was Assessment – in particular, how we determine student academic progress. Having just awarded effort and/or attainment grades to all students, you might think this would be a simple business – we could just look at the data. But like most things, there is more to assessment than meets the eye – it is actually a subtle business. Looking at the data can raise as many questions as it answers.

So we have the initial question: how do we decide if a student gets a 5 or a 6? Or if their effort is Good or Satisfactory? We have rubrics, as you know, that explain what the various categories mean in some detail, but of course it is not always easy to know quite how the descriptors apply to any particular student in any particular class. The descriptor for 6, for example, describes “a consistent and thorough understanding of the required knowledge and skills and the ability to apply them in a wide variety of situations”. All well and good, but the devil is in the details, as is so often the case – how can we assess a ‘thorough understanding’? What does that mean for a grade 10 student in terms of algebra, for example? And how wide is a ‘wide variety’ of situations? Three vaguely similar situations? Or five almost unrecognisably different ones? How would we assess a partial understanding of a complex situation against a full understanding of a simple one?

It’s not that these questions cannot be answered; they can. We have examples, syllabuses, and past papers; we are working towards articulated standards and benchmarks; and we can use task-specific rubrics to provide some clarity. But underlying each layer of clarification is the inescapable judgement of the teacher: professional, informed by a variety of evidence, but ultimately a judgement, not an objective fact. And in one sense that’s obvious – because what all sophisticated educators are trying to do, all the time, is to get inside the students’ heads, to see what they understand. We are trying to see how students perceive a topic, how they do or do not grasp the complexities, and how likely they are to be able to use their knowledge well. Of course, we can gather a great deal of evidence, but the evidence is just a proxy for what we are really trying to report on – a student’s current level of thinking capacity.

Once we have made our judgements about our grades for a class, and have perhaps done some cross-class standardisation (not straightforward in itself), we then have some further questions. How do we compare between subjects? How do we compare ‘excellent’ in art with ‘excellent’ in physics? It’s not even obvious what that question really means, let alone how to answer it, as it calls into question the subject-specific meaning of the term ‘excellent’ in art and in science – and that’s two big philosophical debates before we even start to compare!

And of course as we struggle with all this, we are always mindful of the balance between supporting student understanding and assessing student understanding. These may not always be completely divorced, but the wag who observed that weighing the pig does nothing to fatten it had a point.  If we are too precise, too prescriptive in what we are trying to do with assessment, then there will be a cost to the students in terms of allowing creativity, flair, and the ability to take a task in a direction unforeseen by the teacher.  So we need to allow a few loopholes, a few avenues for uncertainty, in the full knowledge that this makes fully reliable assessment even harder to achieve.

So I write all this to let you know just how much thought and attention goes into student assessment. It’s not a straightforward matter; it is both intellectually and organisationally complex. I think my colleagues do a remarkable job in taking all the objective evidence into account, and then also looking at your son or daughter and tempering that evidence with a professional judgement that cannot easily be captured by rubrics. It’s hard to imagine how we might make a better judgement. And when we can follow up these assessments with face-to-face conversations – as teachers and students do in class, and teachers and parents do at parents’ evenings (we assume you are covering the third side of that triangle!) – the assessments are powerful tools for improvement.
