Last month on this blog, I wrote a guest post about comparative judgement (CJ) – a method of assessment in which pieces of student work are compared with each other rather than being evaluated in isolation. I argued there that CJ has several advantages over more widespread forms of assessment, such as rubric-based marking: it is generally less time-consuming to perform, it reduces the time and costs associated with training raters to use a rubric, and it offers a broader, more inclusive approach to construct definition.
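For readers wondering how a pile of paired decisions turns into scores: CJ implementations typically fit a statistical model – most often Bradley–Terry or a Thurstone-style model – to the judges' binary decisions, placing every script on a common scale. The sketch below is a minimal, illustrative Bradley–Terry fit in Python; the script names and judgement data are invented, and this is not the algorithm of any particular CJ platform, just the general idea under those assumptions.

```python
from collections import defaultdict

def bradley_terry(judgements, n_iter=200):
    """Estimate a relative 'quality' score for each script from pairwise
    judgements, using simple Bradley-Terry (MM) updates.

    judgements: list of (winner, loser) pairs, e.g. ("script_A", "script_B")
    returns: dict mapping each script id to a strength estimate (sums to 1).
    """
    wins = defaultdict(int)          # number of comparisons won by each script
    pair_counts = defaultdict(int)   # how often each pair of scripts was compared
    items = set()
    for winner, loser in judgements:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        items.update((winner, loser))

    strength = {i: 1.0 for i in items}
    for _ in range(n_iter):
        new = {}
        for i in items:
            # MM update: strength_i = wins_i / sum_j n_ij / (strength_i + strength_j)
            denom = sum(
                pair_counts[frozenset((i, j))] / (strength[i] + strength[j])
                for j in items
                if j != i and frozenset((i, j)) in pair_counts
            )
            new[i] = wins[i] / denom if denom > 0 else strength[i]
        total = sum(new.values())
        strength = {i: v / total for i, v in new.items()}
    return strength

# Invented judgements: each tuple is (preferred script, other script).
example = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "A"), ("A", "B")]
print(bradley_terry(example))
```

In practice, dedicated CJ tools do this model fitting for the judges, along with selecting which pairs to present and reporting reliability estimates, so raters only ever see the "which is better?" decision.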
In this second post, I look at research that explores the suitability of comparative judgement for the assessment of second-language writing. But before I get into this research, it’s worth briefly asking why it is necessary to put forward novel methods for L2 assessment in the first place.