Why is the government planning to impose written tests on seven-year-olds? Firstly, the government wanted to test four-year-olds but has been outflanked by schools. Secondly, it is creating the impression that it is possible to test young children in a way which is unbiased. Lastly, this is all underpinned by flawed hypotheses about schools, teaching and children.
The government has introduced a Reception Baseline Assessment (RBA), which schools must adopt from September 2016. Following a typically convoluted - and very political - development process, three providers have been approved. Two of the RBAs are test based, and the third – from Early Excellence – is not. Schools have, overwhelmingly, decided not to test four-year-olds and have opted for the Early Excellence teacher assessment option.
Why is this a problem for the government? Well, the RBA was introduced because politicians have issues with teacher-assessed Key Stage 1 (KS1) reporting of reading, writing and maths achievement. Schools are accused, for example, of depressing KS1 results so that the resulting measure of children’s progress across KS2 is flattering rather than accurate. Additionally, the government felt that schools were not being held rigorously accountable for their KS1 children’s progress, and that a measure of progress from age 4 to 11 was required.
Politicians have, for a number of reasons, bought into the fallacy that it is possible to test young children in a way which is unbiased and accurate. Now, on one level, they know that this isn’t possible. Commentators like Daisy Christodoulou have pointed out that we have good research showing that teachers, being human, are not immune to the biases all humans have when they try to make judgements about children’s progress and attainment, particularly that of disadvantaged groups. So it follows that teacher assessments are biased and therefore wrong.
And as Daisy has pointed out, tests are fair inasmuch as ‘every pupil is treated the same, they take the same questions in the same conditions at the same time, and it’s hard or even impossible to get special treatment.’ The key question is, does this make tests unbiased and accurate in the way politicians want them to be?
Well, no. It would seem obvious that tests are biased towards the more able*. What is not obvious, it seems, is that this is true regardless of who teaches them. And whilst many people are uncomfortable with what we know about ability, it is not equally distributed. Overall, smart children do well in test situations, and the less able do less well. The inconvenient truth that this is primarily a function of the child, not their teachers or schools, is simply being ignored in this debate.
In addition, test accuracy is a minefield for an almost endless number of reasons. This is true of tests for older children, and if anything, it’s worse for younger children grouped into annualised cohorts which do not account for age within cohort. Some seven-year-olds have enormous advantages over their classmates purely because of their birthdates. And test scores are extremely fuzzy at best.
These two problems mean that assessments of school children abound with both false positives and false negatives. Some children, teachers and schools look great, and some don’t, because the tests favour those who are good at learning and at taking tests, and because the scores themselves are fuzzy and inaccurate.
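The false positive/false negative point can be illustrated with a toy simulation. Note that the threshold, ability spread and noise level below are purely illustrative assumptions for the sketch, not figures from any real assessment: each child has an underlying 'true' level, the test observes it with random error, and children near a pass mark are routinely classified on the wrong side of it.

```python
import random

random.seed(42)

THRESHOLD = 100   # illustrative pass mark (assumption)
ABILITY_SD = 15   # illustrative spread of underlying ability (assumption)
NOISE_SD = 10     # illustrative measurement error, i.e. test fuzziness (assumption)

false_pos = false_neg = 0
N = 10_000

for _ in range(N):
    true_level = random.gauss(100, ABILITY_SD)            # the child's underlying level
    observed = true_level + random.gauss(0, NOISE_SD)     # the fuzzy test score
    if true_level < THRESHOLD and observed >= THRESHOLD:
        false_pos += 1   # looks like a pass, but isn't
    elif true_level >= THRESHOLD and observed < THRESHOLD:
        false_neg += 1   # looks like a failure, but isn't

print(f"false positives: {false_pos}, false negatives: {false_neg} (of {N})")
```

With even modest noise, a substantial minority of children land on the wrong side of the threshold in both directions, and neither error says anything about their teachers.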
And as I’ve argued before, much of this drive to use biased, noisy, inaccurate assessments and tests to identify ‘failing’ schools, teachers and pupils is built on a narrative which is fundamentally flawed. The narrative ignores the fact that ‘we differ at a psychological level in our capacity to learn, and our capability to learn to learn.’ It assumes that Teacher Input leads directly to Pupil Output, which is simply not true.
It also assumes what US teacher Eric Kalenze calls the Lazy Bum Hypothesis - that we need to ‘beat lazy teachers into working harder, talentless teachers into becoming more effective and autopilot teachers into retirement.’ This false hypothesis holds that some children don’t make progress because their teachers are simply too lazy to make them do so, which is clearly not the case.
Even more depressingly, pro-‘accountability’ zealots will claim that those opposed to high stakes assessment are ‘anti-accountability’. But this too is simply a misunderstanding. Opposition to high stakes test culture should rest on carefully explaining the flawed assumptions made about biased, fuzzy and inaccurate tests. For many reasons – chief among them that most test score variation comes from the children themselves – these tests throw up misleading ‘trends’ in aggregated data which are, in reality, simply a reflection of what we know to be true about children’s differing capability and capacity to learn.
As I’ve said before, learning is done by the child, and not by the teacher. Assessing children, especially young children – however you do it – is always biased, fuzzy and inaccurate. Tests are obviously biased. Reintroducing written tests for seven-year-olds is a retrograde step which will not help to improve schooling in any way, and will simply keep schools focused on incorrectly identified problems which simply do not exist in reality.
*Throughout this post, I've used the word 'bias' in the sense of 'inclination or prejudice for or against one person or group'. So when I say tests are biased, I mean specifically that they are biased towards those who have a greater capacity to learn and capability to learn to learn, and who thus do better in test situations. Now, in the field of psychometrics, a test is said to be biased if its 'test design, or the way results are interpreted and used, systematically disadvantages certain groups of students over others.' This typical definition doesn't acknowledge that children who have a greater capacity to learn and capability to learn to learn are a group just like any other. I suggest that it should, and that tests are obviously biased towards these children.
The cartoon below is a tongue-in-cheek summary of the way in which accountability has changed over the last 45 years or so (thanks to Joining the Debate for the link).