Playing Mystic Meg has made the FFT a household name, at least in the homes of countless teachers and senior managers who have been force-fed its dubious rubbish at taxpayers’ expense. Peddling stories of the past and tales of the future, conjuring up ‘estimates’ and foisting target culture onto an unsuspecting educational world has cost bucketloads of cash and wasted huge amounts of teachers’ time and effort.
It has to be said that the ‘estimates’ crunched by the FFT are so loose, so woolly and, even according to the FFT itself, so hedged with caveats the size of Belgium that they are worse than useless. They give the impression of foretelling the future much as any sideshow charlatan might. Worse still, this rubbish is paid for by you and me, at an estimated cost of £15 million over the last 13 years, and is another substantial cog in the money-extracting Data Driven Disaster machine leeching English education.
Take some data and construct Castle Doom
The FFT currently runs an entity called FFTLive, a cartoonishly colourful website.
There is a fair bit of info on the FFT website about its history and magic, which I suggest you read for yourself. The highlights are briefly:
2001: FFT Founded by Mike Fischer of RM Plc and Mike Treadaway, ICT Advisor
2004: DFE awards National Pupil Database contract to FFT
2005: FFTLive launched
2006: RMFFT win contract to manage NPD and Performance Tables
2013: FFT launch Governor Dashboard
2014: Due to launch FFT Aspire in Autumn
If you’d like to have a look at what FFTLive looks like for a school, you can log in using either of the following usernames: 9992004X (Primary) or 9994002X (Secondary), with the password ANON. (I found these here and here, by the way, in case you’re interested).
There is far too much stuff available on the FFTLive website for me to cover in any depth. Feel free to poke around yourself to see quite how much has been wrung out of the data. There are various guides which you can download (often called ‘Quick Start Guides’ accessed through ‘Help’ buttons), which are worth reading, although they don’t tell you anything at all about the methodology behind the data crunching.
Here are some highlights before we get to estimates and target setting, the bit of FFT magic at which every teacher, parent and politician should take a very, very close look.
Dashboards You can find the four-page Governor Dashboard here, along with enormously data-intense ‘self-evaluation booklets’, which have an extraordinary 26 pages at KS1, 32 pages at KS2 and 16 pages at KS4 of stuff to plough through.
Explore This has magic such as ‘opportunities and alerts indicators’ and ‘turbulence and context factors’ for which no methodology is given. I assume that we are simply supposed to accept the ‘analysis’ at face value, which I’m fairly sure we shouldn’t.
Interactive reports Here you get into the murky world of ‘Reviewing Past Progress’ and ‘Supporting Target Setting (Estimates)’. ‘Reviewing Past Progress’ borrows the idea of ‘Value Added’ from economics, and, like many Data Disaster proponents, the FFT makes the highly disputed assumption that you can isolate a ‘teacher effect’ or ‘school effect’ from a ‘pupil effect’.
I’ve shown before that most people in schools don’t have the knowledge, skills or understanding to question this assumption, which is entirely unjustified and makes Value Added Not Even Wrong. Suffice to say that it simply makes no sense to assume that a child’s educational development is 100% school and teacher and nothing else, much less to model an individual child's future performance based on the performance of entirely different children in the past, but that’s what happens here.
It’s worth noting at this point that the FFT does two very separate things within FFTLive:
- Assess the past
- Predict the future
The methodology for both of these is highly suspect, and almost entirely opaque. I can make educated guesses about what RMFFT does in each area, but they haven’t made it easy to find out exactly what they do to data. Before looking at these two different but related aims of FFTLive, here are the final things to look at:
Innovate New ideas for crunching data by ‘Reviewing Past Progress’ and ‘Supporting Target Setting (Estimates)’ similar to the current Interactive reports. This shows that the FFT has started to think beyond some of the issues I’ll highlight below, and that they are desperately trying to keep their teeth around the government’s DDD jugular.
You can also export the data to perform more daft analysis yourself or have consultants charge you to ensure that you are a ‘Data confident school’, and the information section tells you a few things before it tries to sell you training to become an Operating Data Thetan and explain that we are all actually ruled by lizards (this may not be true).
So, there’s a lot here, but you don’t get to charge the government a lot of money for nothing, even if what you have produced has no value. And speaking of no value, let’s have a look at the Big Daddy of the FFT: reporting the past and guessing the future.
Looking back with FFTLive
All schools have to justify themselves to OFSTED when the inspectors come to call. These days, data is just about everything when being judged, and the FFT has been at the vanguard of the Data Driven Disaster. It has pushed a ‘Value Added’ model since its inception in 2001, and now all schools are expected to be solely responsible for the academic development of their pupils, as if children existed in suspended animation for the 80% of their waking hours they aren’t in school each weekday.
Value Added is, in essence, a (deeply flawed) measure of how much a school has added to a child’s academic development. It’s far from clear how all the FFT’s Value Added alchemy works. There is an indication of the thinking of the FFT in some of the data which is crunched in FFTLive, however.
In reading at KS2, for example, some children have ‘Actual Levels’ of 5.1, 5.3 and 5.7, which may be 5C, 5B and 5A; but then some children have 4.2, 4.3, 4.4, 4.7 and 4.9, which can’t all correspond to 4C, 4B and 4A. Some ‘Actual Levels’ are coded in blue, which is apparently ‘lower than estimate by half a level or more’. Some are green for ‘higher than estimate by half a level or more’.
So what are these estimates which these ‘Actual Levels’ are measured against? Well, in order to calculate how much ‘value’ a school had ‘added’, the FFT required an estimate of a given pupil’s future test results. This had to be a single number, which could then be compared to what a pupil actually got in the tests at Key Stage 2, 4 or 5.
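To make the arithmetic concrete, here is a minimal sketch of a crude value-added calculation of the kind described. The pupil names, estimates and results are invented, and the simple mean-of-residuals formula is my own assumption, not the FFT’s unpublished methodology:

```python
# Hypothetical sketch of a crude 'Value Added' score: the mean gap between
# what pupils actually got and what a model estimated for them.
# All numbers invented; NOT the FFT's actual (unpublished) method.

pupils = [
    {"name": "A", "estimated": 4.6, "actual": 5.1},
    {"name": "B", "estimated": 4.2, "actual": 4.3},
    {"name": "C", "estimated": 3.9, "actual": 4.4},
]

residuals = [p["actual"] - p["estimated"] for p in pupils]
value_added = sum(residuals) / len(residuals)  # mean residual, in 'levels'
```

A positive number gets read as the school ‘adding value’; the point being made here is that this attributes the whole residual to the school, when the pupil, the home and plain chance are all mixed into it.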
As far as I can guess, and based on the way in which RMFFT create estimate models for the OSDD, RAISEonline and Performance Tables, data for previous students is crunched using regression analysis to produce a model with fixed coefficients: a straight line of best fit. Deep breath, non-mathematicians. It’s not so bad, really.
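By way of illustration only, a line-of-best-fit model of this kind can be sketched in a few lines. The KS1 and KS2 numbers below are invented, tuned so that the model spits out a single-number ‘estimate’ of 4.6 for a new child; it is a guess at the general shape of the method, not the FFT’s actual model:

```python
# Hypothetical sketch: ordinary least-squares regression of KS2 outcome on
# KS1 prior attainment, fitted to *other* children's past results, then
# used to produce a single-number 'estimate' for a new child.
# All data invented for illustration.

xs = [1.5, 2.0, 2.5, 3.0, 3.5]   # KS1 average point levels (prior cohort)
ys = [3.8, 4.1, 4.7, 5.0, 5.4]   # KS2 levels those children actually got

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# A new child with a KS1 level of 2.5 gets a single-number 'estimate':
estimate = intercept + slope * 2.5   # 4.6 on this invented data
```

Note that the fitted line says nothing at all about how far individual children scatter around it, which is the whole problem with quoting the estimate on its own.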
Showing an ‘Estimated level’ of 4.6 was Not Even Wrong, because the student could get literally anything within a wide range of levels and not surprise anyone with a vague idea of how grouped data works. To Mike Treadaway’s credit, he acknowledges this. But then he goes on to use it anyway to assess how well a school has ‘added value’ to children. I’ve demolished the whole ‘estimates’ nonsense before here, but that doesn’t make this any less irritating or wrong.
Predicting the future, or not
Most people probably know the FFT for its futurology, which we’ll look at next. The ‘Supporting Target Setting (Estimates)’ is the FFT data most teachers are presented with when setting targets with their senior management teams.
Until 2009, teachers were given lists of ‘Estimated levels’ a child might get in their Key Stage 2 SATs, as used in the Value Added models above.
Currently, primary schools get FFT Estimates in much the same format.
Either way, this stuff shows you two important things:
In Primary, the ‘Estimated Levels’ tell you nothing.
In Secondary, the ‘Estimated Levels’ tell you nothing.
In case it isn’t obvious why this is the case, I’ll repeat: A student could get literally anything between the lowest and highest level available and not surprise anyone with a vague idea of how grouped data works. You might get a B, then again you might not. You might get level 4, then again you might not. The estimate tells you nothing which you, as a child’s teacher or parent, couldn’t work out for yourself.
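To make the point concrete, here is a toy calculation (invented numbers, not FFT data) of how wide the spread around a single ‘estimate’ can be. Assuming roughly normal residuals, a band of two standard deviations either side of the estimate is a common rule of thumb for where most pupils will actually land:

```python
import statistics

# Invented KS2 results for past pupils who all shared the same 'estimate'
# of 4.6. The spread of what they actually got is the real information,
# and a single point estimate throws it away.
actuals = [3.9, 4.2, 4.6, 4.8, 5.1, 5.3]

spread = statistics.stdev(actuals)            # ~0.53 of a level
low, high = 4.6 - 2 * spread, 4.6 + 2 * spread
print(f"{low:.1f} to {high:.1f}")             # 3.5 to 5.7 on this data
```

A band running from a low level 3 to a high level 5 covers almost every plausible outcome, which is exactly why the single number tells you nothing you didn’t already know.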
There are umpteen other things wrong with this model, but here are a few to start with:
- What data is used to produce the regression models for the estimates? All of it? Complete data points only? Partially complete data?
- Is the data in the model, and therefore each estimate, changed each year that a child is in a key stage? If not, why not? If it is, what does it suggest?
- What exactly is the methodology used to produce this magic?
Examining the Educational Tea Leaves
Once again, it’s hard to know where to start. So much energy has gone into this stuff - and at least £15 million over the years by my reckoning - and it doesn’t tell you anything whatsoever that someone working in a school couldn’t tell you given the opportunity. The ‘Value Added’ fiction is just that - the models are so deeply flawed as to be meaningless. The ‘Estimates’ are so woolly that they add little to the professional judgement of the staff on the ground.
I haven't even gone into the vagaries of FFTA, FFTB and FFTD, as you can find information about them elsewhere. I can't find any criticism of the kind I've made here about the fundamental error of using grouped data analysis to predict individual outcomes, which is why I've written about this here. I hope that this article provokes the debate as to whether using data in the way RMFFT does has any justification, and I'd like to hear your thoughts in the comments below.
Thirteen years of FFT analysis has shown that trying to summarise every diverse school community in England is witchcraft of the highest order and, at individual child level, little better than examining patterns in tea leaves. The cost, both financial and in the diminished education of children caused by a narrow focus on badly assessed levels, is simply not worth paying. Examining tea leaves is ultimately pointless, because they tell you nothing you couldn’t have worked out for yourself. And in this case, having looked closely at the tea leaves, we need to stop throwing our money away on yet more worthless data driven nonsense and completely rethink the way we assess ‘achievement’ and ‘progress’ in English schools.