In my first year of deputy headship, I was given the brief of developing the school’s target setting processes. This was right up my street. I had come from a school which had been graded Outstanding in two successive Ofsted inspections. When I asked the Head and his Deputy what they put the success of the school down to, their answer was unequivocal and without hesitation: “aspirational targets”. I remember nodding in agreement.

The philosophy goes like this: set children a target that they can aspire to. Something so high as to be achievable on their best day, with a following wind, with all the stars aligned, like Harry Potter with a draught of Felix Felicis inside him. Not an unattainable target, but one that they can reach with their fingertips if they stretch high and stand on tiptoes. Even if they don't make that target, they will achieve so much more than they otherwise would. We call them Challenge Grades.

There are many arguments against this philosophy. Firstly, that many students will feel a sense of failure if they don’t make it. Secondly, that staff will be judged by the proportion of students who achieve the targets. Thirdly, that it is impossible to accurately gauge what children are capable of achieving.

My responses are fairly consistent.

Firstly, possibly true. But, if we've got the targets right, they should feel that sense of failure, because they have failed to achieve their potential. And we should feel disappointed too.

Secondly, no. That’s not what they’re for. The targets are there to motivate students. It would be disingenuous of me to say that senior leaders don’t look at the proportion of students achieving them, but at our school they cannot be used as performance management targets and staff have ownership of them (as will become clear).

Thirdly, it’s not impossible, and here’s how we do it…

=VLOOKUP(
  AVERAGE(
    VLOOKUP('Base Data'!DG41,gradepoints,2),
    VLOOKUP('Base Data'!DI41,gradepoints,2),
    VLOOKUP('Base Data'!W41,gradepoints,2),
    VLOOKUP('Base Data'!CL41,abclevels,2)+14,
    VLOOKUP('Base Data'!EM41,abclevels,2)+12,
    VLOOKUP('Base Data'!FE41,abclevels,2)+10,
    VLOOKUP('Base Data'!FF41,abclevels,2)+10,
    VLOOKUP('Base Data'!FG41,abclevels,2)+10,
    VLOOKUP('Base Data'!FD41,rawlevels,2)+10
  )+3,
pointsgrades,2)

I generate all the initial Challenge Grades using variants of this formula. It draws on all the available data:

  • KS2 levels: whilst their reliability is sometimes questionable and their variability a certainty, progress from KS2 to 4 is a quantifiable judgement for schools and individual pupils in Raise Online and therefore an important element of the process. 
  • FFT Predictors: we use FFT B predictions (the most likely grade outcome for students with that profile) and the Grade Above, and split the difference. Fischer helpfully code them orange and green for us! There is a case for using FFT D, I know, but it often seems pie-in-the-sky stuff and I have seen too many schools where pupils can parrot back their FFT D targets when asked but will never, ever achieve them.
  • Teacher Assessment: this is a cornerstone of the formula. I take every teacher assessment I have on record, adding challenge to it at variable rates dependent on the distance from the target destination. By taking the average of every teacher assessment recorded, I hope to smooth out variations in progress.

I look everything up in my Levels and Grades Conversion Table linked to QCA points to find equivalences between national curriculum levels and GCSE grades. (Incidentally, it took me ages to source the data for this, which is madness since all secondary schools are judged on the progress from one to the other.) I add challenge to the relevant source data and average the lot. And then I send them to staff.
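The process above can be sketched in Python. This is a hypothetical illustration only: the grade boundaries, point values and challenge offsets below are invented for the example, not the school's actual conversion table.

```python
# Hypothetical sketch of the Challenge Grade calculation.
# Point values and challenge offsets are illustrative, not
# the actual Levels and Grades Conversion Table.

# QCA-style points for GCSE grades (illustrative subset)
POINTS_TO_GRADE = {40: "E", 46: "D", 52: "C", 58: "B", 64: "A", 70: "A*"}

def points_to_grade(points: float) -> str:
    """Convert a points score back to the grade whose boundary it has reached."""
    best = min(POINTS_TO_GRADE)
    for boundary in sorted(POINTS_TO_GRADE):
        if points >= boundary:
            best = boundary
    return POINTS_TO_GRADE[best]

def challenge_grade(source_points: list[tuple[float, float]],
                    overall_challenge: float = 3) -> str:
    """Add challenge to each (points, offset) source at its own rate,
    average the lot, add an overall uplift, and convert to a grade."""
    uplifted = [points + offset for points, offset in source_points]
    average = sum(uplifted) / len(uplifted)
    return points_to_grade(average + overall_challenge)

# Example: two FFT-derived predictions (no extra offset) and three
# earlier teacher assessments with challenge added at variable rates.
sources = [(46, 0), (52, 0), (34, 14), (36, 12), (40, 10)]
print(challenge_grade(sources))  # prints D
```

The key idea matches the spreadsheet: older assessments get larger challenge offsets (the +14, +12, +10 in the formula), everything is averaged in points space, and only at the end is the result converted back into a grade.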

Here’s where the important bit happens. Staff who have taught those students all year have just short of a month to review the formula-generated Challenge Grades and tell me if I’ve got them wrong. Are they too high? If this is the case – if they are simply unachievable – they will be demotivating. Are they too low? What would you suggest? Any revision suggested is accepted – no questions asked. That teacher has taught that child all year, assessed their work, watched their progress. Who am I to question their judgement when all I have is a spreadsheet?

Finally, the challenge grades are issued to students. They get a key stage three challenge level at the end of Year 7, a key stage four challenge grade at the start of Year 10 and a sixth form challenge grade at the start of Year 12 (we use ALPS here). At every monitoring point, teachers give a current working level and a red, amber or green traffic-light judgement for “on target”. Green indicates all is well, and they are on track to achieve their challenge. Amber indicates some concerns. Red is a warning of serious underachievement.
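The traffic-light judgement can be sketched as a simple comparison of a current working level against the challenge grade, both in points. The two-point tolerance here is illustrative; in practice the judgement is the teacher's, not a formula's.

```python
# Simplified sketch of the red/amber/green "on target" judgement.
# The tolerance is an invented value for illustration.

def rag_status(current_points: float, challenge_points: float,
               tolerance: float = 2) -> str:
    """Traffic-light a student's current working level against
    their challenge grade, both expressed as points."""
    gap = challenge_points - current_points
    if gap <= 0:
        return "green"   # on track to achieve the challenge
    if gap <= tolerance:
        return "amber"   # some concerns
    return "red"         # warning of serious underachievement

print(rag_status(50, 52))  # prints amber
```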

If we get them right, no child should ever exceed their challenge grade. If they do, the target was too low. We accept that some children will fall short. There may be health problems, emotional crises, behavioural problems, all manner of circumstances which prevent children from achieving their potential. But there will be a story there, and we must ask ourselves “what could we have done differently to prevent that underachievement?” And then we must act on the answers.


9 thoughts on “Targets”


  2. Hi Chris,

    Love the Levels and Grades conversion table. But I'm just trying to work out if you can use the points scores as like-for-like comparisons?

    Are the points score values of KS2 Levels comparable to GCSE points scores?

    Looking at the table it now becomes apparent to me why I need to use a ‘-5’ in my calculations of levels progress. It ‘fixes’ the offset from one grade system to another to ensure that a Level 5 always = a Grade E and so on.

    Hope that makes sense?


    • It makes perfect sense, Dan. The conversion table is a few years old and I use it as a “rule of thumb” guide. It was based on DCSF materials at the time, so it has statistical provenance as a like-for-like comparison, though not in the current regime, and with the health warning that NC levels are flawed and not comparable across subjects. Still handy, and a good guide, but not necessarily “the answer”.


  4. Hi Chris

    Your level conversion table has 27 points at KS2 being equivalent to a level 4c. I understand it to be equivalent to 4b. Am I wrong?

    Lee Hill

    • No, you’re right – I’ve been meaning to update this post for a while! We use the correct conversions now but I haven’t updated the blog. As it happened it wasn’t a problem as the relative distances were still the same, but an error nonetheless. Thanks for spotting it!

  5. Your well-praised conversion table: any more information on what exactly it might be? Are the KS3 NC levels (with non-existent-and-now-dead fine-grade horizontal chopping) for entry to the key stage, or on a like-for-like basis?

    It’s good to know the leaders are leading. (We will be measuring…)


