Working with data – the narrative in the numbers

[Image: I covet @Ashley_Loynton’s mug]

It is a constant source of amusement and bafflement to my colleagues how much I love working with data. As an English specialist I’m not supposed to enjoy the spreadsheets and pie charts quite as much as I do. However, it’s my firm belief that data helps me to be a better teacher, and that my work with data as a member of SLT helps my school become a better school.

The principles of working with data

When I first started blogging, one of the first posts I read was Kev Bartle’s excellent ‘Doing’ Data as a member of SLT, in which he lays out his principles for the use of data. If you haven’t read it – read it now, then come back. Finished? Good.

I don’t disagree with any of what Kev says in his post, as I’m sure you will see. But here are my own principles:

  1. Less is more – but don’t hide the rest
  2. Use hard and soft data equally
  3. Look for the narrative in the numbers
  4. Make the data work for you – don’t work for the data

Working with data as a classroom teacher

When I’m in the classroom I collect and use data that is relevant to me. Primarily, this is assessment data from my own subject. For each “proper” piece of assessed work completed, it’s essential that I keep track of each student’s attainment against the assessment objectives, which I do in a spreadsheet. This helps me plan future lessons and target my teaching effectively. If I can see that particular students need more help with a particular part of the assessment, I will try to meet that need. Essentially I’m looking for answers in the assessments: what have I missed? What didn’t I explain well enough? What didn’t stick? What should I focus on next? Nobody really looks at this data except me, but it makes my lessons better.
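To give a concrete flavour – this is a minimal sketch rather than my actual tracker, and every cell reference in it is hypothetical – suppose one row per student, marks against a single assessment objective in B2:B31, and a threshold mark in B33:

=AVERAGE(B2:B31)

=COUNTIF(B2:B31,"<"&$B$33)

The first formula gives the class average for that objective, so I can see at a glance where understanding is weakest; the second counts how many students fell below the threshold, which tells me whether I’m planning an intervention for a handful of individuals or re-teaching the whole class.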

I could find out loads more about the students in that class. I could ask SIMS for their attendance going back to Year 7 and beyond, their attainment and achievements and attitudes and intervention history and every letter sent home since they joined the school. Sometimes, where I have a concern, I will have a look and see if there is a similar pattern in other subjects, but if I’m honest this is unusual. The data I collect and record from my classroom helps me to teach better; I use it to inform the whole-school data collection which happens three times a year.

Working with data as a member of SLT

I try to keep my teacher-data in mind when I put my SLT head on. Teachers all around the school have their own systems for tracking ongoing progress on top of what I am asking them to input into SIMS, so it really needs to be worthwhile. Therefore each whole-school data collection we do has three purposes. Firstly, to act as a staging post to track progress and to report home to keep parents up-to-date. Secondly, to track academic attainment and progress through “working-at grades”. Thirdly, to track attitudes and behaviours for learning.

“Hard” data – academic attainment

Tessa Matthews wrote a depressing blog a couple of days ago called “What should we do about kids who aren’t making progress?”. Depressing, at least in part, because it was completely recognisable in its honesty and, in part, because I am the member of SLT who sends out the emails and convenes the meetings to ask middle leaders the exact question posed in her title. Not, it has to be said, every third Wednesday, but I do ask the question – and I don’t apologise for it. I should be asking that question. For all that National Curriculum Levels are discredited and flawed, if a child at my school arrives in Year 7 with a Level 5 and reaches Year 9 still on a Level 5, we haven’t done enough for that child. It is one of the prime responsibilities of my job to spot those children who are at risk of not making progress and to work with middle leaders and classroom teachers to find out what we can do differently, better, or additionally to ensure they do.

I try to take a “no excuses” approach with this process. By this I mean “she has a terrible home life – there’s no family support” might be an explanation as to why a Year 8 girl is not making expected progress, but it is not the end of the conversation. What can we, as a school, do to help her keep her studies on track as best she can? I echo John Tomsett’s sentiments in this regard:

I know that learning is not linear. I’m not worried if a child is a 5b in October and a 4a in January, but I will make sure that we keep an eye on that child’s progress.

“Soft” Data – attitudes, self-image and disposition

We probably do more work with the “soft” data than anything else. Three times a year all teachers report on students’ attitudes, grading them V, G, S or U for homework, classwork, effort and organisation. The judgements and the categories are all clearly defined (see our Monitoring Reports Guidelines) to try to provide consistency. Thanks to a system piloted by Ashley Loynton we track how students’ attitudes to learning are improving or declining between monitoring points and in relation to their peers, and pastoral leaders and tutors work with students individually or in small groups to address issues or trends. This has been a revelation, and the attitude analysis is always the first thing I look at from any reporting session.
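For a flavour of how that tracking can work in a spreadsheet – this is a sketch of the general approach rather than Ashley’s actual system, and the layout is hypothetical – suppose a student’s four current grades sit in B2:E2, the previous monitoring point’s average sits in G2, and attitudescores is a two-column named range mapping V, G, S and U to 4, 3, 2 and 1:

=AVERAGE(VLOOKUP(B2,attitudescores,2,0),VLOOKUP(C2,attitudescores,2,0),VLOOKUP(D2,attitudescores,2,0),VLOOKUP(E2,attitudescores,2,0))

=F2-G2

The first formula (sitting in F2) converts the letter grades into an average score; the second gives the change since the last monitoring point, so negative numbers flag students whose attitudes are declining – both in absolute terms and, once the column is ranked, relative to their peers.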

We also use PASS (Pupil Attitude to Self and School) surveys where students complete an online questionnaire to give feedback on their:

  • Feelings about school (“connectedness”): whether pupils feel they belong to or are alienated from their school community
  • Perceived learning capability: how successful a young person feels about their learning
  • Self regard: long-term learner self worth
  • Preparedness for learning: whether learners feel they have the ‘tools to do the learning job’
  • Attitudes to teachers: pupil perceptions of their relationships with school staff
  • General work ethic: aspiration & motivation to succeed in life
  • Confidence in learning: pupils’ perseverance in the face of challenge
  • Attitude to attendance: how much a student wants to be in school
  • Response to curriculum demands: motivation to undertake and complete curriculum based tasks

[Image: attitude data can help pick up “under the radar” issues]

Whilst certainly open to flaws on its own, this data is an excellent way to quantify teacher perceptions and identify students who may have barriers to learning which would otherwise remain under the radar.

External data sources – working the filter

There is a wealth of data available. Aside from the internal assessment data mentioned above, Raise Online, DfE performance tables, Fischer Family Trust, CATs and ALPS are all in regular use at my school. We also have student and parent survey data, and health data from SHEU, amongst many others. Any member of staff can access them – some are in the public domain and available online, and the others are on the staff drive of the school network. Many wouldn’t go looking for them, however, and my job as Deputy Head is to filter out the key messages and make sure that the relevant staff get those messages without wading through irrelevance and drowning in the tide. This is a major responsibility of my role – I have to know it all.

I use a combination of FFT, CATs and teacher assessments to set targets (we call them Challenge Grades – I describe the process below). I prepare a Raise Online summary for Heads of Subject and for SLT and Governors. We use ALPS in discussions with sixth form teaching teams. I make sure to congratulate and praise publicly and privately where we have success, but primarily I’m looking for the gaps and holes. I use the same test as I use in my classroom: what data is relevant? What will help improve teaching? What have we missed? What should we focus on next?

It is the job of any senior leader to make sure that the data works for the school. I can’t bear the tales I hear of children reciting their FFT D targets to inspectors – by the Fischer Family Trust’s own definition, the estimates are not intended to be used that way. This is a classic case of the data working the school, rather than the school working the data. True leadership means getting the filtering right and funnelling the right data to the right people so that, as a school, we can ask the right questions. Because – as Kev Bartle says – data only ever asks questions. I like looking for the narrative in the numbers which frames and poses those questions, but the answers are always in the classrooms and the relationships and the teaching and learning and care that make schools a success.

Targets

In my first year of deputy headship, I was given the brief of developing the school’s target setting processes. This was right up my street. I had come from a school which had been graded Outstanding in two successive Ofsted inspections. When I asked the Head and his Deputy what they put the success of the school down to, their answer was unequivocal and without hesitation: “aspirational targets”. I remember nodding in agreement.

The philosophy goes like this: set children a target that they can aspire to. Something so high as to be achievable only on their best day, with a following wind, with all the stars aligned, like Harry Potter with a draught of Felix Felicis inside him. Not an unattainable target, but one that they can reach with their fingertips if they stretch high and stand on tiptoes. Even if they don’t make that target, they will achieve so much more than they otherwise would. We call them Challenge Grades.

There are many arguments against this philosophy. Firstly, that many students will feel a sense of failure if they don’t make it. Secondly, that staff will be judged by the proportion of students who achieve the targets. Thirdly, that it is impossible to accurately gauge what children are capable of achieving.

My responses are fairly consistent.

Firstly, possibly true. But if we’ve got the targets right, they should feel that sense of failure, because they have failed to achieve their potential. And we should feel disappointed too.

Secondly, no. That’s not what they’re for. The targets are there to motivate students. It would be disingenuous of me to say that senior leaders don’t look at the proportion of students achieving them, but at our school they cannot be used as performance management targets and staff have ownership of them (as will become clear).

Thirdly, it’s not impossible, and here’s how we do it…

=VLOOKUP(
    AVERAGE(
        VLOOKUP('Base Data'!DG41,gradepoints,2),
        VLOOKUP('Base Data'!DI41,gradepoints,2),
        VLOOKUP('Base Data'!W41,gradepoints,2),
        VLOOKUP('Base Data'!CL41,abclevels,2)+14,
        VLOOKUP('Base Data'!EM41,abclevels,2)+12,
        VLOOKUP('Base Data'!FE41,abclevels,2)+10,
        VLOOKUP('Base Data'!FF41,abclevels,2)+10,
        VLOOKUP('Base Data'!FG41,abclevels,2)+10,
        VLOOKUP('Base Data'!FD41,rawlevels,2)+10
    )+3,
    pointsgrades,2)

I generate all the initial Challenge Grades using variants of this formula. It draws on all the available data:

  • KS2 levels: whilst their reliability is sometimes questionable and their variability a certainty, progress from KS2 to KS4 is a quantifiable judgement for schools and individual pupils in Raise Online and therefore an important element of the process.
  • FFT Predictors: we use FFT B predictions (the most likely grade outcome for students with that profile) and the Grade Above, and split the difference. Fischer helpfully code them orange and green for us! There is a case for using FFT D, I know, but it often seems pie-in-the-sky stuff and I have seen too many schools where pupils can parrot back their FFT D targets when asked but will never, ever achieve them.
  • Teacher Assessment: this is a cornerstone of the formula. I take every teacher assessment I have on record, adding challenge to it at variable rates dependent on the distance from the target destination. By taking the average of every teacher assessment recorded I hope to smooth out variations in progress.

I look everything up in my Levels and Grades Conversion Table linked to QCA points to find equivalences between national curriculum levels and GCSE grades. (Incidentally, it took me ages to source the data for this, which is madness since all secondary schools are judged on the progress from one to the other.) I add challenge to the relevant source data and average the lot. And then I send them to staff.
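For anyone building something similar, the named ranges behind the formula look roughly like this – the point values are the standard QCA equivalences as I understand them, and the sub-level spacing is a common convention rather than an official one, so check them against your own conversion table:

  • gradepoints: GCSE grade to points – A* = 58, A = 52, B = 46, C = 40, D = 34, E = 28, F = 22, G = 16, U = 0
  • abclevels and rawlevels: national curriculum level to points – a whole level is worth 6 points (points = 6 × level + 3), with sub-levels two points apart, so 5c = 31, 5b = 33, 5a = 35
  • pointsgrades: the reverse of gradepoints, sorted ascending by points, so that the outer VLOOKUP’s approximate match rounds an averaged points score down to the nearest whole grade

Read this way, the +10, +12 and +14 in the formula are the challenge being added to each teacher assessment at those variable rates, and the final +3 adds half a grade of challenge (a whole grade being six points) to the averaged total.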

Here’s where the important bit happens. Staff who have taught those students all year have just short of a month to review the formula-generated Challenge Grades and tell me if I’ve got them wrong. Are they too high? If this is the case – if they are simply unachievable – they will be demotivating. Are they too low? What would you suggest? Any revision suggested is accepted – no questions asked. That teacher has taught that child all year, assessed their work, watched their progress. Who am I to question their judgement when all I have is a spreadsheet?

Finally, the Challenge Grades are issued to students. They get a Key Stage 3 challenge level at the end of Year 7, a Key Stage 4 challenge grade at the start of Year 10 and a sixth form challenge grade at the start of Year 12 (we use ALPS here). At every monitoring point, teachers give a current working level and a red, amber or green traffic-light judgement for “on target”. Green indicates all is well, and they are on track to achieve their challenge. Amber indicates some concerns. Red is a warning of serious underachievement.
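The traffic-light judgement belongs to the teacher, not to a formula, but if you wanted to pre-populate a starting point for it, something along these lines would do – with the current working grade in D2, the Challenge Grade in E2, and the thresholds (within a grade of challenge is green, within two is amber) entirely illustrative:

=IF(VLOOKUP(D2,gradepoints,2,0)>=VLOOKUP(E2,gradepoints,2,0)-6,"Green",IF(VLOOKUP(D2,gradepoints,2,0)>=VLOOKUP(E2,gradepoints,2,0)-12,"Amber","Red"))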

If we get them right, no child should ever exceed their challenge grade. If they do, the target was too low. We accept that some children will fall short. There may be health problems, emotional crises, behavioural problems, all manner of circumstances which prevent children from achieving their potential. But there will be a story there, and we must ask ourselves “what could we have done differently to prevent that underachievement?” And then we must act on the answers.