DATA is NOT a dirty word: A proposal of 4 principles for sourcing and applying data to classroom instruction

Teachers often treat ‘data’ as a dirty word. There is a perception that data is something forced upon us, or used by leaders and policy makers to judge us. ‘Computer says’ target grades, ‘three levels of progress’, performance management ratings and other initiatives over the years have done little to dispel this perception.

I think it’s time for classroom teachers to reclaim data for themselves. Admittedly, this mindset could meet resistance: ‘taking ownership of data’ has often been a buzz phrase used by leadership and policy makers to distance themselves from the consequences of their own decisions, leaving teachers to ‘own’ the effects of those decisions as they manifest in student outcomes, in the name of apparent ‘accountability’.

However, the decade-long drive towards evidence-based classroom practice, alongside further research into institutional change specific to education, exemplified by the EEF’s guidance on implementation (https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/implementation/EEF_Implementation_Guidance_Report_2019.pdf?v=1707018810), has led to a broader appreciation of the importance of clear and accurate data in decision-making processes within the sector. Some barriers remain: understanding what ‘data’ is, or can be made, available for the classroom; providing teachers with the systems and skillset to consolidate and analyse data; and developing the skills across the staff body to interpret data appropriately and make accurate inferences, in order to adapt implementation and monitor the accuracy of interventions.

A quote from Gert Biesta (2009) resonates strongly here: “The danger here is that we end up valuing what is measured, rather than that we engage in measurement of what we value.” Teachers need to be equipped to identify the aspects of educational feedback and dissonance that they value, and which provide value to students, and be given the systems, tools, time and skills to measure and interpret these appropriately to influence the classroom.

What is data? (In an educational context)

In short, data within education is any systematically collected information, quantitative or qualitative, that can represent any aspect of life in a school. In their 2017 paper, Schildkamp et al. identify three uses for data within the schooling system, which I have extended with further examples below:

Accountability: This data is used to hold either an individual (faculty member, student, parent) or an institution accountable to stakeholders. It includes the likes of attendance data, exam results, Progress 8 measures, Ofsted grading, school reports, off-rolling and exclusion data, retention data in KS5, teacher performance management etc.

School Development: This is data used to drive school-wide improvement. It can again be based on attendance/punctuality data, exam results and Progress 8 measures, but can also include the likes of teaching observations, inclusion zone/behavioural unit referrals, centralised detentions and other behavioural statistics, employer/parent/student surveys etc.

Instruction: This is the data that departments and classroom teachers hold that can be used to influence curriculum decisions and classroom practice. Many of the sources are the same as above, but this use also draws on data relating to additional support needs, formative assessment, homework, (personal) observation of classroom protocols etc.

The first two are often what give rise to the feeling that data isn’t ‘for’ teachers, but instead consists of metrics about, or for the judgement of, teachers and institutions. The latter, however, is where developing confidence can be a vital next step in building the quality of our nation’s teachers’ pedagogical approach. Data use can form a key component of curriculum intent and sequencing, assessment design and reflection, and the adaptive teaching strategies expected of the modern classroom practitioner.

In the aforementioned survey of classroom teachers in the Netherlands (Schildkamp et al., 2017), responses indicated that the average teacher engaged with many key facets of applying data to instruction, such as ‘investigating why students make particular mistakes’, ‘identifying needs of and planning and adjusting instruction for gifted students’ and ‘formulating learning goals for individual students’, only once or a couple of times a year. Possibly more concerning was a pocket of responses indicating that teachers ‘did not know’ how they use data to influence classroom instruction.

Sources of Data

National Data and Benchmarks
Generally what it says on the tin: Ofsted reports, exam results and grades, question-level performance from exam boards, value added, Attainment/Progress 8 etc. Benchmarks, analysis and target setting come from the likes of ASP/ALIS/ALPS/the DfE Ready Reckoner. I’d also recommend knowing the methodology behind the system your institution uses to monitor value added.
Charity, Wider Governmental and Research Data
This can provide teachers and institutions with data that isn’t necessarily part of general performance measures or benchmarks, but can provide vital context to inform curriculum intent and to benchmark school performance. This includes looking across a Multi-Academy Trust as an entity, at the likes of persistent absence data, social deprivation, labour market data, specific progression data etc.
Mock Summative Assessment
Mock exams and other high-stakes, high-control internal assessments.
Internal Systems
Attendance, punctuality, behavioural notes, centralised detentions, student timetable/options/subject combinations. Do the systems always work for what staff may need? For example, in FE there may be a large difference between a student’s attendance percentage and their active engagement in the study programme. Attendance is recorded by session, so it matters whether a workshop is recorded as one 4-hour session or as three separate sessions: a student who leaves during a break at hour one has the same actual engagement either way, but is recorded at either 100% or 33% attendance for the 4 hours of study. One generates a better attendance figure for accountability data, but being unable to distinguish deeper can have a detrimental effect on what teachers can do at an instructional level.
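To make the arithmetic concrete, here is a minimal sketch (hypothetical data, not any real MIS schema) of how register granularity alone changes the recorded attendance percentage:

```python
# Hypothetical example: same 4-hour workshop, same student behaviour,
# different register granularity.

def attendance_pct(sessions_present: list[bool]) -> float:
    """Percentage of registered sessions marked present."""
    return 100 * sum(sessions_present) / len(sessions_present)

# The student leaves during the break at hour one.
as_one_session = [True]                   # one register, taken at the start
as_three_sessions = [True, False, False]  # three registers across the workshop

print(attendance_pct(as_one_session))     # 100.0
print(attendance_pct(as_three_sessions))  # 33.3...
```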
VLE and external platform analysis
What is accessed and when? What is the level of engagement? There are a number of systems such as Diagnostic Questions/DrFrostMaths/Carousel Learning that provide analysis of student homework and formative tasks. Microsoft Forms and similar can also provide analysis for internally curated activities.
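As a quick illustration of the ‘when’ question, here is a minimal sketch assuming a hypothetical export of VLE access logs as (student, resource, timestamp) rows; none of the platforms above necessarily expose data in this exact shape:

```python
from datetime import datetime

# Invented access log and deadline for illustration.
deadline = datetime(2024, 3, 15, 9, 0)
access_log = [
    ("Student A", "revision-pack", datetime(2024, 3, 1, 16, 20)),
    ("Student B", "revision-pack", datetime(2024, 3, 14, 23, 45)),
]

# Access the night before a deadline is a different signal to spaced access,
# even though both count as 'engagement' in a simple hit count.
for student, resource, ts in access_log:
    hours_before = (deadline - ts).total_seconds() / 3600
    label = "last-minute" if hours_before < 24 else "spaced"
    print(f"{student} accessed {resource} {hours_before:.0f}h before the deadline ({label})")
```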
Formative Assessment
Low-stakes assessment in or out of class, preferably targeted and specific towards the needs of the teacher and student(s): a measure of performance and progress, a diagnostic for prior knowledge, a means of identifying errors and misconceptions, and a tracker of engagement.
Observation
Formal teacher observations, learning walks, peer observation and coaching, ‘mock-sted’-style or other consultancies.
Surveys
Student and parental surveys – broad focus or teaching and learning focus, such as the survey provided by Evidence Based Education. Staff surveys at departmental and institutional level.
Classroom Protocols
An area so often overlooked, as much of this is not registered as ‘data’. This is the data you generate in class: exit tickets, discussions, the observations you make. What leads to seating plan modifications? What does the ‘mini-whiteboard activity’ tell you?

A great example of this can be seen in a blog post on Pedagogy Magpie about the ‘reset process’ within the classroom and timings – https://pedagogymagpie.wordpress.com/2023/10/18/looking-harder/.
This is also one of the most versatile areas of data gathering for a specific need – for example, I am currently monitoring the times I ‘challenge’ a student’s verbal response involving it/they/that etc., to match academic/elaborated code, in line with my previous post – https://themarkscheme.co.uk/the-end-of-it/.

Developing the procedures to codify and record these observations can often be a barrier.
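One low-tech way over that barrier is to decide on a handful of observation codes and tally against them. A minimal sketch, with entirely hypothetical codes:

```python
from collections import Counter
from datetime import date

observation_log: list[dict] = []

def record(student: str, code: str, note: str = "") -> None:
    """Append one codified classroom observation."""
    observation_log.append({
        "date": date.today().isoformat(),
        "student": student,
        "code": code,
        "note": note,
    })

# Hypothetical codes: 'vague-pronoun-challenge', 'mwb-misconception' etc.
record("Student A", "vague-pronoun-challenge", "used 'it' for 'the gradient'")
record("Student B", "mwb-misconception", "sign error on mini-whiteboard")

# Quick summary: how often does each code appear?
print(Counter(entry["code"] for entry in observation_log))
```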
[Table of potential sources of data, including more specific examples where appropriate. Icons from freepik.com.]

Proposing 4 Principles for sourcing and applying data to instruction (VAPE)

I’m in the middle of some analysis within my sixth form, looking at assessment data and hoping to broaden the scope to look at how assessment data can be used to influence instruction within the curriculum area.

Looking at my own work and through discussion with peers, I’ve provisionally identified four principles to guide teachers’ sourcing and use of data to improve instruction, which currently drive my own practice. It’s 2024, so after some tinkering, the VAPE acronym fittingly appeared. I am not claiming anything definitive here; this is merely my work in progress.

Velocity: Data sourced by the teacher should drive the quality of instruction and the student experience forward. Take, for example, a teacher using mini-whiteboards because they know they are ‘good to use’. If the data collected in the outcome of the activity does not influence anything in the instruction that follows, the whiteboards fundamentally provided nothing beyond what having students complete the questions in an exercise book with no oversight would have done. It is a wasted data point.

Accuracy: Is the data used and collected accurate? How confident are you in that? Do mock summative assessments adequately predict actual performance? Are you able to reuse assessment materials to build a stronger inference base? Is your measure actually a conduit for the inference you are trying to make? See, for example, the discussion of attendance figures and session engagement in the data sources section above.
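As one small way of probing that ‘do mocks predict performance’ question, here is a minimal sketch assuming paired mock and final scores for a class (the numbers are invented):

```python
from statistics import correlation  # Python 3.10+

# Invented paired scores for one class.
mock_scores = [42, 55, 61, 48, 70, 66, 53]
final_scores = [45, 58, 59, 52, 74, 63, 57]

# Pearson's r: do students rank similarly on mock and final?
r = correlation(mock_scores, final_scores)
print(f"Pearson r between mock and final: {r:.2f}")

# A high r alone isn't sufficient: also check whether mocks run
# systematically harsher or more lenient than the final.
mean_gap = sum(f - m for m, f in zip(mock_scores, final_scores)) / len(mock_scores)
print(f"Mean (final - mock) gap: {mean_gap:+.1f} marks")
```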

Purpose: When data is a driving force behind decision making, curriculum design, intervention and assessment, are you able to articulate that purpose to students, parents, the department and even to yourself? A grade or percentage in a spreadsheet does very little to support future instruction. Do you want to track content performance over time, by AO or by question style? Why? What are you hoping to get from this? What do you (and the students) get from this? Are you merely valuing what you already measure, or actually measuring the things you value, which will ultimately add value?
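One way to move beyond ‘a percentage in a spreadsheet’ is to tag assessment items with the dimensions you care about and aggregate along them. A minimal sketch, with invented tags and marks:

```python
from collections import defaultdict

# (item, tags, marks awarded, marks available) - invented example data,
# with hypothetical topic and Assessment Objective (AO) labels.
items = [
    ("Q1", {"topic": "algebra", "ao": "AO1"}, 3, 4),
    ("Q2", {"topic": "algebra", "ao": "AO2"}, 1, 5),
    ("Q3", {"topic": "geometry", "ao": "AO1"}, 4, 4),
]

# Accumulate awarded/available marks per tag.
by_tag = defaultdict(lambda: [0, 0])
for _, tags, awarded, available in items:
    for tag in tags.values():
        by_tag[tag][0] += awarded
        by_tag[tag][1] += available

for tag, (awarded, available) in by_tag.items():
    print(f"{tag}: {100 * awarded / available:.0f}%")
# e.g. algebra: 44%, AO2: 20% - a far richer prompt for instruction
# than the overall 62%.
```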

Elaboration: The data combines to tell a bigger picture, elaborating beyond the figure or statement with an attempt at explanation or inference. For example, an absence may be associated with poor assessment performance in a particular content area, which can be corroborated from formative tasks and by analysing a lack of engagement with the VLE materials provided to ‘catch up’. This is the area where mathematical analysis will appear with regard to data streams for instructional intent and implementation.
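To make that corroboration concrete, here is a minimal sketch assuming three hypothetical per-student data streams; the point is to act on agreement across streams rather than on any single figure:

```python
# Invented per-student streams: absence during a unit, formative score on
# that unit, and whether the 'catch-up' VLE materials were accessed.
absent_during_unit = {"Student A": True, "Student B": False}
formative_pct = {"Student A": 35, "Student B": 80}
accessed_catch_up = {"Student A": False, "Student B": True}

for student in absent_during_unit:
    corroborated = (
        absent_during_unit[student]
        and formative_pct[student] < 50
        and not accessed_catch_up[student]
    )
    if corroborated:
        print(f"{student}: absence + weak formative + no catch-up engagement "
              f"- corroborated gap, worth a targeted intervention")
```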

Whilst I’ve purposely stopped myself from delving deep into the research for this post (after the previous mammoth blog), I am planning more specific posts to build on particular ideas of implementation, built around formalising the VAPE principles outlined above.

Some of this will involve the mathematical side of data analysis, and the skills staff may need to get the most out of the data at their disposal.

Others will focus on a more holistic approach to implementation, particularly around assessment – understanding what data you want, how you craft an assessment to obtain it, and then how you use this for adaptive teaching purposes moving forward – highly influenced by EBE’s Four Pillars of Assessment – https://evidencebased.education/wp-content/uploads/sites/3/4-pillars-of-great-assessment.pdf.

Any recommendations for wider reading in this area would be greatly appreciated. Much of what I can find centres on work involving Dr Kim Schildkamp in the Netherlands and Dr Chris Brown in the UK, so suggestions beyond these, or examples of practical practitioner implementation, would be great.

Links and Reading

Biesta, G. (2009). Good Education in an Age of Measurement: On the Need to Reconnect with the Question of Purpose in Education. Educational Assessment, Evaluation and Accountability, 21(1), 33–46. DOI: 10.1007/s11092-008-9064-9

Education Endowment Foundation (2021). Putting Evidence to Work: A School’s Guide to Implementation – https://d2tic4wvo1iusb.cloudfront.net/production/eef-guidance-reports/implementation/EEF_Implementation_Guidance_Report_2019.pdf?v=1707018810

Evidence Based Education (2018). The Four Pillars of Assessment – https://evidencebased.education/wp-content/uploads/sites/3/4-pillars-of-great-assessment.pdf

Pedagogy Magpie (2023). Looking Harder – https://pedagogymagpie.wordpress.com/2023/10/18/looking-harder/

Schildkamp, K., Poortman, C., Luyten, H. & Ebbeler, J. (2017). Factors promoting and hindering data-based decision making in schools. School Effectiveness and School Improvement, 28(2), 242–258. DOI: 10.1080/09243453.2016.1256901