Ofsted: Ideological or Practical? – Maths Subject Report (June 2023)


On Thursday Ofsted released the next two inspection-based subject reports on the state of the Mathematics and History curricula in England – following the release of the Science report earlier in the year. As a Maths specialist, my focus is going to be on the Maths report.

BACKGROUND:

These subject reviews are very much a follow-up to thematic subject reports such as “Made to Measure”, which was a significant influence on institutions in the early years of my teaching. For the mathematics subject review, the data is based on 50 schools, with age ranges from Reception up to Sixth Form, drawn from the ‘deep dive’ elements of inspections between September 2021 and November 2022. It should be noted that this sample is dramatically smaller than that of the ‘Made to Measure’ report, which was based on 320 institutional visits and approximately 1,700 individual lesson observations.

The reports are the second part of the research reviews published in May 2021, which were undertaken for all subjects within the national curriculum – the maths review can be found here – https://www.gov.uk/government/publications/research-review-series-mathematics/research-review-series-mathematics

These reviews, along with modifications to the EIF, have seen the emphasis put on evidence- and research-based classroom learning principles as the expectation within the classroom. The maths review is essentially a ‘greatest hits’ of concepts such as spaced learning, retrieval practice, true assessment for learning etc. However, it often delves into contradictory statements – for example those around student progress and moving through the curriculum – where the review suggests practitioners are expected to move groups through together, but also to hold back individual students when they are not confident with the material.

There was a lot of criticism at the time of release, including this from the Association of Teachers of Mathematics – https://www.atm.org.uk/write/MediaUploads/Journals/MT278/12.pdf

Interestingly, on the surface there doesn’t seem to be much that is hugely controversial here. Quotes such as “Teachers remember that it is not possible for pupils to develop proficiency by emulating expertise, but by emulating the journey to expertise” make a lot of sense, and capture a key essence of my earlier post “MODELLING: Who is the star of your examples?”. However, as pointed out in the ATM rebuttal, the review infers causation where it is inappropriate, it oversimplifies and overgeneralises the results of some of the studies referenced, it makes direct reference to poor-quality studies, and it neglects significant and recent large-scale studies and research. A lot of red flags, once you move beyond ‘these statements in isolation seem logical’. Small niche studies are the type of thing that are fine for someone like me to draw attention to, but when you are trying to craft national strategy and set the expectation of what you want to see, the evidence base needs to be far stronger than whatever studies you happen to be coalescing around.

On a personal level, one of my biggest gripes is the separation of type 1/type 2 ‘practice’. I can somewhat appreciate why they have done this, particularly as anecdotally you do often hear how perceived ‘weaker’ students don’t encounter the problems that address the type 2 methods, either by ‘design’ or because those problems are thrown at the end of a worksheet which slower students often don’t reach. However, in my opinion these are complementary in the development of mathematical knowledge and shouldn’t be decoupled. I understand the explicit referencing of type 2, as they want to emphasise the need for it with practitioners, but by separating them you’re still missing the core facet: the overlapping nature of the ability to ‘perform’ and ‘solve’ with the ability to ‘explain’ and ‘justify’. They shouldn’t be separate concepts in the learning experience, to be practised separately.

REFLECTION ON THE REPORT:

You can read the report here – https://www.gov.uk/government/publications/subject-report-series-maths/coordinating-mathematical-success-the-mathematics-subject-report

First of all, here are some of the headlines:

  • A resounding positive shift in primary maths and generally notable improvements in secondary
  • This is despite well-documented recruitment and retention issues regarding maths specialists
  • More can be done to up-skill non-specialists, including learning support assistants in-class
  • There are positive comments on Maths Hubs, particularly within the primary sector
  • There was critique of foundation/higher tier allocation, with suggestions that schools put this in place too early and stunt the mathematical development of learners
  • As with the evidence review, there were references to students moving on to other topics before they are ready, but also statements that could imply the opposite, such as the prior comment about foundation/higher tier allocation
  • Departmental resources are often applied to address major accountability measures – for example in Year 6 and Year 11 – with other year groups suffering as a result

When contrasted with ‘Made to Measure’, elements of the report are rather vague. Whilst there are a few specific case studies and comments, they are quite narrow, and a lot of the more general comments lack extension and recommendations for how they could be practically addressed. I was going to do a large-scale critique of certain points – things I agree with, things I don’t – but I’ve decided to instead focus on one issue: ideological vs practical, as my major issue with the report relates to that.

This is an issue with Ofsted in general. Is Ofsted to make judgements based on ideologies or on practicalities? In inspection reports this seems to change almost at the whim of the inspectors or the specific area they are looking at.

I’m going to use an unconnected example to illustrate what I mean in the ideological/practical battle. When it comes to the welfare state, you will often see or hear the comment “People should feed their own kids”. This is generally met with “Some people can’t”. The issue is, these are both ideological arguments – you can actually hold both positions ideologically – and neither addresses the practical problem at hand. Ultimately, the practical justification is “Some people DON’T feed their own kids”. Whether that is can’t or won’t is irrelevant to the issue – people DON’T, so in the practical world it has to be addressed.

So how does this relate to the report? Well, there are some very well raised points here, but they often fall into the ideological and not the practical camp – like the evidence review expecting students to move through at the same pace whilst simultaneously recommending holding individual students back.

One aspect brought up was that the vast majority of teachers move on before all students are ready, don’t use appropriate hinge-point questions to assess knowledge, and instead infer understanding because the topic has been covered previously rather than because of demonstrated ability – often in order to fit the specification content in. I’ve been guilty of this; I think everybody is guilty of this – “oh, they’ve covered that, it will lead to this” – or inferring group understanding from a few individual responses.

The above alone makes complete sense, and is an issue to be addressed. However, it then contrasts with other comments within the report, for example that students aren’t being taught enough due to early streaming into foundation and higher tier groups. These are both logical statements. In fact, working within KS5, I have witnessed streaming have a detrimental effect on students’ potential progression, both in their knowledge of mathematics and in the grade they can achieve. But the latter is clearly a strategy intended to address the former. I appreciate part of the argument is that streaming in that way, that early, holds back some individuals in a cohort – but these are ideological and not practical arguments. The report treats everything as its own individual idea, not looking at the holistic nature of departments and the resources available. Criticise streaming as a strategy, fair enough, but treating it as its own unconnected issue doesn’t help anyone.

The same goes for the last bullet point in my list. Departmental resources are generally applied to SATs/GCSE years. This is critiqued on the grounds that it leaves the embedding of knowledge in the lower years missing, and that those years might be better served otherwise. The problem is, institutions only have finite resources. Let’s say a secondary school puts its focus on Year 7 instead of the exam years, embedding the basics in line with what was suggested in the report. What happens in the four academic years before that cohort reaches GCSE maturity? Yes, the school has implemented a system that could be better in the long term, but the students currently sitting in Years 8–11 have not been party to it, and they’re at a significant disadvantage – especially if this is a decision by a single institution as opposed to a national diktat. Again this returns to the ideological argument that learning and achievement in exams are different things. This is true. But when individuals and schools are held to the achievement in those exams, that is the practical reality they have to address. I also think a bigger distinction needs to be made between “teaching for the test” and “training to present a response in line with the demands of the assessed medium”. In a career, if you have to give a presentation, you prepare a presentation – you match the medium.

Finally, there has to be an appreciation that teaching and learning is not an exact science: something that works in one school may not work in another, in one class but not another, or for one student but not another. But in this report there are elements which just come down to “Well, did it work? Well, that’s good then!”. So again you get contradictory ideas, with some centres criticised for teachers being left to their own devices, and others criticised for centralised resources and a lack of autonomy. It seems like no real practical, logical strategy is being recommended here – just “this worked here”.

There are some interesting comments to take away – and it’s not all frustrating to me. Notes on ineffective external tutoring and the improvements seen when it is aligned to the curriculum, on the implementation of assessment, on addressing misconceptions and gaps in knowledge, and on the need for deliberate practice to be explicitly planned and not just given “whatever time is left”, etc.

I will sign off with a nice appreciation of the Bayesian Easter egg in the report. Given that a student receives high quality education, they are likely to achieve a strong exam result. However, given that a student has a strong exam result, this is not an indicator that they are likely to have received a high quality education.
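To spell that out in notation – a sketch of my own with invented numbers purely for illustration, none of which come from the report – by Bayes’ theorem, a high P(strong result | high-quality education) tells you nothing on its own about P(high-quality education | strong result), because the reverse direction also depends on the base rates:

```latex
% Bayes' theorem: the two conditional probabilities are linked
% only through the base rates P(Q) and P(R).
%
% Hypothetical illustration (all numbers invented for this post):
% Let Q = "received high-quality education", R = "strong exam result".
% Suppose P(R | Q) = 0.9 and P(Q) = 0.2, and that strong results can
% also arise by other routes, with P(R | not Q) = 0.5.
%
% Total probability: P(R) = 0.9 * 0.2 + 0.5 * 0.8 = 0.58
% Reverse direction: P(Q | R) = 0.18 / 0.58, roughly 0.31
\[
  P(Q \mid R)
  = \frac{P(R \mid Q)\,P(Q)}{P(R)}
  = \frac{0.9 \times 0.2}{0.9 \times 0.2 + 0.5 \times 0.8}
  \approx 0.31
\]
```

So even if 90% of well-taught students get strong results, in this hypothetical only around 31% of strong results trace back to high-quality education – strong results can be produced by other routes (intensive exam training being the obvious one), so outcomes alone cannot certify the quality of the education behind them. Which is rather the report’s point too.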