How ResearchEd gets it wrong

It is clear that, in spite of many professional researchers and teachers being blocked on X by Tom Bennett’s ResearchEd (I am one!), many teachers who attended its recent conference in Milton Keynes appeared to value the event highly.

There is much written about the importance of being “research-led” or “research-informed”, although Bennett (who owns ResearchEd) has shown a total lack of interest in, or respect for, educational research itself. The roots and ownership of ResearchEd have been analysed by the educational journalist Warwick Mansell on his campaigning website.

So just how does ResearchEd treat educational research? I will admit I have not attended ResearchEd meetings – indeed it is questionable whether Bennett would allow me to attend. So here I will look at just one example that was posted on social media.

What Counts as Research?

Educational research is, like teaching, a profession – one in which I spent 35 years. It requires many skills and techniques that are not part of teaching. One such skill – which takes time to develop – is interpreting and summarising published research. In fact, the very first module of my OU MA (Education), which I took as a teacher 37 years ago, was about critiquing published research about the classroom. This requires more than merely selecting one article and reading the abstract. It is not unusual for the first year of a three-year PhD to be taken up with the literature review.

Many times over the past 35 years I have read posts on social media which claim to present the findings of a single study, as if they were significant enough in themselves to justify a change in practice. This is quite lazy – but it seems gradually to be becoming the accepted approach to being “research-informed”. Some time ago I posted a short article (https://teaching-maths.com/articles/1920-2/) showing where this had happened over teachers’ use of classroom display: a school research lead had incorrectly claimed that research had shown classroom display to be a distraction. Immediately after I published the article, I was blocked on Twitter by the research lead.

Research is cumulative. Rarely is one single piece of research groundbreaking. We evaluate research by looking at the direction of travel, by looking at outliers, by looking at the sampling strategy, and by considering the reliability and the validity of its findings, particularly the external validity (i.e. where else its findings can be applied).

ResearchEd’s Misrepresentation

So it was with interest that I read this from Peps Mccrea, who posts regularly on X looking at research articles. I have communicated with Peps a number of times on this in the past. The text below was from a slide presented at ResearchEd. I will admit, I did not attend Mccrea’s session.

Mccrea is referring to an article published in the journal AERA Open, titled “The Big Problem With Little Interruptions to Classroom Learning” by Matthew Kraft and Manuel Monti-Nussbaum. You can access the article here.

“AERA Open is a peer-reviewed, open access journal published by the American Educational Research Association (AERA). AERA Open aims to advance knowledge related to education and learning through rigorous empirical and theoretical study, conducted in a wide range of academic disciplines”

The claim by Kraft and Monti-Nussbaum seemed, on the surface, somewhat questionable to me, because losing the odd 2-3 minutes here and there does not equate to losing 10-20 days – which itself seems quite a large range. It did not make sense to me simply to accumulate these odd minutes up to 10-20 days. I will say more about this later.

So just what were the authors claiming? What I discovered on reading the article was really interesting. What matters for me, however, is how claims made in ResearchEd sessions are taken up and run with. One such example came from Adam Robbins, who runs a blog in which he reflects on education, and where he discusses Mccrea’s session.

I will admit here that I am not a big consumer of blogs and I have not read Adam’s, but here is what he writes of Mccrea’s post:

There he talks of “disruption, through poor behaviour, environmental design and poor instruction could be robbing our students of significant learning”. I don’t want to disagree with that; indeed, I do not have data that would allow me to confirm or reject the claim – I suspect Adam doesn’t either. Some 35 years ago the Elton Report on “Discipline in Schools”, published in 1989, claimed:

“Few teachers in our survey reported physical aggression towards themselves. Most of these did not rate it as the most difficult behaviour with which they had to deal. Teachers in our survey were most concerned about the cumulative effects of disruption to their lessons caused by relatively trivial but persistent misbehaviour.”

(https://www.education-uk.org/documents/elton/elton1989.html)

So, this persistent low-level misbehaviour is nothing new. In which case, why did Matthew Kraft and Manuel Monti-Nussbaum, academics from Brown University in the USA, think it was necessary to re-examine pupil behaviour? The short answer is – they didn’t. Here is the abstract from the article, from which Mccrea appears to have taken the text for his slide.

The Original Study

Now Matthew is a very experienced academic, an ex-teacher whose research focusses largely on the economics of education, and he has a strong professional reputation. (https://vivo.brown.edu/display/mkraft)

Manuel is a behavioural scientist who works closely with Matthew. He works in the UK for an organisation called The Behaviouralist (which I have to admit I had never heard of). They describe themselves as a group who:

“help solve real world problems through its unique blend of behavioural science expertise, academic insight and applied experience.” (thebehaviouralist.com/about-us/)

Hence, they are both experienced and knowledgeable academic researchers. (Though there are some on social media who would claim that, as they aren’t actually teachers, they should not be commenting on the classroom.)

The background to Matthew and Manuel’s study is “external interruptions” because:

“Narrative accounts of classroom instruction suggest that external interruptions, such as intercom announcements and visits from staff, are a regular occurrence in U.S. public schools.”

What are these external interruptions?

“We define external interruptions as intrusions from outside the classroom that are not under the direct control of classroom teachers. This definition distinguishes our focus from the large body of literature on internal interruptions caused by off-task student behavior.”

So, they specifically ignore behavioural issues. In fact, one of their suggestions is:

Administrators could start by cutting the cord of the school intercom system or prohibiting unscheduled intercom announcements. Teachers reported that the majority (52%) of intercom announcements were unscheduled. Schools could also substantially circumscribe the type of announcements that are allowed over the intercom system. Distracting hundreds of students to call one student to the front office is educational malpractice. (p. 15)

Now intercoms and public announcements in classrooms are, I believe, unknown in UK schools. Yet the figures from the article include the time taken for such announcements – many of the 2,000 interruptions! This makes the extrapolation of their findings to the UK schools context quite unreliable. Failing to understand the limits of extrapolation is another crucial factor in misinterpreting research findings. Put simply, the context of Matthew and Manuel’s study cannot be applied to the UK. Mccrea’s slide fails to make it clear that the figures he quotes apply to US public schools and cannot be generalised to contexts where key features are not in evidence.

The Data

The data collection took place in 2017 – pre-pandemic. The mixed-methods data that Matthew and Manuel collected consisted of two main strands. The first was an observational study of a number of classrooms, in which they coded and documented interruptions. The second strand of their data was survey data from an adapted version of the local 2017 annual school survey.

It is well known to experienced researchers that self-report survey data is notoriously unreliable – and it is used far too often by inexperienced researchers. However, Matthew and Manuel describe in some detail the strategies they used to cross-validate their data.

They describe their data as follows:

We examine interruptions in the Providence Public School District (PPSD)*, working in collaboration with the district to collect original data from school climate surveys and classroom observations. More than 13,800 students, 1,500 teachers, and 70 administrators responded to a range of survey items asking about the frequency of external interruptions and the degree to which they disrupt learning. We complement these survey data with observational data and field notes collected during 63 classroom observations in five PPSD high schools. Using an original observation instrument, our research team timed and cataloged external interruptions across 10 teachers’ classrooms, while also capturing the observable consequences of these interruptions for instruction and the learning environment. (p. 2)

*Providence is the capital city of Rhode Island; the PPSD is its public school district.

When presenting the survey, they gave teachers a “brief statement describing the focus on interruptions from outside the classroom”.

The statement also clarified that the definition did not include disruptions that originated from inside the classroom due to general student misbehavior such as the use of personal cell phones. (p. 6)

Hence any attempt to use this study to make comments on pupil misbehaviour is completely unjustified.

The Analysis

One point on which I do take issue with Matthew and Manuel is their accumulation of the lost instruction time. First, from the observational data:

This involves scaling the total average time lost per 60 minutes of class across a full school day (5.5 hours of actual class time) and academic year (180 days). As reported in Table 4, we project that across an academic year students lose 54.5 hours of instructional time, or nearly 10 days. (p. 12)

Second, from the survey data:

We asked teachers to estimate how many minutes in a typical 60-minute class are lost because of outside interruptions. Teachers’ responses suggest that, in the typical PPSD school, an average of almost 7 minutes are lost to external interruptions in each class. Again, we find substantial heterogeneity across schools with a school level standard deviation of almost 2 minutes. These findings translate to an average of 113.9 total hours, or a staggering 20.7 days, of lost instructional time across the school year. (p. 12)

So not only is the self-report data at odds with their observational data, but there is also a wide difference between schools, even within this one Rhode Island district.

Their calculated average length of each interruption is around 1 minute within each hour-long lesson (see their Table 4 on page 12). By scaling this up to each hour, then each day, then each year, they get 10 or 20 days. We see here where the reported “between 10 to 20 days” came from – but presenting it as a single range is quite wrong. One figure (10 days) came from the researchers’ observation data; the other (20 days) came from the teachers’ self-report survey.
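
To see where the two headline figures come from, here is a minimal back-of-the-envelope sketch in Python. It uses only the numbers quoted above and the authors’ own assumptions of a 5.5-hour instructional day and a 180-day school year; the 6.9-minute value is simply my reading of the teachers’ “almost 7 minutes” per class.

# Back-of-the-envelope check on the 10-day and 20-day figures,
# using only the values quoted in the article itself.
HOURS_PER_DAY = 5.5                                   # authors' assumed class time per school day
DAYS_PER_YEAR = 180                                   # authors' assumed school days per year
CLASS_HOURS_PER_YEAR = HOURS_PER_DAY * DAYS_PER_YEAR  # 990 hours of class time per year

# Observational strand: the paper reports 54.5 hours of instruction lost per year.
observed_hours_lost = 54.5
print(f"Observation: {observed_hours_lost / HOURS_PER_DAY:.1f} days per year")
# -> about 9.9 days, the "nearly 10 days" figure

# Survey strand: teachers reported almost 7 minutes lost per 60-minute class.
survey_minutes_per_class = 6.9
survey_hours_lost = survey_minutes_per_class / 60 * CLASS_HOURS_PER_YEAR
print(f"Survey: {survey_hours_lost:.0f} hours, or "
      f"{survey_hours_lost / HOURS_PER_DAY:.1f} days per year")
# -> roughly 114 hours, or about 20.7 days, matching the paper's 113.9 hours

Both numbers are reproduced simply by multiplying the same two assumptions through; nothing in the arithmetic turns a per-lesson loss into anything more than a per-lesson loss.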

However, the main error is in the scaling up. This is statistically invalid and renders the “10 – 20 days” figure quite meaningless. It is rather like saying “if you live to 90 you will spend 30 years asleep” – making us all Rip van Winkles. Perhaps the conclusion that “children lose on average approximately one minute per lesson as a result of external interruptions” is not quite as sexy as “per year interruptions and the disruptions they cause result in the loss of between 10 and 20 days of instructional time”.

Scaling up, aggregation and averaging are perhaps the three statistical techniques most misunderstood, misused and abused by non-statisticians.

The Conclusions

Overall, Matthew and Manuel’s analysis of the impact of these external interruptions on student attention is pretty robust; they have looked at students’ loss of concentration as a result of interruptions and the resulting learning loss. But losing concentration for one minute in each lesson is not the same as missing 20 days of schooling.

However, they are at pains to point out that these interruptions – and the subsequent loss of learning opportunities – are the fault of, and the responsibility of, school administrators and teachers, NOT of wayward children or feckless parents. As such, this does not play well to the conservative authoritarian narrative of Tom Bennett.

We have seen to our cost the way in which misinformation infects the media; the 2024 riots alone demonstrated this. But when research is presented as sound bites by people who do not have a background in doing and critiquing research, the outcome can be equally misguided. One of the implications that Matthew and Manuel draw from their research for classroom pedagogy is particularly telling.

“It is possible that frequent interruptions lead teachers to prioritize approaches that are more robust to frequent interruptions, such as individual work, and eschew more enriching whole-class discussion or group work.” (p. 15)

There we have it. Two experienced educational researchers and behavioural scientists talking of the enrichment of whole-class discussion and group work. Now there is an interesting take-away from this article for ResearchEd.

I will finish with a line from The Boxer by Simon and Garfunkel:

Still a man hears what he wants to hear and disregards the rest