
Why it's caveat emptor when it comes to some educational research

Article | Published 11 April, 2013


Tom Bennett

Every week Tom Bennett will be shouting at the laptop about some damn fool idea in education, or else he’ll be writing about classrooms, students, or why teaching is the most important job in the world. This week, Tom questions the validity of some educational research…

Many of the more brainless ideas trialled in education in the last few decades have been supported by research of apparently impeccable pedigree. Unfortunately it often doesn’t take a lot of poking to realise that some research is better than others. Here are the most common problems I find in social science research:

Poorly phrased hypotheses or titles

This is when you have a title that nearly reduces you to tears of mirth or sadness, depending on how strong your stomach is: ‘How does emotional intelligence best increase performance in postgraduate studies?’ ‘Using brain gym as a tool to promote multiple intelligences,’ that kind of thing. Papers loaded with so many assumptions and presumptions that you would need a crowbar to separate them all from each other. This is why physical scientists weep when they look at these kinds of papers. If they tried to get funding for ‘The causal relationship between fairies and dream catchers,’ there would be a riot in the Sorbonne. But some social scientists launch into their grand expeditions with, it seems, not a care in the world.

Papers so obviously designed to prove their point that the reader feels clobbered if they presume to disagree

This is one of the most common errors. ‘Research from the Academy of Flutes,’ the article will start, ‘shows that flute usage, or flutage, adds on average two grades to a pupil’s GCSE outcomes…’ and so on. ‘We asked 200 Cambridge professors from the University flute society if they felt that flute playing was useful to their overall well-being. 110% said yes…’ and so on.

Research that is unfalsifiable

Also common. Claims that ‘capitalism is inevitable, but so is communism’ may impress them in the shipyards, Mr Marx, but there’s no way of showing them to be false, short of waiting until the end of time and surveying every civilisation there will ever be.

Analysis that reaches past the data

This is the work of the devil: a paper takes the opinions of, say, 100 school children and presents it as a fait accompli that these opinions are representative of the whole population. Or worse, it then claims this evidence shows that ‘children are not being listened to enough’, and so on. A quick sanity check, sketched below, shows just how little 100 respondents can tell you.
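As a rough illustration (mine, not from any particular paper), suppose the 100 children really were randomly sampled, which is already a generous assumption. The standard 95% confidence interval for a proportion, computed below in Python, still spans about twenty percentage points at that sample size:

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a sample proportion."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Suppose 60 of the 100 children agree with some statement.
low, high = proportion_ci(0.60, 100)
print(f"95% CI: {low:.2f} to {high:.2f}")  # roughly 0.50 to 0.70
```

And that interval covers sampling noise only; a convenience sample from one school adds bias that no formula removes. Which leads me into my next, and least favourite, social science trope: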

Mistaking facts and values

If I interview 100 teachers and find out that 90% of them would like longer summer holidays, that is a fact: 90% of them think that. For a paper to then suggest that ‘this means teachers should have longer holidays’ is an absurd leap from fact to value; the writer is peddling the latter. Which is fine, have all the opinions you want, but don’t dress them up as research. I think that dogs should shut the hell up when I’m trying to sleep, but I haven’t kept a dream diary to back this up.

Social science is often not science. It is investigation; it is commentary; it often illuminates, and provides valuable guidance in human affairs. What it does not offer is reliable predictive power, or irrefutable explanatory mechanisms: merely commentary, case study, opinion, and subjective analysis.

And that’s fine, as long as we don’t conflate the two. Why does the distinction matter?

Lots of reasons. One is the quality of some of the research itself. Leslie K. John of Harvard Business School said that ‘1/3 of all academic psychologists admit to questionable research practices’: stopping data collection at convenient points, once the desired results had been found, and omitting other tested variables. One third! No wonder the field gets a reputation. A variety of scandals uncovered in social science research show how damaging this is to the integrity of the whole subject. Note how difficult it is for scientists to duplicate, and therefore test, the claims made by previous researchers in social science. Which makes the open release of all data even more important; when that isn’t done, a researcher can in effect say anything, and science is dead.
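To see how corrosive just one of those practices is, here is a minimal simulation of optional stopping (my illustration, not anything from John’s study). Both groups are drawn from the same distribution, so there is no real effect to find; yet checking a t-test after every batch of data and stopping at the first p < .05 ‘finds’ one far more often than the advertised 5% of the time:

```python
# A sketch of optional stopping, assuming numpy and scipy are available.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rate(trials=2000, max_n=100, peek_every=10, alpha=0.05):
    hits = 0
    for _ in range(trials):
        # No real effect: both groups come from the same N(0, 1) distribution.
        a = rng.normal(size=max_n)
        b = rng.normal(size=max_n)
        # Peek after every batch and stop at the first 'significant' result.
        for n in range(peek_every, max_n + 1, peek_every):
            if stats.ttest_ind(a[:n], b[:n]).pvalue < alpha:
                hits += 1  # a false positive, locked in by stopping early
                break
    return hits / trials

print(false_positive_rate())  # typically around 0.15-0.20, not the nominal 0.05
```

Fix the sample size in advance and test once at the end, and the same procedure comes in at roughly 0.05; the inflation is produced entirely by stopping when the numbers look right.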

And that’s barely the surface of it. Next time you read a piece of educational research, see how many of these sins it commits.

Educational research conference

And now I’m putting together a conference in September to talk about exactly these kinds of issues: how to make sure that educational research is as useful as possible, and how to get the research and teaching communities working together in ways that assist both. It’s called researchED 2013, and if you’re interested, please have a look. I think there are areas where research can transform understanding, and therefore practice, and areas where it has limited efficacy because of the inherently qualitative nature of education; the key thing is to discuss where those boundaries lie. Many teachers feel that research is dropped on them from on high, like a perpetual stream of steaming avian stew. Many are trying to do their own research. Many think research has nothing to teach them. And many are elbow-patch deep in it, working with others. They need to be talking to each other a bit more.

http://researched2013.wordpress.com/

 



Who is Tom Bennett?

Tom is a full-time teacher in an inner-city school and he’ll be blogging for us weekly on pedagogy and classroom management. Tom offers regular behaviour advice on the TES website and runs the TES behaviour forum. He also writes for the TES magazine, trains teachers across the UK and is the author of The Behaviour Guru, Not Quite a Teacher and Teacher.

 



Comments (3)


  • I don't know why you say education is "inherently qualitative". Learning is measurable, and there is a growing field of cognitive science research in which it is routinely measured. And randomized trials of different methods of instruction are performed. Unfortunately this work is unknown to the professors in education schools.

    APS recently brought together some of the main findings from this work here:

    http://www.psychologicalscience.org/index.php/publications/journals/pspi/learning-techniques.html


    18:21
    12 April, 2013

    SmarmyBarmy

  • Lots of good points, though the commenter above also has one in pointing out that 'educational' research and 'qualitative' research aren't synonymous.

    As for the assertion that social scientists falsify and otherwise cook their evidence, there has been a fair bit of investigation showing that 'real' scientists do this too. For a very unacademic reference, put #overlyhonestmethods into Twitter.

    I'd agree with you that there's nothing wrong with recognising the inherent differences between the natural and social sciences, as long as we don't presume the former has some kind of intellectual superiority.


    19:11
    12 April, 2013

    jaffafairy

  • Hi Tom
    Thanks, lots of interesting ideas here. A few thoughts worth sharing, off the top of my head and late on a Friday night...
    1. Your point about papers designed to prove a point is a good one, but it reminds me more of policy documents, and of how policy makers use research to generate a "common sense discourse": excluding data, arguments and research which contradict them; marginalising opponents and their views as out-of-touch and representing vested interests; and using linguistic devices to suggest certainty and authority for their own views.
    2. Be wary of educational research or statements which use quantitative presentation to lend an authority and rigour that is perhaps not warranted. Effect sizes are one example; another would be inspection reports and summaries which use possibly spurious numbers to suggest objectivity in an ultimately subjective process.
    3. There are obvious questions about how a qualitative methodology such as a case study approach can be generalised more widely, but there are arguments in favour of "comparability" or even "naturalistic generalisation" which might be worth exploring. See Schofield, J., 2002. Increasing the Generalizability of Qualitative Research. In A. Huberman & M. Miles, eds. The Qualitative Researcher's Companion. Thousand Oaks, CA: Sage. pp. 171-204.

    Cheers
    David


    22:08
    12 April, 2013

    DavidCameron76
