The 10 Stuff-Ups We All Make When Interpreting Research

Have you ever tried to interpret some new research to work out what the study means in the grand scheme of things? Well, maybe you're smart and didn't make any mistakes -- but more likely you're like most humans and accidentally made one of these 10 stuff-ups.


1. Wait! That's just one study!

You wouldn't judge all old men based on just Rolf Harris or Nelson Mandela. And so neither should you judge any topic based on just one study.

If you do it deliberately, it's cherry-picking. If you do it by accident, it's an example of the exception fallacy.

The well-worn and thoroughly discredited case of the measles, mumps and rubella (MMR) vaccine causing autism serves as a great example of both of these.

People who blindly accepted Andrew Wakefield's (now retracted) study -- when all the other evidence was to the contrary -- fell afoul of the exception fallacy. People who selectively used it to oppose vaccination were cherry-picking.

2. Significant doesn't mean important

Some effects might well be statistically significant, but so tiny as to be useless in practice.


Associations (like correlations) are great for falling foul of this, especially when studies have huge numbers of participants. Basically, if you have large numbers of participants in a study, significant associations tend to be plentiful, but not necessarily meaningful.

One example can be seen in a study of 22,000 people that found a significant (p<0.00001) association between people taking aspirin and a reduction in heart attacks, but the size of the result was minuscule.

The difference in the likelihood of heart attacks between those taking aspirin every day and those who weren't was less than 1%. At this effect size -- and considering the possible costs associated with taking aspirin -- it is doubtful whether taking it is worthwhile at all.
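
To see how this happens, here's a minimal Python sketch. The counts are invented purely to mirror the scale of the aspirin example (roughly 22,000 participants split into two groups); they are not the trial's actual data.

```python
import math

# Hypothetical counts in the spirit of the aspirin example -- illustrative
# numbers only, not the actual trial data.
aspirin_events, aspirin_n = 110, 11000   # heart attacks with daily aspirin
placebo_events, placebo_n = 190, 11000   # heart attacks without

p1 = aspirin_events / aspirin_n          # risk with aspirin
p2 = placebo_events / placebo_n          # risk without

# Two-proportion z-test with a pooled standard error.
pooled = (aspirin_events + placebo_events) / (aspirin_n + placebo_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / aspirin_n + 1 / placebo_n))
z = (p2 - p1) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"absolute risk difference: {p2 - p1:.2%}")  # about 0.73%
print(f"p-value: {p_value:.1e}")                   # about 3e-06
```

The p-value sails past almost any significance threshold, yet the absolute difference between the groups is well under 1% -- exactly the pattern reported above.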

3. And effect size doesn't mean useful

We might have a treatment that lowers our risk of a condition by 50%. But if the risk of having that condition was already vanishingly low (say a lifetime risk of 0.002%), then halving it achieves very little in absolute terms.

We can flip this around and use what is called Number Needed to Treat (NNT).

In normal conditions, if two random people out of every 100,000 would get that condition during their lifetime, you'd need all 100,000 to take the treatment to reduce that number to one.
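
Arithmetically, NNT is just the reciprocal of the absolute risk reduction. A quick sketch using the illustrative 2-in-100,000 figures from the text:

```python
# Number Needed to Treat (NNT) = 1 / absolute risk reduction.
# Illustrative figures from the text: a lifetime risk of 2 in 100,000,
# halved by the treatment (a "50% relative risk reduction").
baseline_risk = 2 / 100_000          # 0.002% lifetime risk
treated_risk = baseline_risk / 2     # risk halved to 1 in 100,000

arr = baseline_risk - treated_risk   # absolute risk reduction
nnt = 1 / arr

print(f"NNT: {nnt:,.0f}")  # 100,000 people treated to prevent one case
```

A 50% relative reduction sounds impressive, but the NNT makes the absolute picture plain: 100,000 people have to take the treatment for one of them to benefit.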

4. Are you judging the extremes by the majority?

Biology and medical research are great for reminding us that not all trends are linear.

We all know that people with very high salt intakes have a greater risk of cardiovascular disease than people with a moderate salt intake.


But hey -- people with a very low salt intake may also have a high risk of cardiovascular disease.

The graph is U-shaped, not just a line going straight up. The people at each end of the graph are probably doing different things.
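
To make the shape concrete, here's a toy model in Python. The curve and its numbers are invented for illustration and aren't drawn from any real salt study:

```python
# Toy model of a U-shaped relationship: risk is lowest at a moderate
# intake and rises towards both extremes. Purely illustrative numbers.
def relative_risk(salt_g_per_day, optimum=5.0, steepness=0.05):
    """Hypothetical relative risk, minimised at `optimum` grams per day."""
    return 1.0 + steepness * (salt_g_per_day - optimum) ** 2

for intake in (1, 3, 5, 8, 12):
    print(f"{intake:>2} g/day -> relative risk {relative_risk(intake):.2f}")
```

Both the very low and very high ends show elevated risk. A straight line fitted through the middle of such data would predict that less salt is always better -- which gets the low-intake end exactly backwards.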

5. Did you maybe even want to find that effect?

Even without trying, we notice and give more credence to information that agrees with views we already hold. We are attuned to seeing and accepting things that confirm what we already know, think and believe.

There are numerous examples of this confirmation bias, but studies such as this reveal how disturbing the effect can be.

In this case, the more educated people believed a person to be, the lighter they (incorrectly) remembered that person's skin was.

6. Were you tricked by sciencey snake oil?

You won't be surprised to hear that sciencey-sounding stuff is seductive. Hey, even the advertisers like to use our words!

But this is a real effect that clouds our ability to interpret research.

In one study, non-experts found even bad psychological explanations of behaviour more convincing when they were associated with irrelevant neuroscience information. And if you add in a nice-and-shiny fMRI scan, look out!

7. Qualities aren't quantities and quantities aren't qualities

For some reason, numbers feel more objective than adjective-laden descriptions of things. Numbers seem rational, words seem irrational. But sometimes numbers can confuse an issue.

For example, we know people don't enjoy waiting in long queues at the bank. If we want to find out how to improve this, we could be tempted to measure waiting periods and then strive to reduce that time.

But in reality you can only reduce the wait time so far. And a purely quantitative approach may miss other possibilities.

If you asked people to describe how waiting made them feel, you might discover it's less about how long it takes, and more about how uncomfortable they are.

8. Models by definition are not perfect representations of reality

A common battle-line between climate change deniers and people who actually understand evidence is the effectiveness and representativeness of climate models.

But we can use much simpler models to look at this. Just take the classic model of an atom. It's frequently represented as a nice stable nucleus in the middle of a number of neatly orbiting electrons.

While this doesn't reflect how an atom actually looks, it serves to explain fundamental aspects of the way atoms and their constituent particles work.

This doesn't mean people haven't had misconceptions about atoms based on this simplified model. But these can be modified with further teaching, study and experience.

9. Context matters

The US president Harry Truman once whinged about all his economists giving advice, but then immediately undercutting it with an "on the other hand" qualification.

Individual scientists -- and scientific disciplines -- might be great at providing advice from just one frame. But for any complex social, political or personal issue there are often multiple disciplines and multiple points of view to take into account.

To ponder this we can look at bike helmet laws. It's hard to deny that if someone has a bike accident and hits their head, they'll be better off if they're wearing a helmet.


But if we are interested in whole-of-society health benefits, there is research suggesting that a subset of the population will choose not to cycle at all if they are legally required to wear a helmet.

Balance this against the number of accidents where a helmet actually makes a difference to the health outcome, and mandatory helmet laws may in fact be negatively impacting overall public health.

Valid, reliable research can find that helmet laws are both good and bad for health.

10. And just because it's peer reviewed that doesn't make it right

Peer review is held up as a gold standard in science (and other) research at the highest levels.

But even if we assume that the reviewers made no mistakes, that there were no biases in the publication policies, and that there was no outright deceit, an article appearing in a peer-reviewed publication just means that the research is ready to be put out to the community of relevant experts for challenging, testing and refining.

It does not mean it's perfect, complete or correct. Peer review is the beginning of a study's active public life, not the culmination.

And finally …

Research is a human endeavour and as such is subject to all the wonders and horrors of any human endeavour.

Just like in any other aspect of our lives, in the end we have to make our own decisions. And sorry, even the most appropriate use of the world's best study does not relieve us of this wonderful and terrible responsibility.

There will always be ambiguities that we have to wade through, so, as in any other human domain, do the best you can on your own -- but if you get stuck, get some guidance directly from useful experts, or at least from sources that draw directly on them.

Will J Grant is Researcher/Lecturer, Australian National Centre for the Public Awareness of Science at Australian National University. Rod Lamberts is Deputy Director, Australian National Centre for the Public Awareness of Science at Australian National University. Will J Grant owns shares in a science communication consultancy. He has previously received funding from the Department of Industry. Rod Lamberts has received funding from the ARC in the past. He also holds shares in a science facilitation consultancy.

This article was originally published on The Conversation. Read the original article.


Comments

    "A common battle-line between climate change deniers and people who actually understand evidence is the effectiveness and representativeness of climate models."

    Yes, the typical Conversation technique of polarising/extreming (#notaword) the argument (especially with climate change). If you don't fully agree with everything the climate change people say, you're a "denier"! No possible middle ground, right?

    (I know this is a view endorsed by Angus, but it's still funny to watch in action.)

      Agreed. Having lived through the various 'sky is falling' panics foisted by academics over the decades, I'm reluctant to believe models that can't even get the past right, when the data is all there! The difficulty in something as hugely complex as climate is to tell the difference between the natural random walk and a long term trend, particularly when the scale of interpretation is brought to the table: we're talking about a scale of tens of millions of years during which there have been vast changes in climate...but over the scale of vast periods. The little fluctuations we see at the moment are possibly just that...but no one can tell! And are changes in the tiny amount of CO2 in the atmosphere (and the even tinier bit apparently caused by humanity) germane when there are vast loads of other 'green house' gasses? Not possible to say.

        They could get the models perfect using all of the existing data, but that would be overfitting. I'm more inclined to think you don't understand modelling than think climate modelling is a dramatic failure.

      I thought it was healthy to be sceptical...
      but alas I am treated like Hitler's incarnation for not believing!

    I appreciate you used the terms "we all" instead of "you" - which is commonly done as sensationalism on this network.. But still am not entirely sure the implication is accurate or worthwhile.

    Meh..! Most of my experimentation is empirical..! :)

    Apart from the silly swipe at so-called 'climate change deniers', a good article. Particularly on effect size and statistical significance. The latter is one of the most misunderstood and misused concepts. Statistical significance has nothing to do with actual real-world significance. It is just (and usefully so in stats) an indication of how many times the experiment would give wrong information in a large number of repeats.

    I always ask myself these 4 questions:
    - Who did the research?
    - Why did they do it?
    - Who funded it?
    - Why am I reading about it (especially if it is in the mass media)?
    Then I wonder if that has led the researcher to be subjected to point 5 above?
    Call me cynical but it seems to be the case more often than not.

      Or you could read the research itself and judge it on its scientific and methodological merits (assuming you're qualified to do so).

    A common battle-line between climate change deniers and people who actually understand evidence is the effectiveness and representativeness of climate models.

    No you are just being an arse about it.
