Learn How To Quickly Evaluate The Worthiness Of Scientific Reports

Bananas make you gain weight faster! Mice that were given Advil burn twice as many calories! If you're wondering whether news sites and social media are just messing with you, anyone can head to the scientific paper at the heart of a headline and evaluate it on four criteria.

Photo by Nic's events (Flickr).

Wired Science gives us four things to look at when deciding whether a scientific report has been woefully overblown or repurposed by the media. Causation versus correlation, the "true size of the effect," and statistical power are among them:

Look at two key factors, the n and the p. The n is the number of subjects used in the study. Multifaceted experiments typically have fewer subjects than simple surveys. Genetics studies need a big n. The p value lets you know whether the result is "statistically significant": it's the probability of something occurring by chance alone. You want to see a p of less than 0.05. (Results can be statistically significant and still only show correlation, or have confounding factors.)
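
To make the quoted advice concrete, here's a minimal sketch of where n and p come from in a simple two-group comparison (Python with numpy and scipy; this is purely illustrative, not from the Wired piece, and the numbers are invented):

    # Hypothetical comparison: a "banana" group vs. a control group.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    control = rng.normal(loc=70.0, scale=5.0, size=30)  # weights (kg), no bananas
    treated = rng.normal(loc=71.0, scale=5.0, size=30)  # weights (kg), bananas

    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"n per group = {len(control)}, p = {p_value:.3f}")
    # p < 0.05 is the usual cutoff for "statistically significant", but even a low p
    # only shows an association here, not causation, and says nothing about whether
    # the effect is large enough to matter.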

Reading and evaluating a paper always takes time and patience, but for a quick read on what you need to know (or can safely ignore), Wired's checklist seems quite apt.

Learn to Read a Scientific Report [Wired Science]


Comments

    Something else to bear in mind is that you can have a high n and a low p but a trivial effect. If bananas make you gain weight faster, but only 0.02% faster, you probably don't care. Also check where the original has been published. It isn't uncommon for the wrong statistical test to be applied, creating the illusion of a significant effect (low p) where there is none. You probably can't tell that yourself, but if it's in a reputable journal (high impact factor), the reviewers probably checked.

      Also keep in mind that the chance of a trivial effect becoming statistically significant increases with n. So if you have a study with, say, thousands of participants, chances are that even extremely small effect sizes will come out as statistically significant. Remember that statistical significance does not always translate to practical significance in the real world. Unfortunately, the media doesn't get this concept and has a knack for alarming the general public with badly interpreted scientific results.
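
      A quick simulation makes the point (Python with numpy and scipy; purely illustrative, and the effect and sample sizes are made up): the same tiny difference that looks like noise in a small sample comes out "significant" once n is huge.

          import numpy as np
          from scipy import stats

          rng = np.random.default_rng(0)
          tiny_effect = 0.05  # a trivially small difference between the two groups

          for n in (100, 100_000):
              a = rng.normal(0.0, 1.0, size=n)
              b = rng.normal(tiny_effect, 1.0, size=n)
              p = stats.ttest_ind(a, b).pvalue
              print(f"n = {n:>7}: p = {p:.4f}")
          # With n = 100 the tiny effect is usually not significant (p well above 0.05);
          # with n = 100,000 it is almost always "significant" despite being trivial in size.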

    Along the same lines, it's certainly worth a few minutes of your time to learn how even peer-reviewed, published research may not give the whole picture: http://www.badscience.net/2012/09/i-did-a-talk-at-ted-about-drug-companies-and-hidden-data/

    I thought it was because they used words like "chooglin'" and "fungrified", but I could be wrong here...

    Another thing to look out for is pseudo-scientific reports from universities. The media report starts with something like "a team at X university has found a link between meat pies and left-handedness". These are prepared by statistics students based off some chunk of data they got somewhere; it was not a double-blind crossover trial.

    www.mediadoctor.org.au is an excellent Australian website that evaluates the quality of health-related stories in the general media.

    I would like to repost the top comment, by a user named Ryan, from the Wired article, if I may. I think it should be read by those interested in this topic:

    "As a scientist who regularly reads (and presents/discusses) journal articles, I gotta say this is gonna be pretty useless information for... well... everyone. The hard part about understanding a journal article is literally understanding the article. You need a dictionary, you need to look up words, you need to cross-reference graphs, and generally you need to know what's going on. You also need to understand how to check citations and find more background, what the difference is between a "review" or "meta" article and a regular study, etc. There are dozens of tips/hints you could give on that which might actually help someone understand one of these incomprehensible things.

    Understanding the statistics is pretty nice, but that explanation is woefully inadequate for understanding any part of the statistics in general. You need to understand that the p is a probability representing uncertainty, what n REALLY means (number of subjects is nice, but every study will have a different definition of "subject"), error bars, what the labeling on a graph means when it shows which results are significantly different, etc.

    Of far more import than who wrote the article is where it's published: a reliable, peer-reviewed journal (e.g. Cell, Science, Nature, etc.), rather than something a company put out on its own. Even if the research was financed/done by a company, if it's been peer-reviewed it's likely pretty reliable.

    Oh, and a meta-analysis doesn't change correlation to causation by filtering out "noise"... An experiment that proposes and tests a mechanism of causation gives you causation."
