What Scientists Can Learn From Pseudoscience

Scientists should study pseudoscience – see what the pseudoscientists are up to and perhaps (for a laugh) try a few pseudostudies themselves.


Critically, scientists must learn what really distinguishes science from pseudoscience. It is easy to fall for the comforting myth that pseudoscience is the domain of palmists on TV and cranks claiming to predict earthquakes with the moon. Amusing, sometimes exasperating, but mostly harmless stuff.

But the most dangerous pseudoscience is not produced by amateurish cranks, but by a minority of qualified scientists and doctors. Their pseudoscience is promoted as science by think tanks and sections of the media, with serious consequences.

British doctor Andrew Wakefield's claims about vaccines and autism continue to impact vaccination rates 16 years on, despite Wakefield being deregistered and his research debunked.

Why do a minority of scientists produce pseudoscience? Clearly some pseudoscience is strongly associated with ideological beliefs, and motivated reasoning can overwhelm data, logic and years of training. Perhaps some scientists get complacent, expecting their hunches to always be correct.

But perhaps there's another reason that's closer to home. Is part of the problem how we educate prospective scientists?

Hypothesis

Pseudoscience mimics aspects of science while fundamentally denying the scientific method. A useful definition of the scientific method is:

principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses.

A key phrase is "testing of hypotheses". We test hypotheses because they can be wrong.
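To make this concrete, here is a minimal sketch of what testing a hypothesis against data can look like: a simple permutation test, written in plain Python. The fertiliser scenario and the plant-growth numbers are hypothetical, invented purely for illustration.

```python
import random

random.seed(0)

# Hypothetical data: heights (cm) of plants with and without a new fertiliser.
# Hypothesis: the fertiliser increases growth. Because the hypothesis can be
# wrong, we ask how often randomly regrouping the pooled data produces a
# difference in means at least as large as the one actually observed.
treated = [21.3, 23.1, 19.8, 24.5, 22.7, 23.9]
control = [19.1, 20.4, 18.7, 21.0, 19.9, 20.2]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treated) - mean(control)

pooled = treated + control
trials = 10_000
hits = 0
for _ in range(trials):
    random.shuffle(pooled)
    # Random regrouping: first six values play "treated", the rest "control".
    if mean(pooled[:len(treated)]) - mean(pooled[len(treated):]) >= observed:
        hits += 1

p_value = hits / trials
print(f"observed difference: {observed:.2f} cm, p ~ {p_value:.3f}")
```

If random regroupings rarely match the observed difference, the data are hard to explain without the fertiliser effect; if they match it often, the hypothesis has failed its test. Either outcome must be possible, or no test has taken place.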

Hypothesis testing is the first victim of pseudoscience. The conclusions are already known, and the data and analyses are (consciously or unconsciously) chosen to reach the desired conclusion.

Unfortunately, high school and undergraduate science students may have limited exposure to hypothesis testing. A student laboratory exercise may repeat an experiment from decades ago, which has been simplified for teaching, and whose conclusions are well known.

Such an exercise teaches technical skills at the expense of hypothesis testing. Should we expect students to "get" hypothesis testing without real experience? No, and without that experience we may undermine years of education.

Time is of the essence

What is the most time-consuming aspect of science? Collecting the data? Producing results?

In a school or university laboratory class, much time is devoted to obtaining the relevant results. However, this doesn't truly reflect how scientific research is undertaken.

When undertaking scientific research, obtaining a result can be relatively quick. The painful part is cross checking the validity of the result with different experiments and new data, including comparison with already published studies.

Pseudoscience lacks these cross checks. "Discoveries" of alien life appear every year or so in the "Journal of Cosmology". Inevitably each "discovery" is followed by debunking, showing the "aliens" and "meteorites" have mundane Earthly origins. To a professional scientist, not checking for these obvious and mundane possibilities seems bizarre, but such sloppiness is a hallmark of pseudoscience.

Unfortunately, our teaching laboratory classes don't always emphasise cross checking. Students often spend most of their time obtaining results, with little time and few marks allocated to validating those results.

Journal articles and media reporting of science also emphasise new results (and understandably so). However, this reporting of science doesn't reflect how scientists devote their time and effort.

While "the result" is often the prelude to months of painful verification for scientists, are we actually training our students and the public that "the result" is what science is all about?

Nice fit

Fitting mathematical models to data has been fundamental to science since its early history. Johannes Kepler's mathematical laws of planetary motion, developed in the early 17th century, paved the way for Newton's theories of motion and gravity.

Students often learn (or assume) that the smaller the difference between the data and a model, the better the model. This is often encouraged by the R² statistic, which Microsoft Excel provides with its spreadsheet trendlines. Unfortunately, taken to overly simple extremes, this can lead to problems.

When we look at data, we are often looking at a trend with noise superimposed. For example, maximum temperature gradually increases from winter to summer (trend), but from day-to-day it fluctuates up and down (noise).

We can model the trend with time using a relatively simple function (such as a sine curve), but with more complex functions (like high order polynomials) we can reproduce the fluctuations too. This improvement is largely illusory though, as we are fitting to fluctuations that vary from year to year.

In statistics this sin is known as over-fitting, and its dangers are taught in university courses – but I've seen first-hand that students don't always understand the risks. Perhaps the aesthetic appeal of a model following all data is too great.
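A hypothetical sketch of why chasing R² rewards over-fitting (synthetic straight-line data fitted with NumPy; none of these numbers come from the article): measured on the data used for the fit, R² can only rise as polynomial degree grows, so the over-fitted model always "looks" better.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: a straight-line trend plus noise.
x = np.linspace(-1.0, 1.0, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)

def r_squared(y, y_fit):
    """Coefficient of determination: 1 minus residual over total variance."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# On the fitted data, R² never decreases as the polynomial degree grows,
# even though the underlying trend is just a straight line.
r2 = {}
for degree in (1, 5, 15):
    coeffs = np.polyfit(x, y, degree)
    r2[degree] = r_squared(y, np.polyval(coeffs, x))
    print(f"degree {degree:2d}: R^2 = {r2[degree]:.4f}")
```

The extra polynomial terms are fitting the noise, not the trend, so the "improvement" in R² evaporates as soon as the model meets a fresh noise realisation.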

Pseudoscience embraces over-fitting in a myriad of ways. Overly complex functions (including artificial neural networks), with no basis in physics, are often fitted to data without caution. Data may be shifted, rejected or filtered without justification.

A common consequence of over-fitting is wild "predictions" based on extrapolating functions into the future. Time and time again, climate change deniers have claimed that long-term warming will soon be replaced by exceptionally rapid cooling. Such claims have not come to pass, and current claims (promoted by Business Advisory Council chairman Maurice Newman, among others) are just as dubious.
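A hypothetical illustration of that extrapolation failure (synthetic data and NumPy, not an analysis of any real temperature record): an over-fitted high-order polynomial can track the noise beautifully within the observed interval, then swing wildly as soon as it is extrapolated, while a simple linear fit stays near the trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "temperature anomaly": a gentle linear trend plus noise,
# observed over the interval [0, 1].
t = np.linspace(0.0, 1.0, 40)
temp = 0.8 * t + rng.normal(0.0, 0.1, t.size)

linear = np.polyfit(t, temp, 1)     # simple model of the trend
wiggly = np.polyfit(t, temp, 12)    # over-fitted: chases the noise

# Extrapolate both fits well beyond the observed data, to t = 2.
true_value = 0.8 * 2.0              # the underlying trend, continued
pred_linear = np.polyval(linear, 2.0)
pred_wiggly = np.polyval(wiggly, 2.0)

print(f"trend continued:         {true_value:.2f}")
print(f"linear extrapolation:    {pred_linear:.2f}")
print(f"degree-12 extrapolation: {pred_wiggly:.2f}")
```

Inside the data the two fits are hard to tell apart; outside it, the high-order terms of the over-fitted polynomial dominate and the "prediction" departs rapidly from the trend.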

Over-fitting isn't merely an abuse of statistics, but can influence public debate about science. If we don't teach students about the risks of over-fitting and statistics abuse, public policy may be damaged.

Go team!

Collaboration is a powerful tool for science, enabling scientists to branch into new disciplines, exchange expertise and reduce errors.

Collaboration is also a powerful weapon against pseudoscience. An astronomer knows that Jupiter and Saturn don't induce meaningful tides on Earth. An oceanographer knows the strengths and weaknesses of tide gauge measurements.

The flaws of pseudoscience can thrive in the absence of collaboration. The errors in Australian geologist Ian Plimer's 2009 book Heaven and Earth indicate that Plimer did not collaborate with experts on radiative transfer and astrophysics.

The absence of collaboration by Ian Plimer may be part of a broader pattern. Studies rejecting anthropogenic climate change have an average of 2.0 authors, while studies with no explicitly stated position or endorsing anthropogenic climate change have averages of 3.6 and 3.4 authors respectively. Those who reject anthropogenic climate change collaborate less than other scientists, which can increase the likelihood of errors.

Unfortunately students may have limited experience of collaboration. Students sometimes work in groups of two or three, but these groups often don't reproduce the dynamics of scientific collaborations.

Students don't always create their own groups, and they often work with students with similar skills. It is rare for students to create new groups with diverse skills from scratch.

Marking schemes that evaluate performance relative to peers may even actively discourage collaboration and the sharing of expertise. Such schemes may discourage the very skills students need to succeed in science.

Can we fix it?

How can we educate scientists, while reducing the number of trained pseudoscientists?

We need to make science education more like science itself, and this has been recognised by many science teachers. Students need the time to explore and test multiple plausible hypotheses. We may sacrifice some discipline-specific skills along the way, but perhaps this is a price worth paying.

We need to recognise and encourage the cross-disciplinary approach to science. Statistics is sometimes relegated to a few undergraduate subjects, whereas it really has to be learnt (and relearnt) throughout an education and career. Budding scientists also need to learn about decision making, logic and logical fallacies.

We need to find means of making science education reflect the collaborative nature of scientific research. This does happen for many PhD students, but many undergraduate students don't get the opportunity to embrace and be rewarded for collaboration.

If we cannot effectively educate our students about the true nature of science, a harmful byproduct will be a trickle of trained pseudoscientists, who will undermine the effectiveness of science in our society into the future.

Michael J. I. Brown is ARC Future Fellow and Senior Lecturer at Monash University. He receives research funding from the Australian Research Council and Monash University, and has developed space-related titles for Monash University's MWorld educational app.

This article was originally published on The Conversation. Read the original article.


Comments

    The autism-vaccine scare is the worst pseudoscience in a long time. It's caused so much harm, and entire groups have been created based solely around promoting it. I can't for the life of me understand why normal, otherwise rational people buy into it.

      Because of cognitive dissonance. Mostly fuelled by the likes of Today Tonight/A Current Affair/etc.

      I think it is because of desperation. People who have an autistic child are often desperate to know why. There is frequently quite a bit of guilt, and there is a certain value in having something external to blame.
      Having an external locus of blame also provides opportunities like 'making sure no other child is harmed like this', and taking action on that basis can be very rewarding. Group action leads to social connections, shared attitudes, and so on.

      Similar problems abound for treatment, which is why things like hyperbaric chambers and bleach are variously offered by snake-oil salesmen to 'treat' autism. When offered a choice between no hope of a cure, and any hope of a cure people will often choose the latter. They are desperate to believe that something can work.

      (Note: I don't blame parents for believing in crazy cures or bollocks causes; they are unfortunately swimming in an ocean of information - much of it nonsense - and don't necessarily have the background to interpret quite a bit of it)

        I can't speak for people whose children have autism but if my son had something wrong, it's not hard to work out the batshit crazy from the rational. I'm thinking these people must have already had a short circuit in their head or something, much like the conspiracy theory people.

          Sure, but consider the original vaccine issue: it is perfectly plausible that a vaccine could have a neurological side effect in some children. To someone who doesn't have any science background, the idea of injecting 'mercury' is scary, since people are very familiar with the idea that mercury causes brain damage.
          Some of the treatments are crazy, but it is a continuum from relatively plausible to nuts. For example, a subset of children with autism also frequently have stomach aches and so on (although whether it is higher than the general population I couldn't say). In that context, saying that autism is caused by food allergies and leaky gut is not so crazy as it would otherwise sound.
          The other thing to keep in mind is that there is an entire conference dedicated to this madness: AutismOne. It basically validates treatments (in the sense of endorsing their use, not in the sense that the treatments are at all valid) with a strong semblance of scientific authority and knowledge. There is a whole industry around selling bollocks to the parents of children with autism, so I don't think it is surprising that parents fall for it.

            I find it surprising that people fall for it. There is no way they have not heard the opposing (and correct) view coming from actual authority figures like doctors, other medical professionals, government and medical bodies. Choosing to ignore that and believing in these fringe and sometimes dangerous views blows my mind.

    @darren, because the louder message often drowns out the right message - and the "vaccinations cause autism" message is a pretty damn loud one that continues to be resurrected by laymen and/or those who don't bother being objective or researching anything once there is an assumption (even an incorrect one) that it will harm their "little precious".

    The spread of misinformation is the worst aspect of facebook / twitter sadly...
