How To Spot Bogus Polls In The News

When you’re reading the news, it can feel reassuring when the journalist backs up their claim with survey results. But not all surveys are equally trustworthy. Thankfully there are a few telltale signs of untrustworthy polls – as well as polls you can kind of trust. Yeah, it’s a spectrum.

We talked to Jon Cohen, one of the most qualified poll experts in America. For the past four years, he’s worked with news organisations to conduct polls as chief research officer at SurveyMonkey. Before that he was vice president of research at the Pew Research Center and (for eight years) director of polling at the Washington Post. According to him, changing technology has actually made polling harder.

The Problem

“We’re in a much tougher position for evaluating polls today than we were even ten years ago,” Cohen says. Calling mobile phones can cost twice as much as calling landlines, so modern pollsters often have to use new methods. And because many news outlets can’t afford their own polling, they rely on outside polls – which means there are more ways for a sloppy, inaccurate poll to slip into the media.

Meanwhile, experts in the field often disagree on polling methods. Some of the industry’s best standards have grown out of date. The National Council on Public Polls provides a list of “twenty questions a journalist should ask about poll results.” But as Cohen points out, that list hasn’t been updated in over a decade. The number of adults without internet access has halved since then, and the use of mobile phones in polling has tripled, both of which drastically change things for pollsters.

Surveys are now conducted through all kinds of cheaper methods, including robocalls and multiple forms of online polling, all of which were considered suspect back when home calling was the gold standard. SurveyMonkey, which works with news organisations to conduct online polls, randomly selects its respondents from the 3 million daily users taking various surveys on its site. (SurveyMonkey publishes its methodology on its website.)

But as Cohen admits, not every polling expert agrees with SurveyMonkey’s methods. FiveThirtyEight’s pollster ratings give SurveyMonkey a grade of C-. Meanwhile Cohen believes that FiveThirtyEight rates some pollsters too highly, sometimes rewarding them for matching election results even when they just got lucky. And this is all between two sites that have partnered up on surveys.

Polling experts also disagree on some basic principles, like whether to try to survey a huge number of people representing the general population (and then filter the results), or whether to start with the specific population you want, like registered voters or property owners. So the definition of a good poll can depend on which expert you ask. Still, experts tend to agree about several important factors, and you can use these to judge the poll results you see in the news.

The Solution

As old as the NCPP’s twenty questions are, they’re still a good starting point. They’re intended for journalists writing about polls – but now that more bad polls make it into media reports, these questions are useful for readers. They include “Who paid for the poll?” and “What is the sampling error?” and questions about who was actually surveyed. (The NCPP elaborates on the implications of each answer.)

Cohen suggests some new questions: Were the respondents chosen randomly, or did they “opt in” to a survey embedded in a CNN article? The latter skews results a lot, and if those respondents didn’t also answer some demographic questions, it’s hard to un-skew the results. (Cohen says that SurveyMonkey’s respondents tend to be more highly educated, which SurveyMonkey adjusts for, but that otherwise their demographics tend to match national census data.) So with any poll, and especially any online poll, check how the respondents were selected.
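One common way pollsters make that kind of adjustment is to weight respondents so the sample’s demographics match census benchmarks. Here’s a minimal Python sketch of that idea with invented education categories, targets and answers – it illustrates the general technique, not SurveyMonkey’s actual procedure.

```python
# Minimal sketch of demographic weighting (illustrative numbers only).
# Respondents from an over-represented group (e.g. university-educated people)
# are weighted down so the weighted sample matches census targets.

# Hypothetical census shares of the adult population by education.
census_share = {"no_degree": 0.65, "degree": 0.35}

# Hypothetical raw poll: each respondent has an education level and an answer.
respondents = [
    {"education": "degree", "supports_policy": True},
    {"education": "degree", "supports_policy": False},
    {"education": "degree", "supports_policy": True},
    {"education": "no_degree", "supports_policy": False},
    {"education": "no_degree", "supports_policy": True},
]

# Share of each education group in the raw sample.
n = len(respondents)
sample_share = {
    group: sum(r["education"] == group for r in respondents) / n
    for group in census_share
}

# Weight = population share / sample share, so over-sampled groups count less.
weights = {g: census_share[g] / sample_share[g] for g in census_share}

# Weighted estimate of support for the (hypothetical) policy.
total_weight = sum(weights[r["education"]] for r in respondents)
weighted_support = sum(
    weights[r["education"]] for r in respondents if r["supports_policy"]
) / total_weight

print(f"Raw support: {sum(r['supports_policy'] for r in respondents) / n:.0%}")
print(f"Weighted support: {weighted_support:.0%}")
```

In this toy sample the university-educated respondents are over-represented, so weighting them down nudges the headline number – which is exactly why you want to know whether, and how, a pollster made such adjustments.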

How many people were polled? While a well-done poll with a small sample size can be valuable, Cohen says, you can’t get reliable results with fewer than 100 respondents – and ideally you want far more.

So if you want to break down results for multiple subgroups, you need to start by polling 1,000 people – or better yet, 10,000. If you poll 500 people, but only 60 of them are black, you can’t draw meaningful conclusions about those 60 black respondents, let alone pretend they’re representative of the national average. If a single respondent changed their mind, that would shift your results by more than a percentage point. (Cohen says that for rare populations, pollsters might settle for as few as 75 respondents.)
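To see why small samples and small subgroups are shaky, here’s a rough Python sketch using the textbook 95 per cent margin-of-error approximation for a simple random sample. The formula assumes an idealised random sample, and real pollsters use more sophisticated error calculations, but the sample sizes are the ones mentioned above.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Margins of error for the sample sizes discussed above.
for n in (1000, 500, 100, 60):
    print(f"n = {n:4d}: +/- {margin_of_error(n):.1%}")

# In a subgroup of 60 respondents, one person changing their answer
# moves that subgroup's result by 1/60 - nearly two percentage points.
print(f"One respondent in a 60-person subgroup = {1 / 60:.1%} of that subgroup")
```

The margin of error for 1,000 respondents comes out around three points; for a 60-person subgroup it balloons to roughly 13 points, which is why results sliced that thin are rarely worth reporting.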

What were the actual questions asked? A reputable pollster, Cohen says, will disclose their questions and multiple-choice answers. Knowing these can completely change the meaning of the results.

During the Iraq War, Cohen says, a survey report claimed that three quarters of Americans wanted U.S. troops out of Iraq within a year. But the survey, funded by an anti-war donor, was engineered to get the desired answer. Respondents had four choices: Did they want troops to leave within three months, six months, a year, or to “stay as long as necessary?” There was no option for, say, troops leaving in two or three years. While the poll results weren’t valueless (they did show that many people wanted troops out of Iraq), they didn’t actually mean what the headlines claimed they meant. Which leads to the next question:

How are the results reported? If a poll is reported with very few details, making it impossible to answer all the other questions, be suspicious. The best polling organisations and outlets, such as Pew, the Post, or Quinnipiac University, are open about their methods, their data, and how they adjust that data. At the Post, Cohen’s team ran phone polls side-by-side with online polls to test how the polling method shaped the results. Then, when they ran a poll with one method, they knew how to adjust it.

There are tell-tale signs of an untrustworthy poll. “If a poll is reported with decimal points, I’m very quick to dismiss it,” Cohen says. “No poll, no matter how conducted, can achieve tenth-percentage-point accuracy on how people think.” Check if the polling organisation is trying to draw results from small subsets of the respondent pool, or if they’re overstating the meaning of their findings.

During the 2007 Democratic primaries, Cohen’s team at the Washington Post was the first to report that many black voters were shifting from Clinton to Obama. They saw this in a poll with only 136 black respondents, but because they trusted their methodology, and because the shift was so pronounced, they were confident it wasn’t statistical noise. If the shift had been half as large, they wouldn’t have trusted the data enough to report it.

What do other polls say? Cohen believes that an imperfect poll is still often useful – but it’s best judged alongside other polls. A well-done phone poll, even one commissioned by a strong advocate, shouldn’t necessarily be dismissed.

The Republican and Democratic parties run opinion polls, which tend to be biased in their direction about three or four points. But, says Cohen, this doesn’t necessarily mean that the individual polls are skewed – just that the parties aren’t releasing the poll results that don’t confirm their outlooks. Averaged together, these partisan polls can still produce a useful result.
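As a toy illustration of why averaging helps, here’s a short Python example with made-up numbers: the polls each party chooses to release lean a few points in its favour, but averaged together they land close to the underlying figure.

```python
# Toy example: partisan polls that each lean ~3 points toward their sponsor
# can still average out to something close to the true figure.
true_support = 48  # hypothetical "true" support for a candidate

republican_released = [45, 44, 46]  # released polls leaning a few points low
democratic_released = [51, 50, 52]  # released polls leaning a few points high

all_polls = republican_released + democratic_released
average = sum(all_polls) / len(all_polls)

print(f"Average of partisan polls: {average:.0f} (true value: {true_support})")
```

The opposite leans roughly cancel – which only works, of course, if you have releases from both sides to average.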

It would be nice to have more polls conducted and reported by unbiased third parties. But thanks to the expense, there just aren’t enough of those. Over the last few decades, it’s become much harder to get people to take a poll, and the response rate for the average poll is under 10 per cent. So we’re left taking what we can from skewed, partially reported, or otherwise flawed polls. FiveThirtyEight will weight its averages to favour good pollsters, but it will still factor in results from bad ones, after trying to adjust them based on known errors and biases. (Out of the hundreds of pollsters the site rates, it has only banned five.) Better to take a flawed poll with the right grain of salt than to ignore it entirely.
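FiveThirtyEight’s actual model is considerably more involved than this, but the basic idea of weighting an average toward better-rated pollsters can be sketched the same way, again with invented numbers and grades.

```python
# Toy sketch of a rating-weighted polling average (not FiveThirtyEight's
# actual model): better-rated pollsters simply count for more.
polls = [
    {"pollster": "A-rated phone poll", "result": 47, "weight": 1.0},
    {"pollster": "B-rated online poll", "result": 49, "weight": 0.7},
    {"pollster": "C-rated opt-in poll", "result": 53, "weight": 0.3},
]

weighted_avg = sum(p["result"] * p["weight"] for p in polls) / sum(
    p["weight"] for p in polls
)
print(f"Rating-weighted average: {weighted_avg:.0f}")
```

The weaker poll still contributes, it just can’t drag the average very far – much like taking it with the right grain of salt.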

Even the U.S. Census, which produces the demographic stats that poll data is measured against, has changing standards – by improving its questions and answer options on race and ethnicity, the census has made it harder to directly compare statistics from decade to decade. The truth is that public opinion isn’t a physical constant that we can precisely measure. It’s nuanced, hard to measure without influencing, and constantly changing. So don’t treat any poll as the infallible, timeless truth.

