How To Know If You Can Trust That Headline-Grabbing COVID-19 Study

There has been a lot of talk lately about a few COVID-19 studies that use antibody testing to estimate the prevalence of infection across the population. In theory, antibody tests detect evidence that a person has already had and recovered from COVID-19, which, given the lack of available testing while people are still experiencing symptoms (or may be experiencing only mild symptoms, if they aren’t completely asymptomatic), can be a useful tool for estimating the full scope of the pandemic.

In particular, a study of prevalence rates in Santa Clara County, California, has attracted attention for its conclusion that the prevalence of people with antibodies—meaning people who have been infected and gotten better—is much higher than official estimates.

Antibody studies are garnering interest because they might help us estimate who has immunity against COVID-19, which could be useful as states look to open up again—particularly as we work our way toward potential “herd immunity.” But putting too much faith into a study suggesting a low death rate—particularly one that doesn’t account for flaws and biases in its data—could lead people to think the disease is less dangerous than it really is.


In the Santa Clara study, 3,330 people were recruited through Facebook ads and tested for antibodies against the virus that causes COVID-19. Out of these 3,330 people, 50 tested positive. Extrapolating from this number, the authors estimate that the prevalence of people who have recovered from the disease in Santa Clara County is 50 to 85 times what the official numbers suggested. Using this figure, the authors estimated the death rate from COVID-19 to be between 0.12 and 0.20 per cent, versus other estimates, which typically range between 1 and 3 per cent. Assessing the death rate for this disease is tricky, given how new it is, as well as the lack of widespread testing. But overall, in scenarios where there have been higher rates of testing, the death rate seems to fall somewhere in that 1-3 per cent range.
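To make the arithmetic concrete, here is a rough back-of-the-envelope sketch of this kind of extrapolation in Python. The county population, confirmed case count and death count below are placeholder assumptions, not the study’s actual inputs, and the study further weighted its raw prevalence for demographics and test performance, which pushed its multiplier higher than this raw version gives:

    # Back-of-the-envelope version of this kind of extrapolation.
    # All inputs besides the sample counts are illustrative assumptions.
    sample_size = 3330              # participants recruited via Facebook ads
    positives = 50                  # antibody-positive results

    raw_prevalence = positives / sample_size        # roughly 1.5%

    county_population = 1_900_000   # approximate Santa Clara County population (assumption)
    confirmed_cases = 1_000         # official case count at the time (assumption)
    deaths = 100                    # county death count at the time (assumption)

    implied_infections = raw_prevalence * county_population

    print(f"Raw prevalence: {raw_prevalence:.1%}")
    print(f"Implied infections: {implied_infections:,.0f}")
    print(f"Multiple of confirmed cases: {implied_infections / confirmed_cases:.0f}x")
    print(f"Implied death rate: {deaths / implied_infections:.2%}")

Notice how sensitive the implied death rate is: the same death count is divided by the implied infection total, so even a modest error in the prevalence estimate moves the death rate substantially.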

The Santa Clara study’s results were startling, and have not held up to the scrutiny that followed the initial, heartening headlines. From the moment this preprint was released, one issue after another with the data has come to light. And the Santa Clara study is not the only antibody test that has sparked controversy—although in the past week, it has become the most notable.

When it comes to analysing these studies to determine how much stock we can place in them, these are the factors scientists are considering:

How were participants recruited?

For the Santa Clara study, participants were recruited through Facebook ads. Although the researchers did attempt to correct for oversampling of some demographics, this sampling technique introduces certain biases, not least that many participants may have signed up hoping to learn whether a previous illness they suffered had been COVID-19. This factor may have significantly skewed the results, especially considering the study’s conclusions are based on only 50 positive results out of 3,330.

Then there’s the problematic discovery that some participants in the Santa Clara study were recruited by the wife of one of the authors, who sent an email to a middle school’s email list suggesting the testing would determine whether recipients could “return to work without fear,” and falsely claiming the test was FDA approved.

A true sampling would be taken at random. In the real world, this isn’t always possible, but it’s important to note the sampling method a study followed and how it corrected for any possible biases.
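A toy simulation can show how much the self-selection described above can distort a prevalence estimate. The sign-up rates below are invented for illustration; the only point is that if antibody-positive people are more likely to volunteer, the sample’s positive rate overstates the true one:

    # Toy simulation of self-selection bias in a volunteer sample.
    # All rates below are invented for illustration.
    import random

    random.seed(0)

    TRUE_PREVALENCE = 0.015          # assumed true share of people with antibodies
    POPULATION = 200_000

    # Assumption: people who suspect a past illness was COVID-19 (and who are
    # therefore more likely to be antibody-positive) respond at 3x the rate.
    SIGNUP_RATE_POSITIVE = 0.003
    SIGNUP_RATE_NEGATIVE = 0.001

    sample = []
    for _ in range(POPULATION):
        is_positive = random.random() < TRUE_PREVALENCE
        signup_rate = SIGNUP_RATE_POSITIVE if is_positive else SIGNUP_RATE_NEGATIVE
        if random.random() < signup_rate:
            sample.append(is_positive)

    apparent = sum(sample) / len(sample)
    print(f"True prevalence: {TRUE_PREVALENCE:.1%}, apparent in sample: {apparent:.1%}")

With these made-up numbers, a true prevalence of 1.5 per cent shows up in the sample as roughly 4 per cent, purely because of who chose to respond.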


How accurate is the test?

Tests for antibodies against COVID-19 are still being developed, and there’s still a lot we are learning when it comes to their accuracy. Complicating all of this is the fact that the FDA has relaxed its rules on letting companies sell tests that haven’t been vetted by the government.

When it comes to test accuracy, there is the false negative rate to consider—when the test results say you don’t have COVID-19 antibodies when in fact you do—as well as the false positive rate—which is when the test results say you have antibodies against COVID-19, when in fact you don’t.

In an analysis of 14 different antibody tests, performed by a team of more than 50 scientists, only three produced consistently reliable results. And of these 14 different tests, only one didn’t generate false positives. False positives are especially problematic, as they could offer those tested a false sense of security—a mistaken belief that one is immune, when in fact the test results were faulty. (Of course, it’s important to note that we don’t yet know the degree to which antibodies confer immunity to the virus that causes COVID-19.)

It’s the false positive rate that is especially concerning in the Santa Clara study. The authors’ conclusions don’t account for any false positives, which is especially problematic considering the confidence interval for the test’s false positive rate spans 0.1% to 1.7%, while just 1.5% of their samples came back positive. In theory, every one of those positive samples could be a false positive, a possibility the authors didn’t account for. Even if the false positive rate is much lower, with so few positive results, a slight difference could skew the results considerably.
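One standard way to see the problem is the Rogan-Gladen correction, which adjusts an observed positive rate for a test’s sensitivity and specificity. This is a general statistical technique, not the study’s own method; the sensitivity figure below is an assumption for illustration, while the false positive rates cover the range reported for the test:

    # Rogan-Gladen correction: estimate true prevalence from the apparent
    # positive rate, given the test's sensitivity and specificity.
    def corrected_prevalence(apparent, sensitivity, specificity):
        return max(0.0, (apparent + specificity - 1) / (sensitivity + specificity - 1))

    apparent = 50 / 3330                 # ~1.5% of samples tested positive

    for fpr in (0.001, 0.005, 0.017):    # the 0.1%-1.7% range reported for the test
        est = corrected_prevalence(apparent,
                                   sensitivity=0.80,       # assumed for illustration
                                   specificity=1 - fpr)
        print(f"False positive rate {fpr:.1%} -> corrected prevalence {est:.2%}")

At the top of that range, the corrected prevalence drops to zero: false positives alone could account for every positive result in the sample.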

Do the results line up with reality?

The Santa Clara study uses its findings to estimate a mortality rate of between 0.12 and 0.20 per cent. However, as Undark points out, if the death rate were really that low, then the number of COVID-19 deaths in New York City would suggest that 12.5 million people had been infected, while the city’s population is only 8.3 million.

Given that NYC is still recording new cases every day and hasn’t seen an influx of an extra 4 million people in the last month, this suggests that the death rate is not, in fact, between 0.12 and 0.20 per cent.
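The arithmetic behind that sanity check is simple: divide the number of deaths by an assumed death rate to get the number of infections it implies. The NYC death count below is an approximation of the figure at the time:

    # Sanity check: implied infections = deaths / assumed death rate.
    nyc_deaths = 15_000          # approximate NYC death count at the time (assumption)
    nyc_population = 8_300_000

    for death_rate in (0.0012, 0.0020, 0.01):   # 0.12%, 0.20%, and 1% for comparison
        implied_infections = nyc_deaths / death_rate
        share = implied_infections / nyc_population
        print(f"Death rate {death_rate:.2%}: implies {implied_infections:,.0f} "
              f"infections ({share:.0%} of the population)")

At 0.12 per cent, the implied infection count exceeds the city’s entire population, which is how that impossible 12.5 million figure falls out of the maths.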


Has this study been through peer review?

Life in a pandemic changes by the day, with new research results being released every hour. Generally speaking, this ongoing research is a very good thing: There is a lot we don’t know yet. That said, given the speed at which all this is happening, as well as our desperate desire for more information, more and more preprints—studies that have yet to pass through peer review—are being covered as news.

It’s doubly important to double-check how a study is being covered when it is new and hasn’t been through peer review. What are other scientists saying about the methodology and results? What are they saying about the limitations of the study? No study is perfect. If the news coverage hasn’t included a diversity of voices—including an analysis by experts who weren’t involved in the research, as well as experts who can speak to its limitations—that’s a sign you might want to wait it out before deciding whether the results are ones you can rely on.

