
Beware Of 'Accuracy'


In a recent post, Adam Frank introduced some key ideas behind Bayesian statistics. He began with the example of a medical test for a disease, asking the question: How likely is it that I have the disease, given a positive result from a test that's 80 percent accurate?

This probability can be calculated using Bayes' theorem, a simple formula that follows from the axioms of probability theory. To calculate the number we want, we'd need to know how common the disease is in the relevant population, how often the test generates a false positive, and how often it generates a false negative.
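To make that calculation concrete, here is a minimal sketch in Python. The specific numbers (a 1 percent base rate and a 10 percent false positive rate, alongside the 80 percent figure) are assumptions chosen purely for illustration, not numbers from Adam's post:

    # Bayes' theorem for P(disease | positive test), with illustrative numbers.
    p_disease = 0.01            # base rate: how common the disease is
    p_pos_given_disease = 0.80  # chance of a positive result if you have the disease
    p_pos_given_healthy = 0.10  # false positive rate: a positive result without the disease

    # Overall chance of a positive result, across both states of the world.
    p_positive = (p_pos_given_disease * p_disease
                  + p_pos_given_healthy * (1 - p_disease))

    # Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
    p_disease_given_positive = p_pos_given_disease * p_disease / p_positive
    print(round(p_disease_given_positive, 3))  # about 0.075

With these made-up numbers, a positive result from an "80 percent accurate" test leaves only about a 7.5 percent chance of actually having the disease. The answer depends on all three quantities, not on a single "accuracy" figure.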

As Adam's post was making the Internet rounds, I was — by coincidence — teaching Bayes' theorem in my undergraduate class, Sense & Sensibility & Science. We went through examples just like Adam's, including the psychology behind why people so often estimate the relevant probability incorrectly. I also shared an important lesson with my students. The lesson is this: Beware of "accuracy."

The term "accuracy" seems innocuous enough. We often want to know how accurately we responded on an exam, or how "accurate" a statement is. But when it comes to something like a medical test, "accuracy" underspecifies what we're after, because there are two importantly different ways to get things right, and two importantly different ways to get things wrong.

To see this, consider the two possible states of the world (having the disease, not having the disease), and the two possible results of the test (positive, negative). All combinations can occur, and two of these combinations get things right: We both have the disease and get a positive test result, or we don't have the disease and we get a negative test result. But the remaining two combinations get things wrong in very different ways. One way to get things wrong is to have a false positive: A person without the disease gets a positive test result. Another way to get things wrong is to have a false negative: Someone who does have the disease gets a negative test result.
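Laid out as a grid, those four combinations look like this (just a summary of the paragraph above):

                            Test positive         Test negative
    Has the disease         true positive         false negative
    Doesn't have it         false positive        true negative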

In Adam's example, he uses the "accuracy" of 80 percent to refer to an 80 percent chance of a positive test result given that a person has the disease. But in many contexts, "accuracy" is deeply ambiguous — it tells us how often a test gets things wrong, but not how it gets them wrong. The example I use in my class is a pregnancy test that advertises "99 percent accuracy." How many of the remaining 1 percent of errors are false positives (telling a non-pregnant woman that she's pregnant) versus false negatives (telling a pregnant woman that she isn't)? Those are very different kinds of errors — a consumer may very reasonably want to know which kind is more likely.
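As a rough sketch of why this matters, imagine two hypothetical pregnancy tests that could both be advertised as "99 percent accurate" on some population, but that split their errors differently. All numbers here are made up for illustration:

    # Two hypothetical tests with the same overall error rate but different error profiles.
    def posterior_given_positive(prior, true_positive_rate, false_positive_rate):
        """P(condition | positive result), via Bayes' theorem."""
        p_positive = (true_positive_rate * prior
                      + false_positive_rate * (1 - prior))
        return true_positive_rate * prior / p_positive

    prior = 0.5  # assume, for illustration, a 50-50 chance before testing

    # Test A: its rare errors are all false negatives (it sometimes misses, never false-alarms).
    print(posterior_given_positive(prior, true_positive_rate=0.98, false_positive_rate=0.00))  # 1.0

    # Test B: its rare errors are all false positives (it never misses, sometimes false-alarms).
    print(posterior_given_positive(prior, true_positive_rate=1.00, false_positive_rate=0.02))  # about 0.98

Both tests are wrong about 1 percent of the time on this population, yet a positive from the first is conclusive while a positive from the second is not, and the situation reverses for negative results.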

Going beyond "accuracy" is important for a few reasons.

First, we need to go beyond accuracy to do the math. Bayes' theorem takes into account the false positive rate and the false negative rate (along with how common the disease is to begin with) — not an undifferentiated "accuracy." That means that if we want to know what a test result means for our probability of having a disease, we need to know how often each type of error occurs. More generally, if we want to know how to update our beliefs in light of some new piece of evidence, we need to know how often that evidence should arise under each hypothesis we're entertaining.

Second, not all errors are equal. For some diseases, a false positive might come at little cost — perhaps it means a follow-up test that reveals the initial error. But a false negative could mean that symptoms go untreated and complications develop; in some cases, it could mean death. The relative costs of a false positive versus a false negative depend on the specific case in question, but it's almost always an error to treat them as one and the same.

Finally — and more speculatively — it could be that a more fine-grained vocabulary for describing the relationship between reality and what we say or think gives us better tools for calling out misstatements of fact, half-truths and fabrications. Invoking a "Bowling Green Massacre" that never occurred is one type of error (call it a "false positive"); failing to mention meetings with Russian ambassadors is another (call it a "false negative").

Just as we need to be wary of "accuracy," we may need to be wary of "inaccuracy," too. When it comes to errors, it matters how we get things wrong.


Tania Lombrozo is a psychology professor at the University of California, Berkeley. She writes about psychology, cognitive science and philosophy, with occasional forays into parenting and veganism. You can keep up with more of what she is thinking on Twitter: @TaniaLombrozo

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Tania Lombrozo is a contributor to the NPR blog 13.7: Cosmos & Culture. She is a professor of psychology at the University of California, Berkeley, as well as an affiliate of the Department of Philosophy and a member of the Institute for Cognitive and Brain Sciences. Lombrozo directs the Concepts and Cognition Lab, where she and her students study aspects of human cognition at the intersection of philosophy and psychology, including the drive to explain and its relationship to understanding, various aspects of causal and moral reasoning and all kinds of learning.