Imagine the following scenario: after learning from your favorite website that 0.3 percent of Americans have HIV, you decide you need to get tested. But when the results come back, it's bad news – you tested positive.

But wait! You haven't had any blood transfusions or unprotected sex – and you certainly weren't breastfed recently. There must be a mistake! "I know it's hard to accept," your doctor says, "but this test is extremely accurate. It has a specificity of 99 percent."

But maybe you shouldn't give up hope. Surprisingly, your likelihood of having the disease – despite that 99 percent – is actually less than one in four.

Confused? It's no secret that many of us struggle with statistics. And, according to a study recently published in Frontiers in Psychology, that's partly to do with how the subject is taught in school.

Take the example above. Set out as a formal math problem, it looks something like this:

P(HIV | positive) = P(positive | HIV) × P(HIV) / [P(positive | HIV) × P(HIV) + P(positive | not HIV) × P(not HIV)]

Bayes' theorem, baby. Yeah, it's not pretty.

But there's another way of looking at statistics, using something called "natural frequencies". This is when likelihoods are described using whole numbers rather than percentages – for instance, "one in five people" rather than "20 percent". And using natural frequencies, the problem gets a lot easier to understand.

If 0.3 percent of the population is infected, that means for every 1,000 people in the US, three have HIV. Let's say your doctor's test is so accurate that it never gives false negatives. So, for every three people who are HIV positive, three tests come back positive.

The test has 99 percent specificity, so it gives false positives 1 percent of the time. That means out of the 997 other people who aren't infected, about 10 will nevertheless test positive.

So what are the chances that someone who tests positive does, in fact, have HIV? Out of every 1,000 people tested, 13 will get a positive result, but only three of them are actually infected. It's much easier now to see that you're looking at a chance of only three in 13 – about 23 percent.

So clearly, using natural frequencies makes a huge difference to how well we understand statistics. In the study, subjects were faced with two math problems – one presented in probability format, and the other as natural frequencies.

"[T]he vast majority of people have difficulties solving a task presented in probability format," explained lead author Patrick Weber. "When the task was presented in natural frequency format instead of probabilities, performance rates increased from 4 percent to 24 percent."

But despite natural frequencies being a much more, well, natural way of tackling these problems, the researchers found that nearly half of the subjects presented with questions framed as natural frequencies translated them into the more confusing probability format – even though this made them much less likely to answer correctly. Only 9 percent of those who translated the problems from natural frequency to probability format solved them correctly.

The team believes the problem is something called the Einstellung effect – our predisposition to stick with methods we know, even when better ones present themselves.

"In high school and university contexts, natural frequencies are not considered as equally mathematically valid as probabilities," explained Weber. "[W]orking with probabilities is a well-established strategy when it comes to solving statistical problems… [T]he mental sets developed over a long period of time... can make [students] 'blind' to simpler solutions – or unable to find a solution at all."
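If you'd like to check the arithmetic yourself, here's a minimal Python sketch – not from the study, just the article's numbers – that computes the answer both ways:

    # Numbers from the article: 0.3% prevalence, 99% specificity,
    # and (as the article assumes) a test with no false negatives.
    prevalence = 0.003   # P(HIV)
    sensitivity = 1.0    # P(positive | HIV): no false negatives
    specificity = 0.99   # P(negative | not HIV)

    # Probability format: Bayes' theorem.
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    print(sensitivity * prevalence / p_positive)  # ~0.231

    # Natural frequency format: count people out of 1,000.
    infected = 3           # 0.3 percent of 1,000
    true_positives = 3     # all infected people test positive
    false_positives = round((1_000 - infected) * (1 - specificity))  # ~10
    print(true_positives / (true_positives + false_positives))  # 3 in 13, ~0.231

Both routes land on roughly 0.23 – the "three in 13" from above.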
The answer? According to the researchers, change is needed on a global scale: teaching in schools and universities should be updated to expose students to a wider range of problem-solving techniques. "We want our findings to encourage curriculum designers to incorporate natural frequencies systematically into school mathematics and statistics," said Weber. "This would give students a helpful tool to understand the concept of uncertainty."