Misinformation is endemic in our society, but it is not a new problem. Misinformation, passed along with or without an ulterior motive, has been around for as long as humans have been communicating. What is new is that digital media allows expertly designed misinformation to spread more quickly and to exploit individuals' preconceived notions more ruthlessly through confirmation bias. While the latter is certainly a hot topic, this post is not about media literacy or confirmation bias. Rather, it concerns a separate phenomenon that has been studied intensively for decades, known as the "continued influence effect" (CIE).

What is the continued influence effect?

As the name suggests, researchers have found that certain kinds of information can remain "sticky" even after a retraction. Such information, despite being recognized as false, continues to influence individuals' reasoning and decision-making. Writing in 1994, Johnson and Seifert observed that the CIE could not be dismissed as a simple mistake, since previous studies found that "influence can occur even when subjects have made the connection between the disregard instruction and the information it refers to." At the core of the CIE is the difficulty of editing existing memories to incorporate updated, more accurate information. It is less about political tribalism or stubbornness and more about the persistence of misinformation in memory. This raises the question: Why are some kinds of information more memorable than others?

The continued influence effect in real life

Imagine that your next-door neighbor tells you that a nearby house recently burned down. He says he heard from a friend that the fire department is investigating the fire as an act of arson. He also informs you that the damaged house is owned by a woman who recently went through a very messy divorce. You agree that it seems plausible that her ex-husband could have set the fire intentionally.

The following day, your neighbor tells you that he was mistaken. There was no arson investigation, and the fire department found clear evidence of an electrical malfunction. The ex-husband was not involved with the fire. You understand that there is no evidence to support the belief that the fire was set intentionally; in fact, there is evidence that explicitly discredits it. Yet days later, you catch yourself telling others that you still think the ex-husband was behind it. This is the CIE in action.

The continued influence effect in a controlled environment

Formal studies of the CIE typically include one control group and one experimental group. All participants read a story about a fictional event. In the experimental group, one link in the story's causal chain is later retracted and replaced with updated information. In the control group, no retraction occurs. Participants in the control group tend to have no problem accurately describing the events in the story. In the experimental group, however, the retraction usually only halves the number of references to the misinformation. This is true even when participants remember the retraction and accept that it is accurate. Even more surprising, researchers have found that strengthening the correction's language to make clear that the previous information was incorrect can backfire: participants become more likely to rely on the misinformation.
Similarly, if an alternative explanation is more complicated or more difficult to understand than the original misinformation, participants also become more likely to rely on the misinformation.

Lest we remember

A few competing conceptual models have been proposed to explain the CIE:

- If there is a gap in our retelling of a story, we will reflexively bridge it, even if we know that the bridge is constructed out of misinformation.
- When we retell a story, invalid and valid memories (the misinformation and the retraction that corrected it) compete for automatic activation, and the most seemingly reasonable option is repeated. Oftentimes, the invalid memory wins out.
- We may store the retraction as simply the original piece of information with a "negation tag" attached to it (e.g., "husband = arsonist" plus "NOT"). That negation tag can get lost if it is not a familiar part of the story.

Regardless of which model is correct, what seems clear is that we tend to remember pieces of information as part of a larger story. We reflexively favor narratives that make sense to us over narratives that are unfamiliar or incomplete. Like nature, we abhor a vacuum. We are also far more likely to hold on to pieces of misinformation that we have incorporated into our worldview, regardless of what that worldview is. In a sense, the story containing the misinformation becomes part of a larger puzzle, making it even more likely that we will believe it.

Coping with the misinformed

When a friend or loved one has a habit of retelling a story that contains pieces of misinformation, your reaction may range from mild amusement to extreme annoyance. This is natural. It is also natural to become frustrated when the misinformation is repeated frequently, even if it emerged innocently. It becomes even more frustrating when your attempts to persuade the person that they are misinformed only galvanize their belief. To avoid this backfire effect, the recommendations in a 2012 paper by Lewandowsky and colleagues include:

- Consider whether your alternative explanation leaves gaps in the narrative, and attempt to fill those gaps with easy-to-digest explanations.
- Emphasize the facts you wish to communicate, and avoid repeating the misinformation, which only makes it more familiar.
- Use simple and concise language to illustrate your point.

Finally, presenting a binary choice architecture that is insulting or condescending is not helpful. Telling someone that they can pick either the "right" option or the "wrong" one will inevitably lead them to double down on whichever choice they have already made. Instead, offer a narrative that provides context and incorporates the corrective information. Ultimately, the goal is not to "win" an argument but to overcome the influence of misinformation.