The “Backfire Effect” Might Not Be Real

Misinformation changes how people behave, and rarely for the better. The debate over what to do about it splits roughly three ways: regulate the platforms that host it and risk a Streisand effect that draws more attention to the very claims you’re trying to suppress, debunk the false claims directly, or leave it alone on the theory that corrections only reinforce the original lie and spread it further. Research in the Journal of Marketing Research lands firmly on the side of debunking. Across three consumer product categories, the authors found that corrections actually work, and they work best on the people who most needed correcting.

We’ve all heard terms like “organic” or “non-GMO,” used to sell food that’s supposedly natural and unmodified by humans. But genetic modification is just a faster, more precise extension of the selective breeding humans have practiced for millennia, used to create crops with higher yields, meaning more food for everyone at lower cost. “Pesticide-free” gets the same treatment, even though pesticides protect crops and, in some cases, protect humans from diseases spread by pests. The processes used to feed everyone are under attack by misinformation, and it doesn’t stop at food. Everyday products are marketed as using “non-toxic” materials, and the framing relies on a sleight of hand: almost anything is toxic at the wrong dose. Aluminum, used in deodorant, can be toxic if ingested in high enough amounts. Lithium is toxic in large doses too, but in controlled amounts it treats bipolar disorder and lets people live normal lives rather than being institutionalized. Humans are actually really good at measuring things and figuring out the right amount of something to make it useful.

The claims made by marketing may be technically true (minus a few omitted details), but they change human behavior in ways that go beyond which banana company makes the most money. Eating food untreated with pesticides risks exposure to pest-borne disease. Refusing a life-saving medication because it contains trace amounts of a chemical that could cause cancer at much higher doses is a real cost with real consequences.

This is a banana. Or it was, before humans invented the modern banana through selective breeding. Would you eat this? (Image: inside a wild-type banana, via Wikipedia.)

The “Backfire Effect” is the claim that efforts to debunk misinformation only spread it further, and it’s sometimes used as a pretext for stricter regulation instead. But removal has its own failure mode: if social media platforms are required to take down posts spreading false claims, more people may end up talking about them. (If you’ve ever seen people in a Facebook group telling each other not to use certain words, or to swap in code words instead, you’re witnessing the Streisand effect.) Take the backfire risk at face value and you get a real bind: debunking misinformation often means restating the untrue claim, which could be harmful, and the people who trust these claims may distrust attempts to set the record straight and “warn” others by spreading the misinformation themselves.

Unpublishing is attractive on the surface. No one new sees the claim, so why worry about a few people believing something untrue? But it just pushes the misinformation underground, where it spreads in the dark and the real-world impacts are harder to track. This is a nightmare for policy makers. They have the platform to reach millions quickly, but every option carries risk: regulation breeds suspicion, debunking might backfire, and silence lets the lie grow. It looks like their hands are tied. This study suggests they might not be.

The researchers ran two sets of online experiments through a survey platform, with around 16,000 participants across both waves. The first set measured the average effects of misinformation and debunking. The second set added something the first didn’t have: direct measurements of participants’ beliefs before and after treatment, so the researchers could see who was actually changing their mind versus who was just changing their behavior. In every study, participants saw a mock social media post (formatted as a tweet) about a product, then made a series of purchase decisions in that category. The researchers compared what people chose across different combinations of misinformation and debunking exposure.
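In rough outline, the comparisons amount to randomizing each participant into one cell of a small design and comparing purchase behavior across cells. Here’s a minimal sketch of that structure; the cell names are my own shorthand, not the paper’s, and the actual design has more conditions than this.

```python
import random

# Hypothetical shorthand for the experimental cells: whether a participant
# sees the misinformation tweet, and whether a debunking message follows.
CELLS = [
    {"misinfo": False, "debunk": False},  # clean control
    {"misinfo": True,  "debunk": False},  # misinformation only
    {"misinfo": True,  "debunk": True},   # misinformation, then correction
]

def assign(participant_id: int) -> dict:
    """Randomize one participant into a cell. The effect of each message
    is estimated as the difference in purchase behavior between cells."""
    random.seed(participant_id)  # reproducible assignment, for illustration
    return random.choice(CELLS)

print(assign(42))
```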

Here’s the part that makes the design unusually strong: the study offered a random chance that the participant would actually receive the product they chose, with the rest of a $10 bonus paid out in cash. That sets real stakes. Participants aren’t picking the “correct” answer or the one that makes them look health-conscious; they’re making a decision they might actually have to live with. The question stops being about the money and starts being about a product they’d actually want to use. This matters in a study about misinformation, because the whole point is whether the messages change real-world purchase behavior. Without real stakes, you can’t tell whether participants are revealing their preferences or performing for the survey.
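For the mechanics-minded, here’s a minimal sketch of how a payout rule like this can work. The realization probability and the example price are invented for illustration; beyond the $10 bonus mentioned above, the paper’s exact mechanism and amounts may differ.

```python
import random

BONUS = 10.00        # total bonus per participant, per the article
REALIZE_PROB = 0.10  # hypothetical: odds a choice is actually carried out

def settle(chose_product: bool, price: float) -> dict:
    """Pay out one purchase decision with real stakes.

    With probability REALIZE_PROB the decision is realized: the participant
    receives the product they picked plus the unspent remainder of the
    bonus in cash. Otherwise the full bonus is paid in cash. Because any
    choice might be realized, picking what you genuinely prefer is the
    best strategy.
    """
    if chose_product and random.random() < REALIZE_PROB:
        return {"receives_product": True, "cash": BONUS - price}
    return {"receives_product": False, "cash": BONUS}

# Example: a participant chooses a $4 tube of fluoride toothpaste.
print(settle(chose_product=True, price=4.00))
```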

The researchers measured consumers’ willingness to pay (WTP) before and after the mock social media post. For fluoride, WTP dropped by 22%. For aluminum, among consumers who started out believing it was safe, it dropped by 80%. For GMOs, nothing. No statistically significant effect at all. The sharp differences between the three ingredients are the most interesting part. They don’t mean misinformation works on fluoride and not the others. They mean the misinformation about aluminum and GMOs has already done its work. Baseline beliefs going into the study showed that 42% of participants already thought aluminum was harmful, and GMO misinformation has been circulating for decades through labeling fights and public debate. By the time these participants saw the experimental tweet, there wasn’t much room left to move them. The damage was already done before the research began.

Measuring what a single tweet does is useful, but the bigger question is what a decade of marketing and social media exposure has already done to our beliefs. The study focused on three ingredients with known misinformation, but this is a much bigger problem than three product types, and it deserves attention.
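To make the arithmetic concrete, here’s how relative WTP shifts like the ones above are computed. The pre/post means below are invented to mirror the reported effect sizes; they are not the study’s data.

```python
def pct_change(pre: float, post: float) -> float:
    """Relative change in mean WTP, as a percentage of the baseline."""
    return (post - pre) / pre * 100.0

# Invented pre/post mean WTP values, chosen only so the output mirrors
# the effect sizes reported in the study.
groups = {
    "fluoride (all participants)":   (3.00, 2.34),  # ~ -22%
    "aluminum (prior belief: safe)": (4.00, 0.80),  # ~ -80%
    "GMOs (all participants)":       (2.50, 2.50),  # no effect
}
for label, (pre, post) in groups.items():
    print(f"{label}: {pct_change(pre, post):+.0f}% change in WTP")
```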

The researchers didn’t stop at measuring misinformation. They also measured what happened when participants saw a debunking message after the misinformation. The results, when the debunking came from a regulator: WTP for aluminum increased by 68%, fluoride by 27%, and GMOs by 18%. Debunking didn’t just undo the damage from the experimental misinformation; it reached past that and corrected misbeliefs participants had brought into the study from years of marketing and social media exposure. And the “Backfire Effect” discussed earlier? No evidence of it. Across three product categories, three sources, and three levels of prior belief, debunking never made misbeliefs stronger. The people who needed correcting the most were the ones who updated the most.

The research shows the effects of misinformation on consumer beliefs can be reversed. The catch is that no individual firm has a reason to do it. The simulation in the study lays out the math: when one firm debunks while competitors launch ingredient-free products, the debunker loses money. Correcting consumer beliefs raises demand for the original ingredient, but competitors offering both versions capture the customers who would’ve stayed misinformed and the ones who got corrected. The dominant strategy is to conform. So firms conform. Dove sells aluminum-free deodorant. Tom’s of Maine sells fluoride-free toothpaste. Ensure sells GMO-free meal replacement shakes. Will we sell acetaminophen-free Tylenol next, because consumers think it’s safer? As we remove the active ingredient that makes the product useful, what are we even buying? In a market system where consumer purchasing behavior drives corporate decisions, it’s an unsettling preview of what direct democracy can look like in practice.
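A toy payoff table makes the incentive problem visible. The payoffs below are invented, not taken from the study’s simulation; only their ordering matters, and it matches the argument above: whichever move the rival makes, conforming pays more.

```python
# Hypothetical one-shot game between two firms deciding whether to debunk
# misinformation or conform by launching an ingredient-free product line.
# Payoffs are invented for illustration; only the ordering matters.
PAYOFF = {
    ("conform", "conform"): 5,   # everyone sells "free-from" products
    ("conform", "debunk"):  8,   # free-ride on the rival's costly correction
    ("debunk",  "conform"): 2,   # pay to correct beliefs; rival captures both segments
    ("debunk",  "debunk"):  6,   # corrected market, correction costs shared
}

for rival in ("conform", "debunk"):
    best = max(("conform", "debunk"), key=lambda me: PAYOFF[(me, rival)])
    print(f"If the rival {rival}s, the best response is to {best}.")
# Conforming is the best response either way: a dominant strategy, even
# though both firms debunking would leave the market better informed.
```

It has the structure of a prisoner’s dilemma: the collectively better outcome, where every firm debunks, is individually unstable, which is exactly why the study points toward regulators rather than firms.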

The findings will be helpful in shaping future policies on how to combat misinformation, but they come with limits worth naming. Participants in the study paid close attention to a single tweet because the design forced them to, and the claims were debunked moments later by a clearly attributed source. Real misinformation doesn’t work like that. It accumulates over years, encountered while scrolling past dozens of other things, from sources of varying credibility. The attention people pay to a misleading claim during a controlled experiment is much higher than the attention they pay during normal social media use. How does seeing a misleading claim two weeks ago affect your monthly grocery trip? How does a decade of accumulated misinformation shape your beliefs? A longitudinal study, while expensive and slow, would tell us a lot more about how this plays out in real life.

Still, the basic finding holds. Corrections work, they work best on the people who need them most, and they don’t backfire. The harder problem isn’t whether to debunk. It’s who’s going to do it when no individual firm has the incentive, and whether regulators have the will to step into a role the market won’t fill on its own.
