Why We Trust (and Fall for) Misinformation
A plain-English guide to how beliefs form—and why false claims can feel rational from the inside. Learn the cues that shape belief and sharing.

Key Points
1. Distinguish misinformation, disinformation, and malinformation—each needs different responses, from education to enforcement to harm reduction and privacy protection.
2. Notice the five belief cues: exposure, repetition, coherence, identity pressure, and emotion—misinformation often wins by aligning with these shortcuts.
3. Add friction before sharing: pause, read past headlines, find originals, and treat urgency as a red flag to reduce impulsive amplification.
A familiar moment: the “of course” click
That moment isn’t stupidity. It’s cognition doing what it evolved to do: make fast, workable judgments with limited time and attention. The modern feed punishes slowness. It rewards confidence. It turns repetition into “common knowledge” and outrage into a distribution strategy.
Public debate tends to treat misinformation as a moral flaw—those people are gullible; those people are malicious. The real story is more uncomfortable and more useful: false beliefs often feel rational from the inside, and people circulate claims for reasons that have little to do with truth.
Misinformation succeeds less by defeating reason than by impersonating it.
— TheMurrow Editorial
What follows is an explainer for readers who want sharper distinctions and better tools. We’ll separate why a claim feels true from why someone shares it—two related problems that require different fixes.
The vocabulary problem: misinformation, disinformation, malinformation—and the “infodemic”
Three terms people confuse (and why it matters)
- Misinformation: false or inaccurate information shared without intent to deceive. The sharer often believes they're helping.
- Disinformation: false information shared with intent to deceive. The motive is manipulation, whether political, financial, or ideological.
- Malinformation: true information used to harm. A private address posted to encourage harassment isn’t “false”—it’s weaponized.
Conflating these terms creates policy and social errors. Treating a misinformed person like a deliberate propagandist hardens positions and discourages correction. Treating organized disinformation as an “oops” problem leaves manipulation infrastructure intact. Treating malinformation as merely “speech” ignores foreseeable harm.
The World Health Organization’s “infodemic” frame
The WHO describes an infodemic as an overabundance of information—some accurate, some not—that creates confusion, encourages harmful behaviors, and undermines trust in the public health response. The key implication is blunt: even accurate information can fail if the environment is saturated. An infodemic is not only about bad content. It’s about volume, speed, and the collapse of context.
In an infodemic, the enemy isn’t only falsehood. It’s overload.
— TheMurrow Editorial
Why believing misinformation can feel rational from the inside
A useful way to think about belief is as a sequence:
1. Exposure and attention decide what enters your mind.
2. Fluency and familiarity shape what feels true.
3. Coherence with your mental model determines what seems to “fit.”
4. Identity and social incentives influence what feels safe to accept.
5. Emotion and urgency compress the time you spend thinking.
None of these steps require irrationality. They require only a human brain doing what it does: conserving effort, seeking coherence, and managing social belonging. Misinformation often wins by aligning with those incentives better than careful reporting does.
The editorial trap: confusing sincerity with accuracy
A better question than “Why would anyone believe that?” is “What cues made it feel reliable?” That question points toward interventions that don’t insult the audience.
Step 1: Exposure and attention—what gets into your head
Accuracy prompts: small nudges, measurable effects
A 2022 study in Nature Communications found that a brief prompt asking people to consider accuracy before sharing measurably reduced the spread of false headlines. That finding carries a quiet rebuke to the loudest explanations for misinformation. If a brief prompt can shift behavior, at least some sharing is not driven by deep conviction. It’s driven by speed.
Practical takeaway: control the first five seconds
- Pause before reacting.
- Read beyond the headline if possible.
- Ask what you’d need to know to verify it.
These are small moves, but they work with the grain of human attention rather than against it.
Step 2: Fluency and familiarity—how repetition manufactures “truth”
A 2024 review in Current Opinion in Psychology summarizes evidence that repetition increases belief in misinformation and can also affect downstream behaviors such as intentions to share. (https://pubmed.ncbi.nlm.nih.gov/38113667/)
Repetition doesn’t just persuade—it increases confidence
That matters socially. Confidence is contagious. A person who feels certain becomes a stronger node in the network, even when they’re wrong.
A key statistic about the “second exposure”
That’s one of the most sobering numbers in misinformation research. It implies you don’t need a propaganda firehose to change what feels real. You need a few strategically placed repeats.
The second exposure is where “I’ve heard that” becomes “I think that’s true.”
— TheMurrow Editorial
Step 3: Coherence beats correctness—why “fit” matters more than facts
When prior knowledge backfires
In plain English: knowing a little about a topic can sometimes make you easier to mislead, because your brain supplies connective tissue that makes the claim feel plausible. The lie doesn’t have to be airtight; it only has to be easy to integrate.
Real-world example: the “sounds scientific” problem
Readers can respond by asking coherence questions that are more diagnostic than “Does this sound right?”:
- What exactly is being claimed?
- What would count as evidence against it?
- Does the source specify methods, data, or limitations—or only conclusions?
Coherence is not a sin. It’s a feature of thought. The problem arises when coherence becomes a substitute for verification.
Step 4: Identity-protective cognition—when belief becomes belonging
Research associated with Yale’s Dan Kahan is often summarized this way: people tend to resist evidence that threatens their group identity, and they adopt interpretations that protect social belonging. (https://informalscience.org/identity/Dan-Kahan/)
Multiple perspectives: bias, rationality, and social risk
From another perspective, identity-protective cognition can appear rational. If the penalty for dissent is ostracism, “updating your beliefs” isn’t a purely cognitive decision. It’s a negotiation with your social world.
That doesn’t excuse falsehood, but it explains resistance to correction. A fact-check can land as a threat: You’re not only wrong; you’re one of them.
Practical takeaway: debate less, ask more
- Ask what source would change their mind.
- Separate values from claims (“What are you worried might happen?”).
- Offer off-ramps that preserve dignity.
Corrections work better when they reduce social threat.
Step 5: Emotion and urgency—fear, anger, disgust, and the share button
The research summarized above already hints at why this works: interventions that redirect attention—like accuracy prompts—can reduce sharing. That implies a significant share of misinformation circulation happens in a high-speed, low-reflection state rather than in calm certainty.
Case study: the “forwarded as received” ethic
This is where the distinction between belief and sharing becomes crucial. A person can be only 60% convinced and still act as if they’re 100% convinced, because sharing is cheap and socially rewarded.
Practical takeaway: treat urgency as a red flag
If a post pressures you to act immediately, that pressure is itself a cue: urgency compresses reflection. Let “share this now” framing slow you down rather than speed you up.
Belief vs. sharing: why people pass along claims they don’t endorse
- Social bonding (“Look at this—can you believe it?”)
- Status (being first to post)
- Entertainment (irony, dunking, spectacle)
- Signaling (showing allegiance or outrage)
- Anxiety management (warning others)
A person can circulate misinformation while holding it loosely. That doesn’t reduce harm, but it changes the intervention. If the driver is social reward, the fix isn’t only better facts. It’s better friction.
What “accuracy prompts” reveal about sharing motives
That’s good news, in a narrow sense. Inattention is easier to influence than identity.
Implications for platforms and readers
- Don’t share from screenshots; find the original source.
- Don’t share claims that you haven’t read fully.
- Don’t outsource verification with “not sure if true.”
These norms won’t eliminate disinformation campaigns. They can reduce the ambient spread that gives campaigns oxygen.
The WHO’s “infodemic management” idea—applied beyond outbreaks
Infodemic management, as the WHO frames it, is systematic and evidence-based. That matters because ad hoc responses—shaming, panicked takedowns, performative “debunking”—often backfire by increasing attention to the false claim or reinforcing identity defenses.
A disciplined approach looks less heroic and more infrastructural:
- Improve information quality and accessibility.
- Reduce overload and increase clarity.
- Build trust through consistent, transparent communication.
Trust is slow capital. Overload burns it fast.
A realistic expectation: you’re managing risk, not “solving” truth
That framing respects human limits. It also respects the reality that the information environment is engineered—by platforms, incentives, and attention markets—not merely inhabited.
Conclusion: the most useful question isn’t “Who’s dumb?” but “What made this feel true?”
Seeing those mechanics doesn’t make you immune. It makes you harder to manipulate—and more charitable in the right way. Not “anything goes” charity, but the kind that helps you correct without humiliating, pause without moralizing, and build norms that reduce harm.
The next time a claim hits your feed with the force of inevitability, resist the easy story about gullible people. Ask instead: What cue is this exploiting—familiarity, coherence, identity, or urgency? That question won’t just change what you believe. It will change what you choose to amplify.
The most useful question isn’t “Who’s dumb?” but “What made this feel true?”
— TheMurrow Editorial
Frequently Asked Questions
What’s the difference between misinformation and disinformation?
Misinformation is false or inaccurate information shared without intent to deceive—often by people who think they’re helping. Disinformation is false information shared with intent to mislead, such as coordinated propaganda. The distinction matters because correcting misinformation often requires education and clarity, while countering disinformation may require tracking networks, incentives, and deliberate manipulation.
What is malinformation, and why does it matter if it’s true?
Malinformation is true information used to cause harm—like leaking private data to encourage harassment. Truth alone doesn’t guarantee ethical use. The malinformation category matters because responses focused only on “fact-checking” miss the point; the core issue is harm, privacy, and weaponization, not accuracy.
What does the WHO mean by an “infodemic”?
The WHO defines an infodemic as an overabundance of information—some accurate, some not—during an outbreak that creates confusion and harmful behaviors and undermines trust in public health response. The WHO frames “infodemic management” as systematic, evidence-based work to reduce those harms, not just a reactive battle against individual false claims.
Why does repetition make false claims feel true?
Research on the Illusory Truth Effect finds that repeated statements are more likely to be judged true. Repetition increases familiarity and processing fluency—the ease with which your brain handles a claim. Your mind often interprets “easy to process” as “probably accurate,” even when the claim conflicts with what you already know.
Can people spread misinformation without believing it?
Yes. Sharing can be driven by social bonding, status, entertainment, anxiety, or signaling group loyalty. Research on accuracy prompts suggests some people share less misinformation when nudged to think about truth first, implying that circulation often reflects inattention or social incentives rather than deep belief.
What’s an “accuracy prompt,” and does it actually work?
An accuracy prompt is a small cue that asks people to consider whether content is true before sharing. A 2022 study in Nature Communications found that such prompts can shift sharing behavior by redirecting attention to accuracy. The broader implication is that slowing people down—even briefly—can reduce impulsive amplification.