
Why We Trust (and Fall for) Misinformation

A plain-English guide to how beliefs form—and why false claims can feel rational from the inside. Learn the cues that shape belief and sharing.

By TheMurrow Editorial
February 19, 2026

Key Points

  • Distinguish misinformation, disinformation, and malinformation—each needs different responses, from education to enforcement to harm reduction and privacy protection.
  • Notice the five belief cues: exposure, repetition, coherence, identity pressure, and emotion—misinformation often wins by aligning with these shortcuts.
  • Add friction before sharing: pause, read past headlines, find originals, and treat urgency as a red flag to reduce impulsive amplification.

A familiar moment: the “of course” click

A friend texts you a screenshot: a headline about a “new study” that proves a familiar fear. You feel the quick internal click—of course. It fits what you’ve suspected for months. Before you can even weigh it, your thumb hovers over “share.”

That moment isn’t stupidity. It’s cognition doing what it evolved to do: make fast, workable judgments with limited time and attention. The modern feed punishes slowness. It rewards confidence. It turns repetition into “common knowledge” and outrage into a distribution strategy.

Public debate tends to treat misinformation as a moral flaw—those people are gullible; those people are malicious. The real story is more uncomfortable and more useful: false beliefs often feel rational from the inside, and people circulate claims for reasons that have little to do with truth.

Misinformation succeeds less by defeating reason than by impersonating it.

— TheMurrow Editorial

What follows is an explainer for readers who want sharper distinctions and better tools. We’ll separate why a claim feels true from why someone shares it—two related problems that require different fixes.

The vocabulary problem: misinformation, disinformation, malinformation—and the “infodemic”

Arguments about “misinformation” often collapse three different phenomena into one accusation. The distinctions matter because each calls for different responses—education, enforcement, or harm reduction.

Three terms people confuse (and why it matters)

- Misinformation: false or inaccurate information shared without intent to deceive. Think of a well-meaning relative reposting a wrong health tip because it sounds plausible.
- Disinformation: false information shared with intent to deceive. The motive is manipulation, whether political, financial, or ideological.
- Malinformation: true information used to harm. A private address posted to encourage harassment isn’t “false”—it’s weaponized.

Conflating these terms creates policy and social errors. Treating a misinformed person like a deliberate propagandist hardens positions and discourages correction. Treating organized disinformation as an “oops” problem leaves manipulation infrastructure intact. Treating malinformation as merely “speech” ignores foreseeable harm.

The World Health Organization’s “infodemic” frame

The World Health Organization describes an infodemic as an overabundance of information—including false and misleading material—during an outbreak, which fuels confusion, harmful behavior, and distrust in the public health response. The WHO also uses the term infodemic management for systematic, evidence-based ways to reduce these harms. (WHO: https://www.who.int/health-topics/infodemic)

The key implication is blunt: even accurate information can fail if the environment is saturated. An infodemic is not only about bad content. It’s about volume, speed, and the collapse of context.

In an infodemic, the enemy isn’t only falsehood. It’s overload.

— TheMurrow Editorial

Why believing misinformation can feel rational from the inside

Most people don’t “choose” falsehood because they enjoy being wrong. Belief formation relies on shortcuts that work well in ordinary life. The problem is that the modern information environment—high volume, algorithmic repetition, rapid context switching—pushes those shortcuts beyond their design limits.

A useful way to think about belief is as a sequence:
1. Exposure and attention decide what enters your mind.
2. Fluency and familiarity shape what feels true.
3. Coherence with your mental model determines what seems to “fit.”
4. Identity and social incentives influence what feels safe to accept.
5. Emotion and urgency compress the time you spend thinking.

None of these steps require irrationality. They require only a human brain doing what it does: conserving effort, seeking coherence, and managing social belonging. Misinformation often wins by aligning with those incentives better than careful reporting does.

The editorial trap: confusing sincerity with accuracy

A person can be sincere and wrong. A person can share a claim while feeling “responsible” because they believe they’re warning others. That sincerity makes misinformation harder to correct, not easier—because corrections can feel like accusations.

A better question than “Why would anyone believe that?” is “What cues made it feel reliable?” That question points toward interventions that don’t insult the audience.

Key Insight

False beliefs often feel rational from the inside. The more useful diagnostic isn’t who’s gullible—it’s which cues made the claim feel reliable.

Step 1: Exposure and attention—what gets into your head

Beliefs begin with what you notice. Feeds are designed to capture attention, and attention is a scarce resource. High-volume scrolling forces constant context switching, which increases reliance on mental shortcuts—what psychologists call heuristics—because deep evaluation is too costly to perform on every item.

Accuracy prompts: small nudges, measurable effects

Some of the most promising interventions target attention rather than ideology. A 2022 study published in Nature Communications tested “accuracy prompts”—simple cues that nudge people to think about whether content is true before sharing. The point is not to turn readers into fact-checkers. The point is to change the mental mode from “react” to “evaluate.” (https://www.nature.com/articles/s41467-022-30073-5)

That finding carries a quiet rebuke to the loudest explanations for misinformation. If a brief prompt can shift behavior, at least some sharing is not driven by deep conviction. It’s driven by speed.

2022: A Nature Communications study tested “accuracy prompts,” showing small attention nudges can reduce misinformation sharing by shifting users from “react” to “evaluate.”
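
To make the mechanism concrete, here is a minimal sketch of what an accuracy-prompt gate in a share flow might look like. It is illustrative only: the function names, prompt wording, and example URL are our assumptions, not the design tested in the study.

    # Illustrative sketch of an accuracy prompt in a share flow.
    # Function names, prompt wording, and the example URL are assumptions,
    # not the design tested in the Nature Communications study.

    def prompt_for_accuracy(headline: str) -> bool:
        """Show a neutral accuracy cue; return True if the user proceeds."""
        print(f'Before you share: "{headline}"')
        answer = input("To the best of your knowledge, is this accurate? [y/n/unsure] ")
        # The nudge shifts attention from "react" to "evaluate"; it does not block.
        return answer.strip().lower() == "y"

    def share(headline: str, url: str) -> None:
        if prompt_for_accuracy(headline):
            print(f"Shared: {url}")
        else:
            print("Share cancelled. Consider finding the original source first.")

    share("New study proves familiar fear", "https://example.com/story")

Note that the prompt never decides for the user; the intervention works by changing the mental mode at the moment of sharing, which is exactly what the study targeted.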

Practical takeaway: control the first five seconds

If you want to reduce your own susceptibility, focus on the moment of exposure:
- Pause before reacting.
- Read beyond the headline if possible.
- Ask what you’d need to know to verify it.

These are small moves, but they work with the grain of human attention rather than against it.

First-five-seconds checklist

  • Pause before reacting
  • Read beyond the headline if possible
  • Ask what you’d need to know to verify it

Step 2: Fluency and familiarity—how repetition manufactures “truth”

Repetition is not merely annoying. It is persuasive in a specific, measurable way. The Illusory Truth Effect describes a consistent finding: repeated statements are more likely to be judged true than new ones. Research shows the effect across trivia, ads, and headlines—and crucially, it can persist even when the statements contradict prior knowledge. (https://doaj.org/article/5192951adada480aac84727e1ef1fa97)

A 2024 review in Current Opinion in Psychology summarizes evidence that repetition increases belief in misinformation and can also affect downstream behaviors such as intentions to share. (https://pubmed.ncbi.nlm.nih.gov/38113667/)

Repetition doesn’t just persuade—it increases confidence

Newer work complicates the picture further. A 2024 paper in Psychological Research reports that repetition can increase not only perceived truth but also confidence in those judgments. People may feel more certain, not just more convinced. (https://link.springer.com/article/10.1007/s00426-024-01956-7)

That matters socially. Confidence is contagious. A person who feels certain becomes a stronger node in the network, even when they’re wrong.

A key statistic about the “second exposure”

Experimental work suggests the biggest jump in perceived truth often arrives early—frequently by the second exposure—with gains tapering after that. (https://pmc.ncbi.nlm.nih.gov/articles/PMC8116821/)

That’s one of the most sobering numbers in misinformation research. It implies you don’t need a propaganda firehose to change what feels real. You need a few strategically placed repeats.

2nd exposure: Research suggests the biggest jump in perceived truth often happens by the second exposure—meaning a few repeats can be enough to shift what feels real.

The second exposure is where “I’ve heard that” becomes “I think that’s true.”

— TheMurrow Editorial
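
The tapering pattern is easy to picture with a toy model. The sketch below is purely illustrative: the logarithmic curve and the constants are our assumptions for demonstration, not values fitted to any study.

    # Purely illustrative toy model of the repetition pattern described above:
    # perceived truth rises with exposures, with the largest jump at the second
    # exposure and smaller gains after. The curve and constants are assumptions.
    import math

    def perceived_truth(exposures: int, base: float = 0.40, gain: float = 0.15) -> float:
        """Toy rating in [0, 1]; logarithmic growth makes early repeats matter most."""
        if exposures < 1:
            return 0.0
        return min(1.0, base + gain * math.log2(exposures))

    for n in range(1, 6):
        print(f"exposure {n}: perceived truth ~ {perceived_truth(n):.2f}")
    # exposures 1..5 -> 0.40, 0.55, 0.64, 0.70, 0.75: the 1-to-2 jump is largest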

Step 3: Coherence beats correctness—why “fit” matters more than facts

People don’t evaluate claims in a vacuum. They evaluate them against mental models—informal maps of how the world works. If a claim matches your schema, it feels coherent. Coherence reduces mental friction, and low friction often gets misread as accuracy.

When prior knowledge backfires

The same 2024 Psychological Research article suggests a counterintuitive mechanism: even nonspecific prior knowledge can make a claim feel more fluent and coherent, boosting perceived truth and confidence—despite the claim being wrong. (https://link.springer.com/article/10.1007/s00426-024-01956-7)

In plain English: knowing a little about a topic can sometimes make you easier to mislead, because your brain supplies connective tissue that makes the claim feel plausible. The lie doesn’t have to be airtight; it only has to be easy to integrate.

Real-world example: the “sounds scientific” problem

You’ve seen the pattern: a claim wrapped in scientific language, or a chart with an authoritative look, or a “study says” framing with no clear citation. It feels coherent because it resembles legitimate knowledge formats. The illusion is structural.

Readers can respond by asking coherence questions that are more diagnostic than “Does this sound right?”:
- What exactly is being claimed?
- What would count as evidence against it?
- Does the source specify methods, data, or limitations—or only conclusions?

Coherence is not a sin. It’s a feature of thought. The problem arises when coherence becomes a substitute for verification.

Coherence checks (more diagnostic than “Does this sound right?”)

  • What exactly is being claimed?
  • What would count as evidence against it?
  • Does the source specify methods, data, or limitations—or only conclusions?

Step 4: Identity-protective cognition—when belief becomes belonging

Some misinformation is sticky not because it’s familiar but because it is socially functional. On charged topics, beliefs can act as signals: loyalty, values, status, in-group membership. That’s where motivated reasoning and identity-protective cognition enter.

Research associated with Yale’s Dan Kahan is often summarized this way: people tend to resist evidence that threatens their group identity, and they adopt interpretations that protect social belonging. (https://informalscience.org/identity/Dan-Kahan/)

Multiple perspectives: bias, rationality, and social risk

It’s tempting to reduce this to “people are biased.” The more precise framing is social: changing your mind can carry costs. In some communities, publicly conceding a point doesn’t look like intellectual honesty; it looks like betrayal.

From another perspective, identity-protective cognition can appear rational. If the penalty for dissent is ostracism, “updating your beliefs” isn’t a purely cognitive decision. It’s a negotiation with your social world.

That doesn’t excuse falsehood, but it explains resistance to correction. A fact-check can land as a threat: You’re not only wrong; you’re one of them.

Practical takeaway: debate less, ask more

If you’re trying to correct someone:
- Ask what source would change their mind.
- Separate values from claims (“What are you worried might happen?”).
- Offer off-ramps that preserve dignity.

Corrections work better when they reduce social threat.

Editor’s Note

Corrections often fail when they raise social threat. Questions, separating values from claims, and dignity-preserving “off-ramps” can work better than debate.

Step 5: Emotion and urgency—fear, anger, disgust, and the share button

Misinformation frequently arrives with an emotional payload. Fear, outrage, and disgust narrow attention and create urgency. Urgency shortens deliberation. A reader might detect that a claim is dubious and still share it because it feels like a warning.

The research summarized above already hints at why this works: interventions that redirect attention—like accuracy prompts—can reduce sharing. That implies a significant share of misinformation circulation happens in a high-speed, low-reflection state rather than in calm certainty.

Case study: the “forwarded as received” ethic

You’ve likely encountered the rhetorical shield: “I don’t know if it’s true, but…” followed by a share. The intent is framed as care: Better safe than sorry. In practice, it outsources verification costs to the audience while multiplying reach.

This is where the distinction between belief and sharing becomes crucial. A person can be only 60% convinced and still act as if they’re 100% convinced, because sharing is cheap and socially rewarded.

Practical takeaway: treat urgency as a red flag

Before sharing, ask one question: If this weren’t urgent, would I still believe it? Emotional charge is not proof of importance. It’s often a distribution mechanism.

Belief vs. sharing: why people pass along claims they don’t endorse

People often assume sharing equals belief. The internet makes that assumption reasonable—and frequently wrong. Sharing can be motivated by:
- Social bonding (“Look at this—can you believe it?”)
- Status (being first to post)
- Entertainment (irony, dunking, spectacle)
- Signaling (showing allegiance or outrage)
- Anxiety management (warning others)

A person can circulate misinformation while holding it loosely. That doesn’t reduce harm, but it changes the intervention. If the driver is social reward, the fix isn’t only better facts. It’s better friction.

What “accuracy prompts” reveal about sharing motives

The Nature Communications study on accuracy prompts is instructive precisely because it targets the moment of sharing, not the depth of belief. If a simple reminder to consider truth reduces willingness to share, then some misinformation behavior reflects inattention rather than ideology. (https://www.nature.com/articles/s41467-022-30073-5)

That’s good news, in a narrow sense. Inattention is easier to influence than identity.

Implications for platforms and readers

Platforms can change defaults—adding prompts, slowing virality, increasing context. Readers can adopt habits that act like self-imposed friction:
- Don’t share from screenshots; find the original source.
- Don’t share claims that you haven’t read fully.
- Don’t outsource verification with “not sure if true.”

These norms won’t eliminate disinformation campaigns. They can reduce the ambient spread that gives campaigns oxygen.

Self-imposed friction before sharing

  • Don’t share from screenshots; find the original source
  • Don’t share claims that you haven’t read fully
  • Don’t outsource verification with “not sure if true.”
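
For readers who think in code, the checklist above can be read as a simple gate: every question must pass before a share goes out. The sketch below is a hypothetical illustration; the questions mirror the article’s list, and the names are ours.

    # A minimal sketch that turns the friction habits above into a pre-share
    # gate. The questions mirror the article's checklist; names are illustrative.

    PRE_SHARE_CHECKS = [
        "Did you find the original source (not a screenshot)?",
        "Did you read the full claim, past the headline?",
        "Would you still share this without adding 'not sure if true'?",
    ]

    def ready_to_share() -> bool:
        """Return True only if every friction check passes."""
        return all(
            input(f"{question} [y/n] ").strip().lower() == "y"
            for question in PRE_SHARE_CHECKS
        )

    if __name__ == "__main__":
        print("Cleared to share." if ready_to_share() else "Hold off and verify first.")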

The WHO’s “infodemic management” idea—applied beyond outbreaks

The WHO’s language is focused on outbreaks, but the conceptual tool travels. An infodemic dynamic can emerge around elections, wars, natural disasters, and public safety events—any context where uncertainty is high and the cost of being wrong feels immediate.

Infodemic management, as the WHO frames it, is systematic and evidence-based. That matters because ad hoc responses—shaming, panicked takedowns, performative “debunking”—often backfire by increasing attention to the false claim or reinforcing identity defenses.

A disciplined approach looks less heroic and more infrastructural:
- Improve information quality and accessibility.
- Reduce overload and increase clarity.
- Build trust through consistent, transparent communication.

Trust is slow capital. Overload burns it fast.

A realistic expectation: you’re managing risk, not “solving” truth

No society eliminates misinformation entirely. The goal is harm reduction: fewer people misled, fewer harmful behaviors triggered, fewer moments where confusion becomes a public-health or civic crisis.

That framing respects human limits. It also respects the reality that the information environment is engineered—by platforms, incentives, and attention markets—not merely inhabited.

5 steps: Belief formation often follows a sequence—exposure, fluency, coherence, identity, emotion—shortcuts that can work in daily life but break under feed dynamics.

Conclusion: the most useful question isn’t “Who’s dumb?” but “What made this feel true?”

Misinformation thrives when ordinary mental shortcuts meet extraordinary distribution. Repetition makes claims feel familiar. Familiarity feels like truth. Coherence feels like understanding. Identity makes doubt costly. Emotion makes urgency contagious.

Seeing those mechanics doesn’t make you immune. It makes you harder to manipulate—and more charitable in the right way. Not “anything goes” charity, but the kind that helps you correct without humiliating, pause without moralizing, and build norms that reduce harm.

The next time a claim hits your feed with the force of inevitability, resist the easy story about gullible people. Ask instead: What cue is this exploiting—familiarity, coherence, identity, or urgency? That question won’t just change what you believe. It will change what you choose to amplify.

The most useful question isn’t “Who’s dumb?” but “What made this feel true?”

— TheMurrow Editorial
About the Author
TheMurrow Editorial writes explainers for TheMurrow.

Frequently Asked Questions

What’s the difference between misinformation and disinformation?

Misinformation is false or inaccurate information shared without intent to deceive—often by people who think they’re helping. Disinformation is false information shared with intent to mislead, such as coordinated propaganda. The distinction matters because correcting misinformation often requires education and clarity, while countering disinformation may require tracking networks, incentives, and deliberate manipulation.

What is malinformation, and why does it matter if it’s true?

Malinformation is true information used to cause harm—like leaking private data to encourage harassment. Truth alone doesn’t guarantee ethical use. The malinformation category matters because responses focused only on “fact-checking” miss the point; the core issue is harm, privacy, and weaponization, not accuracy.

What does the WHO mean by an “infodemic”?

The WHO defines an infodemic as an overabundance of information—some accurate, some not—during an outbreak that creates confusion and harmful behaviors and undermines trust in public health response. The WHO frames “infodemic management” as systematic, evidence-based work to reduce those harms, not just a reactive battle against individual false claims.

Why does repetition make false claims feel true?

Research on the Illusory Truth Effect finds that repeated statements are more likely to be judged true. Repetition increases familiarity and processing fluency—the ease with which your brain handles a claim. Your mind often interprets “easy to process” as “probably accurate,” even when the claim conflicts with what you already know.

Can people spread misinformation without believing it?

Yes. Sharing can be driven by social bonding, status, entertainment, anxiety, or signaling group loyalty. Research on accuracy prompts suggests some people share less misinformation when nudged to think about truth first, implying that circulation often reflects inattention or social incentives rather than deep belief.

What’s an “accuracy prompt,” and does it actually work?

An accuracy prompt is a small cue that asks people to consider whether content is true before sharing. A 2022 study in Nature Communications found that such prompts can shift sharing behavior by redirecting attention to accuracy. The broader implication is that slowing people down—even briefly—can reduce impulsive amplification.
