How to Tell Whether a Claim Is True
A step-by-step guide to evaluating evidence without being an expert—so you can slow down, verify smarter, and stop amplifying what you can’t prove.

Key Points
1. Define the exact claim first, then ask what evidence would actually settle it before you like, repost, or argue.
2. Verify faster with lateral reading: leave the page, check ownership, reputation, and conflicts, and demand primary evidence over screenshots.
3. Untangle mixed posts into facts, interpretations, values, and predictions—then confirm with two independent sources before treating anything as true.
A friend drops a link into the group chat. The headline is furious, the clip is grainy, the caption reads like a verdict. You feel the familiar tug: if you don’t share it, are you letting something slide? If you do share it and it’s wrong, you’ve helped it travel.
The modern web is built to reward speed—speed of reaction, speed of outrage, speed of certainty. But truth rarely moves at that pace. Facts come with footnotes. Evidence takes time to gather. Context doesn’t fit neatly into a screenshot.
The good news is that you don’t need a PhD, a newsroom budget, or a fact-checking desk to make better calls. You need a workable method: a way to separate what can be checked from what can’t, and a set of habits that keep you from being played.
“The first skill of digital literacy isn’t knowing more. It’s pausing long enough to ask what’s actually being claimed.”
— TheMurrow Editorial
Start with the right mindset: not everything online is a “fact claim”
A practical definition helps. A checkable claim is something specific that could be verified with evidence: numbers, dates, who did what, “X caused Y,” “a study shows,” “a video proves.” If you can imagine what proof would settle the matter—an official dataset, the full report, a court filing, the original video—then it’s probably checkable.
Many posts, though, are built on uncheckables: “This policy is disastrous,” “They’re lying to you,” “You can’t trust anyone.” Those are arguments, not facts. Treating an opinion like a fact claim sends you chasing “proof” that doesn’t exist; treating a fact claim like “just an opinion” lets misinformation skate by as one more viewpoint.
That distinction matters even more in science coverage, where uncertainty is often weaponized. A 2025 National Academies committee offered a useful grounding: it defined misinformation about science as information that asserts or implies claims inconsistent with the weight of accepted scientific evidence at the time—and it emphasized that what counts can change as scientific knowledge evolves. The same report defined disinformation about science as misinformation circulated by agents who know it is false. (National Academies, 2025)
Those definitions do two things for readers. First, they place science claims in the realm of evidence, not vibes. Second, they separate being wrong from being deceptive—an editorial line that matters when you decide how to respond.
The two questions that prevent most mistakes
1. What exactly is being claimed? Write it out in one sentence.
2. What kind of evidence would settle it? A dataset? A transcript? A full study? A primary document?
If you can’t answer those, you’re not ready to pass it along.
Step 1: Stop—because speed is the easiest way to get fooled
Many media-literacy classrooms still teach the CRAAP checklist—Currency, Relevance, Authority, Accuracy, Purpose—and it has value as a prompt. But it has been criticized for encouraging vertical reading: staying on a single page and judging credibility from surface cues like a clean design, a .org domain, or an “About” page that reads well. Librarians and digital-literacy writers have argued that this approach fails because it treats appearance as evidence. (School Library Journal commentary on CRAAP’s limits)
Bad actors know how to manufacture credibility. A polished logo is cheap. A sincere “Our Mission” paragraph is cheaper.
A better habit is pause + plan:
- Pause: Notice your emotional reaction. Anger, triumph, disgust, fear—those are not proof, but they are signals that you’re being prompted to act fast.
- Plan: Name the claim and your uncertainty. Decide what would count as decisive evidence.
- Proceed: Verify before amplifying.
“If a post makes you feel certain before it makes you informed, treat that certainty as a warning sign.”
— TheMurrow Editorial
A real-world example: the ‘video proves it’ trap
A video feels like the strongest possible evidence: seeing is believing. But a clip is not context. It may be edited, cropped, stripped of time markers, or paired with a false caption. Your pause doesn’t require technical expertise—only the willingness to admit what you don’t yet know: where it was filmed, when, and what happened immediately before and after.
Step 2: Investigate the source—fast, not deep
Stanford researchers have described this as lateral reading: opening new tabs to learn who’s behind an unfamiliar site or claim, rather than scrutinizing the site itself for trust signals. Their work highlights a stark gap between trained fact-checkers and typical readers. Many readers “stay and stare”; fact-checkers “leave and learn.” (Stanford News, 2020)
Lateral reading is efficient because it answers a basic question quickly: Should this source be taken seriously enough to spend time on? When the answer is no, you save yourself twenty minutes of unproductive analysis.
High-yield checks you can do in minutes
- Ownership and funding: Who owns it? Who pays for it? An “About” page can be propaganda; verify externally.
- Reputation: What do credible, independent sources say about the outlet or author? If the only praise comes from itself or from a tight ecosystem of allies, that’s information.
- Conflicts of interest: Does the author have a financial or political stake? Are they selling a supplement, running for office, fundraising, or operating as an advocacy group?
This isn’t about purity. Plenty of honest work comes from advocates. The point is to understand incentives before you outsource your trust.
Step 3: Separate primary evidence from commentary
The fastest way to cut through the noise is to distinguish:
- Primary evidence: original studies, official datasets, court filings, full transcripts, complete videos, direct records.
- Secondary reporting: journalism that interprets primary sources, ideally with links and clear sourcing.
- Commentary: analysis and opinion, sometimes valuable, often partisan, rarely “settling” a factual dispute.
When a post cites “a study,” don’t stop at the quote card. Look for the study itself. When a post cites “the data,” ask: which dataset, from whom, and what time period? When a post cites “experts,” ask: named experts, or anonymous authority?
The science-specific pitfall: evidence changes, but that isn’t a free pass
Scientific conclusions are revised as evidence accumulates, but revision is not a license to dismiss evidence altogether. Responsible skepticism sounds like: “What does the best available evidence say right now, and how strong is it?” Irresponsible cynicism sounds like: “No one knows anything, so I’ll believe whatever I want.”
“Science revises. Propaganda exploits revision to argue that nothing is knowable.”
— TheMurrow Editorial
Step 4: Watch for mixed posts: facts, opinions, and moral arguments in one package
Viral posts rarely contain just one kind of claim. For example, a post might contain:
- A number (checkable)
- A cause-and-effect claim (“X caused Y”) (often checkable, sometimes difficult)
- A value judgment (“therefore they’re evil”) (not checkable)
- A prediction (“soon this will happen everywhere”) (not checkable in the present)
If you try to “debunk” the moral claim with evidence, you’ll fail. Evidence doesn’t disprove a value judgment; it informs it. But if you ignore the checkable pieces because the post is “just an opinion,” you may miss the part that’s actually false.
A practical way to untangle the braid
- Fact claim (verify)
- Interpretation (assess reasoning)
- Value judgment (agree/disagree on principles)
- Prediction (treat as speculative)
Then verify the fact claims first. Often the entire rhetorical structure collapses when the “one hard fact” turns out to be soft.
Step 5: Use a workflow that fits real life (not a classroom)
A newsroom-style workflow can be adapted to daily life:
A five-minute verification routine
1. Name the claim in one sentence.
2. Open two new tabs (lateral reading): one for the source’s reputation and ownership, one for independent coverage of the claim.
3. Look for primary evidence: the full study, the official record, the complete clip.
4. Check whether independent sources converge. AFP’s practice of requiring two independent sources is a good mental model for everyday verification.
5. Decide what you know. If you can’t verify, downgrade your certainty—and don’t share as fact.
This is what “media literacy” looks like when it’s not an academic slogan: small, repeatable habits that scale.
What to share when you’re unsure
- Phrase uncertainty clearly (“I haven’t verified this yet—does anyone have the original source?”).
- Avoid declarative captions that launder rumors into “facts.”
- Prefer links to solid reporting over screenshots and clips.
Step 6: Science misinformation vs. disinformation—why intent changes the response
- Misinformation: someone spreads a claim inconsistent with the weight of accepted scientific evidence at the time, but may believe it. Response: correct gently, provide evidence, offer context.
- Disinformation: an agent spreads falsehoods knowingly. Response: treat as manipulation. Don’t “debate” in ways that amplify it; focus on warning others, reporting, and redirecting to credible sources.
That difference matters because the same correction strategy doesn’t work for every situation. A relative sharing a misleading health claim may be reachable. A coordinated network pushing a narrative may be trying to provoke engagement—your quote-tweet becomes their oxygen.
Multiple perspectives: skepticism isn’t the enemy
Some readers worry that labeling claims “misinformation” gives institutions too much gatekeeping power. The answer is not to abandon standards, but to clarify them. The National Academies’ emphasis on the weight of accepted evidence at the time is a standard that can be debated and revised. It’s not an appeal to authority as such; it’s an appeal to a transparent, collective method.
Healthy skepticism asks for evidence. Cynicism treats evidence as theater.
Step 7: Make your feed harder to poison
You can’t control the internet, but you can control some of your inputs.
Small changes with outsized payoff
- Reward good sourcing: engage with work that shows links, documents, named experts, and clear methods.
- Be stingy with amplification: when a claim is incendiary but unsourced, resist the urge to “share to ask if true.” That still spreads it.
- Cultivate a second-opinion habit: for high-stakes topics (health, safety, elections), require at least two independent sources before you accept a claim.
Even in professional fact-checking, rigor is procedural. AFP’s stated requirement of at least two independent sources is a reminder that verification is not a vibe—it’s a method.
The internet doesn’t require you to be omniscient. It does require you to be deliberate. The most effective defense against manipulation is not a library of trivia in your head—it’s a habit of mind: identify the claim, seek the evidence, and refuse to outsource your certainty to a caption designed to travel faster than the truth.
Frequently Asked Questions
What counts as a “checkable claim” online?
A checkable claim is specific enough to be verified with evidence: numbers, dates, who did what, cause-and-effect statements, “a study shows,” or “a video proves.” Opinions (“this is terrible”) and moral judgments (“they’re evil”) aren’t checkable in the same way. Start by rewriting the post as a single factual sentence you could prove or disprove.
What is lateral reading, and why do Stanford researchers recommend it?
Lateral reading means leaving an unfamiliar page to learn about it from other sources—opening new tabs to check ownership, reputation, and independent reporting. Stanford researchers highlighted it because professional fact-checkers use it to avoid being fooled by surface credibility cues on a single page. It’s faster and often more accurate than staring at an “About” page.
Is the CRAAP checklist outdated?
The CRAAP checklist (Currency, Relevance, Authority, Accuracy, Purpose) is still widely taught, but critics argue it encourages vertical reading—judging credibility from design, domains, and on-page claims. Those cues are easy to fake. The more practical upgrade is “pause + plan,” then lateral reading: identify the claim, decide what evidence would settle it, and verify beyond the original page.
What’s the difference between misinformation and disinformation in science?
The National Academies (2025) defines misinformation about science as claims inconsistent with the weight of accepted scientific evidence at the time, acknowledging that scientific knowledge evolves. Disinformation about science is misinformation spread by agents who know it is false. The distinction matters because misinformation may be corrected through evidence; disinformation often aims to manipulate and exploit attention.
How many sources do I need before I trust a claim?
There’s no universal number, but adopting a newsroom-style habit helps. AFP has said it requires at least two independent sources for the central claim of a fact-check and shows readers the evidence and steps used. For everyday use, aim for independent confirmation—especially for high-stakes claims—rather than repeated citations within the same ideological ecosystem.
What should I do if I can’t verify something quickly?
Don’t share it as fact. If you share at all, label uncertainty plainly and ask for primary evidence (the full study, original document, complete video). Avoid screenshots and clipped media that remove context. Unverified “just asking questions” posts still amplify the claim, which is often the point.