TheMurrow

The Trust Stack

AI summaries now sit between you and the open web—often delivering conclusions before evidence. Here’s how to verify what you’re told in a summarized, synthetic media world.

By TheMurrow Editorial
January 6, 2026

Key Points

  1. Recognize the shift to conclusions-first AI summaries—and counter it by opening primary sources, checking dates, and hunting for missing caveats.
  2. Track predictable failure modes like hallucinated specifics, source laundering, attribution blur, and false consensus that make “answer-shaped” misinformation feel authoritative.
  3. Defend against deepfakes by slowing down, verifying via a second channel, and asking context-only questions—simple habits that break urgency-driven scams.

A few years ago, “checking a source” meant clicking a link. Now it often means deciding whether you’ll click at all.

Across the biggest consumer platforms, AI-generated summaries have moved from novelty to default interface—appearing above the web pages they compress. The result is a subtle shift in how we read: fewer steps, fewer tabs, fewer moments where we pause to ask, “Where did that come from?”

The public learned the stakes the messy way. In 2024, Google’s early AI summaries drew backlash after the system surfaced advice that included putting glue on pizza—a snippet widely traced to a Reddit joke. The episode became a meme, then a warning: when the web gets distilled into a single “answer,” the weakest input can become the headline.

The bigger story is not whether one product embarrassed itself. It’s that a new information environment is taking shape—one that rewards consuming conclusions over inspecting evidence. And once that becomes habit, trust has to be rebuilt from the ground up.

When the web is squeezed into a single answer, provenance becomes the missing ingredient.

AI summaries are becoming the front door to the internet

AI summaries are no longer a side feature. They sit between readers and original reporting, framing what counts as relevant before anyone sees the underlying material.

OpenAI made the direction explicit when it announced ChatGPT Search on October 31, 2024, describing a web-search experience that returns answers with links to web sources. OpenAI’s rollout notes show expansion to more logged-in users on December 16, 2024, and to everyone (in regions where available) on February 5, 2025. The design choice matters: the “answer” arrives first; the sources arrive second.

Google moved in the same direction with AI Overviews and similar summary-style placements. Early public criticism in 2024—crystallized by the “glue pizza” fiasco—turned into a broader question: what happens when a system can speak with the confidence of a reference book, while borrowing its facts from anywhere?

The new default: conclusions first, context optional

The interface change looks small. The cultural change isn’t.

In the old model, readers moved from search results to a publisher’s page and then to supporting evidence inside the article. The new model often flips that order:

- A system provides a consolidated response.
- A handful of citations appear, sometimes.
- The reader decides whether to open anything else.

That last step is where trust now lives—or dies. Trust is no longer only about whether a site is reputable. Trust now includes whether a summary accurately represents its sources, whether it fuses multiple sources into one narrative, and whether it signals uncertainty or masks it.

The interface is training us to treat the conclusion as the product and the evidence as an upsell.

The click collapse: what “zero-click” does to verification

The most measurable impact of AI summaries is also the simplest: people click less. Less clicking means less verification. Less verification means more room for confident nonsense.

A Pew Research Center analysis, reported by Ars Technica, examined panel browsing behavior (March 2025). The reported results were stark:

- Searches without an AI answer had a 15% click rate.
- Searches with AI Overviews fell to 8%.
- Only about 1% of AI Overviews resulted in a click on a cited source.
- About 1 in 5 searches included AI Overviews (per the same reporting).

Those numbers are not just a problem for publishers’ traffic. They alter the reader’s ability to evaluate quality. Clicking through is how you learn whether an article is fresh, who wrote it, and what evidence it relies on. Without that step, readers are more likely to accept whatever the summary presents as settled.

What readers lose when they don’t click

When you stay on the summary, you often miss:

- Publication date and whether the information is outdated
- Author credentials and editorial standards
- Methodology (how a claim was tested or reported)
- Caveats and limitations that don’t fit in a short answer
- The difference between reporting and commentary

Bain has framed this as an acceleration of “zero-click” behavior, suggesting many users increasingly rely on in-page answers and summaries for a substantial share of searches, with downstream effects on organic traffic. Whatever the marketing implications, the civic implication is plain: a public that doesn’t inspect sources becomes easier to mislead.

A necessary nuance: not every study finds the same pattern

The click story is real, but it’s not a single universal number. Different research methods produce different pictures. Semrush, for example, has reported more complex patterns around AI Overview prevalence and behavior across 2025, suggesting the effect may vary by query type and context.

The point isn’t that every AI summary reduces clicking in every case. The point is that the default incentive has shifted. The friction that once nudged readers toward primary sources is thinning.

The failure modes: hallucinations, laundering, and false consensus

If AI summaries only shortened what sources said—accurately—they would still raise questions about context. The sharper problem is that summaries can generate “answer-shaped” misinformation through predictable failure modes.

The most common breakdowns look like this:

1. Hallucinated specifics: fabricated names, citations, statistics, dates, or study results.
2. Source laundering: a claim takes on the tone of an encyclopedia entry even if it began as a joke, speculation, or an anonymous post.
3. False consensus: multiple weak or derivative sources merge into something that appears independently confirmed.
4. Temporal mismatch: outdated information written in present tense; older events framed as current.
5. Attribution blur: readers can’t tell which claim came from which source when multiple sources are fused.

The “glue pizza” episode remains instructive because it wasn’t a sophisticated deception. It was a context failure. A low-quality snippet got elevated into actionable advice once it was wrapped in the authority of a synthetic summary.

Why these errors feel so persuasive

Traditional misinformation often carries tells: sloppy writing, odd URLs, or inflammatory framing. AI-generated errors can arrive with clean prose, calm tone, and structure that mimics reliable reference material.

When a system blends multiple sources, it can also produce a dangerous illusion: if several outlets appear in the citation list, the reader may assume each supports every sentence. In practice, a summary may borrow one detail from one source, another from a second, and fill gaps with inference.

A list of citations is not the same thing as traceable evidence.

Trust becomes a design problem, not a personal virtue

For years, media literacy advice focused on individual responsibility: be skeptical, check sources, read laterally. That guidance still matters. Yet the interface now does much of the persuading before readers have a chance to apply those habits.

AI summaries put pressure on an old model of trust: “I trust this publication.” Increasingly, the relevant question is: “Do I trust this synthesis?” That’s a different kind of trust—more fragile, because errors can come from mixing rather than malice.

What platforms get right—and what they still can’t solve

Proponents of summaries will argue, fairly, that they can improve access. A well-made summary can help readers navigate complexity, especially for routine questions. Citations can create a path to primary sources, and product teams have strong incentives to reduce obvious errors after public backlash.

Business Insider’s later reflection on Google’s AI Overviews described perceived improvements over time, even as the early “glue on pizza” episode remained a defining cautionary tale. Improvement is plausible. The core issue remains: the interface still encourages acceptance before inspection.

Even when an AI summary is accurate, it can smooth over disagreement. The web is full of contested claims, evolving science, and uncertain timelines. Compressing that into a single voice can make ambiguity disappear.

The most honest summary would show its seams

Readers don’t only need links. They need signals that reflect how knowledge actually works:

- Where sources agree
- Where they diverge
- What is unknown or based on limited evidence
- How recent the underlying reporting is

Without those cues, a fluent synthesis can feel more certain than the world it describes.

Key Insight

AI summaries don’t just change what people read—they change the order of trust: conclusion first, evidence second, and verification often never happens.

Deepfakes go mainstream: verification is no longer just about text

The trust problem doesn’t stop at search. The next wave is already familiar: audio that sounds like your boss, video that looks like a public figure, a frantic call that seems to come from someone you love.

A corporate survey commissioned by identity verification firm Regula reported that in 2024, 49% of surveyed businesses said they experienced deepfake fraud incidents involving audio and video scams. That’s not a niche threat. It’s a mainstream operational risk.

Another signal came from identity verification provider Sumsub, which reported a 303% year-over-year increase in deepfake instances in the U.S. (Q1 2024 context) and pointed to election-year spikes in other countries. Whether you view those figures as a measure of increased attacks, improved detection, or both, the direction is hard to ignore.

Why deepfakes change the psychology of trust

A forged article can be doubted. A forged voice can bypass skepticism.

Deepfakes exploit the shortcuts people rely on under stress: urgency, familiarity, authority. A convincing voice note triggers instinct before analysis. In a workplace, a fake “CEO” request can pressure staff to act quickly. In a family, a fake “relative” call can turn panic into payment.

The practical question readers ask is refreshingly blunt: If it sounds real, what do I do? That question deserves equally practical answers—habits that don’t depend on perfect detection tools.

Practical verification habits that still work in the AI-summary era

Readers can’t individually solve platform incentives. Readers can protect themselves with a few routines that make summaries less dangerous and deepfakes less effective.

When you see an AI summary, do these three things


  1. Open at least one cited source—preferably the most primary one.
     If the summary cites multiple outlets, pick the one most likely to contain original reporting or direct data. The goal is not to read everything. The goal is to confirm the claim exists as described.
  2. Check the date and the framing.
     Temporal mismatch is a common failure mode. Verify whether the source is current and whether the summary is presenting old information as present fact.
  3. Look for the “missing caveat.”
     Ask: what limitation would a careful writer include? Summaries often omit uncertainty, sample-size limits, or exceptions. If the original source includes a major caveat, the summary may not.

When you receive a suspicious audio/video message

These steps aren’t glamorous. They are effective precisely because they exploit what deepfake operations often lack: time, access, and genuine context.


  • Slow down on purpose. Deepfake scams feed on urgency.
  • Verify using a second channel. Call a known number, message via a separate app, or ask for a pre-agreed code word.
  • Ask a question a stranger wouldn’t answer easily. Something that requires personal context, not biographical trivia.

The publisher problem: what happens when the summary replaces the article

The ethics of AI summaries don’t belong solely to readers. They reshape the publishing ecosystem that produces reliable information in the first place.

If Pew’s reported behavior holds broadly—an 8% click rate with AI Overviews versus 15% without—publishers face a world where their work is read less often, even when it is cited. And if only about 1% of AI Overviews lead to clicks on cited sources, the economic link between reporting and revenue weakens further.

A reader might reasonably ask: So what? I got my answer. Yet the answer exists because someone paid reporters, editors, researchers, and lawyers to do work that a summary cannot replicate. Summaries are downstream of a system they may quietly starve.

A fair counterpoint: summaries can distribute attention, too

Supporters argue that citations can send readers to outlets they wouldn’t otherwise find, and that the web has always evolved in ways that upset business models. They also argue that better summaries can reduce misinformation by steering users away from dubious sites.

That case deserves airtime. It is also incomplete. If summaries reduce the habit of source-checking, the public becomes more dependent on whichever system sits on top. Trust shifts upward—away from institutions with visible standards and toward interfaces that feel neutral but are not.

The healthiest information ecosystem would treat summaries as maps, not destinations.

Key Takeaway

The trust question is shifting from “Do I trust this publisher?” to “Do I trust this synthesis?”—and the answer depends on traceability, context, and uncertainty.

Where this leaves us: a new literacy for a summarized world

The question is no longer whether you trust a website. The question is whether you trust the layer that now speaks for websites.

AI summaries can be useful. They can also be wrong in ways that feel oddly convincing: a hallucinated statistic, a laundered joke, a false consensus assembled from echoes. Meanwhile, deepfakes make “seeing” and “hearing” less reliable than they used to be, with businesses reporting real-world harm—49% hit by deepfake fraud in Regula’s survey—and detection firms describing sharp increases, including 303% YoY in the U.S. in Sumsub’s reporting.

The antidote is not panic. It’s a reset of habits—and a demand for better design. Readers should click strategically, verify dates, and treat citations as starting points rather than proof. Platforms should make uncertainty legible and attribution traceable, instead of smoothing everything into a single confident voice.

A summarized internet can still be a knowable one. The price is vigilance—shared between the people reading and the systems doing the summarizing.
About the Author
TheMurrow Editorial is a writer for TheMurrow covering trends.

Frequently Asked Questions

Are AI search summaries always unreliable?

No. Many summaries are accurate, especially for straightforward questions. The reliability problem comes from known failure modes—hallucinated specifics, attribution blur, and temporal mismatch—and from the tendency to hide uncertainty. Treat summaries as a quick orientation, then open at least one source to confirm key claims, dates, and caveats.

What’s the strongest evidence that AI summaries reduce clicking?

A Pew Research Center analysis reported by Ars Technica (panel browsing behavior, March 2025) found a 15% click rate when searches lacked an AI answer versus 8% when they included AI Overviews, with only ~1% leading to clicks on cited sources. Other studies can differ, but the reported Pew numbers show a meaningful shift in behavior.

Why is “source laundering” such a big deal?

Source laundering happens when a claim that began as a joke, speculation, or an anonymous post gets restated in a calm, authoritative tone. The “glue on pizza” episode illustrates how context can evaporate when snippets get synthesized. The reader sees polished advice, not the messy origin story—and that makes bad information easier to believe and repeat.

If an AI summary includes citations, isn’t that enough?

Citations help, but they don’t guarantee that every sentence is supported by every linked source. Summaries can blend sources, cherry-pick, or fill gaps with inference. Use citations as a path to verification: open one or two, confirm the specific claim exists, and check the date and any limitations the summary may have omitted.

How common is deepfake fraud right now?

A survey commissioned by Regula reported that in 2024, 49% of surveyed businesses experienced deepfake fraud incidents involving audio and video scams. Sumsub reported a 303% YoY increase in deepfake instances in the U.S. (Q1 2024 context). The exact rates depend on detection and reporting, but the trend points to mainstream risk.

What should I do if I get a voice message that sounds like my boss or relative?

Assume urgency is a tactic. Verify through a second channel: call a known number, message through a different app, or use a pre-agreed code word. Ask a question that requires real shared context. These steps are simple, fast, and often enough to break a scammer’s script—even when the audio sounds convincing.
