TheMurrow

Enough with the ‘AI summit’ hype—prove it works for regular people

Summits keep producing principles, coalitions, and photo-ops. Regular people keep living with hallucinations, deepfakes, and automated denials that can’t be appealed.

By TheMurrow Editorial
February 16, 2026

Key Points

  • Demand auditable outcomes—not communiqués—so AI summit promises translate into fewer denials, scams, deepfakes, and biased automated decisions.
  • Track the tradeoffs: “safety” is increasingly framed as “security,” which can sideline everyday harms like discrimination, fraud, and opaque bureaucracy.
  • Insist on governance with teeth: thresholds, independent audits, disclosures, budgets, timelines, and real consequences when companies or states fail.

The invitations arrive with the same promise: a room full of leaders, a shared sense of urgency, and a communiqué that will “shape the future of AI.” The venue changes—Bletchley Park, Seoul, Paris—but the choreography is familiar. Cameras click. Declarations are signed. New coalitions are announced.

Meanwhile, the public lives with the blunt end of automation: a model that hallucinates a legal citation, a deepfake that poisons a local election, a benefits system that can’t explain why someone was denied help. None of that fits neatly on a summit stage. None of it looks good in a group photo.

Governments insist the gatherings matter because AI now sits at the center of economic strategy and geopolitical power. Critics reply with a simpler question: where are the auditable results? If these summits are supposed to make AI safer and more useful, everyday people should be able to point to the change.

“AI summits keep producing principles. The public keeps living with outcomes.”

— TheMurrow Editorial

The backlash against the “AI summit” boom is not anti-cooperation. It’s anti-theater. And the gap between spectacle and measurable benefit is getting harder to ignore.

At issue: spectacle vs. measurable benefit

This editorial’s central demand is straightforward: if summits claim to improve AI safety and usefulness, the public should be able to audit the outcomes—through clearer standards, verified compliance, and enforceable consequences.

The Summit Boom: Why Everyone Wants a Seat at the Table

AI summits and expos have proliferated as governments and companies treat AI as both an economic engine and a strategic asset. Hosting a high-profile event signals a country is “in the race,” capable of convening allies, and open for investment. The same stage can serve three agendas at once: safety leadership, national security positioning, and industrial policy.

The pressure to convene is partly reputational. When other capitals host summits, non-hosts risk looking passive. A summit offers a visible answer to a complicated problem: a declaration is easier to produce than enforceable oversight, and a pledge is easier than a budget.

Skeptics, however, focus on what summits reliably generate:
- Communiqués and declarations
- Voluntary coalitions and “principles”
- Investment claims and headline numbers
- Photo-ops that convey momentum

Those outputs can be useful, especially when the alternative is diplomatic silence. Yet the core complaint remains the same: the ratio of ceremony to proof is off. When AI harms people—through biased outputs, fraudulent content, or bureaucratic mistakes—ordinary users don’t experience a “principle.” They experience a denial, a scam, or a reputational hit.

Axios captured a key shift in recent policy language: “AI safety” is increasingly framed as security and strategic competition, emphasizing threats such as cyber and biological misuse. That framing can be justified, but it can also narrow attention away from mundane harms like discrimination, consumer fraud, misinformation, and the quiet accumulation of administrative errors. The fight is not only about what AI can do, but what governments choose to measure.

“When ‘safety’ becomes ‘security,’ everyday harms start to look like footnotes.”

— TheMurrow Editorial

Safety vs. Security vs. Competitiveness

The recurring tension inside these events is not subtle. Policymakers want safer systems. Security agencies want to prevent catastrophic misuse. Economic ministries want growth. A summit can hold all three in one sentence—until tradeoffs appear.

A regulatory push that slows deployment may be framed as weakness in global competition. A security-first posture can justify secrecy, narrowing public scrutiny. And a competitiveness-first stance can treat accountability as friction. The summit format often smooths over those conflicts rather than resolving them.

Key Insight

Summits are optimized for consensus and signaling. Measurable safety and public benefit require naming tradeoffs—budgets, authority, audits, and penalties—where consensus is hardest.

Bletchley Park 2023: The Blueprint for the “AI Safety Summit” Era

The modern phase of AI summits effectively began at Bletchley Park, where the UK hosted the first major AI Safety Summit on Nov 1–2, 2023. The headline output was the Bletchley Declaration, published by the UK government.

The declaration deserves credit for naming where AI already operates in daily life. It explicitly referenced domains including housing, employment, transport, education, health, accessibility, and justice. That specificity matters because it avoids the fantasy that AI is only a laboratory concern or a distant future problem. It is already embedded in institutions that decide who gets hired, housed, treated, and heard.

At the same time, the Bletchley Declaration was designed to be broad. It called for AI to be human-centric, trustworthy, responsible, and developed safely. Those are defensible goals, but the document does not read like a program plan. It contains no budgets, deadlines, or enforcement mechanisms—no details that make progress measurable.

What Bletchley Got Right

Bletchley’s strongest contribution was diplomatic: it created a shared vocabulary and a recurring process. It also made an important admission: AI is being deployed at speed, including in high-stakes sectors. A declaration that names real-world domains reduces the temptation to treat harm as hypothetical.

Reporting around the summit also highlighted immediate public-interest risks—misinformation, deepfakes, biased outputs, plus calls for public education and transparency. Those are not abstract debates. They are the front lines of democratic trust and consumer protection.

The Accountability Problem

The criticism that followed Bletchley became a recurring theme: voluntary commitments can slide into performative governance without independent evaluation, disclosure, and enforcement. The Guardian later echoed that concern in coverage of companies signing AI safety standards, noting the weakness of non-binding arrangements.

Summits can clarify intent, but intent is not a control system. Without audits, reporting duties, and consequences, even well-written principles risk becoming commemorative.

“A declaration can recognize risk. It can’t, by itself, reduce it.”

— TheMurrow Editorial

Seoul 2024: Risk Thresholds, Voluntary Promises, and the Question of Who Decides

The AI Seoul Summit in May 2024 continued the Bletchley process with additional commitments from governments and companies. Coverage from AP described renewed pledges, including commitments that companies could pause or halt development under extreme risk conditions. The summit also moved toward greater policy specificity by emphasizing shared “risk thresholds” for “frontier AI.”

That shift matters. “Principles” are easy to endorse because they rarely force a choice. A threshold implies a trigger: if a system crosses a line, action follows. The Guardian highlighted the kinds of severe risks under discussion, including capabilities that could help malicious actors with chemical or biological weapons or enable systems to evade human oversight through manipulation or deception.

The Progress: Turning Fear Into Criteria

Even critics should acknowledge the conceptual improvement. A threshold approach hints at a governance model that can be tested:
- Define a capability level or misuse potential
- Assess whether a system meets it
- Apply restrictions, pauses, or other interventions

In other words, it gestures toward operational policy rather than moral aspiration.

The Weakness: Voluntary Compliance and Missing Audits

The core weakness remained visible: most commitments were voluntary. A threshold is only as strong as the institution that defines it and the mechanism that verifies compliance. Seoul put the “what” on the table but left major questions unresolved:
- Who sets the thresholds? Governments, companies, or a joint body?
- Who audits the claims? Internal teams, third parties, or regulators?
- What happens when firms refuse? Public criticism is not a sanction.

A pause pledge sounds dramatic, but without transparency and verification, the public is asked to accept a promise from the very actors under competitive pressure to keep shipping.

Editor's Note

In this framing, “thresholds” only matter if they are defined by accountable institutions and verified through audits; otherwise they function like upgraded slogans.

Paris 2025: Big Money, Bigger Politics, and the “Inclusive AI” Split

By February 2025, the summit format had matured into something more openly political. The Paris AI Action Summit produced a dramatic headline: the United States and the United Kingdom declined to sign a declaration endorsed by roughly 60–61 countries promoting “open/inclusive/ethical” AI, according to reporting from The Guardian and AP.

The refusal became a symbol of a wider transatlantic divide. Europe has leaned toward regulation and trust frameworks. US leadership has signaled a more innovation-first posture and warned against “excessive regulation.” Paris made that disagreement impossible to miss.

The Numbers That Made Paris Feel Different

Paris also arrived with big financial claims and new institutions. Fortune reported the creation of a new foundation with a $400 million endowment and a stated target of $2.5 billion within five years, aimed at public-interest datasets and smaller AI models. Separately, the European Commission launched InvestAI, aiming to mobilize €200 billion, including a €20 billion fund for AI “gigafactories.”

Taken together, those are four concrete numbers of a kind summits rarely deliver at once:
- ~60–61 countries endorsing a declaration (minus the US and UK)
- $400 million endowment for a new foundation
- $2.5 billion target within five years
- €200 billion mobilization claim, including €20 billion for gigafactories

Money matters because it can become infrastructure: compute, talent, datasets, procurement pipelines. Yet the critique raised in Fortune’s coverage is the familiar one—unclear targets and roadmaps can reduce headline funding to a form of prestige spending. Investment claims do not automatically translate into public benefit.

What the US/UK Refusal Signaled

The refusal to sign wasn’t just a diplomatic spasm. It illustrated how “inclusive AI” can be read in two ways:
- As a commitment to broad public benefit, access, and ethical deployment
- As a potential vehicle for regulatory constraints that some governments see as strategically costly

Readers should treat both interpretations seriously. Regulation can protect the public; it can also be clumsy, poorly targeted, or captured by incumbents. The Paris split wasn’t a morality play—it was a governance fight.

The Spectacle vs. the Scorecard: What Outcomes Would Actually Prove Progress?

The strongest critique of the summit boom is the mismatch between what gets announced and what gets measured. Declarations and pledges can be useful, but they are not outcomes. Outcomes are changes that can be audited.

A practical scorecard for summit success would focus on ordinary experiences:
- Faster benefit processing without opaque automated denials
- Fewer wrongful denials and a clear appeal path when automation is involved
- Better public information about where AI is used and how decisions are made
- Safer consumer products with clear accountability when AI causes harm
- Cheaper, more reliable services where automation boosts productivity
- Clearer liability when deepfakes, fraud, or discrimination are enabled by AI systems

None of those goals require science fiction. They require governance that behaves like governance: standards, audits, disclosures, and enforcement.

Why Summits Struggle to Deliver Measurable Results

Summits are designed for consensus. Measurable outcomes require conflict—over budgets, mandates, inspection authority, and penalties. A communiqué can avoid naming who pays and who is punished. A regulatory framework cannot.

Summits also blur the line between diplomacy and marketing. When investment announcements share the stage with safety language, readers should ask whether “safety” is being used as a reputational shield for industrial strategy.

“Safety” Rebranded as “Security”: What Gets Lost When the Frame Narrows

Axios noted the growing tendency to frame AI safety as security, emphasizing threats such as cyber misuse or biological and chemical risks. Those dangers are real and deserve attention, especially around “frontier AI.” Seoul’s discussion of severe risks and thresholds reflects that emphasis.

The risk is not that security concerns are wrong. The risk is that security becomes the only lens that matters. Many of the most common harms are not national-security events; they are cumulative injuries that rarely make headlines:
- Discrimination in housing and employment
- Consumer fraud and impersonation
- Misinformation and deepfakes that corrode trust
- Administrative errors in public services that people cannot contest

Bletchley’s declaration explicitly named everyday domains—housing, employment, education, health, justice. That breadth is a reminder: AI governance must cover more than catastrophic scenarios. A country can secure its borders and still fail its citizens if automated systems quietly degrade fairness and accountability at home.

A Fair Counterpoint

Security framing can sometimes force seriousness. National security bureaucracies can move resources and set requirements faster than civilian agencies. Security priorities can also justify cross-border cooperation. Readers should acknowledge that strategic competition is one reason governments show up at all.

The challenge is balance: a governance agenda that is only about worst-case threats may neglect the present-day, high-frequency harms already shaping people’s lives.

Practical Takeaways: How to Read the Next Summit Without Being Sold a Story

Readers don’t need to dismiss AI summits. They need a way to interpret them like adults: as political events with mixed motives and uneven accountability.

Use a simple checklist.

What to Look For in Summit Announcements

  • Audits, not just pledges: Who verifies compliance, and how often?
  • Definitions with teeth: If “risk thresholds” are mentioned, who sets them and what triggers action?
  • Disclosure requirements: Will the public learn where AI is used in housing, employment, health, education, and justice?
  • Budgets and timelines: Are commitments funded, scheduled, and assigned to named agencies?
  • Consequences: What happens when a company or government fails to meet the commitment?

What to Treat as Public Relations Until Proven Otherwise

  • Broad principles without enforcement
  • Coalitions without reporting duties
  • Investment claims without project-level roadmaps
  • Safety language bundled with competitiveness messaging, with no mention of accountability

What Ordinary People Can Reasonably Ask For

Even if readers never attend a summit, they can demand outcomes in the services they touch:
- When AI makes or shapes a decision about you, you deserve an explanation and a way to contest it.
- When a company claims safety, you deserve evidence—testing, audits, and incident reporting.
- When governments claim leadership, you deserve measurable improvements in public systems, not just international positioning.

Conclusion: The Future of AI Governance Won’t Be Photographed

Bletchley Park gave the world a starting vocabulary and a diplomatic template. Seoul moved the conversation closer to operational concepts like risk thresholds while leaning heavily on voluntary commitments. Paris made the geopolitics unmistakable—roughly 60–61 countries signing a declaration while the US and UK refused, alongside headline funding numbers like $400 million, a $2.5 billion target, and the EU’s €200 billion mobilization claim with €20 billion for gigafactories.

Each summit showed motion. None guaranteed traction.

The next phase of AI governance will be judged less by declarations than by what changes on the ground: whether deepfakes become easier to punish, whether automated decisions become easier to appeal, whether bias becomes harder to hide, whether public services become faster and fairer rather than merely cheaper. Summitry can help coordinate that work, but only if it becomes comfortable with measurement and consequences.

A serious public doesn’t need fewer meetings. It needs fewer untestable promises.
About the Author
TheMurrow Editorial writes for TheMurrow, covering opinion.

Frequently Asked Questions

What is the Bletchley Declaration, and why does it matter?

The Bletchley Declaration came out of the UK’s AI Safety Summit at Bletchley Park (Nov 1–2, 2023). It matters because it explicitly recognized AI’s rapid deployment across daily-life domains such as housing, employment, transport, education, health, accessibility, and justice. Critics note it was intentionally broad—more a statement of shared concern than an enforceable plan.

Why do critics call AI summits “performative”?

Critics argue many summits produce non-binding principles, communiqués, and voluntary coalitions without audits, enforcement, budgets, or deadlines. The complaint isn’t that cooperation is bad, but that public-facing spectacle often outpaces measurable improvements—like reducing wrongful automated denials, improving transparency, or clarifying accountability when AI harms people.

What did the Seoul AI Summit add beyond earlier meetings?

The Seoul summit (May 2024) advanced the conversation from broad principles toward shared “risk thresholds” for frontier AI—criteria for when a system’s capabilities could pose severe risks. Reporting highlighted concerns such as enabling chemical/biological misuse or evading human oversight. The major limitation: many company commitments were still voluntary, leaving open who decides thresholds and who audits compliance.

What happened at the Paris AI Action Summit in 2025?

At the Paris AI Action Summit (Feb 2025), the US and UK declined to sign a declaration endorsed by roughly 60–61 countries promoting “open/inclusive/ethical” AI, reflecting a regulatory and political split. Paris also featured major funding and investment claims, including a new foundation with a reported $400 million endowment aiming for $2.5 billion in five years, and the EU’s InvestAI plan to mobilize €200 billion including €20 billion for AI gigafactories.

Why is “AI safety” increasingly framed as “AI security”?

Policy language has shifted as governments worry about AI enabling high-impact misuse—cyber threats and potential chemical/biological risks among them. That security framing can speed up government action and coordination. The downside is narrower focus: everyday harms like discrimination, consumer fraud, misinformation, and bureaucratic errors can get less attention if they’re not treated as strategic threats.

How can the public tell if a summit actually improved AI safety?

Look for auditable outcomes rather than inspirational language: independent testing requirements, clear reporting on incidents, enforceable standards, and transparency about where AI is used in high-stakes settings. If a summit announces “thresholds,” it should also specify who sets them, how systems are evaluated, and what happens when a company fails to comply.
