Enough with the ‘AI summit’ hype—prove it works for regular people
Summits keep producing principles, coalitions, and photo-ops. Regular people keep living with hallucinations, deepfakes, and automated denials that can’t be appealed.

Key Points
1. Demand auditable outcomes—not communiqués—so AI summit promises translate into fewer denials, scams, deepfakes, and biased automated decisions.
2. Track the tradeoffs: “safety” is increasingly framed as “security,” which can sideline everyday harms like discrimination, fraud, and opaque bureaucracy.
3. Insist on governance with teeth: thresholds, independent audits, disclosures, budgets, timelines, and real consequences when companies or states fail.
The invitations arrive with the same promise: a room full of leaders, a shared sense of urgency, and a communiqué that will “shape the future of AI.” The venue changes—Bletchley Park, Seoul, Paris—but the choreography is familiar. Cameras click. Declarations are signed. New coalitions are announced.
Meanwhile, the public lives with the blunt end of automation: a model that hallucinates a legal citation, a deepfake that poisons a local election, a benefits system that can’t explain why someone was denied help. None of that fits neatly on a summit stage. None of it looks good in a group photo.
Governments insist the gatherings matter because AI now sits at the center of economic strategy and geopolitical power. Critics reply with a simpler question: where are the auditable results? If these summits are supposed to make AI safer and more useful, everyday people should be able to point to the change.
“AI summits keep producing principles. The public keeps living with outcomes.”
— TheMurrow Editorial
The backlash against the “AI summit” boom is not anti-cooperation. It’s anti-theater. And the gap between spectacle and measurable benefit is getting harder to ignore.
The Summit Boom: Why Everyone Wants a Seat at the Table
The pressure to convene is partly reputational. When other capitals host summits, non-hosts risk looking passive. A summit offers a visible answer to a complicated problem: a declaration is easier to produce than enforceable oversight, and a pledge is easier than a budget.
Skeptics, however, focus on what summits reliably generate:
- Communiqués and declarations
- Voluntary coalitions and “principles”
- Investment claims and headline numbers
- Photo-ops that convey momentum
Those outputs can be useful, especially when the alternative is diplomatic silence. Yet the core complaint remains the same: the ratio of ceremony to proof is off. When AI harms people—through biased outputs, fraudulent content, or bureaucratic mistakes—ordinary users don’t experience a “principle.” They experience a denial, a scam, or a reputational hit.
Axios captured a key shift in recent policy language: “AI safety” is increasingly framed as security and strategic competition, emphasizing threats such as cyber and biological misuse. That framing can be justified, but it can also narrow attention away from mundane harms like discrimination, consumer fraud, misinformation, and the quiet accumulation of administrative errors. The fight is not only about what AI can do, but what governments choose to measure.
“When ‘safety’ becomes ‘security,’ everyday harms start to look like footnotes.”
— TheMurrow Editorial
Safety vs. Security vs. Competitiveness
A regulatory push that slows deployment may be framed as weakness in global competition. A security-first posture can justify secrecy, narrowing public scrutiny. And a competitiveness-first stance can treat accountability as friction. The summit format often smooths over those conflicts rather than resolving them.
Bletchley Park 2023: The Blueprint for the “AI Safety Summit” Era
The UK’s AI Safety Summit at Bletchley Park in November 2023 produced the Bletchley Declaration, the template every later gathering has copied. The declaration deserves credit for naming where AI already operates in daily life. It explicitly referenced domains including housing, employment, transport, education, health, accessibility, and justice. That specificity matters because it avoids the fantasy that AI is only a laboratory concern or a distant future problem. It is already embedded in institutions that decide who gets hired, housed, treated, and heard.
At the same time, the Bletchley Declaration was designed to be broad. It called for AI to be human-centric, trustworthy, responsible, and developed safely. Those are defensible goals, but the document does not read like a program plan. It contains no budgets, deadlines, or enforcement mechanisms—no details that make progress measurable.
What Bletchley Got Right
Reporting around the summit also highlighted immediate public-interest risks—misinformation, deepfakes, and biased outputs—alongside calls for public education and transparency. Those are not abstract debates. They are the front lines of democratic trust and consumer protection.
The Accountability Problem
Summits can clarify intent, but intent is not a control system. Without audits, reporting duties, and consequences, even well-written principles risk becoming commemorative.
“A declaration can recognize risk. It can’t, by itself, reduce it.”
— TheMurrow Editorial
Seoul 2024: Risk Thresholds, Voluntary Promises, and the Question of Who Decides
Seoul, in May 2024, moved the conversation from broad principles toward shared “risk thresholds” for frontier AI: criteria for when a system’s capabilities could pose severe risks. That shift matters. “Principles” are easy to endorse because they rarely force a choice. A threshold implies a trigger: if a system crosses a line, action follows. The Guardian highlighted the kinds of severe risks under discussion, including capabilities that could help malicious actors with chemical or biological weapons or enable systems to evade human oversight through manipulation or deception.
The Progress: Turning Fear Into Criteria
A threshold framework, at least on paper, asks governments and companies to do three things:
- Define a capability level or misuse potential
- Assess whether a system meets it
- Apply restrictions, pauses, or other interventions
In other words, it gestures toward operational policy rather than moral aspiration; the sketch below makes the difference concrete.
The Weakness: Voluntary Compliance and Missing Audits
Seoul’s commitments, however, were largely voluntary, which leaves the hard questions open:
- Who sets the thresholds? Governments, companies, or a joint body?
- Who audits the claims? Internal teams, third parties, or regulators?
- What happens when firms refuse? Public criticism is not a sanction.
A pause pledge sounds dramatic, but without transparency and verification, the public is asked to accept a promise from the very actors under competitive pressure to keep shipping.
Paris 2025: Big Money, Bigger Politics, and the “Inclusive AI” Split
At the Paris AI Action Summit in February 2025, the US and UK declined to sign a declaration promoting open, inclusive, and ethical AI that roughly sixty other countries endorsed. The refusal became a symbol of a wider transatlantic divide. Europe has leaned toward regulation and trust frameworks. US leadership has signaled a more innovation-first posture and warned against “excessive regulation.” Paris made that disagreement impossible to miss.
The Numbers That Made Paris Feel Different
Paris produced four concrete statistics that summits rarely deliver at once:
- ~60–61 countries endorsing a declaration (minus the US and UK)
- $400 million endowment for a new foundation
- $2.5 billion target within five years
- €200 billion mobilization claim, including €20 billion for gigafactories
Money matters because it can become infrastructure: compute, talent, datasets, procurement pipelines. Yet the critique raised in Fortune’s coverage is the familiar one—unclear targets and roadmaps can reduce headline funding to a form of prestige spending. Investment claims do not automatically translate into public benefit.
What the US/UK Refusal Signaled
The declaration the two countries refused can be read in two ways:
- As a commitment to broad public benefit, access, and ethical deployment
- As a potential vehicle for regulatory constraints that some governments see as strategically costly
Readers should treat both interpretations seriously. Regulation can protect the public; it can also be clumsy, poorly targeted, or captured by incumbents. The Paris split wasn’t a morality play—it was a governance fight.
The Spectacle vs. the Scorecard: What Outcomes Would Actually Prove Progress?
A practical scorecard for summit success would focus on ordinary experiences:
- Faster benefit processing without opaque automated denials
- Fewer wrongful denials and a clear appeal path when automation is involved
- Better public information about where AI is used and how decisions are made
- Safer consumer products with clear accountability when AI causes harm
- Cheaper, more reliable services where automation boosts productivity
- Clearer liability when deepfakes, fraud, or discrimination are enabled by AI systems
None of those goals require science fiction. They require governance that behaves like governance: standards, audits, disclosures, and enforcement.
Why Summits Struggle to Deliver Measurable Results
Part of the problem is structural: communiqués require consensus, and consensus favors language vague enough for everyone to sign. Summits also blur the line between diplomacy and marketing. When investment announcements share the stage with safety language, readers should ask whether “safety” is being used as a reputational shield for industrial strategy.
“Safety” Rebranded as “Security”: What Gets Lost When the Frame Narrows
The risk is not that security concerns are wrong. The risk is that security becomes the only lens that matters. Many of the most common harms are not national-security events; they are cumulative injuries that rarely make headlines:
- Discrimination in housing and employment
- Consumer fraud and impersonation
- Misinformation and deepfakes that corrode trust
- Administrative errors in public services that people cannot contest
Bletchley’s declaration explicitly named everyday domains—housing, employment, education, health, justice. That breadth is a reminder: AI governance must cover more than catastrophic scenarios. A country can secure its borders and still fail its citizens if automated systems quietly degrade fairness and accountability at home.
A Fair Counterpoint
Security and catastrophic risks are real, and coordinating on them internationally is a legitimate use of diplomacy. The challenge is balance: a governance agenda that is only about worst-case threats may neglect the present-day, high-frequency harms already shaping people’s lives.
Practical Takeaways: How to Read the Next Summit Without Being Sold a Story
Use a simple checklist.
What to Look For in Summit Announcements
- ✓ Audits, not just pledges: Who verifies compliance, and how often?
- ✓ Definitions with teeth: If “risk thresholds” are mentioned, who sets them and what triggers action?
- ✓ Disclosure requirements: Will the public learn where AI is used in housing, employment, health, education, and justice?
- ✓ Budgets and timelines: Are commitments funded, scheduled, and assigned to named agencies?
- ✓ Consequences: What happens when a company or government fails to meet the commitment?
What to Treat as Public Relations Until Proven Otherwise
- Broad principles without enforcement
- Coalitions without reporting duties
- Investment claims without project-level roadmaps
- Safety language bundled with competitiveness messaging, with no mention of accountability
What Ordinary People Can Reasonably Ask For
- When AI makes or shapes a decision about you, you deserve an explanation and a way to contest it.
- When a company claims safety, you deserve evidence—testing, audits, and incident reporting.
- When governments claim leadership, you deserve measurable improvements in public systems, not just international positioning.
Conclusion: The Future of AI Governance Won’t Be Photographed
Bletchley named the stakes. Seoul sketched thresholds. Paris raised money. Each summit showed motion. None guaranteed traction.
The next phase of AI governance will be judged less by declarations than by what changes on the ground: whether deepfakes become easier to punish, whether automated decisions become easier to appeal, whether bias becomes harder to hide, whether public services become faster and fairer rather than merely cheaper. Summitry can help coordinate that work, but only if it becomes comfortable with measurement and consequences.
A serious public doesn’t need fewer meetings. It needs fewer untestable promises.
Frequently Asked Questions
What is the Bletchley Declaration, and why does it matter?
The Bletchley Declaration came out of the UK’s AI Safety Summit at Bletchley Park (Nov 1–2, 2023). It matters because it explicitly recognized AI’s rapid deployment across daily-life domains such as housing, employment, transport, education, health, accessibility, and justice. Critics note it was intentionally broad—more a statement of shared concern than an enforceable plan.
Why do critics call AI summits “performative”?
Critics argue many summits produce non-binding principles, communiqués, and voluntary coalitions without audits, enforcement, budgets, or deadlines. The complaint isn’t that cooperation is bad, but that public-facing spectacle often outpaces measurable improvements—like reducing wrongful automated denials, improving transparency, or clarifying accountability when AI harms people.
What did the Seoul AI Summit add beyond earlier meetings?
The Seoul summit (May 2024) advanced the conversation from broad principles toward shared “risk thresholds” for frontier AI—criteria for when a system’s capabilities could pose severe risks. Reporting highlighted concerns such as enabling chemical/biological misuse or evading human oversight. The major limitation: many company commitments were still voluntary, leaving open who decides thresholds and who audits compliance.
What happened at the Paris AI Action Summit in 2025?
At the Paris AI Action Summit (Feb 2025), the US and UK declined to sign a declaration endorsed by roughly 60–61 countries promoting “open/inclusive/ethical” AI, reflecting a regulatory and political split. Paris also featured major funding and investment claims, including a new foundation with a reported $400 million endowment aiming for $2.5 billion in five years, and the EU’s InvestAI plan to mobilize €200 billion including €20 billion for AI gigafactories.
Why is “AI safety” increasingly framed as “AI security”?
Policy language has shifted as governments worry about AI enabling high-impact misuse—cyber threats and potential chemical/biological risks among them. That security framing can speed up government action and coordination. The downside is narrower focus: everyday harms like discrimination, consumer fraud, misinformation, and bureaucratic errors can get less attention if they’re not treated as strategic threats.
How can the public tell if a summit actually improved AI safety?
Look for auditable outcomes rather than inspirational language: independent testing requirements, clear reporting on incidents, enforceable standards, and transparency about where AI is used in high-stakes settings. If a summit announces “thresholds,” it should also specify who sets them, how systems are evaluated, and what happens when a company fails to comply.