America’s Real Security Threat Isn’t Abroad—It’s Our Slow Surrender to Unaccountable AI
The danger isn’t just hostile states or deepfakes. It’s government decisions quietly delegated to systems the public can’t inspect, challenge, or even identify.

Key Points
- Reframe “national security” as democratic resilience when opaque AI systems shape rights, safety, benefits, and due process inside government.
- Track the scale: 1,700+ federal AI use cases (Dec. 16, 2024), including 227 rights/safety-impacting systems—206 received compliance extensions.
- Demand enforceable accountability: inventories, audits, human appeals, and clear liability—because voluntary standards can’t secure high-impact decisions.
Americans are getting used to hearing that artificial intelligence is a “national security” issue. The usual script is familiar: hostile states, cyberattacks, deepfakes, election interference. Those dangers are real. But the more immediate security question is less cinematic and more domestic.
It starts when the government—quietly, routinely—delegates consequential decisions to systems that most citizens cannot inspect, challenge, or even identify. In that scenario, “security” stops meaning protection from foreign threats and starts meaning democratic resilience: the durability of public trust, due process, and institutional competence under pressure.
The federal government has already crossed the threshold from experimenting with AI to operationalizing it at scale. As of December 16, 2024, the consolidated federal inventory reported over 1,700 AI use cases across agencies. Among them were 227 systems classified as “rights-impacting and/or safety-impacting.” Those aren’t chatbots drafting emails. Those are tools potentially shaping access to benefits, safety decisions, and core rights.
“When AI becomes the principal basis for a government decision, accountability isn’t a nice-to-have—it’s the security perimeter.”
— TheMurrow Editorial
AI as a “security” issue: the quiet shift in what we’re defending
Modern national-security doctrine already treats critical infrastructure resilience, cybersecurity, and public trust in information systems as security concerns. AI touches all three. A system that can be manipulated, that fails in brittle ways, or that automates errors at scale doesn’t merely create a “tech problem.” It can destabilize institutions that people rely on when everything else goes wrong.
Unaccountable AI introduces vulnerabilities that look a lot like traditional security risks, just routed through bureaucracy:
- Model manipulation and data poisoning can undermine decision integrity.
- Automation of fraud can overwhelm public programs.
- Scaled disinformation can corrode the shared facts democracy depends on.
- Opaque decision systems can erode legitimacy when people can’t understand or contest outcomes.
Policy debates often define “unaccountable AI” in practical terms: no transparent standards, no independent auditing, unclear legal liability, and weak due process when an AI output becomes a “principal basis” for decisions about rights and safety. The national-security frame fits because each missing safeguard becomes an exploitable weak point—whether by adversaries, criminals, or simple institutional failure.
The democratic risk: governance by black box
Key Insight: The federal government’s AI footprint is bigger—and more measured—than many realize
The consolidated 2024 Federal AI Use Case Inventory is a rare attempt to quantify the scale of AI in federal agencies. Its headline numbers should reset the debate:
- 1,700+ AI use cases reported across federal agencies (as of Dec. 16, 2024).
- 227 of those identified as rights-impacting and/or safety-impacting.
- 206 of those high-impact uses received an extension—up to one year—to meet minimum risk-management requirements.
Those numbers carry two messages at once. First, AI is not a pilot project in Washington; it’s infrastructure. Second, even under a formal accountability memo, agencies needed more time for a large share of the systems deemed most sensitive.
“The inventory is progress—and also a warning label. Government AI is widespread enough to require bookkeeping.”
— TheMurrow Editorial
What “rights-impacting” and “safety-impacting” mean
In federal governance, those labels flag systems that can affect core rights or public safety: decisions tied to benefits, enforcement, or essential services. The designation signals that stronger risk-management practices should apply, because errors or opacity in these systems can cause real harm and undermine due process.
What the inventory still can’t show
An inventory can count systems, but it cannot show how well they perform, how errors are caught, or whether affected people can meaningfully contest outcomes. It also stops at the edge of classified and national-security systems, which agencies may exclude from public reporting.
That doesn’t negate the inventory’s value. It clarifies what transparency can look like—and where it stops.
The accountability architecture that actually existed: OMB Memorandum M-24-10
On March 28, 2024, OMB issued Memorandum M-24-10, creating a governance and risk-management regime for federal agency AI use. It required agencies to build internal accountability structures and to treat high-impact AI differently than low-risk experimentation.
Key elements included:
- Appointment of Chief AI Officers (CAIOs) and governance structures.
- Requirements to inventory AI use cases and identify those that are rights-impacting and/or safety-impacting.
- Adoption of “minimum risk management practices” for those high-impact uses.
The sharpest edge of the memo was its deadline enforcement. For AI already deployed, agencies were required to implement minimum practices by December 1, 2024 (or receive an extension). Without compliance—or an extension—agencies were expected to stop using the system.
That’s a major policy signal: the government recognized that some AI uses are too consequential to operate on trust alone. M-24-10 tried to translate that recognition into operating rules.
The extensions tell a story
One reading is generous: agencies are doing the serious work of documentation, testing, and oversight, and that work takes time. Another is less so: agencies deployed high-impact AI faster than they could govern it. Both can be true. Either way, the extensions underscore an uncomfortable reality: compliance is hard, and the systems are already in motion.
“A compliance deadline is only as strong as the government’s willingness to pause systems that miss it.”
— TheMurrow Editorial
EO 14110: ambitious, then rescinded—yet not erased
Executive Order 14110 on safe, secure, and trustworthy AI was signed on October 30, 2023 and rescinded on January 20, 2025. That arc—build, then rollback—matters for readers trying to understand whether the U.S. is governing AI or improvising. But the more nuanced point is what happens after a rescission. Executive orders can be reversed quickly; the machinery set in motion by them doesn’t always vanish on command.
Processes, inventories, guidance, and standards work that agencies already developed can persist in practice unless they are actively replaced. Even a rescinded order can leave behind bureaucratic infrastructure—sometimes quietly, sometimes inconsistently.
Multiple perspectives: flexibility vs. fragility
One view treats this as healthy flexibility: executive action lets AI policy adapt quickly as the technology and the evidence change. The counterargument is institutional: frequent reversals create fragility. If agencies and the public can’t predict the rules, accountability becomes optional—a matter of political weather. For systems that affect rights and safety, that volatility is itself a security vulnerability.
DHS as a real-world case study: transparency with caveats
DHS describes its 2024 AI inventory as aligned to OMB requirements and explains key exclusions, including limits tied to classified or intelligence-community elements and national security systems. That disclosure is both informative and sobering: even when an agency is transparent, it signals areas where transparency cannot—or will not—reach.
DHS reports it identified 39 safety- and/or rights-impacting AI use cases. It also states it achieved full compliance with required minimum practices for deployed systems, with OMB-approved extensions for some items.
For the public, DHS’s approach shows what “responsible adoption” can look like when an agency tries to count, categorize, and manage AI rather than treat it as scattered IT projects.
Why this matters beyond DHS
DHS is only one agency, but every agency adopting high-impact AI faces the same questions. Just as importantly, DHS’s disclosures reinforce a key point: inventories are not merely administrative paperwork. They are the beginning of due process. People cannot contest what they cannot see.
Standards without teeth: the promise and limits of voluntary frameworks
Alongside M-24-10 sits the National Institute of Standards and Technology’s AI Risk Management Framework (AI RMF 1.0), released on January 26, 2023. NIST’s role matters: it is a trusted convener of technical standards, and it provides a shared language for risk identification and mitigation.
But NIST frameworks are generally voluntary. That voluntary nature is not a flaw; it’s part of how NIST operates. The problem arises when voluntary frameworks are treated as substitutes for enforceable safeguards in high-impact contexts.
Where voluntary guidance helps—and where it can’t
Voluntary frameworks like the AI RMF can:
- Provide common definitions for evaluating AI risk.
- Encourage consistent documentation and testing.
- Raise the baseline of responsible practice among agencies and vendors.
Voluntary standards cannot, on their own:
- Create legal liability for harms.
- Guarantee independent auditing.
- Ensure due process when AI affects rights or safety.
- Prevent agencies from adopting tools without robust evaluation.
That’s why M-24-10 mattered: it attempted to translate a risk-management ethos into required practices for high-impact government AI.
What “accountable AI” should mean in practice (and what readers should demand)
Policy debates commonly treat “unaccountable AI” as a package of missing elements: weak transparency, lack of independent audits, unclear liability, and minimal due process when AI becomes a principal basis for a decision. Turning that into a democratic security posture means insisting on practical guardrails.
Practical takeaways: a citizen’s checklist for consequential government AI
- An inventory entry: Is the system publicly listed, at least at a high level?
- Impact categorization: Is it labeled rights-impacting or safety-impacting?
- Risk-management practices: Has the agency documented testing, monitoring, and failure handling?
- Human review and appeal: Can a person challenge or override an AI-influenced decision?
- Clear responsibility: Is there a named office (such as a CAIO structure) accountable for the system?
Readers don’t need to become machine-learning experts to demand intelligible governance. A democracy runs on knowable procedures. If an agency cannot explain what a system does, why it is reliable, and how a citizen can contest it, then that agency is asking the public to trade rights for convenience.
Multiple perspectives: efficiency vs. due process
Proponents argue that AI makes government services faster, cheaper, and more consistent. Skeptics counter that efficiency gains mean little if errors scale, if bias becomes institutionalized, or if the public is forced into a maze of automated decisions with no meaningful recourse. In high-impact settings, due process is not an administrative burden. It is the system.
“No one should lose rights, safety, or opportunity to a machine they cannot question.”
— TheMurrow Editorial
The security question ahead: will government treat AI as infrastructure or as a shortcut?
The 2024 inventory data—1,700+ use cases, 227 high-impact, 206 extensions—suggests a government both serious about governance and strained by the effort. OMB M-24-10 offered something rare: a structure that connects risk categories to concrete obligations and deadlines. EO 14110, even after its rescission, demonstrated how quickly AI policy can expand—and how quickly it can be reversed.
That volatility is the crux of the national-security argument. A state that relies on opaque systems for consequential decisions without stable accountability rules becomes easier to disrupt—from the outside and from within. Democratic resilience isn’t secured by rhetoric about “trustworthy AI.” It’s secured by boring things: inventories, audits, deadlines, appeals, and liability.
The next phase of AI governance will test whether the federal government can sustain those boring things when politics shifts and vendors promise speed. A public that understands the stakes should insist that whatever rules come next, they preserve one basic principle: no one should lose rights, safety, or opportunity to a machine they cannot question.
Bottom line
The question is not whether government will use AI; it already does, at scale. The question is whether the unglamorous machinery of accountability (inventories, audits, deadlines, appeals, and liability) keeps pace with the systems making consequential decisions.
Frequently Asked Questions
Why is AI governance considered a national security issue?
National security increasingly includes critical infrastructure resilience, cybersecurity, and trust in information systems. AI can introduce systemic vulnerabilities—automation of fraud, scalable disinformation, brittle decision tools, and opaque systems in essential services. When government relies on unaccountable AI in high-impact areas, it can weaken democratic legitimacy and institutional stability.
How much AI is the federal government using?
The consolidated 2024 Federal AI Use Case Inventory (current as of Dec. 16, 2024) reported over 1,700 AI use cases across agencies. It also identified 227 as rights-impacting and/or safety-impacting. Those figures indicate AI is not peripheral—it’s widely embedded in federal operations.
What does “rights-impacting” or “safety-impacting” AI mean?
These labels are used in federal governance to flag AI systems that may affect core rights or public safety—such as decisions tied to benefits, enforcement, or essential services. The designation signals that stronger risk-management practices should apply, because errors or opacity can cause real harm and undermine due process.
What is OMB Memorandum M-24-10, and why does it matter?
Issued March 28, 2024, OMB M-24-10 set a concrete accountability structure for federal agency AI use. It required Chief AI Officers, AI inventories, and minimum risk-management practices for high-impact systems. For already-deployed tools, agencies had to comply by Dec. 1, 2024 (or secure an extension) or stop using them.
What do the compliance extensions mean?
The inventory reports 206 extensions for rights- and/or safety-impacting systems. Extensions can indicate serious implementation work—building documentation, testing, and oversight. They can also signal that agencies deployed high-impact AI faster than they could govern it. Either way, extensions show accountability is difficult at scale.
What happened to Executive Order 14110?
EO 14110 (“Safe, Secure, and Trustworthy AI”) was signed Oct. 30, 2023 and rescinded Jan. 20, 2025, per NIST’s documentation. Even so, some downstream work—agency processes, inventories, and standards efforts—may persist unless replaced, because bureaucratic systems don’t always disappear when a legal directive changes.