
85% of Tech Leaders Say Speed Beats AI Governance—So Why Are CFOs About to Treat ‘Shadow AI’ Like Off‑Balance‑Sheet Debt?

EY’s poll, published in March 2026, shows speed-first AI becoming standard operating procedure, even as 52% of department-level AI initiatives run without oversight and leaks mount. Shadow AI isn’t a fringe behavior; it’s a structural incentive problem.

By TheMurrow Editorial
March 21, 2026

Key Points

  • EY’s poll, published March 4, 2026, finds 85% of tech leaders choose speed-first AI releases, pushing governance into production under pressure.
  • 52% of department-level AI initiatives run without formal oversight, while 78% of leaders say adoption is outpacing risk capacity; shadow AI becomes structural.
  • Leaks are already showing up: EY cites 45% of executives reporting sensitive data leak signals and 39% reporting IP leak signals tied to unauthorized genAI tools.

Speed has always been a Silicon Valley virtue. What’s changed is the thing it’s outrunning.

EY’s poll shows speed is beating governance—by a lot

In a February 2026 poll of 500 U.S. technology-industry leaders at companies with 5,000+ employees, EY found that 85% prioritize speed-to-market and iterative innovation, choosing to manage regulatory and ethical risk “in a real-world environment” as products evolve. Only 15% said they prefer exhaustive pre-launch vetting and total regulatory alignment. EY published the results on March 4, 2026.

That single statistic would be provocative on its own. EY’s accompanying numbers make it more unsettling—and more useful—because they show what “speed-first” looks like inside large organizations right now: 52% of department-level AI initiatives are operating without formal approval or oversight, while 78% of leaders say AI adoption is outpacing their organization’s ability to manage business risks.

The result is a familiar phenomenon in an unfamiliar form: teams doing what they feel they must, with tools they can get, under deadlines they can’t move. The old term was shadow IT. The new one is shadow AI—and it’s less about unapproved software than unapproved decisions made with untraceable data flows.

“Shadow AI isn’t a side effect of innovation; it’s what innovation looks like when governance is optional.”

— TheMurrow Editorial
85%
In EY’s February 2026 poll, 85% of tech leaders prioritized speed-to-market and iterative innovation over exhaustive pre-launch governance alignment.
52%
EY found 52% of department-level AI initiatives are operating without formal approval or oversight—an organizational-scale signal, not a rounding error.
78%
78% of leaders told EY AI adoption is outpacing their organization’s ability to manage business risks—governance capacity is lagging behind deployment.

The 85% headline isn’t hype—it’s a roadmap to the governance gap

EY’s “85%” figure lands because it doesn’t read like a slogan. It reads like a decision framework. Leaders are not saying governance doesn’t matter; they are saying governance will be handled later, in production, under pressure.

That posture carries an internal logic. AI tools—and especially generative AI—can be evaluated in lab conditions, but their real impact emerges when they are embedded into workflows: customer support scripts, marketing copy, internal search, code reviews, pricing analysis, HR screening, contract summarization. The promise is compounding productivity. The risk is compounding error.

EY’s numbers suggest many organizations are betting they can steer after accelerating. Consider two data points in the same release:

- 52% of department-level AI initiatives operate without formal approval or oversight.
- 78% of leaders say adoption is outpacing risk management capacity.

Those are not abstract concerns. They describe a system where adoption is a managerial KPI and governance is a backlog item.

CIO Dive picked up the story on March 9, 2026, summarizing the trend as leaders choosing speed over governance and tying it directly to “shadow AI.” The word “shadow” matters: it implies activity outside formal visibility, not merely outside formal policy.

“When speed becomes strategy, shadow AI becomes structure.”

— TheMurrow Editorial

A fair reading: speed isn’t reckless—until it is

A speed-first approach isn’t automatically negligent. Software has long been built iteratively, and many industries manage risk through monitoring, incremental rollout, and post-launch controls.

The difference with AI is that iterative release can mean iterative exposure: exposure of sensitive data, exposure to regulatory scrutiny, and exposure to downstream decisions made from model output. The “move fast” reflex meets a technology that can quietly ingest more than intended—and produce outputs that look authoritative even when they’re wrong.

What “shadow AI” actually means—and why it’s riskier than shadow IT

The International Bar Association describes shadow AI as the use of AI tools—often generative AI—without formal approval, oversight, or governance, outside IT, security, and compliance controls. That definition is broad enough to cover a lot of everyday behavior, including the kind employees don’t even think of as “using AI”:

- Drafting client emails or proposals in a consumer chatbot
- Summarizing internal meeting notes using a personal account
- Pasting source code or error logs into a public model for debugging
- Uploading documents to an AI tool via a browser extension or connector

Shadow AI is often framed as an extension of shadow IT, but the risk profile is different. Shadow IT historically raised issues like unpatched software, unmanaged licenses, and unsupported integrations. Shadow AI adds something more direct: data exfiltration by copy/paste and derivative outputs that can be hard to trace back to the inputs.

The new hazard: leakage and “ghost provenance”

Shadow AI risk has two phases.

First comes data movement. Employees paste or upload internal documents, customer information, or proprietary code into tools that may not be governed by corporate controls. The IBA notes the core danger is not just unapproved software—it’s the ease with which valuable or regulated data can leave the organization.

Second comes decision-making. AI outputs—summaries, recommendations, rewritten clauses, proposed pricing, draft performance reviews—can flow into real work. Later, when someone asks “Why did we do that?”, the provenance may be vague: a prompt, an output, a copied paragraph, a hunch that “the model said so.”

“Shadow AI turns corporate memory into paste—and accountability into fog.”

— TheMurrow Editorial

How widespread is shadow AI? The numbers vary—and that’s the point

If you’re looking for one definitive prevalence figure, you won’t find it. Estimates differ because studies measure different populations (employees vs. executives), use different methods (self-report vs. organizational reporting), and define “unapproved” differently.

That variance is not a weakness; it’s a clue. Shadow AI is not a single behavior. It’s a spectrum of informal AI use, from harmless drafting help to high-stakes data exposure.

Employee self-report: high usage, high sensitivity

A Cybernews survey reported that 59% of surveyed U.S. employees admitted using unapproved AI tools at work. More striking: 75% of those using shadow AI admitted to sharing sensitive data.

Self-reported surveys have limitations—respondents may misunderstand what counts as “approved,” and sample composition matters. Still, the pattern is consistent with what many security teams observe: policies may exist, but convenience wins under deadline.

At the other end of the range, Fujitsu’s corporate blog cited a Deloitte AI Institute survey in which 20% of 2,000 employees reported using shadow AI. That lower figure doesn’t contradict the higher one so much as show how definition and context change outcomes. A strict definition of “shadow AI” will produce a lower number than a broad definition that includes any personal-account use.

Forbes Tech Council (citing Gusto) adds another lens: 45% of U.S. workers have used AI at work without disclosing it, and 69% were not fully transparent about how they use it. That’s grey literature, not peer-reviewed research, but it fits the broader narrative: hidden use is common even when workers believe they are being pragmatic rather than deceptive.

Executive reporting: shadow AI at organizational scale

EY’s data provides a different kind of signal: not what workers admit, but what leaders see in the aggregate.

- 52% of department-level AI initiatives operating without formal approval or oversight is not a rounding error.
- 78% saying adoption outpaces risk management indicates leaders know the train is moving faster than the track maintenance.

The overlap between employee behavior and executive posture is where shadow AI becomes durable: employees adopt because tools are accessible; leaders tolerate because speed is rewarded.

The consequences are already visible: leaks, IP loss, and the quiet cost of “unauthorized”

Shadow AI discussions often drift into hypothetical catastrophe. EY’s findings bring the topic back to documented damage—damage that appears, notably, to be tied to the most mundane behavior: employees using unauthorized third-party tools.

According to EY, in the past 12 months:

- 45% of tech executives reported confirmed or suspected sensitive data leaks due to employees using unauthorized third-party genAI tools.
- 39% reported confirmed or suspected proprietary IP leaks for the same reason.

Those are executive-level acknowledgments of harm, not speculative risk registers. Even the phrasing—“confirmed or suspected”—is telling. It suggests many organizations cannot fully see what left, when, and through which channel. Suspicion itself becomes a cost: more audits, more legal review, more restrictions, more time spent disentangling what’s now intertwined.
45%
EY reports 45% of tech executives saw confirmed or suspected sensitive data leaks tied to unauthorized third-party genAI tool use in the past 12 months.
39%
EY reports 39% of tech executives saw confirmed or suspected proprietary IP leaks due to unauthorized third-party genAI tool use in the past 12 months.

Case study: the everyday “helpful” prompt that becomes a reportable incident

A common shadow AI scenario looks like this:

1. An employee faces a deadline and uses a personal AI account to summarize a document, debug code, or draft customer messaging.
2. They paste in context to improve output quality—sometimes far more context than necessary.
3. The output is shared internally or sent to a customer.
4. Later, someone realizes sensitive data may have been included in prompts or uploads, and the organization cannot confidently reconstruct what was exposed.

No melodrama. No malice. Just a workflow that turns speed into risk.
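
Where would a guardrail have intervened? Here is a minimal sketch, in Python, of the kind of pre-send check a sanctioned AI gateway could run before a prompt leaves the organization. The patterns and names (SENSITIVE_PATTERNS, scan_prompt) are illustrative assumptions, not a vetted DLP ruleset or any vendor’s product.

```python
import re

# Illustrative patterns only; a real DLP ruleset would be far broader.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789."
hits = scan_prompt(prompt)
if hits:
    # Block or redact before anything reaches an external model.
    print("Blocked: prompt contains " + ", ".join(hits))
```

Even a crude check changes the failure mode: the risky paste is stopped or redacted at the moment of use, instead of reconstructed weeks later.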

Another cost: bad outputs that look “good enough”

Not all harm is leakage. Some is decision quality.

Shadow AI outputs can slide into business processes without validation: a persuasive summary that omits a key clause, a rewritten policy that subtly changes meaning, a draft performance review that introduces bias-laden phrasing, or an analysis that sounds quantitative but rests on assumptions nobody reviewed.

Organizations often discover these failures only after something breaks: a customer complains, a contract dispute emerges, or a regulator asks for documentation. Shadow AI turns basic questions—“Who wrote this?” “What sources were used?”—into expensive detective work.

Why speed keeps winning: incentives, friction, and the myth of “we’ll govern later”

Executives don’t prioritize speed because they dislike governance. They prioritize speed because they are competing—often with peers who are also deploying quickly and learning publicly.

EY’s data is effectively a snapshot of incentives. If 85% of leaders choose iterative release over exhaustive pre-launch alignment, governance becomes something you do while moving. That can work if governance is built into the motion. It fails when governance is treated as a checkpoint you can revisit once the product is “stable.”

The friction problem: policies without paths

Cybernews noted a common organizational mismatch: companies may publish AI policies but fail to provide approved tools or clear workflows. That creates predictable outcomes:

- Employees still need to do the work.
- Consumer tools are a click away.
- The employee’s intent may be productivity, not defiance.

When compliance reads as “no,” and delivery reads as “now,” people route around the system.

Leaders are using shadow AI too

CIO.com reported that “roughly half of employees are using unsanctioned AI tools,” adding a pointed detail: enterprise leaders are major culprits. That matters because shadow AI isn’t merely grassroots behavior; it can be cultural. If senior staff casually paste sensitive content into consumer tools, “approved use” becomes a fiction.

The uncomfortable truth is that governance cannot succeed as a document. It needs to be a product: usable, fast, and aligned with how people actually work.

Key Insight

Shadow AI persists where incentives reward shipping, approved tools lag real workflows, and governance is treated as a later-stage checkpoint instead of an embedded operating system.

A pragmatic governance model: speed with guardrails, not speed versus guardrails

The most useful response to shadow AI is not a blanket ban. Bans tend to drive the behavior further underground, where visibility and learning drop even as usage persists.

A better approach is to make the approved path easier than the unapproved path. That sounds simple. It isn’t. But it’s achievable when governance is treated as an operational discipline rather than a legal afterthought.

Practical steps that reduce shadow AI without killing momentum

Organizations serious about speed and safety typically focus on three levers:

- Access: Provide approved AI tools that meet security requirements and are easy to use. When employees have a sanctioned option that works, fewer will reach for personal accounts.
- Clarity: Define what data can and cannot be used with AI tools. Policies should be short, specific, and tied to examples employees recognize (a minimal policy sketch follows this list).
- Accountability: Require visibility for AI initiatives—especially those embedded in business processes—so “department-level projects” aren’t invisible until something goes wrong.
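
What “clarity” can look like in practice is policy expressed as data that both a gateway and a person can read. A minimal sketch under that assumption; the tool names and data categories here are invented for illustration:

```python
# Illustrative policy-as-data sketch: sanctioned tools and data rules
# live in one place a gateway (or a human) can consult.
POLICY = {
    "sanctioned_tools": ["corp-chat", "corp-code-assist"],
    "data_rules": {
        "public_marketing_copy": "allowed",
        "internal_docs": "sanctioned_tools_only",
        "customer_pii": "never",
        "proprietary_code": "never",
    },
}

def is_use_allowed(tool: str, data_category: str) -> bool:
    rule = POLICY["data_rules"].get(data_category, "never")  # default-deny
    if rule == "allowed":
        return True
    if rule == "sanctioned_tools_only":
        return tool in POLICY["sanctioned_tools"]
    return False

print(is_use_allowed("corp-chat", "internal_docs"))        # True
print(is_use_allowed("consumer-chatbot", "customer_pii"))  # False
```

The design choice that matters is the default: unknown data categories fall through to “never”, so silence in the policy reads as no, not maybe.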

EY’s “52% without oversight” statistic is a gift, in a way: it tells leaders exactly where to start. If half of departmental projects are outside formal approval, governance must move closer to the department level. Centralized oversight alone won’t keep up.

The counterargument: governance slows learning

Some leaders will object that heavier governance kills experimentation. They’re not wrong—bad governance does.

The goal is not to eliminate iteration. It’s to shorten the path from iteration to acceptable risk. That often means pre-approving certain low-risk use cases (drafting non-sensitive text, summarizing public information) while putting stricter controls around high-risk categories (customer data, regulated information, proprietary code, HR decisions).
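
One way to make that concrete is to route use cases by tier, so pre-approved work clears instantly and only high-risk work waits for review. A hedged sketch; the tier contents mirror the examples above, but the taxonomy is illustrative, not EY’s:

```python
# Illustrative use-case tiering: low-risk categories are pre-approved,
# high-risk categories require human review. Tier labels are invented.
LOW_RISK = {"draft_nonsensitive_text", "summarize_public_info"}
HIGH_RISK = {"customer_data", "regulated_info", "proprietary_code", "hr_decisions"}

def review_path(use_case: str) -> str:
    if use_case in LOW_RISK:
        return "pre-approved"      # no waiting, no ticket
    if use_case in HIGH_RISK:
        return "requires review"   # routed to a named owner
    return "needs classification"  # unknown cases default to triage

for case in ("summarize_public_info", "hr_decisions", "pricing_analysis"):
    print(case, "->", review_path(case))
```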

Speed and governance are not opposites. They’re competing timelines—and the job is to align them.

Editor’s Note

The article’s core tension isn’t “move fast and break things” versus bureaucracy. It’s whether governance is designed to match the speed of modern AI-enabled workflows.

What tech leaders should do next: a sober checklist for the next 90 days

Shadow AI thrives in ambiguity. The next quarter is long enough to create clarity without pretending you can redesign everything.

For executives: measure what you’re currently tolerating

Use EY’s framing as a benchmark. Ask:

- How many AI initiatives are running at the department level, and how many have formal approval?
- Do teams have sanctioned tools that meet their needs, or are policies effectively “bring your own AI”?
- When a leak is suspected, can you trace what data was shared and where?

If 78% of leaders say risk management is behind adoption, then “we’ll handle it later” is no longer a plan—it’s a confession.

For security and compliance: stop aiming only at prohibition

Focus on visibility, not just restriction. Shadow AI becomes manageable when the organization can answer basic questions quickly: who used which tool, with what data category, for what purpose.

For managers: treat AI use as a workflow, not a moral issue

Employees using unapproved tools are often responding to incentives you control: deadlines, staffing, review cycles, and expectations of responsiveness. If the approved route is slow or unclear, they will choose the route that lets them ship.

The most responsible move many managers can make is to ask their teams a direct, non-punitive question: “Where are you already using AI to get your work done?” Then fix the path that made secrecy feel necessary.

A 90-day visibility-and-control checklist

  • Inventory department-level AI initiatives and flag anything running without formal approval or oversight
  • Publish sanctioned tools that are as fast to access as consumer alternatives—and meet security/compliance requirements
  • Define short, specific data rules with examples employees recognize (what’s allowed, what’s never allowed)
  • Add lightweight accountability so AI embedded in business processes has an owner, purpose, and review path
  • Test incident response: if a leak is suspected, verify you can trace tool usage and data categories quickly (see the sketch below)
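
That last item is only testable if usage is captured in a structured way. A minimal sketch of the kind of log record and trace query that would make it answerable in minutes rather than weeks; the field names are assumptions, not a standard schema:

```python
# Minimal sketch of a structured AI-usage log record and a trace query.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class UsageEvent:
    timestamp: datetime
    user: str
    tool: str
    data_category: str  # e.g. "public", "internal", "customer_pii"
    purpose: str

def trace(events: list[UsageEvent], category: str, days: int = 90) -> list[UsageEvent]:
    """Who sent data in `category` to which tool, and why, in the window?"""
    cutoff = datetime.now() - timedelta(days=days)
    return [e for e in events
            if e.data_category == category and e.timestamp >= cutoff]

log = [
    UsageEvent(datetime.now(), "jdoe", "corp-chat", "internal", "summarize notes"),
    UsageEvent(datetime.now(), "asmith", "consumer-chatbot", "customer_pii", "draft reply"),
]
for e in trace(log, "customer_pii"):
    print(e.user, e.tool, e.purpose)
```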

Conclusion: The real choice isn’t speed or governance—it’s whether shadow AI runs your company

EY’s March 2026 poll doesn’t reveal a sudden ethical collapse in tech. It reveals something more ordinary and more actionable: leadership incentives still reward delivery, and AI makes delivery easier—until it makes risk invisible.

The headline number—85% choosing speed-to-market over exhaustive pre-launch alignment—matters because it sets the tone for everything that follows. If speed is the default, then ungoverned use is not an anomaly. It is the predictable, structural outcome.

Shadow AI will not be solved by scolding employees or drafting another policy memo. It will be solved when organizations make the safe path fast, and the fast path safe—so innovation doesn’t depend on secrecy.
About the Author
TheMurrow Editorial writes for TheMurrow, covering business & money.

Frequently Asked Questions

What does “shadow AI” mean in practice?

Shadow AI typically refers to employees or teams using AI tools without formal approval, oversight, or governance. The International Bar Association describes it as AI use outside IT/security/compliance controls. In practice, it often looks like workers using consumer chatbots on personal accounts for business tasks and pasting internal documents or data into prompts.

Is shadow AI really widespread, or is it just media hype?

Multiple surveys suggest it’s common, though estimates vary. Cybernews reported 59% of surveyed U.S. employees used unapproved AI tools at work, while a Deloitte AI Institute survey cited by Fujitsu found 20% of 2,000 employees reported shadow AI use. Differences reflect methodology and definitions, but the pattern—meaningful unsanctioned use—is consistent.

Why is shadow AI considered riskier than shadow IT?

Shadow IT often creates risks through unmanaged software and integrations. Shadow AI adds a more direct risk: data can leave the organization via copy/paste, uploads, or connectors. The IBA also highlights that AI outputs can be reused in business processes, making it hard to trace how decisions were made and what data shaped them.

What evidence shows real harm from shadow AI already happening?

EY reported that 45% of tech executives saw confirmed or suspected sensitive data leaks in the last 12 months due to employees using unauthorized third-party genAI tools, and 39% reported confirmed or suspected proprietary IP leaks for the same reason. Those figures indicate the risk is not theoretical.

What does the “85% of tech leaders” statistic actually say?

EY’s Technology Pulse Poll (conducted February 2026, published March 4, 2026) found 85% of technology leaders prioritize speed-to-market and iterative innovation, managing regulatory/ethical risk as technology evolves in real-world use. 15% prioritize exhaustive pre-launch vetting and total regulatory alignment.

Does stronger AI governance automatically slow innovation?

Not necessarily. Poorly designed governance can slow teams down, but practical governance can enable faster, safer iteration by clarifying approved tools, allowed data use, and oversight for high-risk deployments. The aim is to make the compliant path the easiest path, reducing the incentive for employees to work around controls.
