
One in 4 Americans Say They’ve Heard a Deepfake Voice Call—So Why Are Banks Still Asking for ‘One Last Verification’ Like It Works?

AI voice fraud isn’t just getting better—it fits perfectly into the phone workflows banks still rely on. When voice becomes contestable, “verification” becomes the weakest link.

By TheMurrow Editorial
April 5, 2026

Key Points

  • Recognize the scale shift: a Hiya survey says 1 in 4 Americans encountered a deepfake voice call in 12 months.
  • Understand the workflow: vishing plus SMS, spoofed caller ID, and pressure tactics to extract OTPs, resets, or transfer approvals.
  • Demote voice as proof: hang up, call back via trusted numbers, never share OTPs, and expect more bank friction as defenses tighten.

A bank representative calls to confirm a transfer. The voice is calm, patient, vaguely familiar—professional in the way call-center voices tend to be. You answer a few “security questions,” then hear the phrase customers have been trained to accept as routine: one last verification.

Except the person on the line might not be a person at all.

1 in 4
Hiya’s State of the Call 2026 survey (March 2026) says one in four Americans report receiving an AI-generated deepfake voice call in the past 12 months.

Hiya’s State of the Call 2026 consumer survey, published in March 2026, reports that one in four Americans say they received or experienced an AI-generated deepfake voice call in the past 12 months. That’s not a government count, and it shouldn’t be treated as one. Still, the number tracks with an unmistakable shift: voice impersonation is no longer a niche trick reserved for spectacular scams. It’s now common enough that regulators have moved from curiosity to action—and banks are quietly rethinking how they authenticate you.

The deeper problem isn’t only that synthetic voices are getting better. It’s that the phone network and the workflows built on top of it were designed for a world where a voice was a reasonable proxy for a person. That assumption has expired.

The most unsettling part of AI voice fraud isn’t the technology. It’s how well it fits the way phone systems already work.

— TheMurrow

The “1 in 4” deepfake-call claim—and what it does (and doesn’t) prove

Hiya’s survey is the clearest recent source for the headline-friendly framing: “one in four Americans” reporting a deepfake voice call in the last year. Hiya says its research covers 12,000+ consumers across multiple countries, with the U.S. statistic representing a national slice, as reported in coverage of the survey’s release.

Readers should hold two thoughts at once. First: vendor surveys have limits. Methodology matters—how the sample was recruited, how respondents interpreted “deepfake,” and how well they can distinguish an AI voice from a human voice after the fact. Second: the statistic lands because it matches what institutions are seeing in practice. Phone-based social engineering has been accelerating for years; AI simply lowers the cost and raises the hit-rate.

Tech coverage of the report highlights a related, more corrosive reality: many people aren’t sure they could reliably tell an AI voice from a real person on a call. That uncertainty is a gift to fraudsters. A scammer no longer needs to be perfect; they just need you to hesitate.

12,000+
Hiya says its State of the Call 2026 research covers 12,000+ consumers across multiple countries, with a U.S. national slice.

Why this number matters even if it’s imperfect

Even a conservative reading implies scale. If a quarter of Americans say they encountered a deepfake voice call, then:

- Suspicion becomes the default, even for legitimate calls from banks, hospitals, and schools.
- Call centers face higher verification burdens, slowing service and raising costs.
- Victims become harder to identify, because shame and uncertainty reduce reporting.
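
For a rough sense of what that share would mean in absolute numbers, here is a back-of-the-envelope sketch. The U.S. adult population figure (roughly 260 million) is an outside assumption used only for illustration; it is not part of the Hiya survey.

```python
# Back-of-the-envelope only: rough scale implied by the "1 in 4" survey figure.
us_adults = 260_000_000        # assumed U.S. adult population, for illustration
share_reporting = 0.25         # Hiya's "one in four" self-reported figure

implied = us_adults * share_reporting
print(f"Implied adults reporting a deepfake voice call: ~{implied / 1e6:.0f} million")
# Prints roughly 65 million, if the self-reported share held nationally.
```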

Fraud thrives in the gap between what we can prove and what we’re willing to trust.

— TheMurrow

How deepfake voice scams actually work: vishing plus a workflow

“Deepfake call” can sound like science fiction—some villain generating celebrity-quality audio in real time. The real version is often more mundane and more effective: vishing, or voice phishing, bolstered by AI voice cloning and supported by other channels like SMS.

A typical workflow looks less like a single magic call and more like a small campaign:

- A text arrives first to create urgency or legitimacy (“fraud alert,” “account locked,” “payment failed”).
- A call follows from a spoofed number, with a voice that sounds like a bank agent, a supervisor, or sometimes someone you personally know.
- The caller asks for one-time passcodes (OTPs), or for you to “confirm” details that can later be used for account takeover.

The FBI has described this multi-channel pattern directly. In a May 15, 2025 PSA from its Internet Crime Complaint Center (IC3), the bureau warned that since April 2025, malicious actors have used AI-generated voice messages plus text messages to impersonate senior U.S. officials, build rapport, and then attempt access to personal or government accounts. The point isn’t that most consumers will be targeted like senior officials. The point is that the playbook is now mainstream: mix channels, impersonate authority, and push for credentials.

May 15, 2025
FBI IC3 warned about AI-generated voice messages paired with texts to impersonate officials and attempt account access.

Why phone calls remain such a powerful attack channel

Phone calls still carry an aura of legitimacy, and the infrastructure hasn’t caught up to modern impersonation:

- Caller ID is weak proof. Spoofing remains common.
- Calls create time pressure. Real-time conversation outpaces careful verification.
- Organizations train scripts for speed. Many agents are rewarded for resolution time, not adversarial skepticism.

Real-world examples: from political robocalls to targeted impersonation

Deepfake voice fraud can be hard to “see,” which is why public incidents matter. They show what’s possible—and how quickly the technology moves from novelty to tool.

In early 2024, a fake “Biden” robocall incident drew national attention and was followed by FCC action, widely reported at the time. Political robocalls are not the same as account-takeover scams, but they demonstrate a crucial point: synthetic audio can be deployed at scale, cheaply, and with the intent to manipulate behavior.

A second example underscores how targeted these attacks can be. In July 2025, AP reporting described a State Department cable warning about an impostor using AI to impersonate Secretary of State Marco Rubio via voice messages (including voicemail) and messaging apps to contact officials. That kind of impersonation doesn’t rely on fooling everyone. It relies on fooling the right person once.

The uncomfortable truth: “How little audio is needed?”

Readers often ask how much audio someone needs to clone a voice. The practical answer, suggested by how these schemes operate, is: not much. A voicemail greeting, a social video, a recorded meeting—any of it can provide raw material. The result doesn’t need to be a flawless Hollywood replica; it only needs to be persuasive long enough to get a code, a password reset, or a bank transfer approved.

A deepfake voice doesn’t have to sound exactly like you. It has to sound credible to someone who’s in a hurry.

— TheMurrow

Why banks still ask for “one last verification”

It’s tempting to blame banks—or to assume frontline workers are careless. The reality is more structural. Many financial institutions operate on reasonable assurance, not absolute proof, because absolute proof is expensive, slow, and sometimes impossible.

Most banks still use layered controls designed for older threats:

- Knowledge-based prompts (date of birth, address, last transactions)
- Device and account history checks
- Callbacks to numbers on file
- OTPs via SMS, email, or an authentication app for step-up verification

None of these controls is automatically “broken.” Deepfake voice changes where the weak points are. If a fraudster has breached personal data or gathered it from previous leaks, knowledge-based questions become less protective. If a fraudster can keep you on the phone while triggering a password reset, an OTP becomes a tool of the scam rather than a defense.
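
To make that shift concrete, here is a minimal sketch of how a phone-channel step-up policy might look in code. It illustrates the layered approach described above under assumed rules; every name, threshold, and trigger is hypothetical and does not describe any bank's actual system.

```python
# Illustrative sketch only: a simplified risk-based step-up decision for a
# phone-channel request. All names, thresholds, and rules are hypothetical.
from dataclasses import dataclass

HIGH_RISK_REQUESTS = {"new_payee", "wire_transfer", "password_reset", "contact_info_change"}

@dataclass
class CallContext:
    request_type: str                    # e.g. "balance_inquiry", "wire_transfer"
    amount_usd: float = 0.0
    caller_id_matches_file: bool = False
    device_recently_seen: bool = False
    knowledge_questions_passed: bool = False

def required_steps(ctx: CallContext) -> list[str]:
    """Extra checks an agent would run before acting on the request.

    Core idea: knowledge questions and a familiar-sounding voice are weak
    signals on their own, so high-risk actions escalate to independent
    channels instead of being completed on the inbound call.
    """
    high_risk = ctx.request_type in HIGH_RISK_REQUESTS or ctx.amount_usd >= 5_000
    if not high_risk and ctx.knowledge_questions_passed and ctx.device_recently_seen:
        return []                        # low-risk: basic checks suffice
    steps = [
        "end the inbound call and call back the number on file",
        "require approval in the authenticated mobile app",
    ]
    if not ctx.caller_id_matches_file:
        steps.append("flag the session for fraud review")
    return steps

if __name__ == "__main__":
    request = CallContext("wire_transfer", amount_usd=12_000)
    for step in required_steps(request):
        print("-", step)
```

The design point is that the rules are fixed before the call starts: an agent under pressure applies a checklist rather than judging a voice.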

The three-way tradeoff: fraud loss, friction, and accessibility

Banks must balance:

1. Fraud losses (direct costs, reimbursement disputes, regulatory risk)
2. Customer friction (abandonment, complaints, churn)
3. Accessibility (not every customer can use the latest device-based tools, and not every issue can be resolved in person)

The most secure option—forcing in-person verification for high-risk actions—doesn’t scale and can lock out legitimate customers. So call centers implement checklists and step-up triggers. That’s where the phrase “one last verification” comes from: an agent finishing a defined workflow, often under time pressure.

Tech debt and the slow grind of change

Authentication systems in large banks are messy by necessity: multiple vendors, legacy telephony, fraud platforms, and compliance constraints. Replacing them isn’t like swapping an app. It’s a regulated, multi-year integration project where failures can lock customers out or create new fraud openings.

Key Insight

The deeper shift isn’t “AI voices are convincing.” It’s that voice is no longer a reliable proxy for a person inside existing phone workflows.

Voice biometrics: from frictionless promise to spoofable reality

For years, voice biometrics were marketed as a smoother alternative to passwords and security questions. You speak; the system recognizes your “voiceprint.” In theory, it’s elegant. In practice, the industry is more cautious now—especially with static voice biometrics, where the system matches your voice to a stored template.

In May 2023, the U.S. Senate Banking Committee highlighted concerns about AI voice cloning bypassing bank voice authentication, citing multiple reports, including a widely discussed Wall Street Journal account in which a reporter fooled a bank’s voice biometric system. The committee’s attention signals something important: the question is no longer academic. Policymakers view it as a consumer protection issue.

Industry reporting has echoed that shift. American Banker reporting from around August 2025 described banks tightening ID checks as deepfakes improve, reflecting a broader debate sparked by public remarks that AI can defeat voiceprint systems.

May 2023
U.S. Senate Banking Committee highlighted concerns that AI voice cloning could bypass bank voice authentication, pushing the issue into consumer protection.

Voiceprints aren’t “dead,” but the marketing era is over

A fair read is not that voice biometrics are useless, but that they can’t be treated as a standalone “you are who you sound like” credential. Systems may need:

- Additional liveness or challenge-response checks (a rough sketch follows this list)
- Stronger device and session signals
- Higher scrutiny for high-risk transactions (new payees, large transfers)
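
As a rough illustration of the first item, here is a minimal challenge-response sketch layered on top of a voiceprint score. The speech-to-text step is a placeholder, and all names and thresholds are hypothetical; real deployments would also analyze the audio itself.

```python
# Illustrative sketch only: a challenge-response check alongside a voiceprint score.
# In a real system the transcript would come from speech-to-text on the live audio.
import secrets
import time

WORDS = ["amber", "river", "falcon", "maple", "granite", "harbor"]
RESPONSE_WINDOW_SECONDS = 10   # a pre-recorded clip cannot know a phrase issued seconds ago

def issue_challenge() -> tuple[str, float]:
    """Pick a random phrase the caller must repeat, and note when it was issued."""
    phrase = " ".join(secrets.choice(WORDS) for _ in range(3))
    return phrase, time.monotonic()

def verify(expected: str, transcript: str, issued_at: float,
           voiceprint_score: float, threshold: float = 0.90) -> bool:
    """Pass only if the voiceprint matches AND the fresh phrase was repeated in time."""
    in_time = (time.monotonic() - issued_at) <= RESPONSE_WINDOW_SECONDS
    phrase_ok = transcript.strip().lower() == expected.lower()
    return in_time and phrase_ok and voiceprint_score >= threshold

if __name__ == "__main__":
    phrase, issued_at = issue_challenge()
    print("Repeat this phrase:", phrase)
    simulated_transcript = phrase  # stand-in for a correct, timely spoken response
    print("Accepted:", verify(phrase, simulated_transcript, issued_at, voiceprint_score=0.95))
```

The limit is worth naming: a real-time voice clone can still repeat a fresh phrase, which is why the device, session, and transaction-risk signals in the rest of the list still matter.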

The deeper lesson is about identity: a voice is now a malleable artifact, not a stable signature.

What regulators and law enforcement are signaling

Regulators rarely move quickly unless the harm is persistent. Over the last few years, their messaging has become clearer: synthetic media is being weaponized, and the public should expect more of it.

The FBI’s May 2025 IC3 warning is especially telling because it describes not merely “deepfakes,” but operational campaigns: AI-generated voice messages paired with texts, used to build trust and then push toward account access. That’s exactly the pattern consumers report in everyday scams—authority, urgency, and a request that seems routine until it’s too late.

Meanwhile, high-profile incidents like the 2024 fake “Biden” robocall show regulators are willing to treat AI audio as a public harm, not just a private annoyance. These actions won’t stop fraud by themselves, but they shape how platforms, carriers, and banks prioritize defenses.

The practical implication for readers

When authorities warn about a method, it often means two things are already true:

- The technique is cheap enough to use broadly.
- The technique is working often enough to justify the effort.

Treat that as a cue to upgrade your own habits, especially around codes, transfers, and requests that arrive by phone.

Editor’s Note

Regulatory warnings are lagging indicators: they usually appear after a tactic is already scalable—and already succeeding.

What you can do now: a practical playbook for deepfake voice calls

The goal isn’t paranoia. The goal is to remove the scammer’s advantage: speed and authority. Most deepfake voice scams still need one of a few things—an OTP, a password reset, a transfer, or access to an account.

Non-negotiables for consumers

Keep these rules simple and consistent:

Deepfake-call rules that hold up under pressure

  • Never read an OTP aloud to anyone who called you. Banks use OTPs to verify you; scammers use them to verify themselves.
  • Hang up and call back using a trusted number (the back of your card, your bank app, or the institution’s official site). Don’t redial from the recent-calls list if you suspect spoofing.
  • Slow down any “urgent” request involving money movement, new payees, wire transfers, or gift cards.
  • Use app-based authentication when available, rather than SMS, for high-value accounts. (SMS can be intercepted or socially engineered; a sketch of how app codes work follows this list.)
  • Create a family passphrase for emergencies. If someone calls claiming to be a relative in trouble, ask for the phrase.
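
For readers curious why app-generated codes are preferred, here is a bare-bones sketch of the time-based one-time password (TOTP) scheme most authenticator apps follow: the code is computed on the device from a stored secret and the clock, so nothing travels over the phone network to be intercepted. This is an RFC 6238-style illustration with a demo secret, not a production implementation.

```python
# Minimal TOTP sketch (RFC 6238 style): codes are derived locally from a shared
# secret and the current time, so there is no SMS message to intercept.
import base64, hashlib, hmac, struct, time

def totp(secret_base32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // period               # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    # Demo secret only; real secrets are provisioned when you enroll the app.
    print(totp("JBSWY3DPEHPK3PXP"))
```

The catch, and the article's larger point, is that even an app-generated code protects nothing if you read it aloud to an inbound caller.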

What banks and employers can do without waiting for perfect tech

Institutions don’t need magical deepfake detectors to improve outcomes. They can:

- Train agents to expect social engineering, not just “identity verification.”
- Make callback procedures more robust and less discretionary.
- Reduce reliance on knowledge-based questions that may be compromised by breaches.
- Give customers clear guidance: “We will never ask you to read a code to us.”

The most effective interventions often sound boring. Boring is good; boring is resilient.

Key Takeaway

The scammer’s edge is urgency. Your edge is policy: fixed rules that don’t change mid-call, even when the voice sounds legitimate.

Where this goes next: trust, friction, and the future of phone identity

The phone call used to be the human channel—the place where nuance lived. Deepfake audio turns that assumption inside out. The future won’t be a world where every call is fake. It will be a world where every call is contestable, and where trust must be established through multiple signals, not a confident voice.

Banks will continue tightening authentication, but customers will feel the cost as friction: more step-up checks, more app prompts, more “we need to verify one more thing.” Some of that friction will be clumsy, because the underlying systems were built for a different era. Some of it will be necessary.

The broader social cost is harder to measure. When people can’t trust what they hear, they retreat to channels they can verify—or they disengage. That shift will touch everything from elder fraud to customer service to political messaging.

The smart response is not to abandon voice communication, but to demote it as proof. A voice can start a conversation. It can’t finish an authentication.

Frequently Asked Questions

1) What does “one in four Americans heard a deepfake voice call” actually mean?

It comes from Hiya’s State of the Call 2026 consumer survey (March 2026), which reports one in four Americans say they experienced an AI-generated deepfake voice call in the past 12 months. It’s a survey result, not a government statistic. Its value is as a signal of perceived prevalence and growing public exposure.

2) How do deepfake voice scams usually steal money or accounts?

Most rely on vishing: a call designed to extract something actionable—one-time passcodes, password reset approvals, account details, or authorization for transfers. The FBI warned in May 2025 that criminals combined AI-generated voice messages with texts to build rapport and then attempt account access. The scam succeeds when you act quickly under pressure.

3) Are banks’ phone security questions useless now?

Not entirely, but they’re less reliable than they used to be. Knowledge-based questions can be defeated if personal data has been exposed, and OTPs can be defeated if you’re convinced to read the code aloud. Banks still use layered security because they must balance fraud prevention with speed and accessibility, but deepfake audio shifts which layers are most vulnerable.

4) Can voice biometrics still be trusted?

Voice biometrics aren’t automatically worthless, but static voiceprints are widely treated as spoofable on their own. The U.S. Senate Banking Committee raised concerns in 2023 after reports that AI voice cloning could bypass bank voice authentication. Many institutions now treat voice as one signal among several, especially for high-risk actions.

5) What’s the safest response if my “bank” calls about fraud?

Assume the call could be spoofed. Hang up and call back using a trusted number—on the back of your card, in your bank app, or on the institution’s official website. Never share an OTP with an inbound caller. If the issue is real, the bank will be able to verify it through official channels.

6) How can I protect my family from AI voice impersonation scams?

Agree on a simple family passphrase and use it for unexpected urgent calls. If someone claims to be a relative in trouble, ask for the phrase and slow the conversation down. Also set expectations: no one should approve transfers, share codes, or change account details based only on a phone call—no matter how familiar the voice sounds.

7) What’s the bigger takeaway for the next few years?

Expect more friction and more verification. The key shift is cultural: a convincing voice is no longer strong evidence of identity. Deepfake audio makes trust negotiable, so the safest systems—and habits—treat voice as a starting point, then confirm identity through independent channels and signals.
About the Author
TheMurrow Editorial covers trends for TheMurrow.
