One in 4 Americans Say They’ve Heard a Deepfake Voice Call—So Why Are Banks Still Asking for ‘One Last Verification’ Like It Works?
AI voice fraud isn’t just getting better—it fits perfectly into the phone workflows banks still rely on. When voice becomes contestable, “verification” becomes the weakest link.

Key Points
- Recognize the scale shift: a Hiya survey says 1 in 4 Americans encountered a deepfake voice call in 12 months.
- Understand the workflow: vishing plus SMS, spoofed caller ID, and pressure tactics to extract OTPs, resets, or transfer approvals.
- Demote voice as proof: hang up, call back via trusted numbers, never share OTPs, and expect more bank friction as defenses tighten.
A bank representative calls to confirm a transfer. The voice is calm, patient, vaguely familiar—professional in the way call-center voices tend to be. You answer a few “security questions,” then hear the phrase customers have been trained to accept as routine: one last verification.
Except the person on the line might not be a person at all.
Hiya’s State of the Call 2026 consumer survey, published in March 2026, reports that one in four Americans say they received or experienced an AI-generated deepfake voice call in the past 12 months. That’s not a government count, and it shouldn’t be treated as one. Still, the number tracks with an unmistakable shift: voice impersonation is no longer a niche trick reserved for spectacular scams. It’s now common enough that regulators have moved from curiosity to action—and banks are quietly rethinking how they authenticate you.
The deeper problem isn’t only that synthetic voices are getting better. It’s that the phone network and the workflows built on top of it were designed for a world where a voice was a reasonable proxy for a person. That assumption has expired.
The most unsettling part of AI voice fraud isn’t the technology. It’s how well it fits the way phone systems already work.
— TheMurrow
The “1 in 4” deepfake-call claim—and what it does (and doesn’t) prove
Readers should hold two thoughts at once. First: vendor surveys have limits. Methodology matters—how the sample was recruited, how respondents interpreted “deepfake,” and how well they can distinguish an AI voice from a human voice after the fact. Second: the statistic lands because it matches what institutions are seeing in practice. Phone-based social engineering has been accelerating for years; AI simply lowers the cost and raises the hit-rate.
Tech coverage of the report highlights a related, more corrosive reality: many people aren’t sure they could reliably tell an AI voice from a real person on a call. That uncertainty is a gift to fraudsters. A scammer no longer needs to be perfect; they just need you to hesitate.
Why this number matters even if it’s imperfect
- Suspicion becomes the default, even for legitimate calls from banks, hospitals, and schools.
- Call centers face higher verification burdens, slowing service and raising costs.
- Victims become harder to identify, because shame and uncertainty reduce reporting.
Fraud thrives in the gap between what we can prove and what we’re willing to trust.
— TheMurrow
How deepfake voice scams actually work: vishing plus a workflow
A typical workflow looks less like a single magic call and more like a small campaign:
- A text arrives first to create urgency or legitimacy (“fraud alert,” “account locked,” “payment failed”).
- A call follows from a spoofed number, with a voice that sounds like a bank agent, a supervisor, or sometimes someone you personally know.
- The caller asks for one-time passcodes (OTPs), or for you to “confirm” details that can later be used for account takeover.
The FBI has described this multi-channel pattern directly. In a May 15, 2025 PSA from its Internet Crime Complaint Center (IC3), the bureau warned that since April 2025, malicious actors have used AI-generated voice messages plus text messages to impersonate senior U.S. officials, build rapport, and then attempt access to personal or government accounts. The point isn’t that most consumers will be targeted like senior officials. The point is that the playbook is now mainstream: mix channels, impersonate authority, and push for credentials.
Why phone calls remain such a powerful attack channel
- Caller ID is weak proof. Spoofing remains common.
- Calls create time pressure. Real-time conversation outpaces careful verification.
- Organizations train scripts for speed. Many agents are rewarded for resolution time, not adversarial skepticism.
Real-world examples: from political robocalls to targeted impersonation
In early 2024, a fake “Biden” robocall incident drew national attention and was followed by FCC action, widely reported at the time. Political robocalls are not the same as account-takeover scams, but they demonstrate a crucial point: synthetic audio can be deployed at scale, cheaply, and with the intent to manipulate behavior.
A second example underscores how targeted these attacks can be. In July 2025, AP reporting described a State Department cable warning about an impostor using AI to impersonate Secretary of State Marco Rubio via voicemail and messaging apps to contact officials. That kind of impersonation doesn’t rely on fooling everyone. It relies on fooling the right person once.
The uncomfortable truth: how little audio is needed
Modern cloning tools can build a convincing imitation from only a few seconds of recorded speech, which means an outgoing voicemail greeting or a short social clip may be all an attacker needs.
A deepfake voice doesn’t have to sound exactly like you. It has to sound credible to someone who’s in a hurry.
— TheMurrow
Why banks still ask for “one last verification”
Most banks still use layered controls designed for older threats:
- Knowledge-based prompts (date of birth, address, last transactions)
- Device and account history checks
- Callbacks to numbers on file
- OTPs via SMS, email, or an authentication app for step-up verification
None of these controls is automatically “broken.” Deepfake voice changes where the weak points are. If a fraudster has breached personal data or gathered it from previous leaks, knowledge-based questions become less protective. If a fraudster can keep you on the phone while triggering a password reset, an OTP becomes a tool of the scam rather than a defense.
The three-way tradeoff: fraud loss, friction, and accessibility
1. Fraud losses (direct costs, reimbursement disputes, regulatory risk)
2. Customer friction (abandonment, complaints, churn)
3. Accessibility (not every customer can use the latest device-based tools, and not every issue can be resolved in person)
The most secure option—forcing in-person verification for high-risk actions—doesn’t scale and can lock out legitimate customers. So call centers implement checklists and step-up triggers. That’s where the phrase “one last verification” comes from: an agent finishing a defined workflow, often under time pressure.
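To make the tradeoff concrete, here is a minimal sketch of what a call-center step-up policy can look like. Every name, threshold, and signal below is an illustrative assumption, not any bank's actual rules; the point is that "one last verification" is usually the tail end of a rule set like this one.

```python
# Hypothetical step-up policy sketch. All field names, actions, and
# thresholds are invented for illustration, not a real bank's rules.
from dataclasses import dataclass

@dataclass
class CallContext:
    action: str              # e.g. "balance_inquiry", "wire_transfer"
    new_payee: bool          # payee added during this session
    amount: float            # dollar amount for money movement
    device_recognized: bool  # known device/session signal
    inbound_call: bool       # the customer was called, not calling in

HIGH_RISK_ACTIONS = {"wire_transfer", "password_reset", "payee_add"}

def step_up_required(ctx: CallContext) -> bool:
    """Return True when the workflow should escalate beyond
    knowledge-based questions (e.g. app push, callback to number on file)."""
    if ctx.inbound_call:
        return True  # an inbound call is never trusted as identity proof
    if ctx.action in HIGH_RISK_ACTIONS:
        return True
    if ctx.new_payee or ctx.amount >= 5000:
        return True
    return not ctx.device_recognized
```

Note the first rule: the safest designs treat any inbound call as untrusted by default, which is exactly the habit consumers are advised to adopt themselves.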
Tech debt and the slow grind of change
Call-center authentication is wired into IVR menus, agent scripts, vendor contracts, and compliance reviews, so replacing a weak check is a multi-year migration rather than a quick fix. That inertia is why “one last verification” survives long after its assumptions have expired.
Voice biometrics: from frictionless promise to spoofable reality
In May 2023, the U.S. Senate Banking Committee highlighted concerns about AI voice cloning bypassing bank voice authentication, citing multiple reports, including a widely discussed Wall Street Journal account in which a reporter fooled a bank’s voice biometric system. The committee’s attention signals something important: the question is no longer academic. Policymakers view it as a consumer protection issue.
Industry reporting has echoed that shift. American Banker reported in August 2025 that banks were tightening ID checks as deepfakes improve, reflecting a broader debate sparked by public remarks that AI can defeat voiceprint systems.
Voiceprints aren’t “dead,” but the marketing era is over
Institutions that keep voice biometrics increasingly layer them with:
- Additional liveness or challenge-response checks
- Stronger device and session signals
- Higher scrutiny for high-risk transactions (new payees, large transfers)
The deeper lesson is about identity: a voice is now a malleable artifact, not a stable signature.
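The phrase “one signal among several” can be made concrete with a toy scoring function. The weights, threshold values, and signal names below are all invented for illustration; the design point is that a perfect voiceprint match alone cannot clear a high-risk action without liveness and device corroboration.

```python
# Illustrative sketch: a voiceprint match treated as one capped, weighted
# signal rather than a pass/fail gate. All weights and thresholds here
# are invented assumptions, not any vendor's scoring model.
def authentication_score(voice_match: float,
                         device_known: bool,
                         liveness_passed: bool,
                         high_risk_action: bool) -> bool:
    """Combine signals; voice alone (max 0.3) can never reach the
    high-risk threshold (0.8) without corroborating signals."""
    score = 0.0
    score += 0.3 * voice_match          # capped contribution from voice
    score += 0.35 if device_known else 0.0
    score += 0.35 if liveness_passed else 0.0
    threshold = 0.8 if high_risk_action else 0.5
    return score >= threshold
```

Under this toy policy, even a flawless voice match fails a high-risk check on its own, which is the structural change the committee hearings and industry reporting describe.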
What regulators and law enforcement are signaling
The FBI’s May 2025 IC3 warning is especially telling because it describes not merely “deepfakes,” but operational campaigns: AI-generated voice messages paired with texts, used to build trust and then push toward account access. That’s exactly the pattern consumers report in everyday scams—authority, urgency, and a request that seems routine until it’s too late.
Meanwhile, high-profile incidents like the 2024 fake “Biden” robocall show regulators are willing to treat AI audio as a public harm, not just a private annoyance. These actions won’t stop fraud by themselves, but they shape how platforms, carriers, and banks prioritize defenses.
The practical implication for readers
- The technique is cheap enough to use broadly.
- The technique is working often enough to justify the effort.
Treat that as a cue to upgrade your own habits, especially around codes, transfers, and requests that arrive by phone.
What you can do now: a practical playbook for deepfake voice calls
Non-negotiables for consumers
Deepfake-call rules that hold up under pressure
- Never read an OTP aloud to anyone who called you. Banks use OTPs to verify you; scammers use them to verify themselves.
- Hang up and call back using a trusted number (the back of your card, your bank app, or the institution’s official site). Don’t redial from the recent-calls list if you suspect spoofing.
- Slow down any “urgent” request involving money movement, new payees, wire transfers, or gift cards.
- Use app-based authentication when available, rather than SMS, for high-value accounts. (SMS can be intercepted or socially engineered.)
- Create a family passphrase for emergencies. If someone calls claiming to be a relative in trouble, ask for the phrase.
What banks and employers can do without waiting for perfect tech
- Train agents to expect social engineering, not just “identity verification.”
- Make callback procedures more robust and less discretionary.
- Reduce reliance on knowledge-based questions that may be compromised by breaches.
- Give customers clear guidance: “We will never ask you to read a code to us.”
The most effective interventions often sound boring. Boring is good; boring is resilient.
Where this goes next: trust, friction, and the future of phone identity
Banks will continue tightening authentication, but customers will feel the cost as friction: more step-up checks, more app prompts, more “we need to verify one more thing.” Some of that friction will be clumsy, because the underlying systems were built for a different era. Some of it will be necessary.
The broader social cost is harder to measure. When people can’t trust what they hear, they retreat to channels they can verify—or they disengage. That shift will touch everything from elder fraud to customer service to political messaging.
The smart response is not to abandon voice communication, but to demote it as proof. A voice can start a conversation. It can’t finish an authentication.
Frequently Asked Questions
What does “one in four Americans heard a deepfake voice call” actually mean?
It comes from Hiya’s State of the Call 2026 consumer survey (March 2026), which reports one in four Americans say they experienced an AI-generated deepfake voice call in the past 12 months. It’s a survey result, not a government statistic. Its value is as a signal of perceived prevalence and growing public exposure.
How do deepfake voice scams usually steal money or accounts?
Most rely on vishing: a call designed to extract something actionable—one-time passcodes, password reset approvals, account details, or authorization for transfers. The FBI warned in May 2025 that criminals combined AI-generated voice messages with texts to build rapport and then attempt account access. The scam succeeds when you act quickly under pressure.
Are banks’ phone security questions useless now?
Not entirely, but they’re less reliable than they used to be. Knowledge-based questions can be defeated if personal data has been exposed, and OTPs can be defeated if you’re convinced to read the code aloud. Banks still use layered security because they must balance fraud prevention with speed and accessibility, but deepfake audio shifts which layers are most vulnerable.
Can voice biometrics still be trusted?
Voice biometrics aren’t automatically worthless, but static voiceprints are widely treated as spoofable on their own. The U.S. Senate Banking Committee raised concerns in 2023 after reports that AI voice cloning could bypass bank voice authentication. Many institutions now treat voice as one signal among several, especially for high-risk actions.
What’s the safest response if my “bank” calls about fraud?
Assume the call could be spoofed. Hang up and call back using a trusted number—on the back of your card, in your bank app, or on the institution’s official website. Never share an OTP with an inbound caller. If the issue is real, the bank will be able to verify it through official channels.
How can I protect my family from AI voice impersonation scams?
Agree on a simple family passphrase and use it for unexpected urgent calls. If someone claims to be a relative in trouble, ask for the phrase and slow the conversation down. Also set expectations: no one should approve transfers, share codes, or change account details based only on a phone call—no matter how familiar the voice sounds.