Your ‘Passkey’ Can Still Be Phished—The One Account-Recovery Shortcut Making Big Tech’s Passwordless Push Weirdly Fragile in 2026
Passkeys largely beat classic phishing—but attackers don’t need to steal your passkey. They just need to shove you into recovery and “Try another way” fallbacks that still accept phishable codes.

Key Points
- Understand the loophole: passkeys resist classic credential phishing, but attackers can still win by forcing recovery or “Try another way” fallbacks.
- Watch for downgrade tactics: SMS/email OTPs, recovery codes, and support workflows often become the weakest path that still grants account access.
- Reduce your exposure: prefer passkey-only, store recovery codes securely offline, and treat phone-number recovery as a high-risk security perimeter.
Passkeys arrived with a promise that sounded almost too clean for the modern internet: no more passwords, no more phishing. Your face or fingerprint unlocks a cryptographic key stored in secure hardware, and the web finally gets a login system built for the way attacks actually happen.
The promise is real. Passkeys—properly implemented—shut down the classic scam where a fake site harvests your password and uses it later. A passkey can’t be “typed” into the wrong place because nothing is being typed. The browser and your device do the exchange, and the credential is tied to the legitimate domain.
Yet people keep asking an uncomfortable question: “Can passkeys still be phished?” The most honest answer is also the most frustrating one. Attackers can’t usually steal your passkey the way they steal a password. They can still steal your account by pushing you away from the passkey and into the “helpful” back doors platforms keep around for recovery.
“Passkeys don’t fail the phishing test. Account recovery does.”
— TheMurrow Editorial
Passkeys, precisely: what you’re actually using
A passkey matters because it changes what a “secret” looks like. Passwords are reusable strings. Anyone who sees the string can replay it. Passkeys use a private key that never leaves your device, combined with a protocol that proves you have that key without revealing it. No one can shoulder-surf your “secret.” Nothing meaningful appears in a text field.
MDN’s passkeys guidance emphasizes the reason security people like them: origin binding. WebAuthn credentials are tied to a specific site identity (the origin/RP ID). A look‑alike phishing domain can mimic the design of your bank, but it can’t convince your device to use the bank’s credential on the attacker’s domain.
That isn’t a small improvement. Origin binding directly targets the most common consumer attack pattern: trick the user into authenticating to the wrong place and reuse what they typed. With passkeys, there’s no reusable secret to harvest, and the wrong site can’t call the right credential.
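To make origin binding concrete, here is a minimal, hypothetical Python sketch. Real passkeys sign with a private key held in secure hardware via the WebAuthn API; an HMAC stands in for that signature so the example runs with only the standard library, and the `Authenticator` class and its methods are illustrative, not any real API.

```python
import hashlib
import hmac
import json
import secrets
from typing import Optional

class Authenticator:
    """Holds one credential per site; the key never leaves the 'device'."""

    def __init__(self) -> None:
        self._keys: dict = {}  # rp_id -> secret key

    def register(self, rp_id: str) -> None:
        self._keys[rp_id] = secrets.token_bytes(32)

    def sign(self, rp_id: str, challenge: bytes, origin: str) -> Optional[dict]:
        # The browser, not the user, supplies the origin. A credential
        # registered for bank.example is never offered to a look-alike
        # domain, so a phishing page gets nothing to relay.
        if origin != f"https://{rp_id}":
            return None
        client_data = json.dumps(
            {"challenge": challenge.hex(), "origin": origin}
        ).encode()
        sig = hmac.new(self._keys[rp_id], client_data, hashlib.sha256).digest()
        return {"client_data": client_data, "sig": sig}

device = Authenticator()
device.register("bank.example")
challenge = secrets.token_bytes(16)

ok = device.sign("bank.example", challenge, "https://bank.example")
phish = device.sign("bank.example", challenge, "https://bank-example.help")

assert ok is not None   # legitimate origin: an assertion is produced
assert phish is None    # look-alike origin: nothing for the phisher
```

The check happens before any secret is used: the look-alike domain never even gets a signature to forward, which is the property a typed password can never have.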
“A password can be copied. A passkey has to be present—and it has to be present for the right domain.”
— TheMurrow Editorial
Why “phishing-resistant” doesn’t mean “phish-proof”
Can an attacker steal my passkey like they steal a password? Almost never: the private key never leaves your device. Can an attacker still trick me into giving them account access anyway? Yes. The usual routes bypass the passkey entirely:
- SMS or email one-time codes (OTPs)
- Recovery codes
- Account recovery workflows run through support or automated prompts
- Other alternate sign-in options that bypass passkeys
Ars Technica’s reporting puts the consumer reality bluntly: even when platforms market “passwordless,” most still maintain multiple fallbacks—SMS, email, device-based recovery, trusted contacts—because people lose phones and forget PINs. Those fallbacks become the attacker’s favorite entrance, precisely because the passkey itself is doing its job.
So the phrase “passkeys can still be phished” usually means something more specific: attackers can downgrade you. If they can’t get the passkey, they try to route you into a path that accepts something phishable—like a code you can be talked into reading aloud.
The Achilles’ heel: recovery and fallback paths
Platforms respond with recovery channels designed for availability. Common options include:
- A registered phone number
- A registered email
- Recovery codes printed or stored somewhere
- Trusted contacts or social recovery
- Support-driven account restoration
Those are not inherently reckless choices. They’re attempts to answer a brutal question: how do you prove you’re you when you’ve lost the very device that proves it?
Security guidance increasingly treats those channels as the primary risk. NIST’s Digital Identity Guidelines (SP 800-63B, 2024 draft update) treat PSTN/SMS out-of-band methods in a restricted way and explicitly call out risks including SIM change/number porting and malware that reads SMS secrets. Translation: SMS is not merely “less secure.” It has known, repeatable failure modes that attackers understand.
The FIDO Alliance, in a 2025 paper on the journey to preventing phishing, warns about social engineering that “downgrade[s] account security level.” The point isn’t theoretical. If an attacker can’t beat the strongest door, they look for a side entrance labeled “Forgot your phone?”
“Security isn’t what your best login method can do. Security is what your weakest recovery method allows.”
— TheMurrow Editorial
How downgrade attacks work in the real world
The classic move: “We can’t verify your passkey—use this code”
The “another way” often means an OTP. The attacker can request a real OTP from the real service, then persuade the victim to share it. The passkey remains secure; the account session becomes compromised.
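The mechanics are easy to see in code. This hypothetical sketch (the `OtpService` class is invented for illustration) shows why a one-time code is phishable even when it is generated legitimately: it is a bearer secret, and the service cannot tell who is typing it.

```python
import secrets

class OtpService:
    """Toy service that issues and redeems one-time codes."""

    def __init__(self) -> None:
        self._pending: dict = {}  # username -> outstanding code

    def send_code(self, user: str) -> str:
        code = f"{secrets.randbelow(10**6):06d}"
        self._pending[user] = code
        return code  # delivered by SMS/email in reality

    def redeem(self, user: str, code: str) -> bool:
        # The service only checks that the code is correct.
        # It has no way to know WHO is submitting it.
        return self._pending.pop(user, None) == code

svc = OtpService()

# Attacker triggers a real code from the real service...
code_seen_by_victim = svc.send_code("alice")
# ...then talks the victim into reading it aloud.
code_relayed_to_attacker = code_seen_by_victim

# The attacker's submission is indistinguishable from Alice's.
assert svc.redeem("alice", code_relayed_to_attacker) is True
```

Contrast this with the passkey flow: there is no short string a victim can be talked into repeating, which is exactly why attackers work to land you in this flow instead.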
The recovery move: “Your device was lost—confirm your phone number”
Here attackers initiate the real account-recovery flow, then go after the phone-number channel itself:
- Social engineering: persuade someone to disclose codes.
- SIM swap / number port-out: take control of the phone number.
- On-device malware: read inbound SMS codes (a risk NIST explicitly highlights).
The uncomfortable reality is that recovery paths are often designed to be forgiving. Forgiveness is good customer experience. It’s also a security liability. A process meant to save legitimate users from lockout can become a playbook for adversaries.
The platform move: “Fallbacks exist because they must”
Apple’s example: passkeys are strong; recovery still leans on SMS
Apple’s security documentation for Secure iCloud Keychain recovery describes a process that relies on “secondary authentication” and requires users to register a phone number. The recovery flow involves receiving an SMS and replying to it for recovery to proceed. Apple’s consumer passkey explainer also references SMS to a registered phone number as part of the recovery design.
No one should pretend that Apple is alone here. This is a common industry trade: recovery that a normal person can complete often involves channels a determined attacker can target. The key point is structural, not brand-specific:
- Passkeys live inside ecosystems (iCloud Keychain, password managers, device sync).
- Ecosystem recovery becomes, functionally, passkey recovery.
- If recovery leans on phone numbers, the system inherits SMS risks.
That doesn’t mean “don’t use passkeys.” It means the security story can’t stop at the passkey prompt. If an attacker can’t authenticate as you, they may try to recover as you.
The standards community is already warning about the “whole journey”
MDN’s documentation underlines what WebAuthn gets right: origin binding and the elimination of reusable shared secrets. That’s the core phishing resistance users should demand.
The FIDO Alliance goes further by addressing what happens around the core: the “journey.” In its 2025 paper on preventing phishing, FIDO explicitly calls out the risk of attackers using social engineering to downgrade account security level. That language matters because it reframes the problem. The threat isn’t that passkeys are secretly weak. The threat is that many deployments treat passkeys as an add-on while leaving older, weaker methods active.
NIST’s guidance, meanwhile, gives the blunt rationale for skepticism about SMS-based recovery. NIST doesn’t need a dramatic headline to make the point; it simply enumerates the known risks: number porting, SIM changes, and malware reading SMS. Those are not edge cases. They’re recurring incidents in consumer account takeovers.
Put together, the message is consistent: phishing resistance must cover login, fallback, and recovery—the full set of paths that lead to account access.
What passkeys fix vs. what recovery can reintroduce
What passkeys fix
- Origin binding
- No reusable secret
- Blocks classic credential-harvesting phishing
What recovery can reintroduce
- Downgrade to SMS/email OTP
- Support recovery abuse
- Recovery-code theft
- Session takeover via alternate flows
What readers should do now (and what platforms should stop pretending)
Practical steps you can take
- Treat recovery codes like keys, not paperwork. Store them offline in a secure place. A recovery code is often a master key.
- Harden your phone number. If a critical account uses SMS recovery, assume your phone number is part of your security perimeter. NIST’s concerns—SIM swap/port-out and malware—are exactly why.
- Be suspicious of “Try another way” prompts. Attackers love pushing victims into alternate flows. If you expected a passkey prompt and got an OTP request, pause and verify the domain and the context.
- Reduce account sprawl. The more accounts tied to a single email/number, the more valuable that recovery channel becomes.
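The “recovery codes are master keys” point has a platform-side corollary worth seeing in miniature. In this hypothetical sketch (function names are invented), codes are high-entropy, single-use, and stored only as salted hashes, so a stolen database does not directly yield working codes:

```python
import hashlib
import secrets

def generate_recovery_codes(n: int = 8) -> list:
    # 40 bits of entropy per code; real services vary.
    return [secrets.token_hex(5) for _ in range(n)]

def hash_code(code: str, salt: bytes) -> bytes:
    # Slow, salted hash so offline cracking of leaked hashes is costly.
    return hashlib.pbkdf2_hmac("sha256", code.encode(), salt, 100_000)

salt = secrets.token_bytes(16)
codes = generate_recovery_codes()          # shown to the user exactly once
stored = {hash_code(c, salt) for c in codes}  # server keeps hashes only

def redeem(code: str) -> bool:
    h = hash_code(code, salt)
    if h in stored:
        stored.discard(h)  # single use: a replayed code fails
        return True
    return False

assert redeem(codes[0]) is True
assert redeem(codes[0]) is False   # already burned
assert redeem("not-a-code") is False
```

For the user, the same logic explains the advice above: each code is a bearer credential that works exactly once, so whoever holds the paper holds the account.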
Anti-downgrade checklist
- ✓ Prefer passkey-only where available; remove passwords/SMS sign-in when you can
- ✓ Store recovery codes offline in a secure place—treat them like master keys
- ✓ Assume your phone number is security-critical if SMS recovery exists
- ✓ Treat unexpected “Try another way” prompts as a red flag; verify domain/context
- ✓ Reduce account sprawl tied to one email/number to shrink recovery-channel value
What platforms should stop doing
Passkeys can’t deliver on their public promise if the weak paths remain widely enabled and easy to trigger. FIDO’s own warning about downgrade attacks should be read as an implementation critique: phishing resistance is not a feature you bolt onto one screen. It’s a property of the system’s entire set of authentication options.
A fair counterpoint deserves airtime: removing fallbacks can lock out legitimate users and create accessibility problems. The challenge is to design recovery that is both humane and hard to abuse—without quietly reintroducing the very phishable secrets passkeys were meant to retire.
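One way platforms could square that circle is to contain rather than forbid fallbacks. The sketch below is hypothetical policy logic (the method names and assurance tiers are assumptions, not any vendor's actual design): fallback sign-ins still work, but on a passkey-enrolled account they yield a restricted session that cannot edit recovery settings or remove the passkey.

```python
# Assumed assurance ranking; real deployments would tune this.
ASSURANCE = {
    "passkey": 3,
    "totp_app": 2,
    "recovery_code": 2,
    "sms_otp": 1,
    "email_otp": 1,
}

def session_scope(method: str, account_has_passkey: bool) -> str:
    """Decide what a fresh sign-in is allowed to do."""
    level = ASSURANCE.get(method, 0)
    if account_has_passkey and level < ASSURANCE["passkey"]:
        # A downgrade sign-in is contained: it cannot change
        # recovery channels or unenroll the passkey.
        return "restricted"
    return "full" if level >= 2 else "restricted"

assert session_scope("passkey", True) == "full"
assert session_scope("sms_otp", True) == "restricted"   # downgrade contained
assert session_scope("totp_app", False) == "full"
```

The design goal: a phished OTP might still open the door, but it cannot be used to lock the legitimate owner out, which removes most of the incentive to run the downgrade play at all.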
The uncomfortable truth: passkeys fix phishing—until you need help
The fine print is that accounts are not defended by one mechanism. They’re defended by a collection of mechanisms, built over years, often kept for good reasons, and rarely evaluated as a single attack surface. Ars Technica’s reporting captures the tension: elegant cryptography meets messy human reality.
The next phase of “passwordless” won’t be won by better face scans or slicker prompts. It will be won in the unglamorous work of recovery: eliminating downgrade paths, making high-assurance recovery usable, and resisting the temptation to keep SMS as the universal skeleton key.
Passkeys don’t need hype. They need follow-through.
Frequently Asked Questions
Can a hacker steal my passkey from a phishing website?
Usually not in the way passwords are stolen. Passkeys are WebAuthn credentials tied to a specific domain, and the private key isn’t typed or revealed. MDN explains that origin binding prevents a credential created for one site from being used on a look‑alike domain, which blocks classic credential-harvesting phishing.
If passkeys are phishing-resistant, why do people still get hacked?
Attackers often bypass the passkey itself. Many services still allow alternate sign-in and recovery methods—SMS codes, emailed OTPs, recovery codes, or support workflows. A phishing campaign can push you into “Try another way,” capture a one-time code, and take over a session even though the passkey remains secure.
What does “downgrade attack” mean with passkeys?
A downgrade attack is social engineering that nudges a user from a strong, phishing-resistant method (passkeys) to a weaker, phishable method (like SMS or email OTP). The FIDO Alliance explicitly warns about social engineering that “downgrade[s] account security level,” emphasizing that security must cover fallback and recovery, not just the primary login.
Are SMS codes safe as a backup for passkeys?
They’re convenient, but they carry known risks. NIST’s SP 800-63B guidance treats PSTN/SMS out-of-band methods in a restricted way and cites threats such as SIM swap/number porting and malware that reads SMS messages. If an attacker can take over your number or trick you into sharing a code, SMS becomes the weak link.
Does Apple’s passkey system have weak points?
The passkey cryptography is strong, but recovery is the pressure point. Apple’s security documentation for Secure iCloud Keychain recovery describes a process that requires a registered phone number and involves an SMS that must be replied to. That means phone-number security and recovery workflows matter to the overall safety of the ecosystem.
Should I turn off passwords and OTP fallback if a service lets me?
If you can keep reliable access without them, reducing fallback options generally reduces attack surface. The main trade-off is lockout risk: losing devices becomes more painful without recovery paths. A sensible approach is to keep high-assurance recovery (like securely stored recovery codes) while disabling low-assurance methods (like SMS) when the option exists.