Sony Says It Deleted 135,000 ‘Deepfake Songs.’ The Real Scam Isn’t the Songs — It’s the Invisible Streams That Decide Who Gets Paid in 2026
Sony’s takedowns make deepfakes look like the headline threat. But the quieter crisis is bot-driven “demand” that inflates totals, dilutes royalty pools, and reshapes payouts for everyone.

Key Points
1. Sony says it removed 135,000 deepfake tracks, signaling industrial-scale impersonation, but that headline figure masks deeper payout vulnerabilities.
2. Understand the real threat: invisible bot streams inflate totals, dilute pooled royalty payouts, and quietly redirect money away from legitimate listening.
3. Watch the industry pivot to trust infrastructure (penalties, provenance, and verification), because labeling alone can’t separate AI tools from fraud.
Sony’s number lands like a thunderclap
The first instinct is to treat that figure as a referendum on taste, authenticity, or the future of creativity. Yet the larger crisis isn’t aesthetic. It’s financial. The real fight is over the plumbing of streaming: how money moves, how fraud hides, and how quickly the cost of producing “content” has collapsed.
Deepfake songs are easy to hear once you know what you’re listening for. The more dangerous problem is quieter: streams that never came from listeners at all—bot-driven activity that can distort charts, inflate totals, and dilute royalty pools for everyone else. That’s where AI becomes less a novelty and more an accelerant.
“The problem isn’t only fake voices. It’s fake demand—manufactured at scale.”
Sony’s 135,000 removals: what the number says—and what it doesn’t
That figure matters because it puts scale on something labels have been warning about for years: impersonation is no longer a fringe prank. It’s industrial. When the cost of generating a convincing vocal clone approaches zero, the marginal cost of flooding platforms collapses too.
Still, readers deserve the nuance Sony’s headline number can obscure. The reported 135,000 refers to takedown requests/removals of impersonations, not a census of all AI-made music on streaming platforms. The figure also doesn’t necessarily describe a single service like Spotify or Apple Music; it’s reported broadly as “streaming services,” without a public platform-by-platform breakdown.
Why labels frame deepfakes as a reputational emergency
“A takedown is a remedy. It isn’t prevention—and it rarely arrives before the algorithm has already done its work.”
Deepfakes vs. “AI music”: a crucial distinction the discourse keeps missing
That distinction matters because the motivations differ:
- AI-created music can be experimental, clearly labeled, and made without mimicking anyone’s identity.
- Deepfake tracks trade directly on another person’s name, voice, and public trust.
Even if a platform wanted to welcome AI-assisted creativity, it would still need aggressive controls for impersonation. A service can host synthesizer music without pretending it’s a human band; it cannot credibly host a vocal clone of a chart-topping artist while insisting the ecosystem is safe for creators.
The reputational spillover hits ordinary artists too
A second-order problem follows: identification becomes a burden shifted onto listeners and artists. Fans are asked to do forensic work; musicians are forced to police platforms for counterfeits.
The bigger heist: “invisible streams” and the streaming pool everyone shares
Under the pooled, pro-rata model most services use, fraudulent streams don’t need cultural relevance to do damage. They only need volume. If bots inflate the denominator (total streams across the service), the percentage allocated to legitimate music shrinks. The effect can be hard to see in any single report. It shows up as a slow leak: artists and labels feel underpaid relative to real audience demand, without being able to point to a single culprit.
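To make the dilution concrete, here is a minimal Python sketch of the pro-rata mechanic. Every number in it (the pool size, the stream counts, the artist’s totals) is invented for illustration; no platform publishes figures like these.

```python
# Minimal sketch of pro-rata dilution. Pool size and stream counts are
# invented for illustration, not any platform's real numbers.

ROYALTY_POOL = 1_000_000.00  # money to distribute this period (assumed)
REAL_STREAMS = 500_000_000   # legitimate plays across the service (assumed)
BOT_STREAMS = 50_000_000     # manufactured plays, 10% of real volume (assumed)

def per_stream_rate(pool: float, total_streams: int) -> float:
    """Pro-rata: every stream, real or fake, earns an equal slice."""
    return pool / total_streams

clean = per_stream_rate(ROYALTY_POOL, REAL_STREAMS)
diluted = per_stream_rate(ROYALTY_POOL, REAL_STREAMS + BOT_STREAMS)

# An artist with 1M legitimate streams loses money they never see leave.
artist_streams = 1_000_000
print(f"clean payout:   ${artist_streams * clean:,.2f}")    # $2,000.00
print(f"diluted payout: ${artist_streams * diluted:,.2f}")  # $1,818.18
```

The bots never touch this artist’s catalog; the loss comes entirely from the shared denominator.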
How AI changes the economics of fraud
1. Content creation costs drop: generating tracks becomes fast and inexpensive.
2. Catalog scale becomes the strategy: large batches of tracks can be uploaded to spread manipulation across many IDs.
Deezer has explicitly connected the surge of AI-generated music to fraud incentives. Reporting and platform statements also describe a tactic that fits the pro‑rata model: fraudsters distribute artificial activity across many tracks, avoiding anomalies that might be triggered by one song receiving implausible plays.
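A toy calculation shows why the spread-thin tactic works. The per-track alert level and the daily volumes below are invented; real detection systems are proprietary and more sophisticated than a single cutoff, but the arithmetic pressure is the same.

```python
# Illustrative only: an invented per-track alert level and invented
# volumes, showing why a large catalog defeats naive per-track checks.

DAILY_BOT_STREAMS = 600_000  # total manufactured plays per day (assumed)
ALERT_LEVEL = 1_000          # hypothetical per-track daily threshold

for catalog_size in (100, 10_000, 100_000):
    per_track = DAILY_BOT_STREAMS / catalog_size
    status = "FLAGGED" if per_track > ALERT_LEVEL else "slips through"
    print(f"{catalog_size:>7} tracks -> {per_track:>7.1f} streams/track: {status}")

# Output:
#     100 tracks ->  6000.0 streams/track: FLAGGED
#   10000 tracks ->    60.0 streams/track: slips through
#  100000 tracks ->     6.0 streams/track: slips through
```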
“Fraud doesn’t need a hit. It needs a spreadsheet.”
A case study in scale: the Michael Smith streaming-fraud indictment
The indictment’s theory—bots + huge AI catalog + streaming payouts—should reset the conversation. The harm isn’t just that listeners might be fooled by synthetic music. The alleged harm is that royalty money meant for working musicians is redirected to whoever can manufacture the most artificial activity.
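A back-of-envelope pass over the reported figures (more than $10 million, “billions” of streams, “hundreds of thousands” of tracks over roughly 2017 to 2024) shows why the scheme could stay quiet. The rounded values below are assumptions within those reported ranges, not facts from the indictment.

```python
# Rough arithmetic on the reported figures. Exact stream and track
# counts are assumptions within the ranges given in reporting.

royalties = 10_000_000    # dollars, per the reported minimum
streams = 4_000_000_000   # "billions": 4B assumed for illustration
tracks = 500_000          # "hundreds of thousands" assumed
days = 7 * 365            # roughly 2017-2024

print(f"implied rate:     ${royalties / streams:.4f} per stream")  # $0.0025
print(f"per track, daily: {streams / tracks / days:.1f} streams")  # ~3.1
```

Averaged across a half-million tracks, each one registers a few plays a day, the kind of number no per-track alarm notices.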
Why the “distributed” model is hard to catch
That reality also complicates enforcement. Platforms can remove an account; fraudsters can return with a new distributor, new uploads, and a new batch of tracks. The fight starts to resemble spam control: an ongoing contest over identity, verification, and incentives.
Platforms respond: penalties, demonetization, and the rise of “trust infrastructure”
Distributor support documentation (including TuneCore’s) describes Spotify imposing a €10 per track per month charge when a track is deemed to have high levels of artificial streaming, passed from Spotify to distributors and then to the account holder. Those materials also note that Spotify’s detection methods are proprietary, and that play counts and royalty reports may be adjusted when artificial streams are removed.
That approach signals a shift: platforms are trying to push costs back onto the supply chain—distributors and uploaders—rather than absorbing the burden internally.
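A quick sketch of that pass-through, using the €10-per-flagged-track figure from the distributor documentation; the catalog sizes and flag rates here are invented, chosen to show how differently the same rule lands on a fraud operation and on a small legitimate act.

```python
# EUR 10 per flagged track per month, charged down the supply chain.
# Catalog sizes and flag rates below are invented for illustration.

PENALTY_EUR = 10.00

def monthly_penalty(catalog_size: int, flagged_share: float) -> float:
    """Total monthly charge for tracks deemed artificially streamed."""
    return catalog_size * flagged_share * PENALTY_EUR

# A 50,000-track fraud catalog with 10% of tracks flagged:
print(f"EUR {monthly_penalty(50_000, 0.10):,.2f}/month")  # EUR 50,000.00

# A 20-track legitimate artist hit by a single false positive:
print(f"EUR {monthly_penalty(20, 0.05):,.2f}/month")      # EUR 10.00
```

The asymmetry cuts both ways: the fee can make industrial fraud expensive, but a false positive costs a small artist real money, which is exactly the tension the next section describes.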
The tension: deterrence vs. due process
A fair system needs two things at once:
- Strong detection and fast action to prevent dilution of payouts.
- Transparent standards and meaningful recourse so enforcement doesn’t become arbitrary.
Trust infrastructure—verification, labeling, and accountable distribution—starts to look less like bureaucracy and more like the foundation of the market.
The labeling problem: why “just tag AI” sounds simple and fails in practice
A useful label has to answer at least three different questions:
1. Was AI used in creation? (A broad category that includes benign tools.)
2. Is the work impersonating a real artist? (A higher-risk category.)
3. Are the streams legitimate? (A behavior question, not a content question.)
A track could be fully human-made and still be the target of artificial streaming. Another could be AI-assisted but honestly presented. Another could be a deepfake designed to trick fans. A single “AI” tag collapses these scenarios into one bucket and invites the wrong enforcement.
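One way to see the point is as a data-modeling problem. The record below is hypothetical (the field names are invented, not any platform’s schema), but it shows how three independent flags keep those scenarios separate where a single “AI” tag collapses them:

```python
from dataclasses import dataclass

@dataclass
class TrackTrustRecord:
    """Hypothetical metadata separating the three questions above."""
    ai_assisted: bool       # were AI tools used in creation? (benign category)
    impersonation: bool     # does it mimic a real artist's identity?
    streams_verified: bool  # is the listening behavior organic?

# Human-made track targeted by a bot farm: clean content, dirty behavior.
victim = TrackTrustRecord(ai_assisted=False, impersonation=False,
                          streams_verified=False)

# Honestly labeled AI-assisted work: a lone "AI" tag would lump it with fraud.
honest = TrackTrustRecord(ai_assisted=True, impersonation=False,
                          streams_verified=True)

# Deepfake with manufactured demand: fails two independent checks at once.
deepfake = TrackTrustRecord(ai_assisted=True, impersonation=True,
                            streams_verified=False)
```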
The real need: provenance, not vibes
The uncomfortable truth is that music streaming grew up in an era optimized for frictionless uploads. The next era will reward systems that can say “yes” to legitimate creators quickly while saying “no” to the industrial bad actors just as fast.
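What “provenance” could mean in practice is an upload pipeline that binds a file’s hash to a verified uploader identity. The sketch below is a deliberately simplified toy: real systems would use public-key signatures (along the lines of C2PA-style content credentials) rather than a shared-secret HMAC, and every name and key here is invented.

```python
import hashlib
import hmac

# Toy provenance check: bind an audio file's hash to a known uploader.
# Real deployments would use public-key signatures; HMAC keeps this
# sketch dependency-free. All identifiers and keys are invented.

DISTRIBUTOR_KEY = b"registered-distributor-secret"  # stand-in for a real key

def attest(audio_bytes: bytes, uploader_id: str) -> str:
    """Produce a tag tying this exact audio to this uploader."""
    digest = hashlib.sha256(audio_bytes).hexdigest()
    return hmac.new(DISTRIBUTOR_KEY, f"{uploader_id}:{digest}".encode(),
                    hashlib.sha256).hexdigest()

def verify(audio_bytes: bytes, uploader_id: str, tag: str) -> bool:
    """Platforms can say 'yes' fast when provenance checks out."""
    return hmac.compare_digest(attest(audio_bytes, uploader_id), tag)

track = b"...audio bytes..."
tag = attest(track, "artist:example-001")
assert verify(track, "artist:example-001", tag)       # legitimate upload
assert not verify(track, "artist:impostor-999", tag)  # identity mismatch
```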
What this means for artists, labels, and listeners who want a functioning music economy
For artists and rights holders, practical implications follow:
- Identity protection becomes ongoing work. Monitoring for impersonation is now as routine as monitoring for piracy once was.
- Royalty integrity becomes a collective-action problem. Fraud drains the pool; everyone pays unless systems catch it early.
- Distribution choices matter more. The supply chain—distributor policies, upload verification, enforcement responsiveness—affects risk.
For listeners, the takeaway is less moral panic and more consumer literacy. If a “new” track from a major artist appears without the normal promotion, credits, and official channels, skepticism is rational. That skepticism shouldn’t calcify into cynicism; it should push platforms toward clearer verification signals.
A functional streaming economy will likely look more like a financial system: not because music should be sterile, but because money attached to scale invites industrial abuse. The goal isn’t to stop AI tools. The goal is to stop counterfeit identity and counterfeit demand from setting the price of everyone else’s work.
Frequently Asked Questions
Did Sony say 135,000 deepfake songs were found on one platform like Spotify?
Sony’s statement was reported as affecting “streaming services” generally, without a public, platform-by-platform breakdown. Coverage summarized the figure as more than 135,000 AI-generated deepfake tracks removed after takedown requests. Without specific disclosure, readers should treat the number as an aggregate across services rather than proof of any single platform’s total.
Are “deepfake songs” the same thing as “AI music”?
No. Sony’s headline figure refers to impersonation deepfakes—tracks designed to mimic real artists such as Harry Styles, Beyoncé, and Queen (as cited in reporting). “AI music” is broader and can include non-impersonating, clearly labeled work. The policy and ethical stakes are different when a track trades on a specific artist’s identity.
Why does streaming fraud hurt artists who never get impersonated?
Because many streaming payouts operate on a pooled, pro-rata model. If fraudulent streams inflate total platform streams, they can dilute the share allocated to legitimate listening. The result can look like ordinary underpayment, hard to detect and widely felt, rather than a single dramatic incident.
What does the Michael Smith case show about AI and streaming fraud?
Prosecutors alleged that Michael Smith generated hundreds of thousands of AI-made songs and used bots to stream them billions of times, obtaining more than $10 million in royalties over a reported 2017–2024 period. The case illustrates how AI can enable catalog-scale schemes where the goal isn’t listeners, but manipulating payout systems.
What is Spotify’s €10 penalty, and who pays it?
Distributor documentation (including TuneCore’s support materials) describes Spotify charging €10 per track per month when a track is deemed to have high levels of artificial streaming. The fee is passed from Spotify to distributors and then to the artist/label account holder. The same materials note detection is proprietary and that play counts/royalties may be adjusted when artificial streams are removed.
Would mandatory “AI labels” solve the problem?
Labeling helps only if it distinguishes between very different issues: AI-assisted creation, impersonation deepfakes, and artificial streaming behavior. A single “AI” tag can be misleading, because a track can be human-made and still be fraudulently streamed—or AI-assisted and still legitimate. The harder need is provenance and accountable distribution, not a one-size label.