TheMurrow

Sony Says It Deleted 135,000 ‘Deepfake Songs.’ The Real Scam Isn’t the Songs — It’s the Invisible Streams That Decide Who Gets Paid in 2026

Sony’s takedowns make deepfakes look like the headline threat. But the quieter crisis is bot-driven “demand” that inflates totals, dilutes royalty pools, and reshapes payouts for everyone.

By TheMurrow Editorial
March 21, 2026

Key Points

  • Sony says it removed 135,000 deepfake tracks, signaling industrial-scale impersonation—but that headline figure masks deeper payout vulnerabilities.
  • Understand the real threat: invisible bot streams inflate totals, dilute pooled royalty payouts, and quietly redirect money away from legitimate listening.
  • Watch the industry pivot to trust infrastructure—penalties, provenance, and verification—because labeling alone can’t separate AI tools from fraud.

Sony’s number lands like a thunderclap

Sony Music’s number lands like a thunderclap: more than 135,000 AI-generated “deepfake” tracks removed from streaming services after takedown requests. Not 135,000 streams. Not 135,000 suspicious accounts. 135,000 tracks—songs built to sound like real, famous artists.

The first instinct is to treat that figure as a referendum on taste, authenticity, or the future of creativity. Yet the larger crisis isn’t aesthetic. It’s financial. The real fight is over the plumbing of streaming: how money moves, how fraud hides, and how quickly the cost of producing “content” has collapsed.

Deepfake songs are easy to hear once you know what you’re listening for. The more dangerous problem is quieter: streams that never came from listeners at all—bot-driven activity that can distort charts, inflate totals, and dilute royalty pools for everyone else. That’s where AI becomes less a novelty and more an accelerant.

“The problem isn’t only fake voices. It’s fake demand—manufactured at scale.”

Sony’s 135,000 removals: what the number says—and what it doesn’t

Sony Music’s claim, reported by the BBC in March 2026 and cited by multiple outlets, was stark: the company said it requested removal of more than 135,000 AI-generated deepfake tracks that impersonated its artists on streaming services. TechRadar’s coverage points to Dennis Kooker, Sony Music’s President of Global Digital Business, describing reputational and commercial harms from impersonations—everything from tarnished artist identities to derailed release campaigns. The examples tied to the reporting include targets such as Harry Styles, Beyoncé, and Queen.

That figure matters because it puts scale on something labels have been warning about for years: impersonation is no longer a fringe prank. It’s industrial. When the cost of generating a convincing vocal clone approaches zero, the marginal cost of flooding platforms collapses too.

Still, readers deserve the nuance Sony’s headline number can obscure. The reported 135,000 refers to takedown requests/removals of impersonations, not a census of all AI-made music on streaming platforms. The figure also doesn’t necessarily describe a single service like Spotify or Apple Music; it’s reported broadly as “streaming services,” without a public platform-by-platform breakdown.
135,000
AI-generated deepfake tracks Sony says it asked streaming services to remove.

Why labels frame deepfakes as a reputational emergency

A deepfake that mimics a superstar isn’t just “fake art.” It can be commercial sabotage. Release campaigns are carefully timed; a convincing impostor track can confuse fans, pollute search results, or ride the attention around an official drop. Even when streaming services eventually remove the track, the damage can arrive first—fast, algorithmic, and difficult to unwind.

“A takedown is a remedy. It isn’t prevention—and it rarely arrives before the algorithm has already done its work.”

Deepfakes vs. “AI music”: a crucial distinction the discourse keeps missing

Public debate often lumps everything into “AI songs,” as though the only question is whether computers can write good choruses. Sony’s removals highlight a narrower—and more legally and ethically charged—category: impersonation deepfakes, designed to resemble a specific artist.

That distinction matters because the motivations differ:

- AI-created music can be experimental, clearly labeled, and made without mimicking anyone’s identity.
- Deepfake tracks trade directly on another person’s name, voice, and public trust.

Even if a platform wanted to welcome AI-assisted creativity, it would still need aggressive controls for impersonation. A service can host synthesizer music without pretending it’s a human band; it cannot credibly host a vocal clone of a chart-topping artist while insisting the ecosystem is safe for creators.

The reputational spillover hits ordinary artists too

Impersonation doesn’t only harm megastars. The broader market suffers when fans start doubting whether a new upload is real. That skepticism can reduce engagement across the board—especially for emerging artists who rely on trust and momentum. When discovery becomes a minefield, the safest choice is listening to what you already know.

A second-order problem follows: identification becomes a burden shifted onto listeners and artists. Fans are asked to do forensic work; musicians are forced to police platforms for counterfeits.

Key Insight

Even a platform that welcomes AI-assisted creativity still needs aggressive controls for impersonation—because identity theft isn’t a genre.

The bigger heist: “invisible streams” and the streaming pool everyone shares

Deepfakes are flashy because they sound like someone. Fraud is more corrosive because it rewires payouts. Most major streaming royalty systems still broadly function on a pooled, pro‑rata model: platform revenue goes into a pot, and the pot is distributed based on share of total streams.

That means fraudulent streams don’t need cultural relevance to do damage. They only need volume. If bots inflate the denominator—total streams across the service—then the percentage allocated to legitimate music shrinks. The effect can be hard to see in any single report. It shows up as a slow leak: artists and labels feel underpaid compared to real audience demand, without being able to point to a single culprit.
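The dilution arithmetic can be made concrete with a toy calculation. All figures below are invented for illustration; real pools, counts, and rates vary by service and are not public at this granularity.

```python
# Toy illustration (hypothetical numbers): how bot streams dilute a
# pooled, pro-rata royalty pot. Every figure here is invented.

def pro_rata_payout(pool_eur, artist_streams, total_streams):
    """Artist's share of a pooled pot, proportional to stream share."""
    return pool_eur * artist_streams / total_streams

POOL = 1_000_000          # monthly royalty pot in EUR (hypothetical)
ARTIST = 100_000          # one artist's legitimate streams
LEGIT_TOTAL = 50_000_000  # all legitimate streams on the service

clean = pro_rata_payout(POOL, ARTIST, LEGIT_TOTAL)

# Bots add 5M streams to the denominator without touching this
# artist's own count -- their share of the pot shrinks anyway.
diluted = pro_rata_payout(POOL, ARTIST, LEGIT_TOTAL + 5_000_000)

print(f"clean payout:   EUR {clean:.2f}")    # EUR 2000.00
print(f"diluted payout: EUR {diluted:.2f}")  # EUR 1818.18
print(f"loss: {(1 - diluted / clean):.1%}")  # 9.1%
```

Note that the artist did nothing differently in either scenario; the loss comes entirely from the inflated denominator, which is why it reads as a slow, unattributable leak rather than a visible theft.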

How AI changes the economics of fraud

AI makes fraud cheaper on both sides of the equation:

1. Content creation costs drop: generating tracks becomes fast and inexpensive.
2. Catalog scale becomes the strategy: large batches of tracks can be uploaded to spread manipulation across many IDs.

Deezer has explicitly connected the surge of AI-generated music to fraud incentives. Reporting and platform statements also describe a tactic that fits the pro‑rata model: fraudsters distribute artificial activity across many tracks, avoiding anomalies that might be triggered by one song receiving implausible plays.

“Fraud doesn’t need a hit. It needs a spreadsheet.”

A case study in scale: the Michael Smith streaming-fraud indictment

If the Sony story is about identity, the most clarifying fraud story is about money. In 2024, U.S. federal prosecutors charged Michael Smith, a North Carolina musician, alleging a scheme that used hundreds of thousands of AI-made songs and bots to stream them billions of times, generating more than $10 million in royalties over a reported 2017–2024 period. Forbes covered the case, and Bloomberg Law framed it as a moment that brings scrutiny to streaming inflation.

The indictment’s theory—bots + huge AI catalog + streaming payouts—should reset the conversation. The harm isn’t just that listeners might be fooled by synthetic music. The alleged harm is that royalty money meant for working musicians is redirected to whoever can manufacture the most artificial activity.
Hundreds of thousands
AI-made songs alleged in the Michael Smith case, used as catalog-scale fuel for manipulation.
Billions
Streams prosecutors alleged were generated by bots in the Michael Smith scheme.
>$10 million
Royalties prosecutors alleged were generated in the Michael Smith scheme over the reported 2017–2024 period.

Why the “distributed” model is hard to catch

A naïve fraud model looks for one song with an impossible spike. Modern manipulation can look like ordinary, low-level engagement across thousands of tracks. When AI can generate those tracks in bulk, the catalog becomes camouflage.

That reality also complicates enforcement. Platforms can remove an account; fraudsters can return with a new distributor, new uploads, and a new batch of tracks. The fight starts to resemble spam control: an ongoing contest over identity, verification, and incentives.
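The evasion pattern described above can be sketched in a few lines. This is a deliberately naive model with invented numbers and an invented threshold, not a description of any platform's actual detection logic.

```python
# Toy sketch of why distributed manipulation evades naive spike
# detection. The threshold and volumes are invented for illustration.

def naive_spike_flags(plays_per_track, threshold=50_000):
    """Flag tracks whose monthly plays exceed a fixed per-track threshold."""
    return [t for t, plays in plays_per_track.items() if plays > threshold]

BOT_STREAMS = 1_000_000

# Strategy A: dump everything on one track -> an obvious anomaly.
concentrated = {"track_0": BOT_STREAMS}

# Strategy B: spread the same volume across a large AI-generated
# catalog, so each track sits at an unremarkable play count.
catalog_size = 5_000
distributed = {f"track_{i}": BOT_STREAMS // catalog_size
               for i in range(catalog_size)}

print(naive_spike_flags(concentrated))  # ['track_0'] -- flagged
print(naive_spike_flags(distributed))   # [] -- nothing trips the threshold
print(sum(distributed.values()))        # 1000000 -- total volume is identical
```

The same total volume either screams or whispers depending only on how it is distributed, which is why catalog scale itself becomes the camouflage.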

Editor's Note

The enforcement challenge increasingly resembles spam control: remove one account, and the same playbook can reappear through new distributors and new catalogs.

Platforms respond: penalties, demonetization, and the rise of “trust infrastructure”

Streaming services have begun to treat artificial streaming as both a financial threat and a product-quality problem. The most concrete example in the research is Spotify’s move toward explicit penalties.

Distributor support documentation (including TuneCore’s) describes Spotify imposing a €10 per-track, per-month charge when a track is deemed to have high levels of artificial streaming; the charge is passed from Spotify to distributors and then on to the account holder. Those materials also note that Spotify’s detection methods are proprietary, and that play counts and royalty reports may be adjusted when artificial streams are removed.

That approach signals a shift: platforms are trying to push costs back onto the supply chain—distributors and uploaders—rather than absorbing the burden internally.
€10
Per-track monthly charge described in distributor documentation when Spotify deems a track to have high levels of artificial streaming.

The tension: deterrence vs. due process

Penalty systems deter obvious manipulation, but they also raise hard questions. Detection is proprietary. Account holders may face charges or clawbacks without clear visibility into the evidence. A small label or independent artist caught up in suspicious traffic—whether from a bad marketing vendor or a malicious actor—can be punished quickly, while appeals move slowly.

A fair system needs two things at once:

- Strong detection and fast action to prevent dilution of payouts.
- Transparent standards and meaningful recourse so enforcement doesn’t become arbitrary.

Trust infrastructure—verification, labeling, and accountable distribution—starts to look less like bureaucracy and more like the foundation of the market.

The labeling problem: why “just tag AI” sounds simple and fails in practice

Sony’s takedown figure points to an obvious policy proposal: label AI material. TechRadar’s coverage underscores that labeling is emerging as a “critical challenge.” Yet labeling alone doesn’t solve impersonation or fraud.

A useful label has to answer at least three different questions:

1. Was AI used in creation? (A broad category that includes benign tools.)
2. Is the work impersonating a real artist? (A higher-risk category.)
3. Are the streams legitimate? (A behavior question, not a content question.)

A track could be fully human-made and still be the target of artificial streaming. Another could be AI-assisted but honestly presented. Another could be a deepfake designed to trick fans. A single “AI” tag collapses these scenarios into one bucket and invites the wrong enforcement.
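One way to see why a single tag fails is to treat the three questions as independent fields rather than one boolean. The structure and field names below are hypothetical, invented purely to illustrate that the dimensions vary independently.

```python
# Hypothetical sketch: a single "AI" boolean collapses three independent
# questions. The class and field names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class TrackTrustLabel:
    ai_assisted: bool        # was AI used in creation? (includes benign tools)
    impersonation: bool      # does the work mimic a specific real artist?
    streams_verified: bool   # is the listening behavior legitimate?

# A fully human-made track can still be the target of artificial streaming:
human_but_botted = TrackTrustLabel(ai_assisted=False, impersonation=False,
                                   streams_verified=False)

# An AI-assisted track can be honestly presented with legitimate listening:
honest_ai = TrackTrustLabel(ai_assisted=True, impersonation=False,
                            streams_verified=True)

# A flat "is AI" tag would clear the first track and flag the second --
# the opposite of where the enforcement risk actually sits.
```

The point is not that platforms should ship this exact schema, but that any useful label needs at least these three axes, because enforcement consequences differ along each one.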

The real need: provenance, not vibes

Platforms and labels are drifting toward a more demanding concept: provenance—a chain of accountability for who uploaded what, on whose authority, and with what rights. That includes basic identity verification and distribution controls, but also systems that can flag suspicious patterns at the account level.

The uncomfortable truth is that music streaming grew up in an era optimized for frictionless uploads. The next era will reward systems that can say “yes” to legitimate creators quickly while saying “no” to the industrial bad actors just as fast.

Key Insight

Labeling is not provenance. The harder requirement is a chain of accountability for who uploaded what, with what rights, and under which verified identity.

What this means for artists, labels, and listeners who want a functioning music economy

The story isn’t “AI is ruining music.” The story is that the cost of faking music—and faking demand—has dropped faster than the industry’s ability to verify either. Sony’s 135,000 removals show impersonation at scale. The Michael Smith indictment shows alleged fraud at scale. Deezer’s warnings show how tightly AI volume can couple to fraudulent incentives. Spotify’s €10-per-track penalty shows platforms experimenting with deterrence.

For artists and rights holders, practical implications follow:

- Identity protection becomes ongoing work. Monitoring for impersonation is now as routine as monitoring for piracy once was.
- Royalty integrity becomes a collective-action problem. Fraud drains the pool; everyone pays unless systems catch it early.
- Distribution choices matter more. The supply chain—distributor policies, upload verification, enforcement responsiveness—affects risk.

For listeners, the takeaway is less moral panic and more consumer literacy. If a “new” track from a major artist appears without the normal promotion, credits, and official channels, skepticism is rational. That skepticism shouldn’t calcify into cynicism; it should push platforms toward clearer verification signals.

A functional streaming economy will likely look more like a financial system: not because music should be sterile, but because money attached to scale invites industrial abuse. The goal isn’t to stop AI tools. The goal is to stop counterfeit identity and counterfeit demand from setting the price of everyone else’s work.
About the Author
TheMurrow Editorial is a writer for TheMurrow covering entertainment.

Frequently Asked Questions

Did Sony say 135,000 deepfake songs were found on one platform like Spotify?

Sony’s statement was reported as affecting “streaming services” generally, without a public, platform-by-platform breakdown. Coverage summarized the figure as more than 135,000 AI-generated deepfake tracks removed after takedown requests. Without specific disclosure, readers should treat the number as an aggregate across services rather than proof of any single platform’s total.

Are “deepfake songs” the same thing as “AI music”?

No. Sony’s headline figure refers to impersonation deepfakes—tracks designed to mimic real artists such as Harry Styles, Beyoncé, and Queen (as cited in reporting). “AI music” is broader and can include non-impersonating, clearly labeled work. The policy and ethical stakes are different when a track trades on a specific artist’s identity.

Why does streaming fraud hurt artists who never get impersonated?

Because many streaming payouts broadly operate on a pooled, pro‑rata model. If fraudulent streams inflate total platform streams, they can dilute the share allocated to legitimate listening. The result can look like ordinary underpayment—hard to detect, widely felt—rather than a single dramatic incident.

What does the Michael Smith case show about AI and streaming fraud?

Prosecutors alleged that Michael Smith generated hundreds of thousands of AI-made songs and used bots to stream them billions of times, obtaining more than $10 million in royalties over a reported 2017–2024 period. The case illustrates how AI can enable catalog-scale schemes where the goal isn’t listeners, but manipulating payout systems.

What is Spotify’s €10 penalty, and who pays it?

Distributor documentation (including TuneCore’s support materials) describes Spotify charging €10 per track per month when a track is deemed to have high levels of artificial streaming. The fee is passed from Spotify to distributors and then to the artist/label account holder. The same materials note detection is proprietary and that play counts/royalties may be adjusted when artificial streams are removed.

Would mandatory “AI labels” solve the problem?

Labeling helps only if it distinguishes between very different issues: AI-assisted creation, impersonation deepfakes, and artificial streaming behavior. A single “AI” tag can be misleading, because a track can be human-made and still be fraudulently streamed—or AI-assisted and still legitimate. The harder need is provenance and accountable distribution, not a one-size label.
