TheMurrow

Deezer Says ~30% of New Tracks Are AI—So Why Are You Still Hearing Them on Spotify? The Royalty Loophole Nobody Explains

Deezer claims the real story isn’t robot pop—it’s economics: mass AI uploads plus bot plays can siphon royalties at scale. Its data argues uploads are exploding while real listening barely moves.

By TheMurrow Editorial
March 9, 2026

Key Points

  • Track the surge: Deezer says fully AI-generated uploads climbed from ~10% (Jan 2025) to 39% (Jan 2026).
  • Follow the money: Deezer argues much of the flood is bot-driven fraud—citing up to 70%, then 85%, suspicious streams.
  • Watch enforcement expand: Deezer labels AI content, de-boosts it from recommendations/playlists, filters royalties, and is licensing detection tools.

A song hits Deezer, then another, then another—tens of thousands a day. Only a sliver of them will ever find a real audience. Yet each one still arrives with the same quiet ambition: to be counted, surfaced, paid.

The startling part isn’t that AI can make music. It’s that AI can now mass-produce “music” cheaply enough to be used as a financial instrument—a way to siphon pennies from the streaming economy at scale. That is the story Deezer keeps telling, even as headlines focus on the novelty of robot pop.

60,000 tracks/day
Deezer says it’s now receiving 60,000 “fully AI-generated” tracks per day—about 39% of daily delivered music (Jan. 29, 2026, per industry reporting).

The numbers are no longer a curiosity. They’re an operational reality. Deezer says it’s now receiving 60,000 “fully AI-generated” tracks per day, which the company pegs at roughly 39% of daily delivered music (Jan. 29, 2026, per industry reporting). Just nine months earlier, Deezer said 18% of daily uploads were fully AI-generated—about 20,000 tracks/day (Apr. 16, 2025). And in September 2025, Deezer put it at 30,000 fully AI-generated tracks/day, or about 28% of daily delivery (Sept. 11, 2025).

Volume alone would be troubling. Deezer’s more unsettling claim is about motive: the flood is “mostly” a fraud problem, not a listener-demand story. In other words, many of these tracks aren’t made to be loved. They’re made to be streamed—by bots.

“When nearly two out of five new tracks are fully AI-generated, the question shifts from culture to accounting.”

— TheMurrow

Deezer’s numbers, and what they actually mean

Deezer’s AI story has been widely summarized as “about 30% of uploads are AI.” That shorthand isn’t wrong, but it’s dated and imprecise. Deezer’s own disclosures show a steep climb across 2025 and into 2026, and readers deserve the timeline.

The timeline behind the headline

Here’s what Deezer has said publicly about the share of fully AI-generated uploads or deliveries:

- January 2025: roughly 10% of daily uploads were fully AI-generated (Deezer newsroom).
- April 16, 2025: 18% of daily uploads were fully AI-generated—about 20,000 tracks per day (Deezer press release via Euronext).
- September 11, 2025: 30,000 fully AI-generated tracks/day, about 28% of daily delivery (Deezer newsroom).
- January 29, 2026: 60,000 fully AI-generated tracks/day, about 39% of daily delivered music (reported by Music Business Worldwide, citing Deezer’s statements).

Some secondary coverage in late 2025 cited figures like ~50,000/day and ~34%; those numbers are best treated as "reported" unless they can be traced back to Deezer's primary materials.

“Uploads” vs. “listening”: Deezer’s crucial distinction

The company’s core argument hinges on a mismatch: uploads are exploding; actual listening is not. In mid-2025 messaging, Deezer said fully AI-generated tracks represented about 0.5% of streams (as reported by TechCrunch). That gap matters. It implies most AI uploads are not organic hits waiting to be discovered; they’re noise—sometimes strategic noise.

“The upload chart is not the listening chart—and Deezer wants the industry to stop confusing the two.”

— TheMurrow

The fraud thesis: why Deezer thinks bots, not fans, drive the surge

Deezer is unusually direct about what it believes is happening. According to the company, bad actors can upload near-zero-cost music at scale, then use artificial plays to pull royalties from the shared pool. If streaming payouts are a spreadsheet, these tracks are a way to game the formulas.

“Up to 70%” fraudulent streams—then “up to 85%”

In June 2025 coverage, Deezer’s position sounded blunt: up to 70% of streams of fully AI-generated tracks could be fraudulent (as reported by The Guardian). By January 2026, Deezer’s language hardened further—citing “up to 85%” fraudulent (and demonetized) streams in some months, according to a Deezer press release distributed via Euronext.

That doesn’t mean 85% of all AI music listening is fake, or that most AI-assisted music is fraud. Deezer’s claims are specifically about fully AI-generated tracks and the suspicious activity around them. Still, the escalation signals something important: Deezer believes the attack is not hypothetical; it’s measurable.

Up to 85%
By January 2026, Deezer cited “up to 85%” fraudulent (and demonetized) streams around fully AI-generated tracks in some months.

Case study: the “royalty pool siphon”

Deezer hasn’t published a single canonical fraud “poster child,” but the mechanics are widely implied by its statements. The alleged playbook looks like this:

- Generate thousands of tracks cheaply using tools such as Suno or Udio (both named by Deezer as generators its detector can identify).
- Upload them in bulk.
- Push plays through bots or coordinated networks.
- Collect payouts unless the platform detects and filters the activity.

Deezer frames its response as a protection of legitimate creators: fewer fraudulent plays in the system means less dilution of the royalty pool.
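The economics of that playbook can be sketched with a toy pro-rata model—the kind most streaming services are understood to use, where a fixed royalty pool is split by share of total plays. All figures below are hypothetical and greatly simplified; this is not Deezer's actual payout formula:

```python
# Toy pro-rata royalty model: a fixed monthly pool is split in
# proportion to each uploader's stream count. Hypothetical numbers.

def payout(pool_eur, streams_by_uploader):
    """Split a royalty pool proportionally to stream counts."""
    total = sum(streams_by_uploader.values())
    return {name: pool_eur * n / total for name, n in streams_by_uploader.items()}

pool = 1_000_000  # hypothetical monthly pool in EUR

# Without manipulation: two legitimate artists share the pool.
clean = payout(pool, {"artist_a": 600_000, "artist_b": 400_000})

# With a bot farm injecting 1,000,000 artificial plays spread
# across mass AI uploads.
gamed = payout(pool, {"artist_a": 600_000, "artist_b": 400_000,
                      "bot_farm": 1_000_000})

print(clean["artist_a"])  # 600000.0
print(gamed["artist_a"])  # 300000.0 — the legitimate share is halved
print(gamed["bot_farm"])  # 500000.0 siphoned from the same pool
```

The bot farm never needs a hit: it only needs its plays counted in the denominator, which is why detection and filtering—not taste—are the battleground.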

“AI music isn’t only an aesthetic question. It’s a systems question: what happens when ‘content’ becomes cheaper than policing it?”

— TheMurrow

Detection and labeling: what Deezer is doing that rivals haven’t standardized

Deezer’s most concrete contribution to the debate is technical: it says it built internal systems to detect and label fully AI-generated music and then change how that music behaves inside the product.

Detection: “100% AI-generated” from major generators

Deezer says its internal tool can identify “100% AI-generated” tracks from major generators including Suno and Udio, and that the approach can be extended (per Deezer newsroom communications). That specificity matters. The current public conversation often collapses into “AI vs. human,” but Deezer is targeting a narrower category: tracks it believes are fully generated, likely at scale, likely tied to manipulation.

Skeptics will reasonably ask for more transparency: What are the false positives? What is the appeals process? What about hybrid tracks, where a human writes and an AI performs? Deezer’s public messaging emphasizes the category it can detect with high confidence—suggesting it’s trying to avoid messy edge cases.

Labeling: a rare moment of honesty in the interface

In June 2025, Deezer began explicitly labeling AI-generated content for users, adding an “AI-generated content” label (reported by TechCrunch). That move is both simple and culturally charged. Labels can inform listeners, but they also shape stigma, curiosity, and trust.

From a reader’s perspective, labeling is less about moral judgment and more about disclosure. If the platform can detect that a track is fully AI-generated, the listener has a legitimate interest in knowing that—just as they might want to know if an image is AI-generated in a news feed.

Key Insight

Deezer’s framing is narrow by design: it targets fully (even “100%”) AI-generated tracks and suspicious activity—rather than treating all AI-assisted music as the same.

Turning down the algorithm: recommendations, playlists, and incentives

Detection is only half the question. The larger issue is incentives. If a platform’s recommendation engine is a growth machine, then the real power lies in deciding what the machine amplifies.

Removed from algorithmic recommendations and editorial playlists

Deezer says it removes fully AI-generated tracks from algorithmic recommendations and editorial playlists (per Deezer newsroom statements). That is an unusually clear stance in a market where platforms rarely explain their ranking choices in plain language.

The practical effect is straightforward: even if fully AI-generated tracks can be uploaded, they won’t get the free distribution boost that comes from being served to users who didn’t ask for them. That changes the economics for would-be fraudsters. Bot plays can still be purchased; frictionless discovery is harder to counterfeit.

A counterpoint: who decides what counts as “fully AI”?

Critics will worry about overreach. If a platform is the judge of what’s “fully AI-generated,” it is also the judge of what’s eligible for algorithmic oxygen. Deezer’s approach tries to thread the needle by targeting “100% AI-generated” content from identifiable sources, but the principle remains: platform governance is becoming cultural governance.

A fair reading is that Deezer is prioritizing the integrity of the recommendation system. Recommendation is not just a convenience feature; it’s a gatekeeper. And in a world of 60,000 AI tracks a day, gatekeeping becomes a form of maintenance.

What Deezer is really protecting

Recommendation is not neutral—it’s distribution. Deezer’s policy aims to deny mass AI uploads the “algorithmic oxygen” that makes scale profitable.

Money and enforcement: filtering, demonetization, and the royalty pool

The fiercest conflicts in music rarely hinge on taste. They hinge on money—who gets paid, who doesn’t, and who gets to decide.

Filtering fraudulent streams out of royalties

Deezer says it filters fraudulent streams out of royalty payments (per Deezer newsroom). That sounds technical, but it’s a moral statement disguised as bookkeeping: fraud shouldn’t pay. For legitimate artists—human or otherwise—fraud distorts the marketplace and dilutes the pool.

From filtering to demonetization

By January 2026, Deezer’s public language moved toward demonetization of streams it deems fraudulent, positioning the work as protection for human creators (per Deezer’s Euronext-distributed press release). Demonetization is the point where policy becomes power. It’s no longer about labeling; it’s about shutting off revenue.

Deezer’s fraud estimates—up to 85% in some months—also suggest why demonetization became central. If the majority of activity around certain fully AI-generated tracks is fraudulent, then paying those plays isn’t neutral. It’s subsidizing exploitation.
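Mechanically, demonetization means flagged plays earn nothing and drop out of the denominator before the pool is split. A minimal sketch, assuming a simple per-stream fraud flag (real detection signals are not public):

```python
# Sketch: demonetizing flagged streams before a pro-rata royalty split.
# The per-stream fraud flag is a hypothetical stand-in for real signals.

def demonetized_payout(pool_eur, streams):
    """streams: list of (uploader, is_flagged). Flagged streams earn
    nothing and are excluded from the pool denominator."""
    counts = {}
    for uploader, flagged in streams:
        if not flagged:
            counts[uploader] = counts.get(uploader, 0) + 1
    total = sum(counts.values())
    return {uploader: pool_eur * n / total for uploader, n in counts.items()}

streams = (
    [("human_artist", False)] * 90
    + [("ai_track", False)] * 2   # a little organic AI listening still pays
    + [("ai_track", True)] * 8    # bot plays are demonetized entirely
)
result = demonetized_payout(100.0, streams)
print(result)  # human_artist keeps ~97.8 of 100.0; ai_track earns ~2.2
```

The design choice matters: filtering flagged plays out of the denominator returns their would-be earnings to everyone else, rather than leaving the money unclaimed.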

Practical takeaway: what artists and labels should watch

For musicians, managers, and indie labels, Deezer’s stance offers a short checklist:

What artists and labels should watch

  • Expect more disclosures: “AI-generated content” labels are likely to spread.
  • Prepare for disputes: detection systems will misclassify some borderline cases.
  • Track royalty adjustments: demonetization and fraud filtering can change statements month to month.
  • Diversify promotion: if algorithmic surfaces tighten, direct fan channels matter more.

0.5% of streams
Deezer emphasized that fully AI-generated tracks represented about 0.5% of streams in mid-2025 coverage—far below their share of uploads.

Portable detection: why Deezer wants to license its tools

Deezer is not just defending its own platform. It’s trying to export its approach.

Licensing detection tech to the wider industry

In January 2026 industry reporting, Deezer said it was moving to license its AI detection tooling to other services and had tested with partners (per Music Business Worldwide, citing Deezer’s plans and communications). Strategically, that makes sense: fraud is not a Deezer-only problem. If bad actors can upload to multiple platforms, they can arbitrage the weakest enforcement.

Licensing also reframes Deezer’s role. Rather than being one of the smaller major global streaming services, it becomes something like a standards-setter—at least for identifying certain types of fully AI-generated content.

The risk: a fragmented standard becomes no standard

There’s a downside. If each service uses different detectors, thresholds, and policies, creators could face inconsistent outcomes:

- A track labeled “AI-generated” on Deezer could appear unlabeled elsewhere.
- A track excluded from recommendations on one platform might be promoted on another.
- Fraud networks could simply migrate.

If Deezer’s licensing effort succeeds, it could reduce that fragmentation. If it fails, the market may drift toward a patchwork where enforcement depends on where you upload.

Editor’s Note

A licensing push only works if platforms align on thresholds and appeals; otherwise, creators face inconsistent labeling, de-boosting, and monetization outcomes.

Multiple perspectives: transparency, creativity, and the future of “AI music”

Deezer’s framing is persuasive, but it is not the entire story. AI-generated music includes legitimate experimentation, new workflows, and creators who don’t fit old categories. A policy designed to stop fraud can accidentally punish art—especially when the boundary between “fully AI-generated” and “AI-assisted” is blurry in practice.

The pro-enforcement view: protect the commons

From Deezer’s perspective, the system is under attack. When 39% of daily delivered music is fully AI-generated, quality control becomes survival. When up to 85% of streams around those tracks can be fraudulent in some months, the royalty system becomes a target.

In that worldview, strong enforcement isn’t anti-innovation. It’s pro-ecosystem.

The cautionary view: labels and removals can chill legitimate work

From an artist-rights perspective, labeling and de-boosting come with cultural consequences. Some listeners will treat an “AI-generated” label as a warning. Some editors will avoid it. Some creators will hide their tools rather than discuss them openly.

A responsible approach needs:

- Clear definitions (what counts as “fully AI-generated”?)
- Appeals processes for mislabeling or mistaken demonetization
- Transparency reporting that helps outsiders assess accuracy and bias

Deezer has been clear about its motivations—fraud prevention—but clarity about error rates and governance would strengthen its credibility.

Conclusion: the real battle is over incentives, not aesthetics

The debate over AI music often defaults to taste: can a machine make something moving? Deezer’s data points somewhere else. The pressing problem is industrial scale—60,000 fully AI-generated tracks a day—colliding with an economy that pays per play and relies on trust.

Deezer’s message is consistent across its disclosures: most of this flood isn’t arriving because listeners asked for it. It’s arriving because uploading is cheap, detection is hard, and the royalty pool can be gamed. That’s why the company labels AI-generated content, removes fully AI-generated tracks from recommendations and playlists, filters fraudulent streams, and increasingly talks about demonetization.

If Deezer succeeds, it won’t “solve” AI music. Nothing will. It will do something more modest and more necessary: make it harder to turn streaming into a bot-driven extraction business. The rest—what counts as authorship, what creativity looks like, what listeners embrace—will remain contested, as it should.
About the Author
TheMurrow Editorial is a writer for TheMurrow covering entertainment.

Frequently Asked Questions

What percentage of music uploads on Deezer are AI-generated?

Deezer’s figures have risen quickly. In April 2025, Deezer said 18% of daily uploads were fully AI-generated (about 20,000 tracks/day). By September 2025, Deezer said it was receiving 30,000 fully AI-generated tracks/day (about 28% of delivery). As of January 29, 2026, reporting citing Deezer put the figure at 60,000 fully AI-generated tracks/day, about 39% of daily delivered music.

Is Deezer saying people actually listen to AI-generated music that much?

No. Deezer has repeatedly emphasized a gap between upload volume and listening. Mid-2025 coverage reported Deezer saying fully AI-generated tracks accounted for about 0.5% of streams. Deezer’s argument is that the upload spike is not primarily demand-driven, but influenced by low-cost mass uploading and suspected manipulation.

Why does Deezer connect AI music to streaming fraud?

Deezer says bad actors can upload huge volumes of near-zero-cost fully AI-generated tracks and then generate artificial plays to extract royalties. In June 2025 reporting, Deezer’s estimates suggested up to 70% of streams for fully AI-generated tracks could be fraudulent. By January 2026, Deezer cited up to 85% fraudulent (and demonetized) streams in some months.

How does Deezer detect AI-generated tracks?

Deezer says it has an internal detection system that can identify “100% AI-generated” tracks from major generators like Suno and Udio, and that the system can be extended. Deezer has also said it began labeling AI-generated content for users in June 2025.

Does Deezer ban AI-generated music?

Deezer’s public approach, as described in its statements, focuses on labeling and limiting amplification, not a blanket ban. Deezer says it removes fully AI-generated tracks from algorithmic recommendations and editorial playlists. It also filters fraudulent streams from royalties and, by early 2026, described demonetizing streams it deems fraudulent.

What happens to royalties when Deezer thinks streams are fraudulent?

Deezer says it filters fraudulent streams out of royalty payments, and later messaging emphasizes demonetization of streams deemed fraudulent. The goal, according to Deezer, is to prevent bot-driven listening from diverting money from legitimate creators and to reduce incentives for mass-upload fraud schemes.
