Deezer Says 39% of New Uploads Are AI—So Why Are Humans Still Getting Paid Like It’s 2015? The Royalty Math That’s About to Flip Streaming
Deezer’s “39% of daily intake” isn’t about what people listen to—it’s about what floods the pipeline. And Deezer says most AI-track streams are fraud, forcing streaming’s payout logic into a rewrite.

Key Points
1. Deezer says 60,000 AI tracks/day now hit its catalog—39% of daily intake—a supply shock that reshapes discovery and payouts.
2. Track the distinction: uploads ≠ streams ≠ revenue. AI may flood inventory while remaining a small share of actual listening.
3. Follow the money: Deezer claims up to 85% of streams on fully AI-generated tracks are fraudulent—and says it demonetizes them.
A new song arrives on a streaming service every second. Then another. Then ten more.
For years, that flood has been a familiar industry problem: too much music, too little human attention. But in early 2026, Deezer put a number on what many labels, distributors, and independent artists had been whispering about—and the figure is hard to ignore.
The claim is about uploads, not about what listeners are actually playing. Still, the intake number matters because it describes the raw material that recommendation systems sift through, moderators review, and royalty systems must defend.
“The new music problem isn’t only how much gets made. It’s how much gets delivered.”
What Deezer actually said—and what “39%” does (and doesn’t) mean
The distinction between uploads and streams is not pedantic. “Uploads” are the music entering the catalog through distributors and rights holders. “Streams” are what audiences choose (or are tricked into) playing. “Revenue” is what rights holders receive after platforms apply their rules, fraud filters, and payout formulas.
Deezer’s own public messaging underscores that point. Multiple reports echo Deezer’s stance that fully AI-generated tracks remain a small share of listening in many contexts—even as the volume of AI music arriving at the gates accelerates. TechRadar, for example, has noted the mismatch between the surge in AI supply and the much smaller share of streams those tracks typically capture.
Why the number is platform-specific—and why that still matters
The statistic is also platform-specific. Competing services generally do not publish equivalent intake metrics, which is why Deezer’s number now anchors so much public debate. French outlet Le Monde has reported that more than 39% of daily uploads to Deezer are AI-generated, while noting the broader industry uncertainty about how close other platforms are to the same ratio.
The best way to read “39%” is as a stress test for the streaming pipeline: discovery, trust, and payments all depend on what enters the system—even if listeners never consciously seek it out.
“Uploads aren’t culture yet. They’re inventory—and inventory shapes everything downstream.”
From 18% to 39%: the trendline that changed the conversation
In an April 2025 release, Deezer reported that 18% of all new music uploaded was “fully AI-generated.” Nine months later, Deezer’s estimate for AI tracks in daily intake had climbed to roughly 39%, alongside the headline number of 60,000 AI tracks per day.
Two caveats are worth keeping in view. First, these are statements made at different times, potentially using slightly different operational definitions and detection improvements. Second, Deezer’s AI detection tool itself likely became better at recognizing what earlier might have slipped through.
Even with those cautions, the direction is unmistakable: more AI music is being shipped to platforms, in greater volume, at faster speed than human catalog growth could reasonably match.
Why “daily intake” is the number that spooks platforms
The intake figure matters because it feeds four systems at once:
- Catalog management: what gets accepted, how it’s categorized, and whether it’s legitimate
- Discovery: how algorithms decide what to surface
- Trust and safety: how manipulation is detected and removed
- Royalties: how money is allocated when plays are counted
A rapidly rising intake of AI-generated tracks strains every layer at once. Even if only a fraction becomes popular, the platform still has to ingest the files, store them, scan them, potentially label them, and decide whether to recommend them.
Deezer’s statistic functions like a weather report. It doesn’t tell you which neighborhood will flood, but it tells you the storm is getting heavier.
The accelerant is fraud: why Deezer links AI uploads to royalties
Deezer’s central allegation is direct: a large portion of the AI-generated upload wave is tied to streaming fraud—actors who upload huge amounts of low-cost content, then attempt to generate artificial listening to siphon money from the royalty pool.
In its January 29, 2026 release, Deezer said up to 85% of streams of fully AI-generated tracks are fraudulent, and that those streams are demonetized—removed from the royalty pool. An earlier snapshot circulated widely in mid-2025 reporting, including The Guardian, citing Deezer’s estimate that up to 70% of streams of AI-generated music on the service were fraudulent.
That change in figure (from ~70% to up to 85%) matters less as a precise measurement than as a sign of how intensely Deezer is framing the problem: not primarily as an artistic debate, but as an economic and integrity threat.
“The fight over AI music on streaming isn’t only about taste. It’s about theft—real or attempted—at industrial scale.”
How the math of streaming makes fraud everyone’s problem
Most streaming royalties are paid from a pro-rata pool: the platform’s subscription and ad revenue is divided across all eligible streams, so each rights holder’s payout is proportional to their share of total plays. In such a system, fraudulent plays can do two things at once:
- Inflate the denominator (total streams), diluting legitimate shares
- Redirect payouts toward the fraudster if fake streams are counted as real
Deezer’s demonetization claim is an attempt to reassure artists and rights holders: if fraudulent plays are removed, the pool is protected. The harder question is operational—how accurately can any platform detect fraud and AI generation at scale, and how quickly can it respond as tactics change?
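The pool mechanics above can be sketched in a few lines. This is a minimal illustrative model of a pro-rata royalty pool; the artist names, play counts, and revenue figures are hypothetical, not Deezer’s actual data or internal logic.

```python
# Illustrative pro-rata royalty pool (all names and numbers are hypothetical).
# Each rights holder earns: pool_revenue * (their plays / total plays).

def payouts(pool_revenue, plays_by_artist):
    """Split a revenue pool across artists in proportion to their plays."""
    total = sum(plays_by_artist.values())
    return {name: pool_revenue * plays / total
            for name, plays in plays_by_artist.items()}

pool = 1_000_000  # monthly revenue to distribute, in dollars
honest = {"artist_a": 600_000, "artist_b": 400_000}

baseline = payouts(pool, honest)            # artist_a earns 600,000.0

# A fraudster injects 250,000 fake plays on their own uploads.
# The fake plays inflate the denominator AND claim a payout slice.
with_fraud = {**honest, "fraud_upload": 250_000}
diluted = payouts(pool, with_fraud)         # artist_a falls to 480,000.0

# Demonetization: flagged plays are removed before the split,
# restoring the legitimate shares.
clean = {k: v for k, v in with_fraud.items() if k != "fraud_upload"}
restored = payouts(pool, clean)

print(baseline["artist_a"], diluted["artist_a"], restored["artist_a"])
```

The sketch shows both effects named above: the fraudulent plays dilute every legitimate share and redirect money to the fraudster, and removing them before the split protects the pool.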
Detection, labeling, and exclusion: Deezer’s three-part response
Labeling AI-generated tracks for listeners
Labels are a deceptively serious intervention. They change how listeners interpret what they hear, how journalists report on charts, and how rights holders evaluate the platform. Labels also invite debate: some listeners want full disclosure; others worry that labeling becomes stigma, or that detection errors could misidentify human work.
Excluding fully AI-generated tracks from recommendations
That move acknowledges an uncomfortable truth about streaming: discovery is oxygen. If recommendation slots can be gamed with cheap, mass-produced tracks, the most valuable real estate on the platform becomes vulnerable. Excluding fully AI-generated tracks from recommendations isn’t a moral verdict; it’s an incentive redesign.
Selling the detection tool to other platforms
That step turns a defensive measure into a potential industry standard—if others adopt it. It also invites scrutiny. Any detection tool must answer the hardest questions transparently: false positives, false negatives, model updates, appeals, and who gets to define “fully AI-generated” in the first place.
The listening reality: why AI can be 39% of uploads but a small share of streams
Deezer and multiple media reports have emphasized that fully AI-generated tracks remain a small share of actual listening in many contexts—sometimes characterized as low single-digit percentages of streams. The exact number varies by reporting context, but the direction is clear: audiences are not yet streaming AI music in proportion to its arrival rate.
That gap has several plausible explanations—none of which require guessing beyond the evidence Deezer has made public.
Discovery is a bottleneck, not a blank check
Recommendation systems also act as gatekeepers. If Deezer is excluding fully AI-generated tracks from algorithmic recommendations, that alone would suppress their listen share relative to upload share.
Fraud tries to substitute for fandom
The public tension here is straightforward: stream counts have become a proxy for popularity, and popularity is a proxy for money and opportunity. Fraud exploits the proxy.
What this means for artists, labels, and listeners—practical takeaways
Here are the practical implications, grounded in what Deezer has said and what its policies suggest.
For artists and songwriters: visibility becomes the scarce resource
- More friction in distribution and verification
- Greater reliance on trusted channels (editorial playlists, direct fan engagement)
- Higher stakes for metadata accuracy and rights clarity
Deezer’s demonetization stance is designed to protect artists, but it also signals a more policed environment.
For labels and distributors: intake quality becomes reputational
- Improve screening and provenance checks
- Separate AI content clearly in delivery metadata
- Cooperate with platform detection and labeling standards
Deezer’s choice to commercialize detection tech hints at a future where proof and labeling become routine parts of distribution, not optional add-ons.
For listeners: transparency is coming, whether you want it or not
The immediate change is visible: AI labels on tracks and fewer fully AI-generated songs surfacing in recommendations. The deeper point is that streaming services are reasserting editorial power—algorithmic and policy-driven—after years of presenting themselves as neutral pipes.
Practical implications Deezer’s disclosures point toward
- Tighter distribution and verification for legitimate artists
- Greater importance of metadata accuracy and rights clarity
- Upstream pressure on distributors to screen and label AI content
- More visible user-facing transparency (labels) and reduced algorithmic exposure for fully AI tracks
The industry split: cultural anxiety vs. system integrity
A listener might enjoy an AI-generated ambient track and still agree that streaming fraud should be stamped out. A label might experiment with AI in production workflows and still oppose mass upload schemes designed to manipulate royalties.
TechCrunch’s coverage connects Deezer’s labeling initiative explicitly to fraud. Le Monde describes an industry “at a loss,” highlighting the uncertainty platforms face when supply grows faster than their governance.
None of these perspectives require treating AI music as inherently illegitimate. They require treating scale and incentives as decisive.
A real-world case study in policy, not philosophy: Deezer’s demonetization stance
That policy is a case study in how streaming platforms may respond more broadly:
- Detect AI-generated content at ingestion
- Monitor its streaming behavior for fraud signals
- Remove fraudulent activity from payouts
- Reduce discovery incentives by excluding content from recommendations
- Add labeling to shape listener expectations
Whether these measures become industry norms depends on whether other platforms follow Deezer into public disclosure—and whether rights holders demand comparable transparency elsewhere.
Conclusion: 39% is not a verdict—it’s a warning light
The number does something more valuable: it tells you where the pressure is building.
A streaming ecosystem designed for human-scale creativity is now absorbing machine-scale production. Deezer’s response—detection, labeling, recommendation exclusion, and demonetization—signals that platforms are preparing for a long fight over trust, incentives, and money.
The next phase won’t be decided by philosophy alone. It will be decided by operational definitions, audit trails, and whether the industry can agree on a simple principle: if streaming is the marketplace, then the marketplace has to be defendable.
Frequently Asked Questions
Did Deezer say 39% of streams are AI-generated?
No. Deezer’s January 29, 2026 press release says AI-generated tracks are roughly 39% of daily intake, meaning uploads/deliveries to the platform. Deezer and media coverage have separately emphasized that fully AI-generated tracks are still a small share of listening in many contexts, even as uploads surge.
What exactly did Deezer report in January 2026?
Deezer said “60,000 AI-tracks are now uploaded per day,” representing “roughly 39% of daily intake.” Deezer presented the figure as a one-year milestone for its AI-music detection tool, and linked the surge to efforts to combat streaming fraud and protect royalties.
How does the January 2026 number relate to Deezer’s earlier 18% figure?
In April 2025, Deezer said 18% of all new music uploaded was “fully AI-generated.” In January 2026, Deezer reported roughly 39% of daily intake as AI tracks. The two datapoints suggest rapid acceleration, though they come from different timeframes and could reflect evolving definitions or improved detection.
Why is Deezer focusing so much on fraud?
Deezer argues that a significant portion of AI-music activity is tied to streaming manipulation designed to siphon royalties. In January 2026, Deezer said up to 85% of streams of fully AI-generated tracks are fraudulent and that those streams are demonetized—removed from the royalty pool to protect legitimate rights holders.
What does “demonetized” mean in this context?
“Demonetized” means Deezer says it removes identified fraudulent streams from royalty calculations, so they do not earn payouts. The goal, according to Deezer’s January 2026 statement, is to ensure royalties for human artists, songwriters, and other rights owners are not affected by fraudulent activity.
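In code terms, demonetization is a filter applied before royalty counting. This is a hypothetical sketch of that idea, not Deezer’s actual detection or accounting system; the track names and fraud flags are invented for illustration.

```python
# Hypothetical sketch: fraud-flagged streams are dropped before royalty
# counting, so they earn nothing. Data is invented for illustration.
streams = [
    {"track": "human_song", "flagged_fraud": False},
    {"track": "ai_track_1", "flagged_fraud": True},
    {"track": "human_song", "flagged_fraud": False},
    {"track": "ai_track_1", "flagged_fraud": True},
]

# Only streams that pass the fraud filter are eligible for payouts.
eligible = [s for s in streams if not s["flagged_fraud"]]

counts = {}
for s in eligible:
    counts[s["track"]] = counts.get(s["track"], 0) + 1

print(counts)  # only the legitimate plays remain in the royalty count
```

The key design point is where the filter sits: flagged streams are excluded before the pool is divided, so they neither earn payouts nor dilute anyone else’s share.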
What steps has Deezer taken against AI-generated music?
Deezer’s publicly described measures include:
- Detecting fully AI-generated tracks (and marking the one-year milestone in January 2026)
- Labeling AI-generated music for users (reported by TechCrunch in June 2025)
- Excluding fully AI-generated tracks from algorithmic recommendations (reported by TechCrunch)
- Commercializing its detection tech by offering to license/sell it (January 2026 press release)