TheMurrow

Deezer Says ~39% of New Uploads Are AI—So Why Are Your Favorite Artists Still Getting Blamed for ‘AI Slop’ (and Who’s Actually Cashing the Checks)?

Deezer’s “39%” is about what hits the intake pipe—not what listeners play. The real story is the gap between uploads, streams, and a fraud economy targeting royalties.

By TheMurrow Editorial
April 12, 2026

Key Points

  • Separate the stats: Deezer’s 39% refers to daily AI uploads, while fully AI music accounts for roughly 0.5% of streams.
  • Follow the money: Deezer links AI catalogs to fraud, saying it detects and demonetizes up to 85% of AI-track streams.
  • Watch discovery: Deezer says it won’t recommend fully AI tracks—so catalogs can explode without changing what most listeners hear.

A number—39%—has been ricocheting around the music internet like a fire alarm.

It sounds like a verdict: AI has arrived, swallowed streaming whole, and is now what everyone is hearing. Except that’s not what the number means, and it’s not what listeners are doing.

In early 2026, Deezer reported that daily deliveries of AI-generated music averaged around 60,000 tracks in January 2026, representing about 39% of all music delivered to the platform each day. That statistic is real, but it describes uploads, not listening. You can flood a warehouse without changing what people buy.

The more revealing detail sits elsewhere: Deezer has said fully AI-generated music accounts for about 0.5% of total streams on the service. And within that small slice, Deezer and press reporting say fraud is rampant—so rampant that the “AI boom” looks less like a consumer trend than an attempted raid on the payout system.

“The headline isn’t that listeners switched to AI music. The headline is that upload pipes can be exploited at industrial scale.”

— TheMurrow Editorial

  • 39%: about 39% of daily deliveries/uploads were AI-generated in January 2026, per Deezer. An intake metric, not a listening metric.
  • 60,000: roughly 60,000 AI-generated tracks per day delivered in January 2026, per Deezer. An industrial-scale supply surge.
  • ~0.5%: fully AI-generated music is about 0.5% of total streams, per Deezer. Tiny compared with the upload share.

What Deezer actually said—and what the 39% figure is (and isn’t)

Deezer’s 39% figure is easy to misunderstand because it arrives with the force of a cultural diagnosis. But in Deezer’s own phrasing, the claim is narrow: “Daily deliveries of AI-generated music averaged around 60,000 tracks in January 2026,” which the company framed as roughly 39% of all music delivered every day. That framing appeared in Deezer’s March 2026 business update. The metric is supply-side: what arrives at the platform, not what people play.

A quick look at Deezer’s earlier public numbers shows how fast the supply curve has moved. In June 2025, Deezer announced what it called the first AI tagging system for a streaming service and began speaking more bluntly about volume and fraud. By September 11, 2025, industry coverage cited Deezer saying 28% of new uploads were AI-generated. By January 2026, Deezer’s figure rose again to about 39%—a jump that signals a structural change in how music is being delivered, not necessarily how it’s being enjoyed.

The interpretive mistake is a common one: readers see “39%” and assume “39% of what I’m hearing.” That assumption collapses two different marketplaces into one. Uploads measure supply moving through the delivery pipeline; streams measure demand. When those metrics diverge sharply, the story is rarely aesthetic. It’s usually economic.

The metric that matters: streams, not submissions

Deezer’s own stream share claim—about 0.5% of total streams for fully AI-generated music—is the key counterweight to the 39% upload figure. Uploads can spike because distribution is cheap and automated; listening tends to move slower because taste is stubborn.
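As a back-of-envelope check, the two figures Deezer published imply a rough scale for the whole delivery pipe. The sketch below uses only the numbers cited in this article; the division (and the assumption that the 39% and the 60,000 describe the same delivery pool) is ours:

```python
# Back-of-envelope arithmetic using only the figures Deezer published.
# Assumption: the 39% share and the 60,000/day count describe the same pool.

ai_uploads_per_day = 60_000   # AI tracks delivered daily (Jan 2026, per Deezer)
ai_upload_share = 0.39        # AI share of daily deliveries (per Deezer)
ai_stream_share = 0.005       # AI share of total streams (~0.5%, per Deezer)

# Implied total daily deliveries across all music, AI and human alike
total_uploads_per_day = ai_uploads_per_day / ai_upload_share
print(f"Implied total daily deliveries: ~{total_uploads_per_day:,.0f}")

# How over-represented AI is at the intake pipe relative to listening
gap = ai_upload_share / ai_stream_share
print(f"AI's upload share exceeds its stream share by ~{gap:.0f}x")
```

The roughly 78-fold gap between the two shares is the article’s core point in one number: supply has surged far ahead of demand.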

A platform can be inundated while its users remain largely indifferent. That doesn’t make the flood harmless. It just changes what the flood is: not a takeover of culture, but a stress test of infrastructure.

The upload flood vs. what people listen to: why these numbers can both be true

The 39% statistic is a measure of volume, not impact. It tells you the pipes are wide open. It doesn’t tell you the audience is paying attention.

Deezer’s “about 0.5% of total streams” claim suggests that—so far—listeners haven’t embraced fully AI-generated catalogs at anything like the rate those catalogs are being delivered. A sensible reading is that much of the new material is not being made to be heard by humans in the first place. It’s being made to be processed by systems: ingestion tools, recommendation engines, payout formulas.

That distinction matters because the moral panic tends to focus on taste: “AI songs are replacing artists.” The platform reality often points to something less romantic and more mechanical: if you can upload at scale, you can also probe a service’s weaknesses at scale.

Discovery is a choke point—and Deezer is tightening it

Deezer has said it won’t actively promote fully AI-generated tracks in editorial playlists or algorithmic recommendations, according to coverage of its June 2025 moves. That matters because most streaming consumption flows through discovery. If a platform suppresses promotion, a massive catalog can sit there like an uninhabited city: technically present, rarely visited.

From Deezer’s perspective, this is both consumer protection and market hygiene. From an AI-music creator’s perspective, it’s a hard limit on reach—especially when the product depends on the platform doing the marketing.

The practical implication for listeners is subtle: you may not “see” much of what’s being uploaded, even if it’s pouring in by the tens of thousands of tracks per day. The platform can be packed and still feel normal.

“A catalog can explode in size without changing what the average listener hears—if discovery stays closed.”

— TheMurrow Editorial

The fraud engine hiding inside “AI music”: the money problem Deezer keeps pointing to

Deezer has been unusually direct about what it thinks is happening inside the AI category: fraud.

In June 2025, reporting cited Deezer estimates suggesting up to about 70% of streams of AI-generated music were fraudulent—bot or farm activity rather than human listening. By January 2026, Deezer’s own communications sharpened the claim: it said it detects and demonetizes up to 85% of streams on AI-generated music as fraudulent, removing them from the royalty pool.

Those are staggering numbers, not because they suggest AI music is popular, but because they suggest a portion of AI music is being used as a vehicle for extracting money—attempting to generate royalties through automated listening rather than winning attention.

  • ~70%: June 2025 reporting cited Deezer estimates that up to ~70% of AI-generated music streams were fraudulent.
  • 85%: Deezer said it detects and demonetizes up to 85% of streams on AI-generated music as fraudulent, removing them from royalties.

What “demonetization” actually changes

Deezer says fraudulent streams it detects are removed from royalty calculations. That move targets the incentive structure: if the payout disappears, the scheme becomes less attractive. It also reframes the broader debate. When people argue about whether AI music is “good,” they may be ignoring a different question: whether the streaming economy can be gamed faster than it can be protected.
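A toy model makes the incentive shift concrete. The sketch below uses a simplified pro-rata pool with entirely hypothetical figures; it is not Deezer’s actual payout formula, only an illustration of why removing detected streams from the royalty math removes the motive:

```python
# Simplified pro-rata sketch of why demonetization matters.
# NOT Deezer's real payout formula; all figures are hypothetical.

def payout(track_streams: int, pool_streams: int, pool_eur: float) -> float:
    """Share of a fixed royalty pool, proportional to counted streams."""
    return pool_eur * track_streams / pool_streams

POOL_EUR = 1_000_000        # hypothetical monthly royalty pool
POOL_STREAMS = 500_000_000  # hypothetical total counted streams
bot_streams = 2_000_000     # hypothetical streams a fraud farm generates

# Before enforcement: every bot stream earns a share of the pool
before = payout(bot_streams, POOL_STREAMS, POOL_EUR)

# After enforcement: up to 85% detected and stripped from royalty math
DETECTION_RATE = 0.85
surviving = round(bot_streams * (1 - DETECTION_RATE))
after = payout(surviving, POOL_STREAMS, POOL_EUR)

print(f"Fraud payout before: {before:,.2f} EUR, after: {after:,.2f} EUR")
```

Even this crude model shows the mechanism: the scheme’s revenue falls in direct proportion to the detection rate, while its costs stay the same.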

The public is trained to read platforms culturally. Deezer is inviting readers to view the problem financially. A flood of synthetic tracks looks very different when you treat it as an attack on a payment system rather than a shift in taste.

A case study in incentives: why upload volume is such a tempting lever

Consider what the numbers imply. 60,000 AI-generated tracks per day is not a hobbyist scene. It resembles an industrial pipeline—one that benefits from cheap generation and cheap delivery. Even if most tracks never find real listeners, they can still be useful to bad actors if they help test fraud detection, attempt to slip through filters, or target payout loopholes.
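To see why volume is such a tempting lever, consider a deliberately crude expected-value sketch. Every figure below except the 60,000-per-day delivery rate is hypothetical; the point is the shape of the math, not the specific numbers:

```python
# Why scale is the lever: tiny per-track returns add up when generation
# and delivery are nearly free. All figures except tracks_per_day are
# hypothetical and chosen only to illustrate the incentive structure.

tracks_per_day = 60_000        # the delivery rate Deezer reported
cost_per_track = 0.01          # hypothetical: generation + delivery cost, EUR
expected_eur_per_track = 0.05  # hypothetical: tiny expected yield per track

daily_cost = tracks_per_day * cost_per_track
daily_yield = tracks_per_day * expected_eur_per_track
print(f"Daily cost {daily_cost:,.0f} EUR vs expected yield {daily_yield:,.0f} EUR")
```

Under assumptions like these, the operation profits even if almost every track earns nothing, which is exactly why enforcement has to attack the per-track yield rather than the upload pipe.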

No single statistic proves motive. But Deezer’s repeated emphasis on fraud—70% fraudulent streams (as reported in 2025) and up to 85% demonetized (per Deezer’s 2026 statement)—makes the platform’s interpretation clear: much of the “AI flood” is less art movement than monetization attempt.

“When a platform says it’s demonetizing 85% of AI-track streams as fraud, the genre isn’t the story. The business model is.”

— TheMurrow Editorial

Deezer’s response: labeling, recommendation limits, and a new kind of gatekeeping

Deezer’s most visible intervention has been labeling. In June 2025, the company announced an AI tagging system that displays a label for fully AI-generated tracks on albums. The claim was bold—Deezer called it the first such tagging system on a streaming service—and it aimed to do something platforms rarely do well: give ordinary listeners a clear signal.

The labeling matters for a simple reason: without it, listeners fill the void with suspicion. A song sounds slightly uncanny, a vocal line seems too perfect, a cover art style feels templated—and suddenly social feeds are full of allegations. Labels don’t solve the argument about whether AI music belongs on streaming. But they reduce the chaos of guessing.

The second intervention is quieter: don’t recommend it

Alongside labeling, Deezer has said fully AI-generated music will not be promoted through editorial playlists or algorithmic recommendations, as reported in coverage of its policy approach. That is a form of gatekeeping, and it will be controversial precisely because it works.

Recommendation systems are the largest distributor of attention in the modern music economy. Saying “we won’t amplify this category” doesn’t ban it. It simply denies it oxygen.

Platforms have always shaped taste by shaping exposure. Deezer’s move makes that power explicit—and positions it as a defense of listeners and artists rather than an editorial preference.

The third intervention is economic: remove fraud from payouts

The most consequential step may be the least discussed: Deezer says it removes detected fraudulent AI streams from the royalty pool. That directly targets what appears to be a major driver of the upload spike.

If Deezer’s fraud estimates are even close, then demonetization is not a marginal fix. It’s a structural response to a structural exploitation attempt.

Key Insight

Deezer’s playbook is three-part: label fully AI tracks, deny discovery oxygen, and strip fraud from payouts—treating AI as a trust-and-incentives problem.

Why fans blame artists anyway: misattribution, impersonation, and “AI slop” as a catch-all

The cultural argument about AI music often misfires because people use “AI” to describe three different things:

- Fully AI-generated tracks uploaded at scale (often anonymous, high-volume, low-identity catalogs)
- Human-led music using AI tools (mastering, stems, vocal processing, writing assistance)
- Unauthorized voice cloning and impersonation (deepfake-style releases)

Deezer’s labeling is specifically about tracks it classifies as “100% AI-generated,” as noted in coverage. That’s a narrower category than what most people mean when they say “AI slop.”

The confusion is not merely semantic. It fuels accusations—especially when tracks appear under the wrong name. Spotify has publicly emphasized that a major abuse vector is impersonation and fraudulent delivery to the wrong artist profile, a problem that can make it look like a real artist released material they never touched.

Listeners can’t reliably tell—and Deezer says so

Deezer has also claimed research suggesting many people cannot reliably distinguish fully AI-generated tracks from human-made music. That claim helps explain why blame travels so quickly. If listeners can’t confidently identify what’s synthetic, they default to pattern recognition: the vocal timbre is similar to a famous singer; the artist page looks familiar; the “release” appears overnight.

The result is a kind of digital witch hunt where the visible target is the artist, while the underlying mechanism may be distribution abuse. When labeling is inconsistent across platforms—and there is no universal standard requiring it—guesswork becomes the norm.

Practical takeaway for listeners and artists

For listeners: treat viral accusations with caution. A track appearing under a famous name is not proof of authorship. For artists: the problem is reputational as much as financial. Profile hygiene, monitoring, and rapid reporting channels matter more than ever when delivery systems can be exploited.

Editor’s Note

In this debate, “AI” often conflates fully generated spam, legitimate tool-assisted production, and outright impersonation—three different problems with different fixes.

The missing standard: why the industry can’t agree on what to disclose

One reason the AI debate stays messy is that the industry lacks a shared disclosure regime. Without a universal obligation to label AI-generated music, platforms can take different approaches—or none at all—and users are left with rumor as an interface.

Deezer’s bet is that labeling and demotion can preserve trust. Other platforms may prioritize openness, fearing that labeling becomes a value judgment or a moderation nightmare. Some creators argue that tool-based creation is a spectrum and that rigid labels punish legitimate experimentation.

All of those concerns can be true at once. But the absence of standards has a cost: it externalizes the burden of detection to listeners, who are least equipped to do it, and to artists, who suffer when impersonation or spam lands on their pages.

Expert perspective: Deezer’s own framing

Deezer’s public statements repeatedly tie AI-generated uploads to platform integrity—fraud detection, demotion from recommendations, and transparency through labels. In other words, Deezer is treating AI not as a philosophical debate but as a product and trust problem.

That framing may read as self-serving, especially because Deezer has also described licensing its detection technology as a B2B monetization opportunity. Yet the existence of a commercial angle doesn’t invalidate the underlying issue. Fraud is real whether or not someone sells a solution.

Who benefits—and what to watch next

The most important question for readers isn’t “Will AI write hits?” It’s “Who profits from the current confusion?”

The upload spike suggests at least one group benefits: entities capable of generating and delivering music at enormous scale. If Deezer is right that a high percentage of AI-track streaming is fraudulent—up to 85% detected and demonetized, per Deezer—then part of the ecosystem is attempting to turn streaming payouts into a programmable income stream.

Deezer’s countermeasures—labeling, recommendation suppression, and demonetization—aim to break that loop. The open question is whether those defenses become a template across the industry or a differentiator that leaves other services more exposed.

What readers should track (practical implications)

- Uploads vs. streams: When you see “AI is X%,” ask which metric it is. Deezer’s 39% was uploads; Deezer’s ~0.5% was streams.
- Discovery policy: If a platform demotes AI-generated tracks from recommendations, the cultural impact may stay limited even as catalog volume explodes.
- Fraud enforcement: Demonetization changes incentives. Weak enforcement invites industrial abuse.
- Attribution integrity: Impersonation and profile-hijacking are reputational landmines, not just platform housekeeping.

The larger lesson is uncomfortable: streaming has always been vulnerable to scale. AI didn’t create that vulnerability; it lowered the cost of exploiting it.

What to watch when you see an “AI is X%” headline

  • Ask: Is the number uploads/deliveries or streams/listening?
  • Check: Does the platform recommend this content or keep it out of discovery?
  • Look: Are there stated fraud rates and demonetization policies?
  • Verify: Could it be impersonation/misdelivery to the wrong artist profile?

The real story behind 39%: an attention economy meets an automation economy

Deezer’s 39% number is alarming in the way a factory siren is alarming. Something is happening at scale. But the first-order story isn’t that listeners have embraced synthetic music. It’s that a platform’s intake systems can be saturated—cheaply, quickly, and perhaps strategically.

Meanwhile, the listening metric—about 0.5% of streams for fully AI-generated music, per Deezer—suggests culture is not moving as fast as automation. That gap between what can be produced and what can be loved is where the next phase of streaming economics will be fought.

If Deezer is correct that the majority of AI-track streaming it sees is fraudulent—reported as up to ~70% in 2025 coverage, and described as up to 85% demonetized in a 2026 Deezer statement—then the industry’s challenge isn’t a battle over authenticity. It’s a battle over incentives.

The future of music on streaming may hinge less on whether AI can write a chorus and more on whether platforms can prevent their royalty systems from being treated like a slot machine.
About the Author
TheMurrow Editorial is a writer for TheMurrow covering entertainment.

Frequently Asked Questions

Is 39% of the music on Deezer AI-generated now?

No. Deezer’s ~39% figure refers to daily deliveries/uploads, not the share of the overall catalog and not the share of listening. Deezer reported about 60,000 AI-generated tracks delivered per day in January 2026, which it framed as ~39% of all daily deliveries. Upload volume can rise dramatically without becoming what most users play.

How much of Deezer listening is actually fully AI-generated?

Deezer has said fully AI-generated music represents about ~0.5% of total streams on the platform. That’s a small fraction compared with the upload share. The mismatch suggests a major supply-side surge without a corresponding demand-side takeover.

Why does Deezer say AI music is linked to streaming fraud?

Because Deezer and press reporting have described extremely high fraud levels in this category. In June 2025 reporting, Deezer estimates suggested up to ~70% of streams of AI-generated music were fraudulent. In a January 2026 statement, Deezer said it detects and demonetizes up to 85% of streams on AI-generated music as fraudulent, removing them from the royalty pool.

What is Deezer doing about AI-generated tracks?

Deezer has taken three major steps described in its communications and coverage: labeling fully AI-generated tracks on albums, not promoting AI-generated tracks in editorial playlists or algorithmic recommendations, and demonetizing streams it detects as fraudulent—removing those streams from royalty calculations.

Why do AI tracks sometimes appear under real artists’ names?

One major reason is impersonation and misdelivery—fraudulent uploads that land on the wrong artist profile. Spotify has highlighted impersonation and incorrect delivery as an abuse vector, which helps explain why fans sometimes think a favorite artist “released” an AI track when it may be distribution abuse rather than authorship.

Can listeners reliably tell whether a song is AI-generated?

Not consistently. Deezer has claimed research indicating many listeners cannot reliably distinguish fully AI-generated music from human-made tracks. That uncertainty fuels misattribution and online panic—especially when platforms don’t have clear, consistent labeling.
