TheMurrow

7 Million AI Songs a Day Are Hitting Streaming Apps—So Why Are Your Royalties (and Recommendations) Getting Worse Even If You Never Press Play?

The fallout from AI music isn’t just about taste—it’s about streaming infrastructure: ingestion, fraud, recommendations, and the accounting that decides what gets paid. Even if you skip every AI track, the pipes can still distort discovery and dilute payouts.

By TheMurrow Editorial
March 1, 2026

Key Points

  1. Separate creation from distribution: “7 million AI songs/day” reflects generation-scale output, while real harm often starts at ingestion and accounting.
  2. Track Deezer’s numbers: tens of thousands of fully AI tracks delivered daily, yet only ~0.5% of streams—many allegedly fraudulent.
  3. Expect upstream fallout: pro‑rata pools, fraud filtering, and noisier recommendations can cut payouts and discovery even if you never play AI music.

A strange kind of music boom is underway: not the kind driven by a breakout star or a new sound, but by machines that can manufacture songs faster than anyone can listen to them.

In recent months, one number has ricocheted around artist circles and tech headlines—“7 million AI songs a day.” It’s the sort of figure that lands like a threat: if the world can mint millions of tracks daily, what happens to the value of the song you spent months writing?

The more unsettling answer is that the damage doesn’t require your attention. You can ignore every AI-generated track on your “Release Radar” and still feel the effects—because the pressure point isn’t only taste. It’s infrastructure: streaming ingestion, recommendation systems, fraud detection, and the accounting rules that turn a finite pool of subscription money into payouts.

“The harm doesn’t require you to press play. It can happen upstream—in the pipes that decide what gets surfaced and what gets paid.”

— TheMurrow (Pullquote)

The “7 million AI songs a day” claim—and what it actually describes

The “7 million songs/day” figure is not a verified count of new tracks appearing on Spotify, Apple Music, or YouTube each day. The claim is tied to Suno, an AI music-generation platform, and it arrives via an artists’ pressure effort called “Say No To Suno,” covered by MusicRadar. The campaign argues that Suno is producing around 7 million AI-generated tracks per day, and that this output risks overwhelming distribution and “diluting” royalty pools for human artists.

That framing contains a crucial ambiguity: generation isn’t distribution. AI systems can create vast quantities of audio, but major streaming services have multiple layers between “a track exists” and “a track earns money.” Readers deserve clean definitions, because policy debates turn on them.
7 million/day: the headline-grabbing claim is linked to Suno’s creation-scale output via the “Say No To Suno” campaign, not audited DSP catalog additions.

Creation vs. distribution: four different “volumes” people confuse

A sober way to interpret the 7 million figure is to treat it as creation-layer output—how many tracks can be generated—rather than an audited count of tracks that make it into consumer catalogs. The pipeline breaks down into four stages:

- Creation volume: how many tracks are generated (for example, inside Suno).
- Distribution volume: how many tracks are submitted into a DSP’s ingestion pipeline.
- Catalog acceptance: how many tracks actually go live.
- Consumption: how much real listening occurs.

The distinction matters because the real harm can occur even if consumption stays low. A flood at the creation layer can still drive spam submissions, fraudulent streaming, and policy shifts that reshape payouts and discovery for everyone else.

“A million tracks can be a rounding error in listening—and still a crisis in accounting.”

— TheMurrow (Pullquote)

Deezer’s rare transparency: what the streaming gate actually looks like

Most major platforms speak cautiously about AI uploads, offering the occasional enforcement headline but few hard numbers. Deezer has become an outlier: unusually data-forward, publishing regular figures and describing what it’s doing to contain the problem.

Deezer’s disclosures matter for one simple reason: they provide a real-world window into the scale of AI submissions at the gate, not at the hype layer.

The numbers: tens of thousands of fully AI tracks per day

Deezer reported a rapid rise in “fully AI-generated” tracks delivered to its service:

- April 2025: about 20,000 fully AI-generated tracks per day, roughly 18% of daily deliveries. (Deezer newsroom)
- September 11, 2025: 30,000+ per day, about 28% of daily delivered music. (Deezer newsroom)
- November 2025: roughly 50,000 AI tracks/day, around 34% of daily submissions, as reported by Music Business Worldwide citing Deezer figures.
- Late January 2026: multiple outlets cited Deezer at roughly 60,000 fully AI-made tracks/day, around 39% of daily deliveries (some of this is secondary reporting rather than a Deezer newsroom post).

These are not abstract “AI is coming” warnings. They describe day-to-day operational reality: a meaningful share of incoming music being generated by machines.
20,000/day: Deezer’s April 2025 figure for fully AI-generated tracks delivered to the platform, about 18% of daily deliveries.
30,000+/day: Deezer’s September 11, 2025 figure, about 28% of daily delivered music.
60,000/day: cited by multiple outlets for late January 2026, around 39% of daily deliveries (some of this is secondary reporting).

“Nobody listens” and “it still matters” can both be true

Deezer’s own messaging complicates the popular fear that AI music is winning hearts. When Deezer rolled out AI labeling in June 2025, TechCrunch reported the company’s view that most AI tracks don’t go viral. More pointedly, Deezer has said a large share of the streams they do receive are fake—and TechCrunch reported Deezer’s estimate that up to about 70% of streams of fully AI-generated tracks are fraudulent.

Meanwhile, Music Business Worldwide reported Deezer’s claim that fully AI music represented only about 0.5% of all streams. So the supply is swelling, but legitimate demand appears small.

That combination—huge supply, low genuine listening, high fraud—points to the heart of the issue: the incentive isn’t artistry; it’s arbitrage.

“If only 0.5% of streams are AI, why ingest 30,000 of them a day? Because the point isn’t culture—it’s exploitation.”

— TheMurrow (Pullquote)

How AI “songs you never play” can still cost artists money

Streaming payouts are not a simple “you played my song, I get paid” mechanic. They’re the result of accounting systems that distribute finite revenue across massive catalogs—and those systems can be distorted without any listener knowingly choosing AI.

Deezer’s fraud figures help explain how. If a large share of streams for fully AI tracks are fraudulent, then the “listener” is often not a person at all. It’s a bot, a farm, or a coordinated manipulation effort designed to turn low-cost audio into payout.

The pro‑rata problem: dilution without fandom

Most streaming services historically rely on pro‑rata models: subscription and ad revenue go into a pool, then payouts are allocated by share of total streams. Under that structure, even a small amount of fraudulent listening can matter because it changes the denominator—total streams—and shifts slices of the revenue pie.

Even if your fans never click an AI track, your effective share can shrink when:

- fake streams inflate total platform streams,
- fraud siphons payouts toward manipulated tracks,
- discovery systems become noisier, reducing the odds your song reaches real listeners.
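The dilution mechanic is easiest to see with arithmetic. A minimal sketch, using entirely invented figures: a fixed revenue pool is split by each track's share of total streams, so fraudulent streams entering the denominator shrink every legitimate track's payout even though real listening never changed.

```python
def pro_rata_payout(pool: float, streams: dict[str, int]) -> dict[str, float]:
    """Allocate a fixed revenue pool by each track's share of total streams."""
    total = sum(streams.values())
    return {track: pool * n / total for track, n in streams.items()}


# Hypothetical monthly pool and stream counts (all numbers invented).
pool = 1_000_000.0

# Before: only legitimate listening in the pool.
before = pro_rata_payout(pool, {"human_artist": 50_000, "rest_of_catalog": 950_000})

# After: 100,000 botted streams on AI tracks join the denominator.
after = pro_rata_payout(
    pool,
    {"human_artist": 50_000, "rest_of_catalog": 950_000, "ai_fraud": 100_000},
)

print(before["human_artist"])  # 50000.0
print(after["human_artist"])   # ~45454.55, roughly 9% less with identical real listening
```

In this toy setup the human artist's streams are unchanged, yet their payout drops about 9% because the fraudulent volume both inflates the total and captures a slice of the pool. This is the sense in which pro-rata accounting lets you lose money without losing fans.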

Deezer has said it is filtering fraudulent streams out of royalty payments, according to TechCrunch. That’s an explicit admission that the problem isn’t hypothetical: streams are being manufactured, and platforms are having to decide whether to pay them.

Key Insight

In a pro‑rata system, you can lose money without losing fans: inflated totals and fraud can shrink your revenue share even if nobody “chooses” AI music.

Recommendations are an economic system, not just a convenience

Music discovery isn’t merely a user feature; it’s an allocation mechanism. If a platform’s systems are forced to process mountains of low-signal, high-spam audio, the cost is paid in degraded recommendations, slower editorial workflows, and more aggressive filtering. Human artists can get caught in that net—especially independent musicians whose metadata, release cadence, or audience profile resembles spam patterns.

So yes, you can be “harmed” without listening. The infrastructure that determines what gets surfaced—and what gets monetized—can become less reliable for everyone.

What Deezer is doing: labeling, de‑ranking, and the politics of transparency

Deezer’s response is unusually direct. In June 2025, Deezer began labeling AI-generated albums and tracks, a move reported by TechCrunch. The company also stated that most AI tracks do not go viral—yet many of their streams are fake, reinforcing the fraud narrative.

By September 2025, Deezer was going further. In its newsroom post about “28% fully AI-generated music,” Deezer said it tags fully AI-generated music and excludes it from algorithmic recommendations and editorial playlists—positioning itself, by its own account, as the only service explicitly taking that approach at the time.

What “labeling” really signals

Labeling is not just consumer information. It’s a policy claim: the platform is confident enough in its detection to mark releases as AI-generated, with potential economic consequences. That raises inevitable questions:

- How accurate is detection, and how often are artists misclassified?
- What counts as “fully AI-generated” versus partially assisted production?
- Who gets to contest a label?

Public reporting doesn’t yet provide error rates or appeals processes, so a responsible conclusion is limited: Deezer is choosing visibility over vagueness, and that choice will shape its relationship with both listeners and creators.

What Deezer’s labeling implies

Labeling is a trust claim, not just a tag: it signals detection confidence, carries economic consequences, and raises questions about definitions, accuracy, and appeals.

De‑ranking as a cultural stance

Excluding fully AI-generated music from recommendations and editorial playlists is not a neutral act. It implies an answer—at least provisionally—to a question streaming services have often dodged: should AI-generated music compete in the same discovery funnels as human releases?

Some readers will applaud that as common sense. Others will see it as gatekeeping, especially if the definition of “AI-generated” expands. What matters is that Deezer has made the trade-off explicit: protect discovery quality, even if it means limiting reach for a category of content.

Fraud, not fandom, is the accelerant—and it exploits the weakest part of streaming

Deezer’s reported figure—up to ~70% fraudulent streams for fully AI tracks—forces a reframing. The biggest risk is not that listeners suddenly prefer machine-made music. The risk is that streaming’s incentives invite industrial-scale manipulation, and AI makes the raw material cheap.

A synthetic track can be generated in minutes. A thousand synthetic tracks can be generated in an afternoon. Pair that with automated uploading and botted streaming, and you have a low-cost attempt to capture revenue or launder payouts through a system built for volume.

Why AI makes streaming fraud easier

Traditional streaming fraud already had a problem: it required content. AI reduces that requirement to near-zero marginal cost. That shifts the economics of scam attempts:

- More tracks to distribute risk across accounts.
- More releases to test what gets past ingestion checks.
- More “artist profiles” to keep enforcement playing whack‑a‑mole.

Deezer’s data suggests that much of what arrives is not intended to become part of culture. It’s intended to become part of an accounting ledger.

The collateral damage: stricter rules, higher friction

When platforms respond to spam and fraud, they often tighten systems across the board. Even without comparable figures from Spotify, the logic is familiar: to prevent abuse, platforms introduce more verification, more thresholds, more removals. The downside is that legitimate independents, especially those without label infrastructure, tend to feel the friction first.

The result can look like “my streams are down” or “my release didn’t get surfaced,” even when the listener’s taste hasn’t changed. The platform’s trust machinery has.

The creator’s dilemma: AI as tool, AI as competitor, AI as counterfeit

Artists are not monolithic on this issue. Some want hard bans. Others want licensing and compensation. Many use AI tools in mundane ways—brainstorming lyrics, generating reference tracks, accelerating ideation—without believing they’re replacing human authorship.

The controversy intensifies around two flashpoints embedded in the “Say No To Suno” argument as reported by MusicRadar: scale and derivation. If AI systems generate music at enormous volume, and if that output is perceived as “derived” from existing artists’ work, then the debate becomes less about innovation and more about extraction.

Three perspectives readers should hold at once

A clear-eyed view makes room for at least three positions:

1. AI as creative assistance: tools that expand access and speed up production, especially for newcomers.
2. AI as market competitor: fully generated tracks competing for attention, playlist slots, and sometimes payouts.
3. AI as counterfeit: impersonation, style-copying, and fraud, where the goal is to confuse systems or siphon revenue.

Deezer’s actions are aimed primarily at the third category—fraud and manipulation—while also constraining the second by removing fully AI tracks from recommendations.

Practical takeaways: what musicians, listeners, and platforms can do now

The reporting points to a messy reality: the problem is less “AI songs are better” than “AI songs are cheap enough to weaponize.” That suggests concrete priorities.

For musicians: protect your identity, watch the pipes

Even without platform-wide transparency, creators can reduce risk and improve response time:

- Monitor your artist profiles for lookalike releases and odd compilations.
- Track sudden spikes in streams from unusual geographies or playlists; fraud can backfire and trigger enforcement.
- Keep metadata clean (consistent artist name, correct credits), because filtering systems lean heavily on metadata patterns.

Deezer’s labeling also hints at a future where artists may need to be able to explain their process. Keeping session files and project documentation is not romantic, but it may become a practical defense.

Musician checklist: reduce risk fast

  • Monitor your artist profiles for lookalike releases and odd compilations
  • Track sudden spikes in streams from unusual geographies or playlists
  • Keep metadata clean (consistent artist name, correct credits)
  • Retain session files and project documentation in case process questions arise

For listeners: demand disclosure that doesn’t punish legitimate experimentation

Many listeners won’t care whether a track is AI-generated—until they’re misled. Labeling can serve consumer trust, but it must be paired with nuance: “fully AI-generated” is not the same thing as “used AI somewhere.”

Deezer’s approach—labeling plus recommendation exclusion—will attract both praise and criticism. Readers can push for something simpler and more universal: clear labeling standards and transparent enforcement, so neither artists nor audiences are guessing.

For platforms: treat AI spam as a security problem, not a genre

Deezer’s most useful contribution may be how it frames the issue: not as aesthetics, but as fraud and quality control. If up to ~70% of streams on fully AI-generated tracks are fraudulent, the response belongs alongside other platform integrity efforts.

That includes detection, demotion, and—most importantly—royalty filtering that prevents fake streams from being paid. Deezer has said it is already filtering fraudulent streams out of payments, as reported by TechCrunch. That is the kind of measurable intervention that matters.

The harder question: what happens when the flood becomes normal

The numbers will keep rising. Deezer’s progression—from 20,000 fully AI tracks/day in April 2025 to reports of 60,000/day by early 2026—is the shape of a curve, not a blip.

Meanwhile, “7 million songs/day” remains a potent symbol of creation-scale abundance, whether or not that many tracks ever touch a DSP catalog. The important point is not the exactness of the number; it’s what it implies: a world where the unit cost of a “song” approaches zero.

When songs become nearly free to manufacture, streaming services are forced to decide what a song is for. Culture? Background noise? Arbitrage? The answer will be expressed less in slogans than in quiet settings: what gets labeled, what gets recommended, what gets paid, what gets removed.

The listeners who “never press play” are still inside that system. So are the artists whose livelihoods depend on it. The next era of streaming may be defined by an unglamorous fight over integrity—because when anyone can mint music at scale, the scarce resource is no longer sound. It’s trust.
About the Author
TheMurrow Editorial is a writer for TheMurrow covering entertainment.

Frequently Asked Questions

Is it true that AI is making 7 million songs per day?

The “7 million AI songs a day” claim has been linked to Suno, not to audited daily uploads on Spotify or Apple Music. As reported by MusicRadar, it appears in an artists’ pressure campaign (“Say No To Suno”) and is best understood as a creation-layer output estimate. It does not mean 7 million tracks are being accepted into major streaming catalogs each day.

How many AI-generated tracks are actually being uploaded to streaming services?

Deezer is the clearest public source. It reported about 20,000 fully AI-generated tracks per day in April 2025 (about 18% of deliveries) and 30,000+ per day by September 2025 (about 28%). Reporting later cited 50,000/day in November 2025 and around 60,000/day by early 2026, though some of the latter is secondary reporting.

If nobody listens to AI tracks, why should artists care?

Because “listens” can be manufactured. Deezer has said up to ~70% of streams on fully AI-generated tracks may be fraudulent (reported by TechCrunch). In payout systems where revenue is allocated based on stream share, fraudulent volume can redirect money and distort recommendation signals, even if real fans aren’t choosing AI tracks.

Are AI-generated songs taking over streaming?

Deezer data suggests a split: AI can be a large share of submissions while remaining a small share of listening. Music Business Worldwide reported Deezer’s claim that fully AI-generated music accounted for only about 0.5% of all streams. That doesn’t eliminate the risk; it changes it. The issue becomes spam, fraud, and discovery quality more than audience preference.

What is Deezer doing about AI music?

Deezer began labeling AI-generated albums and tracks in June 2025, as reported by TechCrunch. Deezer also says it excludes fully AI-generated music from algorithmic recommendations and editorial playlists, according to its September 2025 newsroom post. Deezer has additionally said it is filtering fraudulent streams out of royalty payments.

Does labeling AI music solve the problem?

Labeling helps with transparency, but it raises hard questions about accuracy and definitions, especially around what counts as “fully AI-generated” versus AI-assisted production. Public reporting does not include detection error rates or appeals processes. Labeling can still be valuable as a trust signal, but it works best alongside fraud prevention and clear standards so legitimate artists aren’t mistakenly penalized.
