TheMurrow

The FTC Just Made Fake Reviews Expensive—So Why Are the Worst Products Still ‘4.6 Stars’ in April 2026? The New Trick Is a Rating That Never Goes Away

The FTC can now seek civil penalties for defined review fraud—but the market’s “endless blue sky” ratings barely moved. The real battleground is what platforms let disappear—and what averages never have to remember.

By TheMurrow Editorial
April 8, 2026

Key Points

  • Track the timeline: the FTC finalized the Consumer Reviews Rule in 2024, making defined review fraud eligible for civil penalties.
  • Understand what’s banned: AI fakes, sentiment-conditioned incentives, undisclosed insider endorsements, fake “independent” sites, and deceptive suppression tactics.
  • Watch the market response: warning letters went out, but sky-high averages persist—because platform design and enforcement lag still shape what you see.

A star-rating economy that still looks unreal

Star ratings still look like a summer sky: endless blue, barely a cloud. Open almost any marketplace, app store, or local listing and you’ll see the same story—4.6, 4.7, 4.8—an economy seemingly built on universal satisfaction.

Yet the Federal Trade Commission spent 2024 and 2025 acting as if that optimism is a liability. On August 14, 2024, the agency announced a final rule designed to make fake reviews and manufactured social proof legally expensive. The regulation—formally the Rule on the Use of Consumer Reviews and Testimonials, 16 C.F.R. Part 465—was published in the Federal Register on August 22, 2024 (89 FR 68077) and took effect on October 21, 2024.

The shift isn’t philosophical; it’s procedural. The FTC has long argued it can police deceptive review practices under the FTC Act. What changes with a rule is penalty power. The Commission explicitly framed the regulation as a way to unlock civil penalties for knowing violations—an enforcement lever with sharper teeth than guidance documents and one-off cases.

The star-rating economy runs on trust. The FTC’s new rule is an admission that trust has become a target.

— TheMurrow Editorial

The rule that turned review fraud into a penalty problem

The FTC’s new Consumer Reviews and Testimonials Rule did not invent the idea that fake reviews are deceptive. The agency has said that for years. The strategic move was codifying specific review-and-testimonial practices into a formal rule so the FTC can seek civil penalties when companies “knew or should have known” they were crossing the line.

The timeline matters because it signals urgency. The final rule was announced August 14, 2024, then formally published August 22, 2024, and became enforceable October 21, 2024. That is a quick march from announcement to effect for a market as sprawling as online reviews.

FTC staff framed the rule as a response to a familiar pattern: consumers increasingly rely on review platforms, while businesses (and third-party vendors) increasingly treat reputation as something to be purchased, engineered, or edited. The agency’s press materials explicitly included AI-generated fakes in the problem set—an acknowledgment that the old image of “review farms” has expanded into something more scalable and less traceable.

Why a rule, not just enforcement?

A rule is a map for courts. It also gives the FTC a cleaner path to civil penalties for violations of defined conduct. The agency’s press release on the final rule emphasized exactly that: deterrence. The message to businesses is less “we can sue” and more “we can fine”—and that difference changes risk calculations in boardrooms and marketing departments.

What readers should take from the timing

If you’re a consumer, the effective date—October 21, 2024—marks a line in the sand: after that, companies can’t plausibly pretend the expectations were unclear. If you run a business, the same date marks a compliance deadline, whether or not you’ve ever thought of your review strategy as “advertising.”

The FTC didn’t discover fake reviews in 2024. It decided to price them.

— TheMurrow Editorial

What the FTC actually banned: a plain-English toolkit

The rule’s strength is specificity. Instead of a general statement that deception is bad, the FTC listed a set of recognizable tactics that turn “reviews” into marketing copy wearing a disguise.

The FTC’s press release and staff Q&A describe several categories of prohibited conduct. Think of them as the modern review-fraud toolkit—common enough that many consumers can spot them, yet persistent enough that regulators felt compelled to write them down.

Fake or false consumer reviews and testimonials (including AI fakes)

The rule targets reviews attributed to non-existent people, people without real experience, or reviews that misrepresent the reviewer’s experience. Notably, the FTC explicitly contemplated AI-generated fakes as part of the issue. The central idea is simple: a consumer review is valuable because it’s a record of lived experience. When that experience is invented—by a person, a bot, or a model—the review becomes an ad.

Buying “sentiment-conditioned” reviews

Incentives aren’t automatically illegal; conditioning them is. The rule prohibits offering compensation or incentives conditioned on the review being positive (or negative). Put plainly: “Leave us a review and we’ll give you 10% off” is a different animal from “Leave us a five-star review and we’ll give you 10% off.”

Insider reviews without the disclosure consumers need

The rule also addresses insider testimonials—officers, managers, and in certain contexts, employees or relatives—when those endorsements are disseminated without proper disclosure. Readers understand why: insiders can have genuine experiences, but they also have built-in incentives that change how consumers interpret praise.

Company-controlled “independent” review sites

The rule bars misrepresenting an entity you control as an independent review site. That matters because independence is the entire point of third-party credibility. A “review site” that’s really a branded marketing page is a different product altogether.

Review suppression and the quiet manipulation of “what you see”

Fake reviews are loud. Review suppression is quieter—and often more effective. A single manufactured five-star rating can be balanced by an authentic one-star complaint. A system that hides or bullies away the negative feedback doesn’t need to manufacture praise; it just needs to curate the record.

The FTC’s rule targets review suppression tactics in two main ways: coercion and misrepresentation. The agency describes conduct such as intimidation, legal threats, or false accusations aimed at removing negative reviews. It also targets the claim that a business shows “all or most” reviews when it has actually suppressed reviews based on rating or negative sentiment.

The deception isn’t only in the removal—it’s in the claim

The core consumer harm is not simply that a review disappeared. The harm is that consumers are led to believe they’re seeing the full picture. A rating average and a handful of glowing testimonials can feel like an objective measure—until you consider what may have been filtered out.

Multiple perspectives: moderation vs. suppression

Platforms and businesses do have legitimate reasons to remove content:

- spam and scams
- profanity or harassment
- irrelevant material
- privacy violations

The FTC’s rule doesn’t require companies to publish abuse. The tension lies in the difference between moderation and sentiment-based suppression. Removing a review because it contains personal information is one thing. Removing it because it’s negative—then implying your reviews are comprehensive—is another.

A five-star average can be honest—or it can be a museum exhibit, curated to keep the messy parts out of view.

— TheMurrow Editorial

Fake social proof: followers, views, and the performance of popularity

Reviews are only one form of reputational currency. The rule also goes after fake indicators of social influence—the bought-and-sold ecosystem of followers, likes, and views that can make a brand look established overnight.

The FTC’s materials describe prohibitions on buying or selling fake social proof in certain commercial contexts. The logic is consistent with the rule’s overall philosophy: if consumers are being nudged by signals of popularity, those signals must reflect reality, not a marketing budget.

Why this matters even if you “ignore influencers”

Plenty of readers roll their eyes at influencer culture and claim they’re immune. Yet platform design makes social proof hard to escape. Star ratings, follower counts, “most popular” badges, and view totals are frictionless shortcuts. They stand in for research, especially on mobile screens, especially when the alternative is reading 200 comments.

A practical lens for consumers

When you see a product with a suspiciously polished presence—high ratings, enthusiastic testimonials, and an aura of inevitable popularity—one question is worth asking: are you looking at demand, or a purchased impression of demand? The FTC’s rule doesn’t eliminate that ambiguity, but it signals that manufacturing it can carry legal consequences.

Penalties: why “up to $53,088 per violation” changes behavior

The most consequential number attached to the rule is not a date; it’s a dollar figure. By late 2025, FTC warning materials repeatedly cited civil penalties “up to $53,088 per violation.” Earlier materials around rollout commonly referenced $51,744, reflecting inflation adjustments that change the ceiling over time.

Either way, the point is scale. “Per violation” is a phrase that can turn a review strategy into a balance-sheet problem.

What “per violation” could mean in practice

The FTC’s Q&A underscores that penalties depend on what a court finds and that the FTC must meet the relevant knowledge standard—often framed as “knew or should have known.” Even with those safeguards, the math can get serious quickly if each act is counted separately:

- each incentivized review conditioned on positivity
- each fake testimonial posted or disseminated
- each instance of suppression coupled with misrepresentation

Courts determine penalties, and the FTC still has to prove its case. The rule doesn’t guarantee astronomical fines. It does, however, make the risk legible enough that companies can no longer treat review fraud as a low-grade, low-consequence tactic.
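To see why “per violation” frames review fraud as a balance-sheet problem, it helps to run the arithmetic. The sketch below is purely illustrative: the $53,088 ceiling is the figure cited in late-2025 FTC materials, but the counts of violations are invented for the example, and a court would determine any actual penalty.

```python
# Hypothetical, illustrative only: rough maximum exposure if a court counted
# each act as a separate violation. Counts below are invented; the ceiling is
# the per-violation figure cited in late-2025 FTC materials.
PENALTY_CEILING = 53_088  # per-violation ceiling, USD

hypothetical_acts = {
    "incentivized positive-only reviews": 40,
    "fake testimonials disseminated": 12,
    "suppression paired with 'all reviews shown' claims": 3,
}

total_acts = sum(hypothetical_acts.values())
max_exposure = total_acts * PENALTY_CEILING
print(f"{total_acts} acts -> up to ${max_exposure:,} in theoretical exposure")
# -> 55 acts -> up to $2,919,840 in theoretical exposure
```

Even a modest campaign of a few dozen conditioned reviews, counted separately, pushes the theoretical ceiling into seven figures—which is precisely the deterrent logic the rule relies on.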

Why the FTC emphasized penalties after AMG Capital

The FTC explicitly framed the rule as part of a stronger deterrent approach, especially after the Supreme Court’s 2021 decision in AMG Capital Management v. FTC curtailed the agency’s ability to obtain monetary relief directly under Section 13(b) of the FTC Act. The rule is, in effect, a bet that predictable penalty exposure will change conduct faster than years of litigation.

Enforcement so far: warning letters and a market that hasn’t blinked

If you expected a dramatic post-rule crackdown—raids on “review farms,” headline-grabbing penalties, a sudden drop in suspiciously high averages—the public record is more restrained. The FTC has signaled enforcement, but not through a barrage of named cases.

In December 2025, FTC staff sent warning letters to 10 companies about potential violations of the Consumer Review Rule. The press materials did not publicly name the recipients. The FTC also emphasized that the letters were based on consumer complaints and company-provided information and were not final determinations of violations.

What warning letters are designed to do

Warning letters are not courtroom drama. They’re leverage. They put companies on notice, establish that the agency is watching, and make future “we didn’t know” defenses harder to sustain. They also reflect resource choices: it’s cheaper to warn ten firms than to litigate against ten firms.

FTC staff messaging around the warning push was pointed: fake reviews and incentives offered only for five-star reviews were still happening, and the agency was positioning itself to enforce through civil penalties.

The editorial tension: tougher rules, familiar star averages

The market reality remains stubborn. Many categories still show a sea of near-perfect ratings. That doesn’t prove the rule failed; enforcement often lags regulation. It does suggest the deeper challenge: a trust system built on easily gamed metrics will remain gameable unless platforms and businesses treat compliance as a design requirement, not just a legal risk.

What compliance looks like: practical takeaways for businesses and consumers

The rule’s real-world effect will be shaped less by FTC press releases than by day-to-day operational decisions: how businesses request reviews, how platforms display them, and how consumers interpret them.

For businesses: audit incentives, insiders, and displays

Companies don’t need a law degree to reduce risk. The compliance posture implied by the rule points to a short checklist:


  • Stop “positive-only” incentives. Any offer conditioned on a favorable rating is a flashing red light.
  • Disclose insider relationships clearly. If officers, managers, employees, or relatives are providing testimonials in covered contexts, ensure disclosures meet the spirit of transparency.
  • Avoid “independent” theater. If your company controls a review site or page, don’t present it as neutral.
  • Document moderation practices. Remove spam and abuse, not negativity—and don’t claim you show “all or most” reviews if sentiment-based filtering is happening.

For consumers: treat reviews as evidence, not verdicts

A practical way to read reviews in the post-rule era is not “trust nothing,” but “trust structure over sparkle.” Look for:

- mixed ratings with specific details
- reviewers describing tradeoffs, not perfection
- patterns across platforms (not just one site)
- language that sounds like a person, not a slogan

None of this guarantees authenticity. It does help you avoid being guided by the easiest signals to counterfeit.
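The “trust structure over sparkle” idea can be made concrete with a toy heuristic. The sketch below is not a fraud detector—authentic products can earn polished ratings—but it shows one structural signal a skeptical reader already uses intuitively: a near-perfect average with almost no mid-range ratings. The thresholds are arbitrary assumptions chosen for illustration.

```python
# Toy heuristic sketch, not a fraud detector: flag a rating distribution that
# looks "too polished" -- a near-perfect average with almost no mid-range
# (2- and 3-star) reviews. Thresholds are arbitrary illustrative assumptions.
def looks_too_polished(star_counts, avg_threshold=4.7, mid_share_threshold=0.05):
    """star_counts: dict mapping star value (1-5) to number of reviews."""
    total = sum(star_counts.values())
    if total == 0:
        return False
    avg = sum(stars * n for stars, n in star_counts.items()) / total
    mid_share = (star_counts.get(2, 0) + star_counts.get(3, 0)) / total
    return avg >= avg_threshold and mid_share < mid_share_threshold

polished = {5: 950, 4: 40, 3: 5, 2: 3, 1: 2}       # near-uniform praise
mixed    = {5: 500, 4: 250, 3: 120, 2: 70, 1: 60}  # tradeoffs visible
print(looks_too_polished(polished), looks_too_polished(mixed))
# -> True False
```

A distribution with visible tradeoffs is not proof of honesty, and a polished one is not proof of fraud—but the polished shape is the cheaper one to counterfeit, which is why it deserves the extra scrutiny.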

A real-world scenario you can recognize (and why the rule targets it)

Consider the familiar checkout card: “Leave us a 5-star review and get a discount on your next purchase.” That is the essence of sentiment-conditioned review buying. The rule targets it because it doesn’t simply encourage feedback; it pays for a particular conclusion. The result is less “consumer reviews” and more “commissioned applause.”

The rule doesn’t ask businesses to fear reviews. It asks them to stop treating praise as a procurement problem.

— TheMurrow Editorial

The harder question: can regulation restore trust in ratings?

The FTC’s approach is rational: define the most corrosive practices, then attach penalties strong enough to deter them. Yet trust is not restored by enforcement alone. It also depends on what platforms choose to measure and display.

Star averages compress nuance into a single number. That number is useful—until it becomes the only thing consumers see. When ratings become a currency, counterfeiting becomes a business. The FTC rule is an attempt to raise the cost of counterfeiting.

Skeptics will argue that fraudsters adapt faster than regulators. Supporters will counter that even partial deterrence matters: if the rule nudges more businesses away from “review engineering,” consumers gain a little more signal in the noise.

What’s hard to deny is the rule’s implicit diagnosis. Reviews, testimonials, follower counts—these are no longer casual opinions floating around the internet. They are market infrastructure. The FTC is treating them that way, and it’s telling businesses to do the same.

Key Insight

The FTC rule raises the cost of review counterfeiting, but platform design still decides what counts, what disappears, and what an average can hide.
About the Author
TheMurrow Editorial is a writer for TheMurrow covering reviews.

Frequently Asked Questions

What is the FTC’s “fake reviews” rule called?

The rule is formally titled the Rule on the Use of Consumer Reviews and Testimonials and is codified at 16 C.F.R. Part 465. It sets out specific prohibited practices involving reviews, testimonials, and certain forms of social proof, with the goal of deterring deceptive conduct through clearer enforcement and potential civil penalties.

When did the FTC’s consumer reviews rule take effect?

The FTC announced the final rule on August 14, 2024, and it was published in the Federal Register on August 22, 2024 (89 FR 68077). The rule’s effective date was October 21, 2024, meaning covered conduct after that date is subject to the rule’s requirements and enforcement framework.

Does the rule cover AI-generated fake reviews?

Yes. The FTC explicitly contemplated AI-generated fake reviews as part of the problem the rule addresses. The key issue is whether a review is fake or false—for example, attributed to a person who doesn’t exist or who didn’t actually have the experience described—regardless of whether a human or an AI system wrote it.

Are businesses allowed to offer incentives for reviews?

The rule specifically prohibits offering compensation or incentives conditioned on a review being positive (or negative). A general request for reviews may be treated differently than a “five-star only” offer. The compliance risk rises sharply when a business pays for a predetermined sentiment rather than genuine feedback.

What are the penalties for violating the rule?

FTC materials in late 2025 cited civil penalties of up to $53,088 per violation. Earlier references around rollout cited $51,744, reflecting inflation adjustments. Penalties are not automatic; courts determine them, and the FTC must prove the relevant elements, including knowledge standards such as “knew or should have known” for many provisions.

What has the FTC done to enforce the rule so far?

In December 2025, FTC staff sent warning letters to 10 companies about potential violations. The FTC did not publicly name the recipients in its press release materials, and it emphasized the letters were not final determinations. The enforcement signal focused on persistent problems like fake reviews and incentives offered only for five-star ratings.
