The FTC Just Made Fake Reviews Expensive—So Why Are the Worst Products Still ‘4.6 Stars’ in April 2026? The New Trick Is a Rating That Never Goes Away
The FTC can now seek civil penalties for defined review fraud—but the market’s “endless blue sky” ratings barely moved. The real battleground is what platforms let disappear—and what averages never have to remember.

Key Points
1. Track the timeline: the FTC finalized the Consumer Reviews Rule in 2024, making defined review fraud eligible for civil penalties.
2. Understand what’s banned: AI fakes, sentiment-conditioned incentives, undisclosed insider endorsements, fake “independent” sites, and deceptive suppression tactics.
3. Watch the market response: warning letters went out, but sky-high averages persist, because platform design and enforcement lag still shape what you see.
A star-rating economy that still looks unreal
Online star averages cluster in “endless blue sky” territory, with even weak products floating above 4.5 stars. Yet the Federal Trade Commission spent 2024 and 2025 acting as if that optimism is a liability. On August 14, 2024, the agency announced a final rule designed to make fake reviews and manufactured social proof legally expensive. The regulation—formally the Rule on the Use of Consumer Reviews and Testimonials, 16 C.F.R. Part 465—was published in the Federal Register on August 22, 2024 (89 FR 68077) and took effect on October 21, 2024.
The shift isn’t philosophical; it’s procedural. The FTC has long argued it can police deceptive review practices under the FTC Act. What changes with a rule is penalty power. The Commission explicitly framed the regulation as a way to unlock civil penalties for knowing violations—an enforcement lever with sharper teeth than guidance documents and one-off cases.
The star-rating economy runs on trust. The FTC’s new rule is an admission that trust has become a target.
— TheMurrow Editorial
The rule that turned review fraud into a penalty problem
The timeline matters because it signals urgency. The final rule was announced August 14, 2024, then formally published August 22, 2024, and became enforceable October 21, 2024. That is a quick march from announcement to effect for a market as sprawling as online reviews.
FTC staff framed the rule as a response to a familiar pattern: consumers increasingly rely on review platforms, while businesses (and third-party vendors) increasingly treat reputation as something to be purchased, engineered, or edited. The agency’s press materials explicitly included AI-generated fakes in the problem set—an acknowledgment that the old image of “review farms” has expanded into something more scalable and less traceable.
Why a rule, not just enforcement?
What readers should take from the timing
The FTC didn’t discover fake reviews in 2024. It decided to price them.
— TheMurrow Editorial
What the FTC actually banned: a plain-English toolkit
The FTC’s press release and staff Q&A describe several categories of prohibited conduct. Think of them as the modern review-fraud toolkit—common enough that many consumers can spot them, yet persistent enough that regulators felt compelled to write them down.
Fake or false consumer reviews and testimonials (including AI fakes)
Buying “sentiment-conditioned” reviews
Insider reviews without the disclosure consumers need
Company-controlled “independent” review sites
Review suppression and the quiet manipulation of “what you see”
The FTC’s rule targets review suppression tactics in two main ways: coercion and misrepresentation. The agency describes conduct such as intimidation, legal threats, or false accusations aimed at removing negative reviews. It also targets the claim that a business shows “all or most” reviews when it has actually suppressed reviews based on rating or negative sentiment.
The deception isn’t only in the removal—it’s in the claim
Multiple perspectives: moderation vs. suppression
Platforms and businesses have legitimate reasons to remove reviews, such as:
- spam and scams
- profanity or harassment
- irrelevant material
- privacy violations
The FTC’s rule doesn’t require companies to publish abuse. The tension lies in the difference between moderation and sentiment-based suppression. Removing a review because it contains personal information is one thing. Removing it because it’s negative—then implying your reviews are comprehensive—is another.
A five-star average can be honest—or it can be a museum exhibit, curated to keep the messy parts out of view.
— TheMurrow Editorial
Fake social proof: followers, views, and the performance of popularity
The FTC’s materials describe prohibitions on buying or selling fake social proof in certain commercial contexts. The logic is consistent with the rule’s overall philosophy: if consumers are being nudged by signals of popularity, those signals must reflect reality, not a marketing budget.
Why this matters even if you “ignore influencers”
A practical lens for consumers
Penalties: why “up to $53,088 per violation” changes behavior
Whether the cap sits at the rollout-era $51,744 or the inflation-adjusted $53,088, the point is scale. “Per violation” is a phrase that can turn a review strategy into a balance-sheet problem.
What “per violation” could mean in practice
- each incentivized review conditioned on positivity
- each fake testimonial posted or disseminated
- each instance of suppression coupled with misrepresentation
Courts determine penalties, and the FTC still has to prove its case. The rule doesn’t guarantee astronomical fines. It does, however, make the risk legible enough that companies can no longer treat review fraud as a low-grade, low-consequence tactic.
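To make the “per violation” scale concrete, here is a back-of-the-envelope sketch. The $53,088 cap is the figure cited in the FTC’s late-2025 materials; the violation counts are entirely hypothetical, and a court would decide how conduct is actually counted and what penalty, if any, applies.

```python
# Illustrative arithmetic only. The per-violation cap is the late-2025
# inflation-adjusted maximum cited by the FTC; the counts below are
# hypothetical, not drawn from any real case.
PENALTY_CAP = 53_088  # USD, maximum per violation

hypothetical_counts = {
    "incentivized positive-only reviews": 250,
    "fake testimonials disseminated": 40,
    "suppression paired with 'all reviews shown' claims": 10,
}

total_violations = sum(hypothetical_counts.values())
max_exposure = total_violations * PENALTY_CAP

print(f"{total_violations} violations x ${PENALTY_CAP:,} = ${max_exposure:,} maximum exposure")
```

Even modest counts multiply quickly: 300 hypothetical violations put the theoretical ceiling near $16 million, which is the balance-sheet logic the rule is built on.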
Why the FTC emphasized penalties after *AMG Capital*
Enforcement so far: warning letters and a market that hasn’t blinked
In December 2025, FTC staff sent warning letters to 10 companies about potential violations of the Consumer Review Rule. The press materials did not publicly name the recipients. The FTC also emphasized that the letters were based on consumer complaints and company-provided information and were not final determinations of violations.
What warning letters are designed to do
FTC staff messaging around the warning push was pointed: fake reviews and incentives offered only for 5-star reviews were still happening, and the agency was positioning itself to enforce through civil penalties.
The editorial tension: tougher rules, familiar star averages
What compliance looks like: practical takeaways for businesses and consumers
For businesses: audit incentives, insiders, and displays
Business compliance checklist implied by the rule
- ✓ Stop “positive-only” incentives. Any offer conditioned on a favorable rating is a flashing red light.
- ✓ Disclose insider relationships clearly. If officers, managers, employees, or relatives are providing testimonials in covered contexts, ensure disclosures meet the spirit of transparency.
- ✓ Avoid “independent” theater. If your company controls a review site or page, don’t present it as neutral.
- ✓ Document moderation practices. Remove spam and abuse, not negativity, and don’t claim you show “all or most” reviews if sentiment-based filtering is happening.
For consumers: treat reviews as evidence, not verdicts
Weight the signals that are hardest to fake:
- mixed ratings with specific details
- reviewers describing tradeoffs, not perfection
- patterns across platforms (not just one site)
- language that sounds like a person, not a slogan
None of this guarantees authenticity. It does help you avoid being guided by the easiest signals to counterfeit.
A real-world scenario you can recognize (and why the rule targets it)
The rule doesn’t ask businesses to fear reviews. It asks them to stop treating praise as a procurement problem.
— TheMurrow Editorial
The harder question: can regulation restore trust in ratings?
Star averages compress nuance into a single number. That number is useful—until it becomes the only thing consumers see. When ratings become a currency, counterfeiting becomes a business. The FTC rule is an attempt to raise the cost of counterfeiting.
Skeptics will argue that fraudsters adapt faster than regulators. Supporters will counter that even partial deterrence matters: if the rule nudges more businesses away from “review engineering,” consumers gain a little more signal in the noise.
What’s hard to deny is the rule’s implicit diagnosis. Reviews, testimonials, follower counts—these are no longer casual opinions floating around the internet. They are market infrastructure. The FTC is treating them that way, and it’s telling businesses to do the same.
Frequently Asked Questions
What is the FTC’s “fake reviews” rule called?
The rule is formally titled the Rule on the Use of Consumer Reviews and Testimonials and is codified at 16 C.F.R. Part 465. It sets out specific prohibited practices involving reviews, testimonials, and certain forms of social proof, with the goal of deterring deceptive conduct through clearer enforcement and potential civil penalties.
When did the FTC’s consumer reviews rule take effect?
The FTC announced the final rule on August 14, 2024, and it was published in the Federal Register on August 22, 2024 (89 FR 68077). The rule’s effective date was October 21, 2024, meaning covered conduct after that date is subject to the rule’s requirements and enforcement framework.
Does the rule cover AI-generated fake reviews?
Yes. The FTC explicitly contemplated AI-generated fake reviews as part of the problem the rule addresses. The key issue is whether a review is fake or false—for example, attributed to a person who doesn’t exist or who didn’t actually have the experience described—regardless of whether a human or an AI system wrote it.
Are businesses allowed to offer incentives for reviews?
The rule specifically prohibits offering compensation or incentives conditioned on a review being positive (or negative). A general request for reviews may be treated differently than a “five-star only” offer. The compliance risk rises sharply when a business pays for a predetermined sentiment rather than genuine feedback.
What are the penalties for violating the rule?
FTC materials in late 2025 cited civil penalties of up to $53,088 per violation. Earlier references around rollout cited $51,744, reflecting inflation adjustments. Penalties are not automatic; courts determine them, and the FTC must prove the relevant elements, including knowledge standards such as “knew or should have known” for many provisions.
What has the FTC done to enforce the rule so far?
In December 2025, FTC staff sent warning letters to 10 companies about potential violations. The FTC did not publicly name the recipients in its press release materials, and it emphasized the letters were not final determinations. The enforcement signal focused on persistent problems like fake reviews and incentives for only 5-star ratings.