TheMurrow

The Privacy Reset

Third‑party cookies faded—but tracking didn’t. Here’s what changed, what didn’t, and how to cut exposure without turning privacy into a second job.

By TheMurrow Editorial
February 25, 2026

Key Points

  • Recognize the post-cookie reality: tracking shifted to first-party data, SDKs, partnerships, and fingerprinting that’s harder to see or stop.
  • Define your threat model: advertisers, criminals, platforms, and governments collect or exploit data differently—prioritize defenses that match your real risks.
  • Harden identity first: secure email, reduce phone-number reliance, enable passkeys/MFA, and lock recovery settings to prevent privacy loss via account takeover.

The great “privacy reset” was supposed to be simple: kill third‑party cookies, and the web would stop trailing you like a shadow. By 2026, that promise looks quaint. Third‑party cookies matter less than they used to—but tracking didn’t vanish. It changed shape.

Most readers already sense the shift. You clear cookies and still see eerily relevant ads. You switch browsers and the “new” web seems to recognize you anyway. The explanation isn’t magic, and it isn’t paranoia. It’s an industry doing what it has always done—extract data—using sturdier tools than the ones privacy headlines taught you to fear.

What follows isn’t a manifesto, and it isn’t a shopping list of “privacy apps.” It’s a practical map of what actually changed, what didn’t, and how to reduce your default exposure without turning your digital life into a full‑time job.

“The privacy reset didn’t end tracking. It redistributed it—away from cookies and toward identifiers you rarely see.”

— TheMurrow Editorial

The 2026 “privacy reset”: cookies faded, surveillance adapted

A useful starting point is the uncomfortable truth: the underlying business model remains data extraction, even as the mechanics evolve. You can see the shift in how browsers and ad tech discuss tracking. Third‑party cookies were once the poster child—easy to understand, easy to demonize, and increasingly easy to block. But ad targeting doesn’t require third‑party cookies. It requires identifiers and leverage.

MDN’s explainer on the end of third‑party cookies notes a key historical detail: Chrome long blocked third‑party cookies by default only in Incognito, while Firefox and Safari moved earlier on cross‑site tracking protections. That staggered timeline matters because it gave the tracking industry years to diversify. Cookies became one tool among many, not the tool.

What replaced them is less visible and often harder to opt out of:

What replaced third‑party cookies (and why it’s harder to see)

  • First‑party tracking: when the site you’re visiting collects data directly and uses it for targeting or measurement.
  • Data partnerships: when multiple companies link data behind the scenes.
  • SDK-based app tracking: when apps embed third‑party software development kits that siphon behavioral data.
  • Fingerprinting: when your device and browser traits are assembled into a probabilistic identity.

Fingerprinting surged because it survives “cookie hygiene”

WIRED’s reporting on fingerprinting captures why this method has surged: fingerprinting can persist even if you clear cookies, use a VPN, or change browsers, because the “identifier” is assembled from many small signals—fonts, screen size, hardware hints, and other attributes.

This is why the lived experience of the web doesn’t match the headline story. If the identifier isn’t stored as a cookie, “clear cookies” becomes a partial fix at best—and sometimes a psychological one. The tracking logic moves from obvious storage to probabilistic recognition, and the burden shifts to the user to understand mechanisms that are intentionally not user-facing.
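To make the mechanism concrete, here is a minimal sketch (with hypothetical attribute names and values, not any real tracker’s code) of how probabilistic recognition can work: hash a bundle of individually harmless signals into one stable identifier that never touches cookie storage.

```python
import hashlib

def fingerprint(signals: dict) -> str:
    """Combine many small device/browser traits into one stable identifier.

    Each signal is harmless alone; hashed together they form a durable ID
    that survives cookie clearing, because none of it lives in storage.
    """
    canonical = "|".join(f"{k}={signals[k]}" for k in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical signals a tracker might observe (illustrative values only).
device = {
    "screen": "2560x1440",
    "timezone": "America/New_York",
    "fonts": "Arial,Calibri,Helvetica",
    "gpu_hint": "ANGLE (Apple M3)",
    "language": "en-US",
}

print(fingerprint(device))           # same inputs -> same ID, every visit
device["timezone"] = "Europe/Paris"  # change one trait...
print(fingerprint(device))           # ...and the ID changes
```

The uncomfortable property is the first print line: as long as the signals stay stable, the identifier regenerates on every visit, with nothing for you to clear.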

Regulation is real—but uneven

Regulators have noticed, but unevenly. The EU’s Digital Services Act (DSA) pushes ad transparency and restricts certain targeting practices, especially involving sensitive data and children. In the United States, privacy protections remain fragmented across states—no comprehensive federal privacy law currently governs the whole terrain.

The practical implication for readers is modest but important: “privacy” in 2026 is less about becoming invisible and more about changing your default exposure—reducing passive collection, minimizing persistent identifiers, and hardening accounts so personal data doesn’t spill through the easiest breach of all: account takeover.

Key takeaway

In 2026, the privacy question isn’t “How do I disappear?” It’s “How do I reduce default exposure, weaken identifiers, and prevent account takeover?”

A threat model you can actually use: who are you defending against?

Privacy advice fails when it assumes every reader has the same adversary. A workable “privacy stack” begins with a clear threat model: who wants your data, and what can they do with it? Four groups matter for most professionals.

The point isn’t to become a security researcher. It’s to stop taking random tips and start making coherent tradeoffs. If your biggest risk is data brokerage, you’ll prioritize identifier reduction and location controls. If your biggest risk is account takeover, you’ll prioritize authentication and recovery. If your biggest risk is platform defaults, you’ll audit permissions and telemetry. If your biggest risk is government access, you’ll think about jurisdiction and where your data lives.
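The risk-to-defense mapping above can be written down literally. This is a sketch of the idea, not a prescription; the category names and priority lists are the article’s, arranged into a lookup table.

```python
# One-line threat model: name your dominant risk, get a priority list.
DEFENSES = {
    "data brokerage":    ["identifier reduction", "location controls"],
    "account takeover":  ["phishing-resistant authentication", "recovery hygiene"],
    "platform defaults": ["permission audits", "telemetry opt-outs"],
    "government access": ["jurisdiction choice", "where your data lives"],
}

def priorities(risk: str) -> list[str]:
    """Return the defense priorities matching a named dominant risk."""
    # Sensible fallback when the risk isn't one of the four named actors.
    return DEFENSES.get(risk, ["harden identity first"])

print(priorities("account takeover"))
```

The value of writing it down is the forcing function: you pick one dominant risk and accept that the other rows get less of your attention.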

Advertisers and data brokers: profiling, inference, resale

Advertisers want predictability—what you’ll buy, read, fear, or vote for. Data brokers want liquidity—the ability to package and resell that predictability. The risk is not only that companies know what you do, but that they can infer sensitive facts you never explicitly shared.

A concrete example: location data. The Verge reported on Federal Trade Commission action against location‑data brokers, including bans on certain companies from selling or using “sensitive” location data. That’s not a niche issue. Location is among the most revealing signals available: patterns can suggest your workplace, your home, your religious practice, your medical visits, or your relationships.

“A single data point is trivia. A week of location traces becomes a biography.”

— TheMurrow Editorial
1 week of location traces can be enough to infer home, work, routines, relationships, and sensitive visits—turning “data” into a practical biography.
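A toy illustration of why a week of traces is a biography (synthetic data, not any broker’s actual method): given timestamped coordinates, the most frequent nighttime location is a strong guess at “home,” and the most frequent workday-daytime location at “work.”

```python
from collections import Counter

def infer_place(traces, night=True):
    """Guess a significant place from (hour, lat, lon) samples: the most
    frequent location during night hours (home) or daytime hours (work)."""
    window = range(0, 6) if night else range(9, 17)
    counts = Counter((lat, lon) for hour, lat, lon in traces if hour in window)
    place, _ = counts.most_common(1)[0]
    return place

# One synthetic week: nights spent at one point, workdays at another.
week = (
    [(h, 40.71, -74.00) for _ in range(7) for h in (0, 1, 2, 3)]
    + [(h, 40.75, -73.99) for _ in range(5) for h in (10, 14, 16)]
)

print(infer_place(week, night=True))   # the "home" coordinates
print(infer_place(week, night=False))  # the "work" coordinates
```

Ten lines of frequency counting recover the two most sensitive facts in the dataset, which is why regulators treat location as a category apart.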

Criminals: phishing, account takeover, SIM swapping

For many readers, the most immediate privacy risk is not ad targeting. It’s losing control of an account that contains your entire digital life. Stolen passwords, weak authentication, and sloppy recovery settings turn privacy into a downstream casualty: once email is compromised, attackers can reset passwords for everything else.

SIM swapping deserves special attention because it exploits the fragile link between identity and phone numbers. If a phone number is treated as proof of identity—and it often is—then an attacker who hijacks it gains access to reset codes, calls, and sometimes the keys to your financial accounts.

Key Insight

Most “privacy” failures begin as account failures: compromise email or phone-based recovery, and the attacker can unlock the rest.

Platform ecosystems: defaults decide what’s collected

Operating systems, app stores, and cloud providers shape privacy before you opt into anything. Permissions, ad IDs, telemetry settings, and default backup behavior determine what leaves your device, and how easily it can be associated with you.

Even strong personal habits struggle against permissive defaults. The smartest privacy “tool” is often a setting you never touched.

Governments: lawful access and cross‑border data pressure

Government access is not a monolith. Some requests are lawful, specific, and constrained. Others are broad, secretive, or politically motivated. Either way, the modern privacy question is inseparable from jurisdiction and data flows.

The Associated Press reported on a U.S. executive order framing privacy as national security, aiming to restrict sensitive data flows to “countries of concern.” Readers don’t need to become policy experts to see the direction of travel: governments increasingly treat data as a strategic asset, and private companies sit in the middle.
Most workable threat models for professionals boil down to 4 actors: advertisers/brokers, criminals, platform ecosystems, and governments.

The new tracking toolbox: first‑party data, partnerships, SDKs, fingerprinting

Third‑party cookies were easy to spot because the browser could label them “third‑party.” The newer toolbox hides in plain sight.

This matters because user-facing controls often still speak the language of the old era: cookie banners, “reject third parties,” and simple storage-clearing. Meanwhile, the real action is increasingly about first-party collection, relationship graphs between companies, and device-level signals that never present themselves as a single “tracker” you can toggle off.

First‑party tracking: the web’s cleanest loophole

When a site collects data directly, it can argue the relationship is “first‑party” and therefore legitimate. For readers, the distinction can feel academic—your behavior is still being logged and used to shape what you see. But from a technical and regulatory standpoint, first‑party collection is harder to restrict without breaking site functionality.

The practical lesson: cookie banners and “reject third parties” toggles may reduce some cross‑site leakage, but they don’t end profiling on the sites you use daily.

SDK-based app tracking: the silent middlemen

On mobile, tracking often arrives through embedded SDKs—analytics, ad measurement, crash reporting, attribution. These tools can be helpful for developers. They can also create an invisible pipeline from your phone to multiple companies you’ve never heard of.

Because SDK behavior depends on the app’s configuration and the platform’s permission system, user control tends to be blunt: deny permissions, delete the app, or accept the bargain.

Fingerprinting: identification without your consent

WIRED explains why fingerprinting is so durable: the “fingerprint” is assembled from many small characteristics that look harmless alone but become identifying in combination. Clearing cookies doesn’t erase your screen size. A VPN doesn’t change your installed fonts. Switching browsers may not be enough if other signals remain consistent.

Here’s the honest takeaway: you cannot “opt out” of fingerprinting with a single checkbox. You can, however, reduce how stable your fingerprint is by making your environment less distinctive and by choosing tools that limit passive exposure.

“Fingerprinting works because modern devices leak identity through hundreds of tiny tells.”

— TheMurrow Editorial
Hundreds of small device and browser attributes can feed a fingerprint, and many remain stable even when you clear cookies or use a VPN.
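Why do small signals add up so fast? Information adds in bits: a signal that splits users into roughly N equally likely groups contributes about log2(N) bits, and around 33 bits suffice to single out one person among 8 billion. A back-of-envelope sketch, using illustrative cardinality guesses rather than measured distributions:

```python
import math

# Rough, illustrative guesses at how many values each signal can take.
# Real-world distributions are skewed, so treat these as intuition only.
signal_cardinality = {
    "screen resolution":   40,
    "timezone":            38,
    "installed fonts set": 10_000,
    "user-agent string":   2_000,
    "language settings":   30,
    "gpu renderer hint":   500,
}

bits = {name: math.log2(n) for name, n in signal_cardinality.items()}
total = sum(bits.values())

for name, b in bits.items():
    print(f"{name:20s} ~{b:5.1f} bits")
print(f"{'total':20s} ~{total:5.1f} bits (about 33 needed for 1 in 8 billion)")
```

Even with these modest guesses, six signals clear the ~33-bit bar comfortably, which is the whole trick: no single attribute identifies you, but the sum does.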

Regulators versus reality: the DSA, the FTC, and America’s patchwork

Regulation is reshaping incentives, but not evenly—and not always in ways users can feel day to day.

There’s a difference between rules that change platform reporting and rules that change the underlying collection. Some laws increase transparency, forcing platforms to explain why an ad was shown or which categories were used. Others rely on enforcement after especially dangerous practices come to light. Meanwhile, the technical ecosystem evolves quickly, often routing around whatever becomes easiest to regulate.

Europe’s DSA: transparency and sensitive targeting limits

The EU’s Digital Services Act emphasizes ad transparency and restrictions on targeting involving sensitive data, including targeted ads to children. That pressure changes how platforms document ads and how they justify targeting categories. It also forces a degree of sunlight onto systems that historically thrived on obscurity.

Still, transparency isn’t the same as privacy. Disclosing that you were targeted because you “resembled” an audience doesn’t stop the underlying profiling. It may, however, create friction for the most egregious practices.

The U.S.: enforcement actions amid fragmented laws

The U.S. remains a patchwork of state laws and sector-specific rules, with no comprehensive federal privacy statute. That gap shifts the burden toward enforcement actions and case‑by‑case crackdowns.

The FTC’s actions against location‑data brokers are a sign that regulators will intervene when data flows become too obviously dangerous. But enforcement after harm occurs is not the same as a baseline rule that prevents collection in the first place.

What this means for readers

Two perspectives coexist, and readers should hold both:

- Optimistic view: regulatory pressure raises compliance costs and limits the most blatant targeting, especially around sensitive data and children.
- Skeptical view: ad tech adapts faster than rulemaking, moving from obvious identifiers (cookies) to murkier ones (partnership graphs, fingerprinting).

Your best defense is not waiting for perfect policy. It’s reducing exposure where you have leverage—starting with identity and account security.

How to think about privacy regulation in practice

  • Optimistic view: higher compliance costs, limits on sensitive targeting, more transparency
  • Skeptical view: ad tech adapts faster, shifting toward partnerships and fingerprinting

The privacy stack that matters: identity, authentication, recovery

Privacy conversations often skip to browsers, VPNs, and ad blockers. Those can help. But the highest‑leverage layer is the one most people neglect: identity and account security.

The reason is simple: if someone can take over your account, they can access the very data you’re trying to protect—messages, photos, financial history, location trails, and documents. Even if no one “steals” your data, weak identity practices make you easier to correlate across services. In 2026, the most meaningful privacy upgrades are often boring: how you sign in, how you recover accounts, and how you segment identifiers.

Email is the master key

Email powers password resets, device logins, SaaS access, banking alerts, and “prove it’s you” flows. If a bad actor controls your email, privacy becomes theoretical.

A clean strategy is separation by function, not by obsession. Consider distinct addresses for:

- Finance and critical accounts
- Core identity (government services, healthcare portals, primary device accounts)
- Shopping and newsletters
- Throwaway sign-ups (one-off downloads, trials)

Aliases can help where supported, but the core principle is simple: don’t let every service share the same identifier that can be correlated, breached, and resold.
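A minimal sketch of the separation idea (all addresses and bucket names are hypothetical): classify each service into one of the four buckets above and hand out the matching address, so a single breach correlates only one slice of your life.

```python
# Hypothetical addresses; the point is the mapping, not the provider.
BUCKETS = {
    "finance":   "ledger@example.com",  # banks, brokers, tax
    "identity":  "core@example.com",    # government, healthcare, device accounts
    "shopping":  "cart@example.com",    # retail, newsletters
    "throwaway": "burner@example.com",  # one-off trials and downloads
}

def address_for(service: str, category: str) -> str:
    """Return the email identity for a service, defaulting to the
    throwaway bucket when the category is unknown or untrusted."""
    return BUCKETS.get(category, BUCKETS["throwaway"])

print(address_for("MegaBank", "finance"))       # the finance identity
print(address_for("RandomPDFTool", "mystery"))  # falls back to throwaway
```

The default matters as much as the mapping: anything you can’t confidently categorize gets the burner address, not the one tied to your bank.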

Reduce phone-number dependence

Phone numbers are convenient—and increasingly dangerous as identity tokens. SIM swapping risk is the obvious threat, but the quieter issue is correlation: phone numbers link datasets across apps, brokers, and platforms.

Where possible, avoid using a phone number as a login identifier or recovery method. Use it when you must, not by default.

Authentication: where privacy meets security

NIST’s digital identity guidelines, including the SP 800‑63‑4 revision developed through public drafts and comment periods, reflect a broad consensus: stronger authentication reduces real-world harm. Readers don’t need to memorize standards to apply the direction of travel.

Prioritize phishing-resistant sign-in where available:

- Passkeys (device‑bound, designed to resist phishing)
- Hardware security keys for high-value accounts
- Strong multi-factor authentication on anything tied to money, identity, or cloud backups

Equally important: lock down recovery. Attackers often don’t “hack” your password—they hijack your recovery channel.

Editor’s Note

Recovery settings are part of your security perimeter. If your recovery channel is weak, your password strength doesn’t matter.

Practical moves for 2026: reduce passive collection without breaking your life

A privacy plan that collapses under its own weight is worse than useless. The goal is not purity. The goal is asymmetric wins—changes that meaningfully reduce risk with minimal ongoing effort.

Think in terms of defaults and leverage. Where do you have a setting that changes behavior permanently? Where does a one-time cleanup prevent years of downstream exposure? The right approach isn’t to install ten tools and hope. It’s to secure identity, reduce sensitive data flow (especially location), and accept that some tracking persists—then focus on making that tracking less stable and less linkable.

Start with “account fortress” steps (high impact, low drama)

1. Turn on passkeys where offered, especially for email and financial services.
2. Enable strong MFA on remaining critical accounts.
3. Audit recovery options: remove old phone numbers, add secure recovery methods, ensure backup codes are stored safely.
4. Separate your email identities so a single breach doesn’t correlate your entire life.

These moves don’t just protect security; they protect privacy. An attacker with your email can extract years of personal history in minutes.

Account fortress checklist

  1. Turn on passkeys where offered (especially email and finance)
  2. Enable strong MFA for other critical accounts
  3. Audit recovery options and store backup codes safely
  4. Separate email identities to limit correlation and blast radius

Treat location as sensitive by default

Given FTC attention to “sensitive” location data and the known broker ecosystem, location deserves special skepticism. If an app doesn’t need location to function, deny it. If it needs location only while in use, choose that option. Persistent location access should be rare.

Accept that some tracking will persist—and focus on stability

Browser-level protections vary. MDN’s historical note about Chrome versus Firefox and Safari underscores the point: browsers differ in how aggressively they block cross-site tracking, and those defaults shape your exposure.

Fingerprinting complicates things. You won’t eliminate it completely, but you can avoid making your setup unusually unique. Extreme customization can backfire by creating a rarer fingerprint. The goal is to look ordinary while limiting what’s shared.

The hard truth: privacy is a posture, not a product

A decade ago, privacy advice was often framed as a gear checklist. Install X, block Y, route traffic through Z. That approach still sells subscriptions, but it misses the structural shift of 2026: tracking has moved toward relationships and identifiers that don’t live in one place.

The more durable framing is posture:

- How many identifiers can be linked back to you?
- How easily can someone take over your core accounts?
- How much sensitive data—especially location—leaves your device by default?
- How dependent are you on a phone number as identity?

Regulation helps, but it moves at the speed of politics. Corporate promises help, but they move at the speed of incentives. Your personal posture moves at the speed of a Saturday afternoon and a willingness to change a few defaults.

The privacy reset didn’t give you invisibility. It gave you a clearer choice: remain passively legible to systems designed to profile you, or become deliberately harder to summarize.

About the Author
TheMurrow Editorial is a writer for TheMurrow covering technology.

Frequently Asked Questions

Are third-party cookies “gone,” and does that mean I’m not tracked anymore?

Third‑party cookies matter less than they used to, but tracking continues through other methods. First‑party collection, data partnerships, SDK-based app tracking, and browser fingerprinting can all identify or profile you without relying on traditional third‑party cookies. The shift is from one obvious mechanism to several quieter ones.

What is browser fingerprinting, and why doesn’t clearing cookies stop it?

Fingerprinting identifies you using attributes of your device and browser—such as screen size, fonts, and hardware hints. WIRED notes fingerprinting can persist even if you clear cookies, use a VPN, or change browsers, because the identifier is assembled from many signals that remain stable. Reducing fingerprinting is about limiting exposure and avoiding highly distinctive setups.

What’s the biggest “privacy win” most people can get quickly?

Account hardening. Email is the master key for password resets and access to other services, so securing it has outsized benefits. Use phishing‑resistant sign-in methods like passkeys where possible, enable strong MFA, and clean up recovery methods. Privacy and security overlap heavily at this layer.

Why are regulators focused on location data?

Location data is uniquely revealing: it can expose home/work patterns and visits to sensitive places. The Verge reported FTC action banning certain location‑data brokers from selling or using “sensitive” location data, reflecting how risky these datasets are when bought, sold, or leaked. Treat location permissions as sensitive by default.

How does the EU’s Digital Services Act affect my online privacy?

The DSA emphasizes ad transparency and limits certain targeting practices, particularly around sensitive data and children. That can reduce the most egregious targeting and force platforms to explain ad delivery more clearly. Transparency, however, doesn’t automatically stop profiling; it mainly changes what platforms must disclose and justify.

Should I stop using my phone number for logins?

Where you have a choice, yes—reduce phone-number dependence. Phone numbers can be used for correlation across datasets and can be vulnerable to SIM swapping, which can compromise SMS-based account recovery. Use phone numbers when required, but prioritize stronger authentication (passkeys, security keys, robust MFA) and safer recovery channels for high-value accounts.
