Your Data, Your Rules

Privacy in the AI era isn’t just about what platforms store—it’s about what systems learn. Here’s how to reduce exposure and assert real leverage.

By TheMurrow Editorial
January 18, 2026

Key Points

  • Reframe privacy as leverage: reduce collection, expect copying, and use access, deletion, objection, and portability rights to force compliance.
  • Separate account data from model influence: deleting a post may not undo training, embeddings, or downstream artifacts shaped by your information.
  • Track jurisdiction timelines: GDPR plus the EU AI Act, shifting UK guidance, and U.S. state-by-state rules determine what leverage you actually have.

Your data is no longer just “data”

Privacy used to mean a simple bargain: you gave a company data, and in exchange you got convenience. The worst-case scenario was predictable—an ad followed you around, or a breach exposed a database.

AI changed the stakes. Your data is no longer valuable only because it describes you; it is valuable because it can help build systems that predict you, imitate you, and sort you. A photo is not just a photo. A message is not just a message. Both can become training material, evaluation data, or a signal in a monitoring pipeline—sometimes long after you’ve forgotten you shared them.

The uncomfortable truth is that “owning” your privacy doesn’t look like locking something in a safe. It looks like managing a messy, cross-border supply chain of data—accounts, backups, third parties, and, increasingly, models. The most practical version of privacy ownership is less about total control (rarely realistic) and more about shaping data flows, reducing exposure, and enforcing rights where the law gives you leverage.

“In the AI era, privacy isn’t just about what platforms store—it’s about what systems learn.”

— TheMurrow Editorial

Privacy ownership: control is the wrong goal, leverage is the right one

People say they want to “own” their privacy the way they own a house or a book. That metaphor breaks fast. Digital life is built on copying—syncing, caching, backing up, sharing with service providers, and, often, repurposing data for new uses.

A more accurate definition treats privacy as risk management with teeth: you try to limit collection, understand reuse, and—when needed—force compliance through rights and remedies. The key tools are mundane but effective: restricting what you share, choosing services with stronger defaults, and asserting legal rights such as access, deletion, objection, and portability across jurisdictions.

Two privacy problems people conflate: accounts vs. models

A major source of confusion is that there are now two different targets:

1) Data in accounts: what platforms store about you—posts, photos, purchase history, messages, location logs.

2) Data in models: whether your data was used to train, fine-tune, evaluate, or monitor an AI system—and whether it can be inferred, extracted, or reproduced later.

Deleting a post might remove it from your profile. It does not automatically answer the harder question: did your post shape a system that has already absorbed its patterns?

Accounts vs. Models: what you’re really trying to control

Before
  • Data in accounts (posts, photos, messages, location logs)
  • Deletion requests
  • Backup schedules
After
  • Data in models (training, fine-tuning, evaluation, inference/extraction risks)
  • Derived artifacts
  • “Untraining” questions

“Deletion used to mean removing a record. Now it can mean unwinding a process.”

— TheMurrow Editorial

Why “delete my data” gets harder when AI is involved

Traditional deletion is conceptually straightforward: remove the record from a database, and clean it from backups on a schedule. Even then, companies can keep some data for legal obligations, security logs, or fraud prevention.

AI-era deletion adds a new category: derived artifacts. Modern AI systems often turn raw data into intermediate representations—embeddings, feature vectors, and other transformations—before training or fine-tuning. Then the trained model itself becomes an artifact: a set of weights shaped by what it has seen.

That raises a practical and legal question regulators are now grappling with: if a company deletes the original personal data, should it also remove the downstream byproducts that incorporated that data? In other words, is deletion only about storage—or also about influence?
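
To make "derived artifacts" concrete, here is a minimal Python sketch. Everything in it is illustrative: the toy_embedding function stands in for a real learned model, and the store names are hypothetical. The point it demonstrates is structural, not specific to any platform: the raw record and the artifact derived from it live in separate stores, so deleting one does not touch the other.

```python
import hashlib

# Toy "embedding": a deterministic numeric vector derived from text.
# Real systems use learned models; this stand-in only shows that the
# derived artifact is a transformation of the original, not a copy.
def toy_embedding(text: str, dims: int = 4) -> list[float]:
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dims]]

# Two separate stores, as in many real pipelines (names are hypothetical).
account_store = {"post_42": "my weekend photos and location"}
vector_store = {"post_42": toy_embedding(account_store["post_42"])}

# A deletion request removes the raw record...
del account_store["post_42"]

# ...but the derived artifact survives unless deleted explicitly.
print(account_store)              # -> {}
print("post_42" in vector_store)  # -> True: influence outlived the record
```

In real pipelines the same gap can apply to vector stores, fine-tuned model weights, and cached evaluation sets, which is why "deleted from where?" is the question that matters.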

Regulators are signaling “downstream” accountability

European regulators have started addressing these realities more directly. The European Data Protection Board (EDPB) published an opinion on 18 December 2024 focused on AI models and GDPR principles, including what it means for a model to be “anonymous,” how legitimate interest might apply, and what happens if a model is trained on unlawfully processed personal data. The EDPB emphasizes case-by-case assessment and highlights the risk of extraction or re-identification.

The implication for readers is concrete: companies may claim their models are “anonymized,” but regulators are skeptical of blanket assurances. If personal data can be inferred or retrieved, the compliance burden doesn’t evaporate—it intensifies.

Key Insight

When a company says a model is “anonymized,” treat it as a claim—not a conclusion. Regulators focus on whether data can be inferred, extracted, or re-identified.

The EU’s two-track regime: GDPR plus the AI Act’s phased deadlines

For readers in Europe—or anyone using services that operate there—privacy ownership now sits at the intersection of two major frameworks: the long-standing GDPR and the newer EU AI Act (Regulation (EU) 2024/1689).

The AI Act was published in the EU’s Official Journal on 12 July 2024 and entered into force 20 days later, on 1 August 2024. The law’s impact is not a single switch-flip. Obligations phase in over time, with key milestones the European Commission has highlighted:

- 2 February 2025: bans on certain AI practices and AI literacy obligations begin applying.
- 2 August 2025: governance rules and obligations for general-purpose AI (GPAI) models begin applying.
- 2 August 2026: the Act becomes “fully applicable” for most obligations.
- 2 August 2027: some high-risk rules for AI embedded in regulated products have extended transition.

Those dates matter because they create an uneven field: the AI systems you use in 2025 may be operating under different legal expectations than the same class of systems in 2027.

  • 12 July 2024: EU AI Act published in the Official Journal, cementing the EU’s regulatory direction for AI systems.
  • 2 Feb 2025: Bans on certain practices and AI literacy obligations begin applying; early behavioral and governance expectations start to bite.
  • 2 Aug 2026: Most EU AI Act obligations become fully applicable, shifting default expectations for transparency, governance, and compliance readiness.

GDPR still does the heavy lifting on personal data

Even with the AI Act in force, GDPR remains central to the basic question readers ask: “Can they use my data for training?” The EDPB’s December 2024 opinion underscores that the status of AI models under GDPR—especially claims of anonymity—requires careful assessment. A company’s confidence statement is not the same as a regulator’s conclusion.

“In Europe, AI compliance is not a substitute for data protection. It’s a second layer.”

— TheMurrow Editorial

The UK: GDPR-like rules, and a fast-moving fight over access and encryption

The UK remains broadly GDPR-shaped, but privacy ownership is complicated by regulatory churn and political pressure around access.

The UK Information Commissioner’s Office (ICO) maintains guidance on AI and data protection, while noting it is under review in light of the Data (Use and Access) Act, which became law on 19 June 2025. That “under review” caveat is more than bureaucratic housekeeping. It signals to organizations, and to citizens, that best practices are not settled and compliance expectations can shift.

Case study: Apple’s Advanced Data Protection and the geography of privacy

One of the clearest illustrations of privacy ownership colliding with state demands is Apple’s Advanced Data Protection (ADP) for iCloud. Apple states that ADP is no longer available to new UK users, and that existing UK users who already enabled it will be given time to disable it in order to keep using iCloud. Apple’s support documentation (published 22 September 2025) says ADP remains available elsewhere.

Readers don’t need to take sides to see the lesson: the privacy features you can “choose” are often constrained by where you live. Privacy ownership can be geographic, not personal.

  • 19 June 2025: UK Data (Use and Access) Act became law, prompting the ICO to flag its AI and data protection guidance as “under review.”
  • 22 Sep 2025: Date of the Apple support documentation covering UK-specific changes to Advanced Data Protection availability and user timelines.

The United States: no single privacy law, so your rights depend on your zip code

Americans often assume privacy works like other consumer rights: one national baseline, then variations. That is not the structure the U.S. has built for consumer privacy. There is no comprehensive federal consumer privacy law, which means practical rights are delivered through a growing patchwork of state statutes.

That fragmentation affects the everyday mechanics of privacy ownership:

- Whether you can opt out of certain uses of your data
- Whether you can demand deletion
- How “sensitive data” is defined and protected
- What rules apply to targeted advertising and related profiling

A “know your rights” approach in the U.S. starts with a less inspiring question: in which state are you legally a resident? That answer often determines the strength of your leverage.

What this means for AI-specific concerns

The AI twist is that training and model development are often centralized, while legal rights are local. Your data might be processed in one state, by a company headquartered in another, using infrastructure spread across several more. Without a federal standard, consumer protections can look inconsistent even when the underlying technology is the same.

A sober way to read the U.S. situation is not “you have no rights,” but “your rights are conditional.” The burden shifts to the individual to understand the relevant statute and exercise the available options.

What privacy ownership looks like in practice: a playbook for real people

Privacy ownership is rarely one grand move. It is a sequence of small, repeatable decisions that reduce the amount of data exposed and increase your ability to enforce limits later.

Reduce your exposure where it matters most

Some of the highest-impact moves are also the least dramatic:

- Share less sensitive content in places designed for discovery, virality, or search indexing.
- Treat cloud sync and “memories” features as publication, not storage.
- Assume anything you upload can be repurposed for secondary uses unless a policy or law clearly prevents it.

Not every service is equally risky. The practical question is what the service is built to do. A platform optimized for public sharing will tend to create more downstream reuse opportunities than a service designed for private storage.

Make your rights actionable, not theoretical

Legal rights are only useful when they are exercised. Under GDPR-style regimes, the most relevant rights for AI-era concerns often include:

- Access: learn what data a company holds and how it is used
- Deletion: remove data where deletion applies
- Objection: challenge certain processing, particularly where “legitimate interest” is invoked
- Portability: retrieve data in a usable format and move it elsewhere

A key mental shift: rights requests aren’t only for worst-case scenarios. They can be a routine part of managing your digital footprint, like checking a credit report.

Rights you can actually use (GDPR-style regimes)

  1. Request access to learn what data is held and how it’s used.
  2. Invoke deletion where it applies, and ask what happens to backups and retention categories.
  3. File an objection where processing relies on “legitimate interest.”
  4. Use portability to retrieve usable exports and move services (a small export-inventory sketch follows below).
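
To show what "making portability actionable" can look like, here is a minimal Python sketch. It assumes a hypothetical export.json file whose top-level keys are data categories (for example "posts" or "locations") mapping to lists of records; no specific platform's export format is implied. The idea is to turn an export into a quick inventory of what a company actually holds about you.

```python
import json
from collections import Counter

# Load a portability export; the filename and structure are assumptions,
# not any particular platform's format.
with open("export.json") as f:
    export = json.load(f)

# Count records per data category so you can see where your exposure
# concentrates before deciding what to delete or object to.
counts = Counter({category: len(records) for category, records in export.items()})

for category, n in counts.most_common():
    print(f"{category}: {n} records")
```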

A practical mindset shift

Treat privacy rights requests as routine maintenance—not a last resort. The earlier you act, the more leverage you typically retain.

The model problem: can your data be “untrained” from AI?

Readers are right to be suspicious of easy promises here. Once a model is trained, it is not a folder of records. It is a statistical system shaped by exposure to data. The EDPB’s December 2024 opinion points to the hard edge of the issue: if a model was trained on unlawfully processed personal data, downstream consequences may follow. That framing treats the model as part of the processing story, not a magical escape hatch.

Multiple perspectives: innovation vs. individual rights

Supporters of broad training permissions argue that restricting training data too tightly could slow innovation, entrench incumbents, or make models less representative. They also point out that models do not typically store personal data in a straightforward “database” form.

Privacy advocates counter that “not straightforward” is not the same as “impossible.” Risks like re-identification, extraction, and inference are precisely why regulators keep pressing on anonymity claims. The EDPB’s emphasis on case-by-case assessment reflects a wider institutional view: if a model can be used to recover personal information, then the personal-data story is not over.

The practical takeaway is not despair. It is clarity. When a company says, “We deleted your data,” the sophisticated follow-up question is: deleted from where, and what about the derived artifacts?

Question to ask support teams

When you hear “we deleted your data,” ask: deleted from where (account, logs, backups), and what happens to derived artifacts like embeddings or model influence?

The next two years: why 2025 and 2026 will change the default expectations

The EU AI Act’s timeline matters even for non-Europeans, because large AI providers tend to harmonize compliance across major markets. The dates are not abstract; they signal when stronger governance norms and obligations become non-negotiable for many actors.

Four numbers to keep in your head—and what they imply:

- 12 July 2024: the EU AI Act was published, cementing a regulatory direction.
- 2 February 2025: bans on certain practices and AI literacy obligations start applying.
- 2 August 2025: obligations for GPAI models begin applying, raising expectations for general-purpose systems.
- 2 August 2026: most obligations become fully applicable, pushing standardization and enforcement readiness.

Meanwhile, the UK’s ICO guidance being under review after the 19 June 2025 Data (Use and Access) Act signals that the UK’s approach to AI and privacy will keep evolving. Add Apple’s UK-specific ADP change (support documentation dated 22 September 2025), and the pattern becomes hard to ignore: privacy ownership will increasingly be shaped by regulatory negotiation, not just product design.

The reader’s advantage is timing. The rules are still settling. Choosing services carefully, limiting exposure, and asserting rights now will matter more than trying to retroactively clean up later.

A sharper definition of “owning your privacy” in the AI era

Owning your privacy isn’t a single setting. It is a posture: you treat personal data as something that can travel, be copied, be reused, and—through AI—become difficult to contain after the fact. You aim for leverage rather than fantasy control.

That means separating account data from model influence, understanding that deletion may not reach derived artifacts, and recognizing that law is fragmented. In the EU, GDPR and the AI Act form a two-layer regime with real deadlines. In the UK, guidance is shifting under legislative pressure, and privacy features can disappear depending on geography. In the U.S., rights depend on state lines.

A mature privacy strategy is not paranoia. It’s competence: fewer unnecessary disclosures, better choices about where you store sensitive material, and a willingness to use the legal tools available. AI makes privacy harder—but it also makes passive trust less defensible.
About the Author
TheMurrow Editorial is a writer for TheMurrow covering technology.

Frequently Asked Questions

What does “privacy ownership” actually mean?

Privacy ownership is less about perfect control and more about managing risk and enforcing rights. In practice, it means shaping how data flows, reducing exposure, and using legal tools—like access, deletion, objection, and portability—where they apply. AI raises the stakes because data can be reused for training, evaluation, or monitoring long after collection.

If I delete a post or photo, can it still affect an AI model?

Yes, potentially. Deleting content from an account may remove it from public view, but it doesn’t automatically undo downstream use, such as training or derived artifacts like embeddings. Regulators are increasingly focused on whether deletion obligations should extend beyond raw data to what was built from it.

What’s the difference between “data in accounts” and “data in models”?

Account data is what a platform stores about you—messages, photos, purchases, location history. Model data is about influence: whether your information shaped a trained AI system, and whether it can be inferred or extracted later. Many privacy disputes now hinge on that second category, which is harder to audit.

When does the EU AI Act start to matter for regular users?

The EU AI Act entered into force in summer 2024, but obligations phase in. Key dates include 2 Feb 2025 (certain bans and AI literacy obligations), 2 Aug 2025 (governance and GPAI obligations), and 2 Aug 2026 (most rules fully apply). Users may see changes in transparency, policies, and product behavior as companies prepare.

Does GDPR still apply to AI training in Europe?

Yes. GDPR remains central for questions about whether personal data can be used to train or fine-tune models. The EDPB’s 18 Dec 2024 opinion addresses issues such as anonymity claims, legitimate interest, and consequences when models are trained on unlawfully processed personal data. Companies can’t rely on broad “anonymized” assertions without scrutiny.

Why is privacy ownership harder in the U.S. than in the EU?

The U.S. lacks a comprehensive federal consumer privacy law, so rights are governed by a patchwork of state statutes. That affects whether you can opt out, delete data, and limit uses like targeted advertising. For AI-related concerns—training and reuse especially—this fragmentation can make it harder to know what leverage you have without checking your state’s rules.
