I Clicked “I Agree”, But What Am I Really Consenting To?
More than three decades after the birth of the World Wide Web, most of us have become accustomed to blindly clicking “I agree” on privacy policies without a second thought. These consent mechanisms were designed for a simpler digital era, when the limits of data use were clearer. But with the rise of generative AI systems that can create human-like content from vast datasets, those frameworks are showing their age. The question is no longer just what you're agreeing to, but whether meaningful consent is even possible in the age of AI.
The Main Problems with Consent in AI
When you give a company permission to use your data, you probably have a specific purpose in mind. Maybe you're letting a voice assistant record your commands to improve its service, or allowing a photo app to analyze your pictures to help you find them later. But generative AI creates three fundamental problems that traditional consent frameworks simply weren't built to handle:
- The Scope Problem
- The Temporality Problem
- The Autonomy Trap
The Scope Problem: Consenting to the Unknown
Consider the facial recognition company Clearview AI, which scraped more than 20 billion images from social media, online profiles, and photography websites without obtaining consent from anyone. These images were used to build a facial recognition database for law enforcement and government surveillance – something most people would never have anticipated when posting their vacation photos or professional headshots online.
Even when companies do ask for permission, the possible uses of our data have become impossible to predict. Imagine a voice actor who agrees to record an audiobook. An AI trained on the actor's voice could later be used to make political endorsements or give financial advice – uses entirely outside what the actor originally authorized. This creates a problem of “representational drift”: the connection between our initial consent and what our data is eventually used for grows increasingly tenuous as AI models evolve.
The Temporality Problem: When “No” Becomes Meaningless
Unlike a discrete transaction with clear boundaries, AI creates an open-ended relationship between people and their digital representations. Once your data enters a training dataset, extracting its influence becomes technically challenging, if not impossible.
During the 2023 Hollywood strikes, screen actors feared that AI systems trained on their performances could generate new content using their likeness indefinitely—well beyond any initial permission. While the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) struck deals with companies like Replica Studios and Narrativ to give actors more control over how their voices are used, a fundamental issue remains: once the AI learns from your data, it can't truly “unlearn” it.
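To see why, consider a deliberately tiny sketch: a toy linear model in Python/NumPy standing in for a large generative model (the data and weights here are illustrative assumptions, nothing more). Every record contributes to every learned parameter, so the only exact way to remove one person's influence is to retrain from scratch without them:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # 1,000 synthetic "user" records
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])   # hidden pattern in the data
y = X @ true_w + rng.normal(scale=0.1, size=1000)

# "Training": fit weights by least squares. Every record, including
# record 0, nudges every learned weight.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Deleting record 0 from the dataset afterwards changes nothing about w:
# its influence is already baked into the model.
w_retrained, *_ = np.linalg.lstsq(X[1:], y[1:], rcond=None)

print(w)            # still carries record 0's influence
print(w_retrained)  # exact "unlearning" required full retraining
```

For a model trained on billions of documents, that retraining step is precisely what makes honoring a withdrawn “no” so costly in practice.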
The Autonomy Trap: When Saying “Yes” Undermines Your Future Choices
Perhaps most troubling is how consent in AI contexts can undermine the very autonomy it claims to protect. By agreeing to let our data train AI systems, we may be inadvertently authorizing systems that later influence how others perceive us.
Remember the famous case of Target identifying a teenage girl's pregnancy before her father knew? The company's algorithm recognized purchase patterns common among pregnant shoppers and began sending baby product ads to her home. It had inferred sensitive health information from seemingly innocuous everyday purchases – something no shopper could reasonably anticipate.
This generates a feedback loop: our consented data enables AI to make predictions, which then shape how services and other people interact with us and perceive us, potentially reinforcing certain aspects of our identity while suppressing others.
Why Current Legal Frameworks Fall Short
The European Union's General Data Protection Regulation (GDPR) represents one of the strongest data protection frameworks in the world. It requires consent to be “freely given, specific, informed, and unambiguous”. But these requirements become almost impossible to fulfill in the context of generative AI:
- Specific: How can consent be specific when the potential uses of AI-generated content are virtually limitless?
- Informed: When even AI developers sometimes can't explain why their models produce specific outputs (the “black box” problem), how can average users be truly informed?
- Unambiguous: The sheer volume of potential outputs from generative systems makes comprehensive disclosure impossible, rendering any consent inherently ambiguous.
Real-World Implications
The Facebook case in Germany illustrates these problems. In 2019, Germany's competition authority, the Bundeskartellamt, ruled that Facebook had abused its dominant market position by forcing users to consent to extensive data collection across multiple platforms. The ruling highlighted the power imbalance in the digital economy: without meaningful alternatives, users had little choice but to accept whatever terms were presented.
Similarly, voice actors have discovered synthetic copies of their voices being used for unauthorized commercial applications, political messages, or inappropriate content they would never have agreed to. Their autonomy is violated not just through the unauthorized use of their original recordings, but through the creation of entirely new synthetic statements they never made.
Moving Beyond Individual Responsibility
The current model for consent places too much responsibility on individuals while failing to account for the complexities of AI systems. For consent to be meaningful, responsibility must extend beyond individuals to include corporations, developers, and policymakers:
- Shift the burden: Organizations should design systems with privacy-preserving defaults rather than requiring constant user vigilance.
- Create collective advocacy: Industry consortiums, professional associations, and civil society groups could develop shared standards for ethical AI development that go beyond minimal compliance. The SAG-AFTRA agreements with voice technology companies demonstrate how collective bargaining can help address power imbalances.
- Build better technological safeguards: Technical solutions like federated learning (where AI learns from data without it leaving your device) and differential privacy (adding noise to data to protect individuals) could reduce the need for all-or-nothing consent decisions; see the sketch after this list.
- Restore the balance of power: Communities and individuals need tools and platforms that give them meaningful negotiating power in determining how their data is used, rather than facing take-it-or-leave-it terms from technology giants.
- Develop data fiduciaries: Similar to financial or legal fiduciaries, we could establish trusted intermediaries with a legal duty to act in people's best interests when managing their data, helping individuals navigate complex AI systems.
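To make the technological-safeguards point concrete, here is a minimal sketch of differential privacy's core mechanism – the Laplace mechanism applied to a simple count query. The function name, the epsilon value, and the synthetic data are illustrative assumptions, not a production implementation (federated learning, by contrast, would keep raw records on each user's device and share only model updates):

```python
import numpy as np

def private_count(flags, epsilon=0.5):
    """Return a count with Laplace noise calibrated to sensitivity 1:
    adding or removing any one person changes the true count by at most
    1, so noise of scale 1/epsilon masks any individual's presence."""
    true_count = int(np.sum(flags))
    noise = np.random.default_rng().laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# 10,000 synthetic users; each flag marks whether that user opted in.
opted_in = np.random.default_rng(1).integers(0, 2, size=10_000)
print(private_count(opted_in))  # useful aggregate, noisy at the individual level
```

The design choice worth noticing is that privacy here becomes a property of the computation rather than of a checkbox: the aggregate remains useful, while no individual's record can be confidently inferred from the output.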
The Path Forward
Perhaps it’s time to acknowledge that traditional consent models are insufficient for the AI era. We need frameworks that recognize the distinctive challenges of generative technologies while still protecting individual rights.
This might include collective governance mechanisms where communities, not just individuals, have a say in how data is used. It could involve technical solutions like “algorithmic guardians” that help manage our digital presence across platforms. Or it might require entirely new legal frameworks that focus less on point-of-collection consent and more on ongoing accountability for how AI systems use and represent personal data.
We need to stop pretending that “I agree” is enough. AI is changing the rules of consent without asking us. The gap between what we can meaningfully consent to and what AI systems can do with our data has grown too wide. Bridging this “consent gap” will require reimagining not just consent itself, but our entire approach to data rights in the age of artificial intelligence.
This blog post is based on a forthcoming book chapter, co-written with Bruna Trevelin, exploring the challenges of consent in AI contexts. The chapter will appear in a collective volume edited by Marios Constantinides and Daniele Quercia (Nokia Bell Labs), to be published by Cambridge University Press.