Emory International Law Review
Abstract
The European Union’s Artificial Intelligence Act (AI Act) represents a significant step in regulating AI technologies, but this paper argues that its provisions on manipulation are critically under-inclusive. Through a comprehensive analysis of AI-enabled manipulation and the current EU legal framework, this paper offers an account of how the AI Act’s narrow focus on subliminal techniques and purposeful manipulation fails to address the full spectrum of AI-driven manipulative practices.
The paper develops its argument in four parts. First, it provides an overview of AI-enabled manipulation, highlighting its unique characteristics and the structural changes it introduces to democratic processes. Second, it examines the relevant EU legal framework, including the Digital Services Act and the Regulation on political advertising. Third, it critiques the AI Act’s provisions, exposing the flaws in its assumptions about human rationality and its outdated focus on subliminal techniques. Finally, it proposes solutions to address the Act’s under-inclusiveness, including broader interpretations of key terms and a new definition of manipulation as “using someone as a means against themselves.” These insights offer valuable lessons both for the interpretation of the AI Act and for other jurisdictions. As AI systems become increasingly agentic, AI-enabled manipulation is expected to grow more pervasive, subtle, and potentially harmful, underscoring the urgency of robust and adaptive regulatory frameworks that can effectively safeguard individual autonomy, dignity, and the integrity of democratic processes.
Recommended Citation
Claire Boine, The AI Act Manipulation Gap, 39 Emory Int'l L. Rev. 417 (2025).
Available at: https://scholarlycommons.law.emory.edu/eilr/vol39/iss2/1