[Air-L] Governing AI Search: New AI Forensics Policy Report
Natalia Stanusch
n.b.stanusch at uva.nl
Mon Jan 19 02:34:06 PST 2026
Dear AoIR colleagues,
I wanted to share the latest policy report from AI Forensics, which may be of interest to some of you. It follows more than a year of research into LLM chatbots and their growing role in how people search online.
https://www.aiforensics.org/work/governing-ai-search
While there has been extensive public scrutiny of content moderation on social media platforms, there has been almost no attention paid to how moderation works when AI chatbots become the search interface. Our report addresses this growing gap.
Although AI search shares some similarities with traditional search engines, we believe it creates new risks of deepening the harms already associated with traditional search. And yet we find that the current European regulatory landscape is fragmented. The Digital Services Act’s focus on the ongoing operational oversight of user-generated content and the AI Act’s emphasis on pre-deployment product safety create a regulatory divide that AI search systems traverse uneasily.
Our report therefore asks: how can this regulatory fault line be better navigated, and how can the resulting gaps be filled to match this new reality? We propose a conceptual framework that responds to these shortcomings with anticipatory forms of governance for AI search, and we outline a set of policy propositions addressing its risks, based on three case studies: Copilot in Bing, Gemini, and ChatGPT.
Main ideas of the report include:
- AI search systems occupy a regulatory blind spot between the DSA's focus on user-generated content and the AI Act's pre-deployment safety requirements
- We extend the notion of "moderation" to AI search and map instances of moderation, including socio-technical interventions referred to as "value alignment"
- Current frameworks inadequately address the information-related systemic risks posed by AI search at scale; we argue for an integrated (DSA and AI Act) governance framework for AI search
AI Forensics is a European non-profit that conducts independent, high-profile technical investigations to uncover and expose harms caused by major technology platforms' algorithms, with the aim of holding those platforms accountable.
Best regards,
Natalia
Natalia Stanusch
Researcher, AI Forensics<https://www.aiforensics.org>
PhD Candidate, University of Amsterdam
ASCA | Media Studies
Recent publications:
Stanusch, N. (2025). #TargetedAds, or memeing dataveillance on TikTok: How users comply, oppose, and imagine datafication practices. Big Data & Society, 12(4). DOI: 10.1177/20539517251386041
Stanusch, N., Degeling, M., Romano, S., Schueler, M., & Semenzin, S. (2025). "AI-Generated Algorithmic Virality: How Synthetic AI Imagery and Agentic AI Accounts Try to Game TikTok and Instagram." AI Forensics. www.aiforensics.org/work/gen-ai-slop | arXiv:2508.01042.
Stanusch, N. (2025). ‘Esoteric AI.’ In Vieira, S., Flynn, P., and Piet, N. (Eds.), Slow AI. AIxDESIGN, the Netherlands. DOI: 10.13140/RG.2.2.19187.03368.