[Air-L] Workshop on the politics of GenAI governance
João C. Magalhães
joao.magalhaes at manchester.ac.uk
Tue Apr 21 02:12:13 PDT 2026
Dear all,
Klara Matusewicz (University of Manchester), Robert Gorwa (WZB Berlin), and I are putting together a two-day workshop on 'The Politics of GenAI Governance' at Mancept 2026 in Manchester, September 2–4.
See the CfA below and, if you’re interested, please send a *500-word abstract by May 21* to joao.magalhaes at manchester.ac.uk.
Full details of Mancept 2026 here: https://research.manchester.ac.uk/en/activities/mancept-workshops-2026/
—
The politics of GenAI governance
The governance of GenAI is often framed as a computational task (how can code and data ensure that AI systems reliably do what designers want them to do?), an abstract normative question (which norms should AI systems behave in accordance with?), or a narrowly legal challenge (how can different regulatory bodies define, codify, and enforce these norms, in computational and institutional terms?). Yet it is also, obviously and perhaps centrally, a political problem: which actors, representing whose interests, have more or less power and legitimacy to influence the mechanisms whereby GenAI is governed into being? Who stands to benefit, and who to lose, from these processes?
The need for perspectives rooted in political theory is made increasingly urgent by two trends in how GenAI governance is defined. First, the discourse about what governance means and what constitutes success is rapidly consolidating around a narrow set of actors and understandings. Because this discourse is often spurred by private companies operating with limited public oversight, it and its related proposals call for an inquiry into the rights, responsibilities, and ideologies that underpin them, particularly as the datasets and training used to govern GenAI have previously resulted in bias, exclusion, and the reinforcement of existing systems of oppression. Second, governance proposals aspire to a global scope, implicitly advancing claims about universal values. Such disregard for pluralism raises questions about legitimacy and concerns about the associated asymmetries in geopolitical power. Importantly, these trends unfold amid a historical crisis of liberal democracy.
As GenAI becomes increasingly embedded in everyday life, its governance is no longer just about preventing reward hacking during training or the hypothetical, potentially catastrophic consequences of mis-specified goals. It is about institutionalising quotidian norms and inequities, thus encoding particular visions of social order into what is quickly becoming a pervasive, all-purpose sort of infrastructure.
To truly understand and intervene in the governance of GenAI, we first need to treat it as a collective and contested political practice. This workshop addresses this urgent task by inviting contributions that grapple with the visible and covert disagreements, authorities, and rationalities that shape the emergence of GenAI under conditions of extreme uncertainty.
In particular, we welcome scholars working on topics related to, but not limited to, the following guiding questions:
1. How should we conceptualise GenAI governance as a form of power, and in which ways does it differ (or not) from previous control systems?
2. How does GenAI governance intersect with theories and the current realities of democracy and authoritarianism?
3. Can traditional approaches explain the political economy of GenAI?
4. In what ways do existing GenAI governance proposals implicitly rely on contested political assumptions (e.g. moral realism, liberal individualism)?
5. To what extent and how should GenAI governance be democratised?
--
João C. Magalhães
Senior Lecturer in AI Trust and Security | School of Arts, Languages and Cultures<https://www.alc.manchester.ac.uk>
Head of the AI Trust and Security Cluster | Centre for Digital Trust and Society<https://www.humanities.manchester.ac.uk/research/centres-institutes/clusters/ai-trust-and-security/>
University of Manchester
https://jcmagalhaes.com/
Selected publications: The emergence of platform illiberalism, 2026 <https://journals.sagepub.com/doi/10.1177/14614448261424889?_gl=1*97g6hw*_up*MQ..*_ga*OTU5MTk5NDYwLjE3NzM3NDkzMTc.*_ga_60R758KFDG*czE3NzM3NDkzMTYkbzEkZzAkdDE3NzM3NDkzMTYkajYwJGwwJGgxOTcyNDIyMDEy> (New Media & Society, w/ Clara Iglesias Keller and Rob Gorwa) | Socially blind engineering in Facebook's foundational technologies, 2025<https://link.springer.com/article/10.1007/s13347-025-00971-9> (Philosophy & Technology, w/ Nick Couldry) | Open-ended technological inevitability in journalistic discourses about AI, 2025<http://www.tandfonline.com/doi/full/10.1080/21670811.2025.2522281#d1e238> (Digital Journalism, w/ Rik Smit) | A history of objectionability in Twitter’s moderation practices, 2023<https://academic.oup.com/joc/advance-article/doi/10.1093/joc/jqad015/7204763?utm_source=authortollfreelink&utm_campaign=joc&utm_medium=email&guestAccessKey=fe630b65-d137-4378-bfb2-44026df942a7&login=false> (Journal of Communication, w/ Emillie de Keulenaar, Bharath Ganesh) | Social media, social unfreedom, 2022 <https://www.degruyter.com/document/doi/10.1515/commun-2022-0040/html> (Communications, w/ Jun Yu) | Big Tech, data colonialism and the reconfiguration of social good, 2021<https://ijoc.org/index.php/ijoc/article/view/15995> (International Journal of Communication, w/ Nick Couldry) | Considering algorithmic ethical subjectivation, 2018<https://journals.sagepub.com/doi/full/10.1177/2056305118768301> (Social Media + Society)