[Air-L] CfA: Theorizing Platform Content Moderation: Power, Resistance, and Democratic Control

J.C. Vieira Magalhaes j.c.vieira.magalhaes at rug.nl
Tue Jun 6 00:45:46 PDT 2023


Hello all,

Just a reminder that the deadline for this workshop on how to theorise content moderation is fast approaching.

--

CfA: Theorizing Platform Content Moderation: Power, Resistance, and Democratic Control
 
 
Key facts:
-Conference: 2023 MANCEPT (Manchester Centre for Political Theory) Workshops (https://sites.manchester.ac.uk/mancept/mancept-workshops/mancept-workshops-2023/list-of-panels-a-z-2023/theorizing-platform-content-moderation/).
-Deadline for submission of abstracts (300 words): 11th of June (any time zone).
-Dates/location of the workshop: 11th-13th of September 2023. The workshop will take place in Manchester (UK), but remote participation is also possible.
 
 
Workshop description:
Platform content moderation has emerged as a novel form of mass speech governance, able to influence billions of people globally. Much of the growing scholarship on it focuses on describing moderation’s functioning, ambiguities, and technologies, and on how to hold platforms to constitutional values. Yet, despite its obvious political nature, content moderation remains under-theorized as a political practice.
 
This is puzzling, as moderation rearticulates key concepts of political theory. Despite their unilateral ability to moderate, platforms often seek to appease some actors in the design and enforcement of their rules. These processes are hardly linear, though: not all voices, from all countries, at all times, are equally heard. While moderation has been used against authoritarian actors, it has also been shown to reinforce racist, sexist, and neo-colonial structures, often to advance companies’ political and economic interests globally. This evidences the need to understand how moderation relates to representation, recognition, and plurality, which are closely associated with matters of justice, equality, and dignity. Similarly, it is unclear what resistance to these systems’ patterns of in- and exclusion might (and ought to) mean.
 
Two factors make platform content moderation challenging to address through usual normative frameworks, such as legal rights. Firstly, platforms are a peculiar kind of organization: globally operating corporations whose immense power is not anchored in processes of political legitimation (e.g., elections) or even a clear polity. Despite the legality of their moderation practices, which are often protected by so-called ‘safe harbour’ laws, these companies still owe us something morally – but what, exactly, and which ‘us’ is this? Secondly, much of moderation today is automated, commonly through machine learning systems. As a consequence, the meaning of “objectionable” or “desirable”, or how to punish those who violate these definitions, may emerge not from direct human reasoning but from probabilistic calculations based on complexly constructed datasets. Whose voice is represented and silenced when thousands of data annotators, moderators, officers, and technologists play some role in the construction of the algorithms that identify and control, say, hate speech? How can we account for the cascading layers of rules, institutions, and actors?
 
Workshop aims: This workshop aims to address the urgent task of theorizing platform content moderation. We especially invite scholars working from the perspective of radical democratic theory, democratic resistance, decolonial theory, and political economy to consider three broad questions:
 
(1) How should we conceptualize content moderation as a form of power, and in which ways does it differ from previous forms of speech control?
(2) What does proper resistance to moderation mean, and how can it tackle the multiple dynamics of in- and exclusion? And
(3) To what extent, and how, should democratic control over content moderation be organised?
 
 
How to apply: Send a 300-word abstract to n.appelman at uva.nl. The deadline is the 11th of June (any time zone). Full papers are not required.
 
When & where: The workshop will take place, preferably in person, from the 11th to the 13th of September 2023, in Manchester (UK). Submissions to present online will also be considered.
 
Organizers:
 
· Naomi Appelman, IViR (Institute for Information Law), University of Amsterdam (n.appelman at uva.nl)
· João C. Magalhães, Centre for Media and Journalism Studies, University of Groningen (j.c.vieira.magalhaes at rug.nl)


--
João C. Magalhães <https://jcmagalhaes.com/>
Assistant Professor in Media, Politics and Democracy
University of Groningen | Centre for Media and Journalism Studies

Selected publications: A history of objectionability in Twitter’s moderation practices <http://mediarxiv.org/wvp8c>, 2023 (forthcoming Journal of Communication, w/ Emillie de Keulenaar, Bharath Ganesh) | Social media, social unfreedom <https://www.degruyter.com/document/doi/10.1515/commun-2022-0040/html>, 2022 (Communications, w/ Jun Yu) | Big Tech, data colonialism and the reconfiguration of social good <https://ijoc.org/index.php/ijoc/article/view/15995>, 2021 (International Journal of Communication, w/ Nick Couldry) | Algorithmic visibility and bottom-up authoritarianism in the Brazilian crisis <http://etheses.lse.ac.uk/4042/>, 2019 (PhD, LSE) | Considering algorithmic ethical subjectivation <https://journals.sagepub.com/doi/full/10.1177/2056305118768301>, 2018 (Social Media + Society)




