[Air-L] Call for Papers ATIC Journal: Artificial Intelligence, Commons, and Collaboration
Stéphan-Eloïse Gras
stephan.eloise at gmail.com
Tue Sep 30 09:51:38 PDT 2025
Dear colleagues,
We are pleased to announce the call for papers for the upcoming issue of
the *ATIC Journal*, dedicated to the theme *Artificial Intelligence,
Commons, and Collaboration*.
*Guest Editors:*
- *Béa Arruabarrena*, CNAM – DICEN Laboratory
- *Stéphan-Eloïse Gras*, CNAM – DICEN Laboratory
*Important dates:*
- *Deadline for abstract submission:* October 30, 2025
- *Notification of acceptance:* November 30, 2025
- *Full paper submission (30,000–50,000 characters):* April 1, 2026
- *Publication:* Summer 2026
Although the journal publishes in *French*, we welcome *submissions in
English*.
Further details, including the full call for papers and submission
guidelines, are available here (in French):
https://www.dicen-idf.org/intelligence-artificielle-communs-collaboratif/
We warmly invite contributions that engage critically with the
intersections of artificial intelligence, commons, and collaborative
practices.
Sincerely,
*Stéphan-Eloïse Gras* and *Béa Arruabarrena*
Since the public release of ChatGPT in November 2022, the media sphere has
witnessed intense debates and passionate interventions from experts and
scholars across diverse fields—computer science, linguistics, biology,
statistics, ethics, law, among others. Each field feels directly implicated
in the spread of so-called “generative” artificial intelligence
technologies, which rely on connectionist deep learning techniques. The
proliferation of these artefacts, commonly referred to as AI (by metonymy
with the sub-discipline from which they originate), tends to obscure the material
and sociotechnical conditions of their production. Indeed, AI artefacts
emerge from an often-overlooked assemblage of multiple computer engineering
traditions: systems and network computing, robotics, software engineering,
expert systems or symbolic AI, machine learning, and deep learning.
The aim of this issue is precisely to analyze the composite and
heterogeneous nature of AI artefacts, to take into account the social and
organizational dynamics underlying their existence, and to question the
very existence and regimes of AI “commons” (i.e., open datasets, models, or
weights).
Reducing AI to generic objects (as in the case of “general AI”; Julia,
2019), intelligent artefacts (Agostinelli & Riccio, 2023), or nominalist
abstractions (Bachimont, 2014) conceals their composite character as well
as the collective dynamics and power relations that bring them into being.
Conversely, a sociotechnical perspective (Flichy, 2008) reveals that the
making of AI artefacts rests on at least three simultaneous processes: (1)
the elaboration of a global application perspective; (2) data collection,
classification, and processing; and (3) practices and uses. Focusing on
“frames of use” draws attention to alignment phases, where artefacts are
adjusted for social acceptability or a “license to operate” (Alcantara &
Charest, 2023). The sociotechnical approach thus highlights the inseparable
social and technical, material and discursive dimensions of information and
communication systems.
This issue of ATIC calls for critical inquiry into the collective
organizations—firms, research laboratories, public or nonprofit bodies,
independent initiatives—responsible for the design and development of AI
artefacts. In line with Taylor (2011) and from a pragmatic standpoint,
attention is directed to the constitution of organizational forms that
“make” AI: material operations, products, websites, statements, speech
acts, and discursive practices enacted within situated contexts (Cooren,
Brummans & Charrieras, 2008; Cooren, 2024).
Such an approach also requires investigating the asymmetries resulting from
the centralization of design and development in the hands of a few dominant
actors, and their consequences for information and communication practices
(Ertzscheid, 2023). The fragile “bigger is better” narrative has subjected
AI to a purported “scaling law” (Varoquaux et al., 2024), in which the
usefulness and performance of systems are conditioned by the sheer size of
algorithmic architectures, characterized by trillions of parameters.
Similarly, the design, training, and deployment of AI models—today largely
dominated by a small number of private actors—limit possibilities for
collective appropriation. This hyper-concentration directly conflicts with
the logic of digital commons, which rests on openness, sharing, distributed
governance, and the empowerment of user communities. AI as currently
organized tends to consolidate regimes of exclusive property and unilateral
surveillance (Zuboff, 2019), rather than foster co-production and the
circulation of knowledge and technological tools.
Accordingly, this issue of ATIC invites analysis of AI commons and their
sharing regimes: negotiation spaces where AI must serve collective projects
(Pene, 2017). Contributions are encouraged to address collaborative
dynamics, including emerging forms of collaboration and participatory
methodologies enabled by transdisciplinary approaches. Digital commons are
defined as collectively produced and maintained digital resources, governed
by rules that preserve their shared and collective character (Baudoin,
2023). These commons involve researchers, citizens, and public institutions
confronting the complex challenges of AI commons (Fitzpatrick, 2019). Such
an approach views AI artefacts as human productions dependent on collective
resources and methodologies, serving a more collaborative economy (Benkler,
2011) grounded in open ecosystems (Bauwens, 2005). Collective, diverse
frameworks have demonstrated their capacity to produce and regulate
technologies as commons through shared governance. While research has
documented the role of major corporations in the development of open-source
projects such as Linux (Broca, 2013), the centrality and ambiguity of
open-source artefacts (datasets, models, algorithmic architectures,
weights, etc.) in contemporary generative AI call for renewed attention.
Engaging the notion of commons also foregrounds the ethical stakes of AI.
Against utilitarian and accelerationist perspectives, which dominate
discourses emphasizing efficiency and profitability (Bostrom, 2014;
Tegmark, 2017) often at the expense of distributive justice and the most
vulnerable (Rawls, 1987), a commons-based ethic, following Haraway (1988),
Star (1999), and Zacklad & Rouvroy (2022), highlights power relations and
social asymmetries. It expands debates toward equity, social justice, and
inclusion. Noble (2018) demonstrates how algorithmic biases reflect
historical inequalities embedded in data and design choices, while
participatory and open research approaches help mitigate systemic biases
(Fitzpatrick, 2019). Open-source collectives thus provide concrete examples
to examine both the potentials and the limits of this approach. In a
similar vein, Floridi (2013) advocates deliberative processes where
citizens, developers, regulators, and users co-develop ethical norms
responsive to contemporary challenges.
Finally, by interrogating the sociotechnics of AI through the lens of
collaboration and commons, this issue of ATIC invites reconsideration of the
status of knowledge, content, and interactions produced by AI. Under what conditions
can AI artefacts—language models, generative applications, datasets—be
conceived as spaces of creation, collective intelligence, and
experimentation, grounded in co-construction, mutualization, and regulation?
We invite contributions from scholars across disciplines—information and
communication sciences, sociology, anthropology, law, computer science,
design—as well as from digital commons practitioners and communities.
Submissions may be theoretical or empirical, provided they engage with the
ways commons and collaboration reshape the AI landscape.
*Suggested Themes*
*Axis 1 – Sociotechnical approaches to AI artefacts*
Exploring AI as sociotechnical systems by analyzing artefacts,
infrastructures, and conditions of implementation. Contributions may
address materiality, technical inscriptions, and collective arrangements
that render AI operative, as well as issues of access inequalities,
invisibilized labor, translation processes, and technological lock-in.
Topics may include AI design processes, data corpus construction, language
model use, open/expert agent interactions, or shared infrastructures such
as Hugging Face, GitHub, and other platforms. Special attention may be
given to opacity, standardization, and modularity that support large-scale
collaboration, and to tensions between community innovation and dependence
on dominant models.
*Axis 2 – Collaborative and organizational dynamics of AI*
Analyzing AI as organizational, collective, and experimental practice. This
includes forms of coordination, cooperation, and governance in the design,
development, and use of AI. Topics may cover organizational practices,
skills, literacies, participatory AI methodologies, collaborative robotics,
or emergent governance of digital commons, platforms, and data. Empirical
studies documenting usages, experimental contexts, or organizational
innovations are particularly welcome.
*Axis 3 – Epistemological, ethical, and political issues*
Critical and reflexive perspectives on AI’s knowledge regimes, ethics, and
politics. Contributions may focus on discourses and practices around
governance and digital sovereignty, the knowledge and norms structuring AI
artefacts, or the asymmetries of power arising from data economies,
computing infrastructures, or regulation. Submissions may explore
resistance, counter-expertise, or citizen mobilizations (e.g., “Slow AI”,
Data Detox), situated ethics, or the geopolitics and environmental costs of
AI infrastructures. This axis also invites reflection on the conditions for
commons-based AI that reshape relations between technology, power, and
society.
*Submission Guidelines*
Submissions should include:
- Author identity, institutional affiliation, and title on the first
page; anonymized title and text on subsequent pages (doc/odt format).
- A clear and explicit title.
- An abstract (max. 3,000 characters, excluding references) outlining
the research problem, theoretical framework, methodology, and expected
results or contributions.
- A list of references.
*Timeline*
- *Submission of proposals (max. 3,000 characters):* October 30, 2025
- *Notification of acceptance:* November 30, 2025
- *Full paper submission (30,000–50,000 characters):* April 1, 2026
- *Publication:* Summer 2026
Although the journal publishes in *French*, *submissions in English are
welcome*.
*Contact:*
beatrice.arruabarrena at lecnam.net
stephan-eloise.gras at lecnam.net
revue at revue-atic.fr