[Air-L] IJOC-CfP: Artificial Intelligence and New Media in Challenges
Seungahn Nah
beatus71 at gmail.com
Thu Feb 10 17:46:46 PST 2022
*Call for Papers*
*Artificial Intelligence and New Media in Challenges: Algorithmic Bias and
Ethical Issues*
Artificial Intelligence (AI) technologies and applications have rapidly
penetrated all aspects of social, political, civic, and cultural life.
Despite the exponential growth and development of AI, algorithmic bias
toward gender, age, sexuality, race/ethnicity, and ideology prevails across
digital media platforms, ranging from search engines through news sites to
social media. This omnipresence of algorithmic bias gives rise to social
problems such as systematic and repeated unfairness, discrimination, and
inequality, privileging certain groups over others, violating privacy, and
reinforcing social and cultural biases.
However, the causes, components, and consequences of algorithmic bias
and its attendant ethical issues remain largely unexplored in scholarship.
Recent successes in AI are based on "black-box" machine learning models
trained on tremendous amounts of data, which makes it extremely difficult
to understand or interpret their underlying mechanisms despite their
remarkable predictive power. Therefore, this special section calls for
papers that address these structural, longstanding issues as they affect
individuals, groups, communities, and countries across the socio-economic
and ideological spectrums. The special section is particularly interested
in studies that empirically demonstrate the aforementioned issues and
provide solutions for better design, development, application, and
implementation of AI and digital media. The following research questions
are therefore central to this special section:
• What are the causes of algorithmic bias, and how do they relate to
ethical, policy, and legal issues such as privacy?
• What are the components of biased algorithms?
• To what extent does algorithmic bias produce and reproduce social and
cultural biases, and vice versa?
• How can we understand, predict, and trust the behaviors of AI systems?
• What are the intended or unintended consequences of biased and
discriminatory algorithms?
• What are the best practices in data collection and model development
for AI applications?
• How do we detect, understand, and mitigate algorithmic bias and the
connected ethical, policy, and legal issues so that scholars and
developers alike adopt best practices?
In doing so, the special section emphasizes the inextricably
interwoven relationship among data and media bias, model bias, and social
bias. That is, data and media bias leads to unbalanced training data,
resulting in model bias. Model bias, in turn, reinforces data and media
bias and produces discriminatory impacts on humans and society. Human and
societal bias then produces skewed representation and participation,
feeding back into data and media bias. This “vicious circle” reinforces
bias throughout the AI system and its development, use, application, and
practice. Given algorithmic bias toward individuals, groups, and
communities across digital media platforms and the ethical issues it
raises, this special section examines, but is not limited to, the
following topics:
• Algorithmic bias, discrimination, and inequality toward
underrepresented groups and communities
• Interaction and amplification of bias between AI and media
• Perceived (un)fairness of algorithm-mediated communication, systems,
design, and applications
• (Dis)trust in algorithmic news curation, recommendation, and
algorithmic gatekeeping
• Algorithmic bias in news, filter bubbles or echo chambers, opinion
polarization, and gaps in knowledge and participation
• (Dis)trust in algorithm development, management, and governance
• (Dis)trust in practical applications of and data-driven
decision-making in AI systems
• Impacts of algorithmic bias on cognitive, affective, attitudinal,
perceptual, and behavioral bias in various social, political, civic, and
cultural contexts
• Impacts of automated surveillance technologies (e.g., facial
recognition) on privacy
• Explainable AI for enhancing public trust
• Algorithmic and non-algorithmic solutions to assess and mitigate AI
biases and risks
We are open to diverse methodological approaches, including quantitative,
qualitative, and computational methods. Interested authors should submit an
abstract of no more than 1,000 words, along with a title page containing a
brief author bio and contact information. A detailed timeline is presented
below.
*Proposed Timeline:*
• Abstract submission due by March 1, 2022
• Decisions on invitations for full manuscripts by May 1, 2022
• Full manuscript submission due by September 1, 2022
• Decisions on publication by December 1, 2022
• Final manuscript submission by January 1, 2023
• Anticipated publication by March 1, 2023
The final paper should be prepared in accordance with the Journal’s Guide
for Authors:
https://ijoc.org/index.php/ijoc/about/submissions#authorGuidelines
Special section inquiries can be directed to the guest editors, Dr.
Seungahn Nah at snah at uoregon.edu and Dr. Jungseock Joo at jjoo at comm.ucla.edu.
*Guest Editors:*
Dr. Seungahn Nah, University of Oregon, snah at uoregon.edu
Dr. Jungseock Joo, University of California-Los Angeles, jjoo at comm.ucla.edu