[Air-L] IUI workshop on Explainable AI for Fairness & Social Justice

Lee, Min Kyung minkyung.lee at austin.utexas.edu
Sun Nov 15 14:18:19 PST 2020


Dear AOIR community members,

If you are interested in the intersection of explainable AI and fairness/social justice, please consider joining our workshop!

IUI Workshop on Explainable AI for Fairness and Social Justice: https://explainablesystems.comp.nus.edu.sg/2021/
A short position paper is *due December 23, 2020*; the workshop will be held virtually on March 13, 2021.

Best,
Min

——————————————————
Min Kyung Lee
Assistant Professor
School of Information, University of Texas at Austin
http://minlee.net

—————————————————————————————————————

CALL FOR PARTICIPATION

Workshop on Transparency and Explanations in Smart Systems (TExSS)

Explainable AI for Fairness and Social Justice

Held in conjunction with ACM Intelligent User Interfaces (IUI) 2021, April 13-17, Virtual.

Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms exploit rich and varied data sources to support human decision-making or to take direct action; however, there are increasing concerns about their transparency and accountability, as these processes are typically opaque to the user, e.g., because they are too technically complex to explain or are protected trade secrets. The topics of transparency and accountability have attracted increasing interest as means to more effective system training, better reliability, and improved usability. This workshop will provide a venue for exploring issues that arise in designing, developing, and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. We will focus specifically on explaining systems and models to help ensure fairness and social justice, such as approaches to detecting or mitigating algorithmic biases or discrimination (e.g., awareness, data provenance, and validation).

Suggested themes include, but are not limited to:
- What are explanations? What should they look like? What should be included in explanations and how (and to whom) should they be presented?
- Is transparency (or explainability) always a good idea? Can transparent algorithms or explanations “hurt” the user experience, and in what circumstances?
- How can we build (good) algorithmic systems, particularly those that demonstrate that they are fair, accountable, and unbiased?
- At what points in an interaction are explanations needed to ensure transparency?
- Which more transparent models still perform well in terms of speed and accuracy?
- What is important in user modeling for system transparency and explanations?
- What are possible metrics that can be used when evaluating transparent systems and explanations?
- How can we evaluate explanations and their ability to accurately explain underlying algorithms and overall systems’ behavior, especially for the goals of fairness and accountability?
- How can explanations allow human evaluators to select model(s) that are unbiased, such as by revealing traits or outcomes of the underlying learned system?
- What are important social aspects in interaction design for system transparency and explanations?
- How can we detect biases and discrimination in transparent systems?
- Through explanations, transparency, or other means, how can we raise stakeholders’ awareness of the potential risk for biases and social harms that could result from developing and using intelligent systems?

Researchers and practitioners in academia or industry who have an interest in these areas are invited to submit papers of up to 6 pages (not including references) in ACM SIGCHI Paper Format (see http://iui.acm.org/2021/call_for_papers.html). Submissions must be original, relevant contributions and can be of two types: (1) position papers summarizing the authors’ existing research in this area and how it relates to the workshop theme, and (2) papers offering an industrial perspective on, or a real-world approach to, the workshop theme. Papers should be submitted via EasyChair (https://easychair.org/conferences/?conf=texss2021) by December 23, 2020 and will be reviewed by committee members. Position papers do not need to be anonymized. At least one author of each accepted position paper must register for and attend the workshop. It is anticipated that accepted contributions will be published in dedicated workshop proceedings. For further questions, please contact the workshop organizers at texss2021 at easychair.org.

The workshop will feature a keynote by Timnit Gebru (https://ai.stanford.edu/~tgebru/), who co-leads the Ethical Artificial Intelligence Team at Google. Paper authors will then present their work as part of thematic panels. The remainder of the workshop will consist of smaller group activities related to the workshop theme. For more information, visit our website at http://explainablesystems.comp.nus.edu.sg/2021

Important Dates
==============
Submission deadline   Dec 23, 2020
Notifications sent    Jan 31, 2021
Camera-ready          Feb 28, 2021
Workshop date         March 13, 2021

Organizing Committee
===================
Alison Smith-Renner, Machine Learning Visualization Lab, DAC/WBB, United States
Styliani Kleanthous Loizou, Cyprus Centre for Algorithmic Transparency, Open University of Cyprus, Nicosia, Cyprus
Jonathan Dodge, Oregon State University, Corvallis, Oregon, United States
Casey Dugan, IBM Research, Cambridge, Massachusetts, United States
Min Kyung Lee, University of Texas at Austin, Austin, Texas, United States
Brian Y Lim, Department of Computer Science, National University of Singapore, Singapore, Singapore
Tsvi Kuflik, Information Systems, The University of Haifa, Haifa, Israel
Advait Sarkar, Microsoft Research, Cambridge, United Kingdom
Avital Shulner-Tal, Information Systems, The University of Haifa, Haifa, Israel
Simone Stumpf, Centre for HCI Design, City, University of London, London, United Kingdom
