Seminar Explainable AI – Human-Computer Interaction meets Artificial Intelligence

With the increasing use of automation, users tend to delegate more and more tasks to machines. Complex systems typically rely on Artificial Intelligence (AI) and can embed different kinds of models and algorithms, including Machine Learning and Deep Learning, which makes these systems difficult for the user to understand. This is particularly true, for instance, in automated driving, where the level of automation keeps increasing, and in the health domain, where ever more sophisticated AI-powered diagnostic tools are used every day. In order to better understand how AI systems work and to build trust in the decisions they make, new techniques are emerging that are referred to as Explainable AI (XAI) [1].
These techniques are intended to make AI transparent and the contents of its “black boxes” accessible. The main purposes of this transparency are to:

  • understand the functioning of algorithms and AI systems in order to optimize their design, architecture, and features, and to interpret their results
  • increase human confidence in these systems
  • improve cooperation between agents

As shown by [2], providing appropriate explanations to the user increases the user’s confidence in the system and thus allows for better human-AI collaboration.

The goal of this seminar is to investigate the field of Explainable AI (XAI) with a particular focus on the human-interaction perspective, which has not been sufficiently studied in existing explainability approaches [1][3]. The seminar will address topics related to the design of human-computer interfaces for XAI. Effective knowledge transfer through an explanation depends on the combination of the AI algorithms used, the explanation dialogues, and the interfaces that accommodate the explanations.
Questions such as “what kind of explanation do we need?”, “what should an explanation look like?”, “what is the best trade-off between performance and explainability?”, “how granular should explanations be?”, and “how can explanations be evaluated?” will be investigated in this seminar. A small code sketch below illustrates one concrete form such an explanation can take.
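
As an illustration only (not part of the official seminar material), the following minimal Python sketch shows one common, model-agnostic kind of explanation: permutation feature importance computed with scikit-learn. The dataset, model, and parameter choices are arbitrary and made purely for the example.

    # Illustrative sketch: explaining a "black-box" classifier with
    # model-agnostic permutation feature importance (scikit-learn).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train an opaque model on a standard dataset (arbitrary example choices).
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # "Explain" the model by measuring how much shuffling each feature degrades
    # its test accuracy; larger drops indicate more important features.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda item: item[1], reverse=True)
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.3f}")

Even such a simple global explanation already raises the questions above: it is coarse-grained, says nothing about individual predictions, and its usefulness to an end user still has to be evaluated.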

This seminar will help students improve their research and practical skills. It has a strong practical component, as students will investigate existing applications as well as develop new concepts in the aforementioned domains.

Details

Code 33562, 63562
Type Seminar
ECTS 5
Site Fribourg
Track(s) T3 – Visual Computing
T6 – Data Science
Semester S2024

Teaching

Learning Outcomes
  • Identify and illustrate existing approaches in Explainable AI.
  • Discuss and compare different methods for increasing system interpretability and transparency.
  • Evaluate and select the best existing interactions and interfaces for intelligibility.
  • Identify and describe different ways of evaluating system explainability, accountability and intelligibility.
  • Identify and describe how to design interfaces to increase AI system predictability.
Lecturer(s) Elena Mugellini
Omar Abou Khaled
Rolf Ingold
Language English
Course Page

The course page in ILIAS can be found at https://ilias.unibe.ch/goto_ilias3_unibe_crs_2793438.html.

Schedules and Rooms

Period On Appointment
Location UniFR, PER21

Evaluation

Evaluation type continuous evaluation

Additional information

Comment

First Lecture
Please contact Elena Mugellini (Elena.Mugellini@hefr.ch) to select the date of the first meeting.

References
[1] Amina Adadi and Mohammed Berrada, Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), IEEE Access, vol. 6, 2018, pp. 52138–52160. DOI: 10.1109/ACCESS.2018.2870052
[2] Serena Villata, Guido Boella, Dov M. Gabbay, and Leendert van der Torre. 2013. A socio-cognitive model of trust using argumentation theory. International Journal of Approximate Reasoning 54, 4 (2013), 541–559. DOI: 10.1016/j.ijar.2012.09.001. Eleventh European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU 2011).
[3] Ashraf Abdul, Jo Vermeulen, Danding Wang, Brian Y. Lim, and Mohan Kankanhalli, Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Paper No. 582, Montreal QC, Canada, April 21–26, 2018.