AISoLA 2025

Bridging the Gap Between AI and Reality • Rhodes, Greece

Talk

Expressive Limitations of Visual Explanations in XAI

Time: Sunday, 2.11

Room: Room A

Authors: Sara Mann

Abstract: Visual explanations, such as saliency maps, are widely employed to explain the outputs of AI systems that process image data. These methods have received much criticism for being susceptible to adversarial attacks (Slack et al., 2020) or for being independent of the underlying model (Kindermans et al., 2017). In this talk, I raise more fundamental worries about visual explanations. These worries stem from the inherent expressive limitations of images and affect visual explanations independently of the XAI method used.

Paper: Expressive_Limitations_of_Visual_Explanations_in_XAI-paper.pdf