iSee: Intelligent Sharing of Explanation Experiences

  • Kyle Martin
  • Anjana Wijekoon
  • Nirmalie Wiratunga
  • Chamath Palihawadana
  • Ikechukwu Nkisi-Orji
  • David Corsar
  • Belén Díaz-Agudo
  • Juan A. Recio-García
  • Marta Caro-Martínez
  • Derek Bridge
  • Preeja Pradeep
  • Anne Liret
  • Bruno Fleisch

Research output: Contribution to journal › Article › peer-review

Abstract

The right to an explanation of the decision reached by a machine learning (ML) model is now an EU regulation. However, different system stakeholders may have different background knowledge, competencies and goals, and thus require different kinds of explanations. There is a growing armoury of XAI methods for interpreting ML models and explaining their predictions, recommendations and diagnoses. We refer to these collectively as "explanation strategies". As these explanation strategies mature, practitioners gain experience in understanding which strategies to deploy in different circumstances. What is lacking, and what the iSee project will address, is the science and technology for capturing, sharing and re-using explanation strategies based on similar user experiences, along with a much-needed route to explainable AI (XAI) compliance. Our vision is to improve every user's experience of AI by harnessing best practice in XAI and providing an interactive environment where personalised explanation experiences are accessible to everyone.

Original language: English
Pages (from-to): 231-232
Number of pages: 2
Journal: CEUR Workshop Proceedings
Volume: 3389
Publication status: Published - 2022
Event: 30th International Conference on Case-Based Reasoning Workshop, ICCBR-WS 2022 - Virtual, Online, France
Duration: 12 Sep 2022 - 15 Sep 2022

Keywords

  • Case-Based Reasoning
  • Explainability
  • Project Showcase
