Artificial intelligence and the value of transparency

Research output: Contribution to journal › Article › peer-review

Abstract

Some recent developments in Artificial Intelligence—especially the use of machine learning systems, trained on big data sets and deployed in socially significant and ethically weighty contexts—have led to a number of calls for “transparency”. This paper explores the epistemological and ethical dimensions of that concept, as well as surveying and taxonomising the variety of ways in which it has been invoked in recent discussions. Whilst “outward” forms of transparency (concerning the relationship between an AI system, its developers, users and the media) may be straightforwardly achieved, what I call “functional” transparency about the inner workings of a system is, in many cases, much harder to attain. In those situations, I argue that contestability may be a possible, acceptable, and useful alternative so that even if we cannot understand how a system came up with a particular output, we at least have the means to challenge it.

Original language: English
Pages (from-to): 585-595
Number of pages: 11
Journal: AI and Society
Volume: 36
Issue number: 2
DOIs
Publication status: Published - Jun 2021

Keywords

  • Bias
  • Contestability
  • Explainability
  • Machine learning
  • Transparency
