Deepfake false memories

Research output: Contribution to journal › Article › peer-review

Abstract

Machine learning has enabled the creation of “deepfake videos”: highly realistic footage that shows a person saying or doing something they never did. In recent years, this technology has become more widespread, and various apps now allow an average social-media user to create a deepfake video that can be shared online. There are concerns about how this may distort memory for public events, but to date there is no evidence to support this. Across two experiments, we presented participants (N = 682) with fake news stories in the format of text, text with a photograph, or text with a deepfake video. Though participants rated the deepfake videos as convincing, dangerous, and unethical, and some participants did report false memories after viewing deepfakes, the deepfake video format did not consistently increase false memory rates relative to the text-only or text-with-photograph conditions. Further research is needed, but the current findings suggest that while deepfake videos can distort memory for public events, they may not always be more effective than simple misleading text.

Original language: English
Pages (from-to): 480-492
Number of pages: 13
Journal: Memory
Volume: 30
Issue number: 4
DOIs
Publication status: Published - 2022

Keywords

  • deepfake
  • false memory
  • memory distortion
  • misinformation
