Latent space interpretation and visualisation for understanding the decisions of convolutional variational autoencoders trained with EEG topographic maps

Research output: Contribution to journal › Article › peer-review

Abstract

Learning essential features and forming simple representations of electroencephalography (EEG) signals are difficult problems. Variational autoencoders (VAEs) can be applied to EEG data to learn its salient features, but explainability requires disclosing how the model reaches its decisions. The key contribution of this research is a pipeline that combines known components to produce meaningful visualisations, helping us understand which component of the latent space is responsible for capturing which region of brain activation in EEG topographic maps. The results reveal that each latent-space component contributes to capturing at least two generating factors in the topographic maps. The pipeline can produce EEG topographic maps at any scale and helps us understand which latent-space component is responsible for activating which portion of the brain.
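The latent-traversal idea behind such visualisations can be sketched as follows. This is a minimal illustrative example, not the paper's actual architecture: the decoder layer sizes, the 8-dimensional latent space, and the 32×32 map resolution are all assumptions. One latent component is swept across a range of values while the others are held at zero, and the decoded topographic maps are compared to see which brain regions that component controls.

```python
import torch
import torch.nn as nn

class ConvDecoder(nn.Module):
    """Hypothetical convolutional VAE decoder; layer sizes are illustrative only."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 32 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.Sigmoid(),  # pixel intensities of the reconstructed topographic map
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 32, 8, 8)
        return self.deconv(h)

def traverse_latent(decoder, latent_dim, component, values):
    """Decode maps while sweeping one latent component, holding the rest at 0."""
    z = torch.zeros(len(values), latent_dim)
    z[:, component] = torch.tensor(values, dtype=torch.float32)
    with torch.no_grad():
        return decoder(z)  # one decoded topographic map per traversal value

decoder = ConvDecoder(latent_dim=8)
maps = traverse_latent(decoder, latent_dim=8, component=0, values=[-3, -1, 0, 1, 3])
print(maps.shape)  # torch.Size([5, 1, 32, 32])
```

Plotting the five decoded maps side by side (e.g. with matplotlib) would then reveal which regions of the map change as component 0 varies, which is the kind of per-component attribution the abstract describes.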
Original language: English
Journal: CEUR Workshop Proceedings
Publication status: Published - 2023

