
Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification

  • Technological University Dublin

Research output: Chapter in Book/Report/Conference proceedings › Conference proceeding › peer-review

Abstract

Electroencephalography (EEG) data has emerged as a promising modality for biometric applications, offering unique and secure methods of personal identification and authentication. This research comprehensively compared EEG data pre-processing techniques, focusing on biometric applications. In tandem, the study illuminates the pivotal role of Explainable Artificial Intelligence (XAI) in enhancing the transparency and interpretability of machine learning models. Notably, integrating XAI methodologies contributes significantly to the evolution of more precise, reliable, and ethically sound machine learning systems. An outstanding test accuracy exceeding 99% was observed within the biometric system, corroborating the Graph Neural Network (GNN) model’s ability to distinguish between individuals. However, high accuracy does not unequivocally signify that the models have extracted meaningful features from the EEG data. Despite the impressive test accuracy, a fundamental need remains for an in-depth comprehension of the models. Attributions offer initial insights into the decision-making process, but they did not allow us to determine why specific channels contribute more than others, or whether the models have discerned genuine differences in cognitive processing. Nevertheless, deploying explainability techniques amplified system-wide interpretability and revealed that the models learned to identify noise patterns to distinguish between individuals. Applying XAI techniques and fostering interdisciplinary partnerships that blend domain expertise from neuroscience and machine learning is necessary to interpret the attributions further and illuminate the models’ decision-making processes.
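The abstract refers to attribution methods that score how much each EEG channel contributes to a model's identification decision. The paper's own models are GNNs, but the idea can be illustrated with a minimal Input × Gradient sketch on a hypothetical logistic classifier over per-channel features — all channel names, feature values, and weights below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Toy "EEG" sample: one feature (e.g. mean band power) per channel.
# Channel names and values are hypothetical.
channels = ["Fz", "Cz", "Pz", "Oz"]
x = np.array([0.8, 0.1, 0.5, 0.3])

# Hypothetical weights of a trained logistic "person A vs. rest" classifier.
w = np.array([1.2, -0.4, 0.9, 0.05])
b = -0.6

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Model output: probability that the sample belongs to person A.
p = sigmoid(w @ x + b)

# Input x Gradient attribution: for a logistic model, the gradient of the
# output w.r.t. the input is p * (1 - p) * w, so each channel's attribution
# is that gradient scaled by the channel's input value.
grad = p * (1.0 - p) * w
attributions = grad * x

# Rank channels by attribution magnitude — the kind of per-channel ranking
# the paper inspects to ask *why* some channels contribute more than others.
ranking = sorted(zip(channels, attributions), key=lambda t: -abs(t[1]))
for ch, a in ranking:
    print(f"{ch}: {a:+.4f}")
```

As the abstract cautions, such a ranking alone cannot say whether a highly attributed channel reflects genuine cognitive differences or merely subject-specific noise patterns; that interpretation requires neuroscientific domain expertise.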

Original language: English
Title of host publication: Explainable Artificial Intelligence - 1st World Conference, xAI 2023, Proceedings
Editors: Luca Longo
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 131-152
Number of pages: 22
ISBN (Print): 9783031440694
DOIs
Publication status: Published - 2023
Externally published: Yes
Event: 1st World Conference on eXplainable Artificial Intelligence, xAI 2023 - Lisbon, Portugal
Duration: 26 Jul 2023 – 28 Jul 2023

Publication series

Name: Communications in Computer and Information Science
Volume: 1903 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 1st World Conference on eXplainable Artificial Intelligence, xAI 2023
Country/Territory: Portugal
City: Lisbon
Period: 26/07/23 – 28/07/23

Keywords

  • attribution xAI methods
  • Biometrics
  • Deep Learning
  • Electroencephalography
  • eXplainable Artificial Intelligence
  • Graph-Neural Network
  • Signal processing
  • signal-to-noise ratio
