TY - CHAP
T1 - Modelling Individual Experts in Neonatal Seizure Detection Algorithm Development
AU - Galvez, Sergio Morant
AU - O'Sullivan, Mark
AU - Walsh, Brian Henry
AU - O'Shea, Alison
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Developing algorithms to detect seizures in neonatal electroencephalogram (EEG) signals is an important area of research. Identifying neonatal seizures is a time-consuming process that requires specially trained experts. Most neonatal seizure detection algorithms use supervised learning and require large datasets of labelled EEG for training. However, EEG is a complex physiological signal, and expert annotators often disagree when identifying seizures in infants. Most studies with multiple expert annotators compress the annotations down to one 'ground truth' set of labels during algorithm training, which may lead to a loss of valuable information. This study investigates whether preserving the disagreement of multiple expert annotators during training improves model performance. Three variations of a deep learning architecture are compared experimentally; each varies in how annotator disagreements are accounted for. The results indicate that there is value in modelling expert annotations separately in supervised learning algorithms. This study proposes architectures that harness expert variability by learning from both the agreement and disagreement in an open-source dataset of neonatal EEGs. Clinical Relevance: This work demonstrates how a more holistic approach to neonatal seizure detection algorithm development, incorporating the opinions of all annotators, improves algorithm results and better reflects the standards of clinical care.
AB - Developing algorithms to detect seizures in neonatal electroencephalogram (EEG) signals is an important area of research. Identifying neonatal seizures is a time-consuming process that requires specially trained experts. Most neonatal seizure detection algorithms use supervised learning and require large datasets of labelled EEG for training. However, EEG is a complex physiological signal, and expert annotators often disagree when identifying seizures in infants. Most studies with multiple expert annotators compress the annotations down to one 'ground truth' set of labels during algorithm training, which may lead to a loss of valuable information. This study investigates whether preserving the disagreement of multiple expert annotators during training improves model performance. Three variations of a deep learning architecture are compared experimentally; each varies in how annotator disagreements are accounted for. The results indicate that there is value in modelling expert annotations separately in supervised learning algorithms. This study proposes architectures that harness expert variability by learning from both the agreement and disagreement in an open-source dataset of neonatal EEGs. Clinical Relevance: This work demonstrates how a more holistic approach to neonatal seizure detection algorithm development, incorporating the opinions of all annotators, improves algorithm results and better reflects the standards of clinical care.
UR - https://www.scopus.com/pages/publications/105001251232
U2 - 10.1109/BHI62660.2024.10913754
DO - 10.1109/BHI62660.2024.10913754
M3 - Chapter
AN - SCOPUS:105001251232
T3 - BHI 2024 - IEEE-EMBS International Conference on Biomedical and Health Informatics, Proceedings
BT - BHI 2024 - IEEE-EMBS International Conference on Biomedical and Health Informatics, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE-EMBS International Conference on Biomedical and Health Informatics, BHI 2024
Y2 - 10 November 2024 through 13 November 2024
ER -