Abstract
Deep learning models perform exceptionally well in time series classification, but their lack of interpretability remains a significant challenge. Although explainable AI (XAI) techniques are well established for image and tabular data, adapting them to time series is difficult because of temporal dependencies. Existing feature-importance methods often either treat each time step as an independent feature, ignoring temporal relationships, or segment the series into fixed-size windows, which can omit critical subsequences. This paper introduces a novel, local, model-agnostic interpretability method designed explicitly for time series data that requires no segmentation. The technique generates neighbouring samples by randomly perturbing segments of the original instance and assigns each sample a weight based on its proximity to that instance. From these samples, Parametrised Event Primitives (PEPs), such as increasing and decreasing events and local maximum and minimum points, are extracted and clustered into prototypical events that capture the data's temporal dynamics. These events, together with the sample weights and the black-box model's predictions, are used to train a linear surrogate model that provides interpretable local explanations. The approach is evaluated on real-world datasets from the UCR archive, with faithfulness assessed through fidelity metrics. Its ability to identify relevant time steps is also compared against three well-established XAI methods; the method yields more interpretable, faithful explanations and, on some datasets, outperforms competing approaches in highlighting the salient subsequences or points critical to the model's inference.
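The pipeline the abstract describes (perturb segments, weight neighbours by proximity, extract event primitives, group them into prototypical events, fit a weighted linear surrogate) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mean-replacement perturbation, the exponential distance kernel, the local-maximum event definition, the position-binning "clustering", and the toy `black_box` classifier are all assumptions standing in for details the abstract does not give.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(batch):
    # Hypothetical stand-in for a deep classifier: its score rises with
    # the height of the series' largest peak.
    return 1.0 / (1.0 + np.exp(-batch.max(axis=1)))

def perturb(ts, n_samples, seg_len):
    # Generate neighbouring samples by replacing one randomly chosen
    # segment with the series mean (one simple perturbation scheme).
    samples = np.tile(ts, (n_samples, 1))
    for row in samples:
        start = rng.integers(0, ts.size - seg_len + 1)
        row[start:start + seg_len] = ts.mean()
    return samples

def proximity_weights(ts, samples, width=5.0):
    # LIME-style exponential kernel on Euclidean distance to the original.
    d = np.linalg.norm(samples - ts, axis=1)
    return np.exp(-(d / width) ** 2)

def local_max_events(s):
    # Minimal event primitive: indices of strict local maxima.
    return [i for i in range(1, s.size - 1) if s[i] > s[i - 1] and s[i] > s[i + 1]]

def featurise(samples, n_bins):
    # Crude stand-in for prototypical-event clustering: count events
    # falling into coarse position bins.
    T = samples.shape[1]
    X = np.zeros((samples.shape[0], n_bins))
    for i, s in enumerate(samples):
        for idx in local_max_events(s):
            X[i, min(idx * n_bins // T, n_bins - 1)] += 1
    return X

# Original instance: a noisy series with a prominent bump around t = 30.
t = np.arange(60)
ts = 3.0 * np.exp(-((t - 30) ** 2) / 20.0) + rng.normal(0, 0.05, 60)

samples = perturb(ts, 200, 15)
w = proximity_weights(ts, samples)
X = featurise(samples, n_bins=6)
y = black_box(samples)

# Weighted least squares surrogate: scale rows by sqrt(weight), add intercept.
sw = np.sqrt(w)[:, None]
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
print("per-bin importance:", np.round(coef[:-1], 3))
```

The surrogate's coefficients then indicate which event regions drive the black-box prediction locally; perturbations that erase the bump lower the score, so the bin containing it tends to receive high importance.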
| Original language | English |
|---|---|
| Journal | IEEE Access |
| Publication status | Accepted/In press - 2025 |
Keywords
- Deep learning
- Explainable Artificial Intelligence
- Model-agnostic
- Post hoc
- Time series
- XAI
LOMATCE: LOcal Model-Agnostic Time-series Classification Explanations