Advancing explainability in multivariate time series classification through generative models

Meng, Han (2024) Advancing explainability in multivariate time series classification through generative models. PhD thesis, University of Nottingham.

PDF (Thesis - as examined) - Repository staff only
Available under Licence Creative Commons Attribution.
Download (7MB)

Abstract

As Artificial Intelligence (AI) techniques become increasingly integrated into various domains, concerns about their interpretability have grown. This has catalysed the development of eXplainable AI (XAI), which seeks to make AI systems more understandable to humans. However, most XAI research has focused on tabular data, images, or language models, with time series models receiving limited attention despite their pervasiveness. This thesis aims to advance XAI specifically within the context of Multivariate Time Series Classification (MTSC) problems, providing varied explanation types to meet diverse user needs.

Among the various types of explanations, feature importance stands out as one of the most intuitive: it identifies the features most relevant to a model's decisions. Most existing feature importance techniques probe the model's mechanisms by generating perturbed samples and feeding them into the model. However, the perturbation methods used to generate these samples often disrupt the inherent temporal dependencies within time series data. This produces samples that are misaligned with the distribution of the training data, resulting in misleading explanations.
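The risk described above can be illustrated with a toy example (hypothetical, not taken from the thesis): replacing a window of a smooth multivariate series with random noise, as many perturbation-based explainers effectively do, destroys its temporal structure and yields a sample unlike anything in the training data.

```python
import numpy as np

# Hypothetical illustration: a smooth two-channel series perturbed the "naive"
# way, by overwriting a window with random noise. The perturbed sample breaks
# the temporal dependencies and drifts away from the training distribution.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
x = np.stack([np.sin(t), np.cos(t)], axis=0)        # shape (channels, time)

perturbed = x.copy()
perturbed[:, 80:120] = rng.normal(size=(2, 40))      # ignores temporal structure

# A crude proxy for distance from the data manifold: signal roughness.
roughness = lambda s: np.mean(np.abs(np.diff(s, axis=1)))
print(f"original roughness:  {roughness(x):.3f}")
print(f"perturbed roughness: {roughness(perturbed):.3f}")   # much larger
```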

This thesis introduces a model-agnostic framework that incorporates a generative model trained to estimate the training data distribution, thereby generating within-distribution samples for accurate feature importance evaluation. Furthermore, time series models typically take a large number of input features, which makes identifying the most relevant ones difficult. To address this, the framework integrates an optimisation method that efficiently identifies the most important features.
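A minimal sketch of this idea is given below, assuming a hypothetical `generator` that imputes masked regions with values drawn from the learned data distribution and a hypothetical `classifier` returning class probabilities; the thesis's actual framework and optimisation method are not reproduced here.

```python
import numpy as np

def generative_importance(x, mask, classifier, generator, n_samples=20):
    """Hypothetical sketch: score a masked region by the average drop in the
    predicted class probability when that region is re-generated from the
    learned data distribution rather than, e.g., zeroed out."""
    base = classifier(x[None])[0].max()
    drops = []
    for _ in range(n_samples):
        x_imputed = generator(x, mask)               # in-distribution replacement
        drops.append(base - classifier(x_imputed[None])[0].max())
    return float(np.mean(drops))                     # large drop => important region

# Dummy stand-ins so the sketch runs; a real setup would plug in a trained
# classifier and a trained generative imputer.
classifier = lambda batch: np.tile([0.7, 0.3], (len(batch), 1))
generator = lambda x, mask: np.where(mask, np.random.default_rng(0).normal(size=x.shape), x)
mask = np.zeros((2, 200), dtype=bool)
mask[:, 80:120] = True
print(generative_importance(np.zeros((2, 200)), mask, classifier, generator))
```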

In addition to the challenge of accurately identifying the most important features, current research highlights a lack of stability as a key issue affecting most feature importance methods, including the well-known Local Interpretable Model-agnostic Explanations (LIME): the explanations they provide for the same instance are often inconsistent. Although various methods have been developed to mitigate this instability, the processes driving it in the context of MTSC remain somewhat obscure. This thesis explores this area and shows that a significant but often overlooked driver of instability is, again, the out-of-distribution samples generated during the explanation process. To address this, a novel framework is proposed that employs a generative model and a local sampling technique to produce within-distribution neighbouring data for the instance to be explained. It also integrates an adaptive weighting strategy to facilitate hyperparameter optimisation, further enhancing the stability of the explanations.
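The instability of purely random neighbourhood sampling can be seen in a small, self-contained sketch of a LIME-style surrogate (hypothetical code, not the thesis's framework): explaining the same instance twice with different random neighbourhoods yields noticeably different coefficients.

```python
import numpy as np

def lime_like_weights(x, classifier, n_samples=100, seed=None):
    """Hypothetical LIME-style surrogate: perturb the instance with Gaussian
    noise, weight the neighbours by proximity, and fit a weighted linear model
    whose coefficients serve as feature importances."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=0.5, size=(n_samples, x.size))
    Z = x.ravel() + noise                            # random neighbourhood
    y = classifier(Z)                                # black-box outputs
    sw = np.sqrt(np.exp(-np.linalg.norm(noise, axis=1) ** 2))  # proximity weights
    coef, *_ = np.linalg.lstsq(sw[:, None] * Z, sw * y, rcond=None)
    return coef

# Toy black box: explanations of the same instance differ run to run,
# illustrating the instability caused by unconstrained random sampling.
classifier = lambda Z: (Z[:, :5].sum(axis=1) > 0).astype(float)
x = np.zeros(20)
print(np.round(lime_like_weights(x, classifier, seed=1)[:5], 3))
print(np.round(lime_like_weights(x, classifier, seed=2)[:5], 3))
```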

To advance XAI in MTSC, relying solely on feature importance may not suffice; the general public, for example, usually seeks more straightforward explanations to support decision-making. In this context, counterfactual explanations are promising. However, current counterfactual methods for MTSC usually fail to provide plausible and meaningful explanations, often overlooking the distribution of the training data when generating them. This thesis proposes two novel approaches to improve the viability of counterfactual explanations for MTSC. The first estimates the density of the training samples using Gaussian Mixture Models (GMMs) and then trains a generative model to create counterfactual samples in the densest regions. The second adopts a generative adversarial network paradigm, in which the generator is trained to create counterfactuals and the discriminator is optimised to distinguish real from fake samples; this avoids making specific assumptions about the data distribution, such as the GMMs in the first approach. Both approaches use generative models to create counterfactuals, aligning them with the distribution of realistic time series. Experiments on real-world data sets highlight the strengths of each approach across different problems.
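As an illustration of the first, density-guided approach only, the following sketch (hypothetical names and weights, assuming scikit-learn's GaussianMixture) combines the usual counterfactual ingredients: validity with respect to the target class, proximity to the original instance, and plausibility measured as likelihood under a GMM fitted to the training data. The GAN-based approach and the thesis's actual generative training procedure are not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def counterfactual_loss(x_cf, x_orig, target_prob, gmm, alpha=1.0, beta=0.1):
    """Hypothetical density-guided counterfactual objective: reach the target
    class, stay close to the original instance, and stay in a dense region of
    the training data as estimated by a Gaussian Mixture Model."""
    validity = 1.0 - target_prob                          # want the target class
    proximity = np.linalg.norm(x_cf - x_orig)             # stay close to the original
    density = -gmm.score_samples(x_cf.reshape(1, -1))[0]  # negative log-likelihood
    return validity + alpha * proximity + beta * density

# Toy usage: fit a GMM on flattened training series, then score a candidate
# counterfactual; a generative model would be trained to minimise such a loss.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 40))                       # 50 series, flattened
gmm = GaussianMixture(n_components=3, random_state=0).fit(X_train)
print(counterfactual_loss(X_train[0] + 0.1, X_train[0], target_prob=0.8, gmm=gmm))
```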

In summary, this thesis advances XAI in MTSC by addressing key challenges in feature importance and counterfactual explanations. Throughout, a strong emphasis is placed on leveraging generative models to produce within-distribution samples, thereby avoiding the out-of-distribution problem and enhancing explanation reliability. The thesis underscores the promising role of advanced generative models in XAI, setting the stage for their expanded contribution and development in future XAI research.

Item Type: Thesis (University of Nottingham only) (PhD)
Supervisors: Triguero, Isaac
Wagner, Christian
Keywords: Human-computer interaction; Artificial intelligence; Time series data; Model-agnostic framework; Counterfactual explanations; Out-of-distribution problem; XAI
Subjects: Q Science > QA Mathematics > QA 75 Electronic computers. Computer science
Faculties/Schools: UK Campuses > Faculty of Science > School of Computer Science
Item ID: 78149
Depositing User: Meng, Han
Date Deposited: 23 Jul 2024 04:40
Last Modified: 23 Jul 2024 04:40
URI: https://eprints.nottingham.ac.uk/id/eprint/78149
