Explain the world --- towards leveraging causality in fuzzy rule based systems

Zhang, Te (2025) Explain the world --- towards leveraging causality in fuzzy rule based systems. PhD thesis, University of Nottingham.


Abstract

Artificial intelligence (AI) is increasingly applied across sectors, including risk-sensitive areas such as healthcare and security, driving a growing demand for explainable AI (XAI). Among the various approaches to XAI, fuzzy rule-based systems offer a compelling architecture: linguistic, human-accessible rules combined with the capacity to handle complex applications amid varying levels of uncertainty, vagueness and imprecision. Nowadays, rules are frequently obtained through data-driven approaches. However, while the resulting systems explain model behaviour by revealing relationships between the captured variables, those relationships are often mere correlations. Ideally, AI systems are expected to go beyond XAI, that is, to not only explain their behaviour but also communicate their `insights' with respect to the real world. Thus, rules are expected to capture causal relationships between variables.

This thesis argues that fuzzy rule-based systems whose rules reflect causal relationships between variables offer unique benefits in terms of performance and explainability, particularly by enhancing the communication of AI insights to people. In other words, ideally, the rules of such systems can explain not only what happens within the model but also what happens in the real world. Based on this, this thesis focuses on how to automatically generate rules which reflect causal relationships between variables from data sets using data-driven approaches. To achieve this goal, the following two problems are investigated: 1) What are the nature and role of causal relationships in the context of artificial intelligence reasoning? 2) How can rules be automatically generated by leveraging the causal information obtained from a given data set?

The first problem relates to the concept of causality, which is complex and an ongoing topic of discussion across disciplines, with no universally accepted definition. To address the first problem, this thesis first summarises different definitions of causality and introduces the definition adopted in this thesis. It then introduces the tool used to represent causal relationships, the causal graph, and provides a detailed analysis of the causal information that can be obtained from the causal graph of a given data set. Following that, this thesis discusses the different facets of causal relationships that can be derived from a causal graph. Finally, it reviews established data-driven approaches for generating causal graphs from a given data set.

To solve the second problem, a data-driven causal rule generation framework is established in this thesis. The framework is designed to start by generating a causal graph from a given data set using a data-driven approach. Then, the framework uses causal information between variables obtained from the causal graph to remove variables which are not causally related to the target variable. Finally, the framework uses a data-driven rule generation approach to generate rules from the refined data set, thereby achieving causal rule generation.
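To illustrate the shape of this pipeline, the following is a minimal, self-contained sketch (hypothetical names throughout; this is not the thesis's actual library or API). Step 1 is stubbed with a hand-specified causal graph, where in practice a data-driven causal discovery algorithm would produce it; step 2 keeps only variables with a directed causal path into the target; step 3 (rule generation) is left to a downstream data-driven rule learner.

```python
# Hypothetical sketch of the three-step causal rule generation pipeline.
# Step 1 (causal discovery) is stubbed: the graph below maps each
# variable to its direct causal children.

def causal_ancestors(graph, target):
    """Return every variable with a directed causal path into `target`."""
    found = set()
    frontier = [target]
    while frontier:
        node = frontier.pop()
        for parent, children in graph.items():
            if node in children and parent not in found:
                found.add(parent)
                frontier.append(parent)
    return found

def refine_dataset(data, graph, target):
    """Step 2: drop variables that are not causally related to the target."""
    keep = causal_ancestors(graph, target) | {target}
    return [{k: v for k, v in row.items() if k in keep} for row in data]

# Toy example: smoking -> tar -> cancer; shoe_size is causally irrelevant.
graph = {"smoking": ["tar"], "tar": ["cancer"], "shoe_size": []}
data = [{"smoking": 1, "tar": 1, "shoe_size": 9, "cancer": 1}]

refined = refine_dataset(data, graph, "cancer")
# 'shoe_size' is removed; step 3 would learn fuzzy rules from `refined`.
```

The point of the sketch is the ordering: causal discovery informs feature selection *before* rule generation, so the learned rules can only mention causally relevant variables.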

To enable users to customize, based on their needs, the generation of causal explanations provided by a fuzzy system, this thesis proposes three variants of the framework. These variants leverage different types of causal information to generate rules which reflect different facets of causal explanations. This thesis provides a detailed analysis of the differences in causal explanations provided by the rules generated by these variants and their applicable scenarios.

Furthermore, a meta-variant of the framework is proposed to complement the explanations provided by the rules obtained through the established framework. The meta-variant is designed to generate counterfactual explanations based on the rules obtained by a variant of the framework. Uniquely, a counterfactual explanation obtained by the meta-variant articulates how the given inputs would need to change to produce a different output, which is crucial for lay-user insight, verification and sensitivity evaluation of XAI systems, for example in decision support around credit risk, cyber security and medical assistance.
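The general idea of a rule-based counterfactual can be sketched as follows (a hypothetical toy, not the thesis's meta-variant): given a crisp decision rule and an input that is rejected, search for the smallest single-feature change that flips the decision.

```python
# Hypothetical counterfactual search over a single crisp credit rule.

def predict(x):
    # Toy rule: approve credit if income >= 50 and debt <= 20.
    return "approve" if x["income"] >= 50 and x["debt"] <= 20 else "reject"

def counterfactual(x):
    """Try moving one feature at a time to the rule's threshold;
    return the (feature, value) change that flips the decision, if any."""
    if predict(x) == "approve":
        return None  # already approved, no counterfactual needed
    for feature, threshold in (("income", 50), ("debt", 20)):
        changed = dict(x, **{feature: threshold})
        if predict(changed) == "approve":
            return feature, threshold
    return None

result = counterfactual({"income": 45, "debt": 10})
# -> ("income", 50): raising income to 50 would flip the decision.
```

In the thesis's setting the rules are fuzzy rather than crisp, so the search operates over fuzzy rule activations instead of hard thresholds, but the user-facing message is the same kind of "what would need to change" statement.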

Beyond the theoretical framework, a software tool is developed to promote the dissemination and application of the established framework. The software tool is a Python library which contains essential functions to implement different variants of the established framework for solving classification problems. This thesis summarises and describes the features of the developed tool. In addition, this thesis demonstrates how to use the developed Python library to implement different variants of the established framework.

Item Type: Thesis (University of Nottingham only) (PhD)
Supervisors: Wagner, Christian
Garibaldi, Jonathan M.
Keywords: ai, artificial intelligence, causality, fuzzy rule based systems
Subjects: Q Science > QA Mathematics > QA 75 Electronic computers. Computer science
Faculties/Schools: UK Campuses > Faculty of Science > School of Computer Science
Item ID: 81185
Depositing User: Zhang, Te
Date Deposited: 30 Jul 2025 04:40
Last Modified: 30 Jul 2025 04:40
URI: https://eprints.nottingham.ac.uk/id/eprint/81185
