Bridging the automated machine learning transparency and usability gap for non-experts: a user-centered design and evaluation study

Sahidon, Muhammad Alif Danial (2025) Bridging the automated machine learning transparency and usability gap for non-experts: a user-centered design and evaluation study. PhD thesis, University of Nottingham.


Abstract

The inherent complexity of Artificial Intelligence (AI) and Machine Learning (ML) tools creates significant barriers for non-expert users. Traditional ML workflows require specialized programming and statistical knowledge, limiting widespread adoption across various domains where these technologies could provide substantial benefits. This research aimed to develop and evaluate VisAutoML, an automated machine learning tool specifically designed to provide non-expert users with a transparent and user-friendly ML development experience for tabular data. The study sought to identify key factors influencing tool acceptance, understand specific challenges faced by non-experts, and create novel design principles to address these challenges.

The research employed a five-stage iterative user-centered design methodology. This included a mixed-method study utilizing an extended Technology Acceptance Model (TAM) to identify acceptance factors and user challenges. The design integrated technology-enhanced scaffolding and Explainable Artificial Intelligence (XAI) principles tailored for non-experts, featuring visualizations of activities, demonstration of scaffold functions, contextually relevant support, and progressive disclosure of XAI visualizations. Two versions of VisAutoML were developed and evaluated against both commercial alternatives and established benchmarks.

Initial comparison between VisAutoML 1.0 and H2O AutoML showed significantly higher System Usability Scale scores for VisAutoML (61.5 vs 38.5) and a 20.94% increase in correct answers on knowledge assessments. The redesigned VisAutoML 2.0 demonstrated substantial improvements, with 75% of participants completing ML model development tasks in under 5 minutes. User Experience Questionnaire results showed 'good' scores for pragmatic quality (M=1.60, SD=0.912) and 'excellent' scores for hedonic quality (M=1.59, SD=0.899) and overall usability (M=1.60, SD=0.851). Trust measures were moderate (M=26.11, SD=4.67), while perceived explainability ratings were high (M=161.9, SD=36.24).

This research contributes to the field by extending the TAM framework for understanding non-expert AutoML requirements, introducing empirically grounded design principles for usable and transparent AutoML systems, and successfully developing VisAutoML 2.0 with demonstrably enhanced usability and transparency. These contributions provide valuable guidance for making ML more accessible to broader audiences, advancing the democratization of AI technologies beyond technical specialists. Future work should explore additional application domains and further refinements of scaffolding and XAI approaches.

Item Type: Thesis (University of Nottingham only) (PhD)
Supervisors: Ng, Marina
Muschevici, Radu
Keywords: automated machine learning (AutoML); non‑expert users; explainable AI (XAI); user‑centered design; technology acceptance model (TAM)
Subjects: Q Science > QA Mathematics
Faculties/Schools: University of Nottingham, Malaysia > Faculty of Science and Engineering — Science > School of Computer Science
Item ID: 81343
Depositing User: Sahidon, Muhammad
Date Deposited: 26 Jul 2025 04:40
Last Modified: 26 Jul 2025 04:40
URI: https://eprints.nottingham.ac.uk/id/eprint/81343
