Heaton, Dan
(2024)
Agency, Trust and Blame in Decision-Making Algorithms: An Analysis of Twitter Discourses.
PhD thesis, University of Nottingham.
Abstract
This PhD project focuses on exploring the concepts of agency, blame and trust concerning public-facing decision-making algorithms through an interdisciplinary linguistic approach. Public-facing decision-making algorithms, such as the algorithms underpinning A Level grade calculations in 2020, the NHS Covid-19 contact tracing app and ChatGPT, have increasingly impacted global conversations, yet concerns have emerged regarding their perceived social agency, particularly when negative outcomes arise. Determining responsibility and accountability for these outcomes is challenging due to the complex and opaque nature of how these algorithms are designed and deployed. Trust in such systems is vital for the future development of artificial intelligence technologies, emphasising the need for algorithms to be perceived as trustworthy by design and in practice. Despite this, little exploration exists regarding how trust and blame are influenced by the perceived agency, responsibility and accountability of these systems. By analysing Twitter discourses, where views have been expressed about the three aforementioned public-facing decision-making algorithms, this research aims to provide nuanced insights into how these systems are perceived in society.
The approach used in this PhD thesis involves analysing the relationship between social agency and grammatical agency through computational and discursive linguistics. While popular Natural Language Processing (NLP)-based tools, such as sentiment analysis, topic modelling and emotion detection, are commonly used for social media research, they struggle to capture the nuanced discursive and conversational aspects of opinions on decision-making algorithms. To address this gap, this research adopts a combined approach, incorporating Corpus Linguistics (CL) and Discourse Analysis (DA), aiming to provide deeper insights into the nuances of discourses surrounding agency, blame and trust in decision-making algorithms, which traditional NLP-based methods may overlook. Methodologically, the research involved three key steps: initial analysis using NLP-based tools; deeper examination using CL tools to explore grammatical constructions; and, finally, DA with Social Actor Representation (SAR). Here, active and passive presentations were examined and social actors identified, providing insights into the trust or blame attributed to decision-making algorithms on Twitter.
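The three-step pipeline might be sketched as follows. This is a minimal, illustrative toy only, not the thesis's actual tooling: the word lists, topic keyword sets and example tweets are all hypothetical, the lexicon-based sentiment scorer and keyword-overlap topic assigner stand in for the NLP tools, and the concordance function stands in for the CL step that precedes manual discourse analysis.

```python
# Hypothetical mini-corpus of tweets (invented for illustration).
tweets = [
    "The algorithm downgraded my A Level results, blame the government",
    "Ofqual's algorithm is unfair to students and teachers",
    "The NHS Covid app pinged me again, another isolation notice",
    "Contact tracing app told me to self isolate during lockdown",
]

# Step 1: NLP-style first pass -- a toy lexicon-based sentiment score
# (real work would use a trained sentiment model; this word list is invented).
NEG = {"blame", "unfair", "downgraded", "isolate", "isolation"}

def sentiment(text):
    words = [w.strip(",.'") for w in text.lower().split()]
    return -sum(w in NEG for w in words)

scores = [sentiment(t) for t in tweets]  # [-2, -1, -1, -1]

# A stand-in for topic modelling: assign each tweet to whichever
# hypothetical topic keyword set it overlaps most.
TOPICS = {
    "a_level": {"algorithm", "results", "ofqual", "students"},
    "covid_app": {"app", "pinged", "isolate", "lockdown", "tracing"},
}

def topic(text):
    words = {w.strip(",.'") for w in text.lower().split()}
    return max(TOPICS, key=lambda t: len(TOPICS[t] & words))

# Step 2: a CL-style concordance for a node word, retrieving the lines
# an analyst would then inspect by hand for active/passive constructions
# and social actor representation (Step 3, which is manual).
def concordance(corpus, node):
    return [t for t in corpus if node in t.lower()]

print(scores)
print(concordance(tweets, "algorithm"))
```

The design point the sketch makes is the one the abstract argues: the automated scores and topic labels (Steps 1 and 2) only surface *where* to look; the concordance lines still have to be read discursively to see who is presented as acting and who is blamed.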
For the first of the three case studies, the 2020 A Level algorithm, the initial NLP analysis highlighted discussions around government involvement, flaws in the algorithm and its impact on schools, teachers and students, with sentiment remaining negative throughout the discourse. Through the CL and DA investigation, it was found that Twitter users attributed blame for the A Level results to various social actors, including the algorithm itself, the UK government and Ofqual. Users employed active agency and personalisation in their tweets, with blame shifting more prominently towards these entities as the discourse progressed, as seen through techniques like assimilation and individualism.
Secondly, the study into the NHS Covid-19 app showed three primary topics, with an increase in discussion related to the government's role and a dip in sentiment occurring at the time of the second national lockdown. CL and DA exploration revealed that Twitter users predominantly portrayed the app as a social actor, particularly in informing, instructing, and disrupting, while also assigning responsibility for users' welfare and safety. Despite occasional passive presentations and instances of ridicule, the discourse consistently emphasised the app's perceived responsibility for processing information, especially during significant events like its launch, subsequent lockdowns, and the 'pingdemic' phase, highlighting the app's significant social impact and the public's expectations of its role.
Finally, the ChatGPT discourse saw topics spanning text generation, chatbot development and cryptocurrency, alongside a more positive sentiment trajectory. ChatGPT was, again, predominantly depicted as an active social actor, influencing content creation and information dissemination, while trust in its outputs evolved over time, influenced by perceptions of its agency and occasional blame for errors. There were times when, even though ChatGPT was portrayed actively, its status as a social actor was diluted because users presented it solely as an information source.
When looking holistically at the three discourses, while all three algorithms were portrayed actively as social actors, variations in blame attribution and trust were observed. ChatGPT was presented as more trustworthy and less blameworthy than the A Level algorithm and the Covid-19 app. The two pandemic-based case studies showed how the agency ascribed to the systems by Twitter users revealed a more overt degree of accountability and responsibility, resulting in decreased trust and clearer blame.
In terms of its main contributions, this thesis provides insights into the dynamics of discourse surrounding decision-making algorithms through an analysis of Twitter discourses. This research offers a nuanced examination of how these algorithms are portrayed as social actors, highlighting strategic manipulations of blame attribution and foregrounding trust and blame perceptions in response to societal events. Additionally, this thesis demonstrates the effectiveness of integrating computational and discursive linguistic analytics, laying the foundations for future research using this approach. Overall, by understanding the roles, contexts and perceptions associated with decision-making algorithms, researchers can contribute to the responsible development and deployment of such systems. This understanding can help foster public trust and effectively address societal concerns, particularly those related to perceptions of social algorithmic agency and its implications for trust and blame attribution.
Item Type: Thesis (University of Nottingham only) (PhD)
Supervisors: Fischer, Joel E.; Clos, Jeremie
Keywords: sentiment analysis, topic modelling, emotion detection, autonomous systems, Ofqual, A Level, NHS, Covid-19, contact tracing, application, chatbot, social media, social actor representation, social action theory, corpus linguistics, discourse analysis
Subjects: P Language and literature > P Philology. Linguistics; Q Science > QA Mathematics; Q Science > QA Mathematics > QA 75 Electronic computers. Computer science
Faculties/Schools: UK Campuses > Faculty of Science > School of Computer Science
Item ID: 79921
Depositing User: Heaton, Daniel
Date Deposited: 13 Dec 2024 04:40
Last Modified: 13 Dec 2024 04:40
URI: https://eprints.nottingham.ac.uk/id/eprint/79921