Sprinks, James C.
(2017)
Designing task workflows to ensure the best scientific outcomes in citizen science.
PhD thesis, University of Nottingham.
Abstract
Citizen science, or ‘public participation in scientific research’, can be described as research conducted, in whole or in part, by amateur or non-professional participants, often through crowd-sourcing techniques. The advance of modern internet technology, which has made the world a more connected place, has resulted in a surge of citizen science projects, especially online platforms that allow volunteers to take part in research asynchronously and regardless of their geographic location. Owing to this increased interest, citizen science is becoming a distinct field of research in its own right, beyond the original scientific problems it was devised to address. Although some of this research has considered interface HCI and functionality, relatively little attention has been paid specifically to human factors issues.
Through this work we attempt to address this shortfall by considering citizen science as a form of ‘work’. Because its repetitive nature resembles both the production lines of the early twentieth century and, more recently, on-screen visual inspection tasks, decades of ergonomics research in these areas can be applied to the virtual citizen science arena. We take a first step in considering how virtual citizen science systems can be better designed for the needs of the volunteer, exploring how manipulating task flow affects both the quality of the information collected and the volunteers’ experience of using the interface.
A hierarchical task analysis of 12 Zooniverse projects revealed that the types of task and judgement, and the way they are presented to the volunteer, vary greatly, independent of the science discipline involved. Furthermore, through differing designs of the Zooniverse’s ‘Planet Four: Craters’ platform, it was shown that task workflow design factors such as autonomy, variety, task type and the volunteer judgement required can influence the amount of data collected, the accuracy of those data, and both volunteer engagement and motivation. Simpler tasks requiring fewer volunteer judgements resulted in a significantly greater volume of data collected; however, accuracy suffered through an increase in false-positive classifications. Volunteers reported a preference for greater autonomy and task variety, a stance reflected in the number of times they visited and returned to the platform; however, these factors also significantly reduced the accuracy of classifications, both in terms of inter-participant agreement and comparison with expert judgement.
The interplay of task workflow factors and their effects has been shown to be complex. From the empirical data collected, a model has been derived that predicts the influence of different task workflow configurations on classification numbers over time since a platform’s launch. It demonstrates that, when considering task workflow design, developers of future citizen science platforms will need to perform a balancing act: the importance of user engagement, the data needs of the science case, and the resources that can be committed in terms of both time and data reduction must be weighed against the realistic public reach and promotion that the science case can be expected to generate.