A multimodal approach to assessing user experiences with agent helpers

Adolphs, Svenja, Clark, Leigh, Ofemile, Abdulmalik and Rodden, Tom (2016) A multimodal approach to assessing user experiences with agent helpers. ACM Transactions on Interactive Intelligent Systems, 6 (4). 29/1-29/31. ISSN 2160-6463

Full text not available from this repository.
Official URL: http://dl.acm.org/citation.cfm?doid=3015563.2983926
Abstract

The study of agent helpers that use linguistic strategies such as vague language and politeness has often encountered obstacles. One of these is the quality of the agent's voice and its poor fit for delivering these strategies. The first approach in this article compares human and synthesised voices in agents using vague language. It analyses a 60,000-word text corpus of participant interviews to investigate differences in user attitudes towards the agents, their voices and their use of vague language. It finds that, while vague language from agent instructors is still met with resistance, using a human voice yields more positive results than the synthesised alternatives. The second approach discusses the development of a novel multimodal corpus of video and text data that supports multiple analyses of human-agent interaction in agent-instructed assembly tasks. It analyses users' spontaneous facial actions and gestures during these tasks, finds that agents are able to elicit such facial actions and gestures, and posits that further analysis of this nonverbal feedback may help to create a more adaptive agent. Finally, the article suggests that these approaches can contribute to a better understanding of what it means to interact with software agents.