For instructions on how to apply, please see: PhD Studentships: UKRI Centre for Doctoral Training in Socially Intelligent Artificial Agents.
Supervisors:
- Dale Barr: School of Psychology
- Mary Ellen Foster: School of Computing Science
Human social interaction is increasingly mediated by technology, with many of the signals present in traditional face-to-face interaction being replaced by digital representations (e.g., avatars, nameplates, and emojis). To communicate successfully, participants in a conversational interaction must keep track of the identities of their co-participants, as well as the “common ground” they share with each one: the dynamically changing set of mutually held beliefs, knowledge, and suppositions. Perceptual representations of interlocutors may serve as important memory cues to shared information in communicative interaction (Horton & Gerrig, 2016; O’Shea, Martin, & Barr, in press).

Our main question concerns how digital representations of users across different interaction modalities (text, voice, video chat) influence the development of, and access to, common ground during communication. To examine the impact of digital user representations on real-time language production and comprehension, the project will use a variety of behavioural methods, including visual world eye-tracking (Tanenhaus et al., 1995), latency measures, and analysis of speech/text content.

In the first phase of the project, we will examine how well people can keep track of who said what during a discourse depending on whether user representations are abstract or rich (e.g., ranging from abstract symbols to dynamic avatar-based representations), and how these representations affect people’s ability to tailor messages to their interlocutors and to correctly interpret a communicator’s intended meaning. For example, in one such study, we will test participants’ ability to track “conceptual pacts” (Brennan & Clark, 1996) with a pair of interlocutors during an interactive task in which each partner appears (1) through a video stream; (2) as an animated avatar; or (3) as a static user icon.
In the second phase, we will examine whether the nature of the user representation during encoding affects the long-term retention of common ground information.

In support of the behavioural experiments, the project will also involve developing a range of conversational agents, both embodied and speech-only, and defining appropriate behaviour models that allow those agents to take part in the studies. The defined behaviour will incorporate both verbal interaction and non-verbal actions, to replicate the full richness of human face-to-face conversation (Foster, 2019; Bavelas et al., 1997). Insights and techniques developed during the project are intended to improve interfaces for computer-mediated human communication.