Human-robot collaboration (HRC) requires efficient mechanisms for exchanging information between users and robots, e.g., to allow a user to command a robot, and to allow a robot to express its interpretations and intentions. Improving this exchange of information can yield more transparent communication and better mutual understanding, potentially leading to more trustworthy collaborations.
Commanding a robot to perform tasks can be time-consuming, requiring operators to repeatedly and explicitly define task specifications, e.g., parameters such as waypoints or desired motions. This process can become tedious and increase the perceived task workload: a task might require users to continuously validate their commands, or require the robot to autonomously make decisions beyond its capabilities. It is therefore necessary to find the appropriate level of detail and specification for the tasks given to a robot.
Previous work has focused on increasing robot autonomy whilst reducing users' cognitive load by providing high-level commands via an Augmented Reality (AR) interface [1]. AR and Mixed Reality (MR) have been shown to be useful for commanding robots and visualising a robot's planned actions by enabling interaction with virtual objects placed in the real world. Accordingly, MR offers a unique interaction paradigm that seamlessly integrates direct user control via hand interaction with visual feedback communicating the robot's actions and intentions.
However, such approaches are generally limited to users with full mobility and are not inclusive of users with accessibility requirements. Gaze interaction is a compelling modality in contexts where hand input is unavailable, inconvenient, or otherwise impractical. Eye movements are fast and require less energy and effort than input with the head or hands [2], and we intuitively look at the objects we want to interact with. Accordingly, eyes-only interfaces have been developed for accessibility, direct object manipulation, and instant control, showcasing the expressiveness of gaze input in signalling attention and triggering intended actions [3].
We propose to leverage gaze interaction in Mixed Reality to foster accessibility and effectiveness in future human-robot collaborations. This research project follows a user-centred, robot-agnostic, and object-centric approach to investigate new gaze-based interaction techniques for human-robot collaboration in AR/MR. The anticipated results will contribute a constellation of inclusive interaction methods with potential impact on industrial and social companion robots.
Keywords: Mixed Reality, Human-Robot Collaboration, Robot Autonomy, Eye Tracking, Gaze Interaction
Contact for information on the project: Dr Parisa Eslambolchilar ([Email Address Removed]), Dr Juan D. Hernandez Vega ([Email Address Removed]), Dr Argenis Ramirez Gomez ([Email Address Removed])
Academic criteria: A 2:1 Honours undergraduate degree or a master's degree in computing or a related subject. Applicants with appropriate professional experience will also be considered. Degree-level mathematics (or equivalent) is required for research in some project areas.
Applicants for whom English is not their first language must demonstrate proficiency by obtaining an IELTS score of at least 6.5 overall, with a minimum of 6.0 in each skills component.
Application Information: If you would like to be considered for the School Funded Application, please submit your application before 30 June 2021.
In the funding field of your application, insert “I am applying for 2021 PhD Scholarship in Computer Science and Informatics”, and specify the project title and supervisors of this project in the text box provided.
Apply online: https://www.cardiff.ac.uk/study/postgraduate/research/programmes/programme/computer-science-and-informatics - Please read the "How to apply" instructions carefully before applying.