
Towards increased accessibility in human-robot collaboration leveraging gaze interaction in Mixed Reality [Self-Funded Students Only]

   Cardiff School of Computer Science & Informatics

  Dr P Eslambolchilar, Dr Juan Hernandez Vega, Dr Argenis Ramirez Gomez  Applications accepted all year round  Self-Funded PhD Students Only

About the Project

Human-robot collaboration (HRC) requires efficient mechanisms for users and robots to communicate and exchange information, e.g., to allow a user to command a robot, and to allow a robot to express its interpretations and intentions. Improving this exchange of information can lead to more transparent communication and better mutual understanding, and in turn to more trustworthy collaborations.

Commanding a robot to perform tasks can be time-consuming, as it requires operators to repeatedly and explicitly define task specifications, e.g., parameters such as waypoints or desired motions. This process can become tedious and increase the perceived task workload: the task might require users to continuously validate their commands, or require robots to autonomously make decisions beyond their capabilities. It is therefore necessary to find the appropriate level of detail and specification for the tasks given to a robot.

Previous work has focused on increasing robot autonomy whilst reducing users' cognitive load by providing high-level commands via an Augmented Reality (AR) interface [1]. AR and Mixed Reality (MR) have been shown to help people command robots and visualise a robot's planned actions by enabling interaction with virtual objects placed in the real world. MR thus offers a unique interaction paradigm that seamlessly integrates direct user control via hand interaction with visual feedback communicating the robot's actions and intentions.

However, such approaches are generally limited to users with full mobility and are not inclusive of users with accessibility requirements. Gaze interaction is a compelling modality in contexts where hand input is unavailable, inconvenient, or impractical. Eye movements are fast and require less energy and effort than input with the head or hands [2], and we intuitively look at the objects we want to interact with. Accordingly, eyes-only interfaces have been developed for accessibility, direct object manipulation, and instant control, showcasing gaze input's expressiveness in signalling attention and triggering intention [3].

We propose to leverage gaze interaction in Mixed Reality to foster accessibility and effectiveness in future human-robot collaborations. This research project aims to follow a user-centred, robot-agnostic and object-centric approach to investigate new gaze-based interaction techniques that focus on human-robot collaboration in AR/MR. Anticipated results will contribute a constellation of inclusive interaction methods with potential impact in industrial and social companion robots.

Keywords: Mixed Reality, Human-Robot Collaboration, Robot Autonomy, Eye Tracking, Gaze Interaction

Contact for information on the project: Dr Parisa Eslambolchilar, Dr Juan D. Hernandez Vega, Dr Argenis Ramirez Gomez

Academic criteria: A 2:1 Honours undergraduate degree or a master's degree in computing or a related subject. Applicants with appropriate professional experience are also considered. Degree-level mathematics (or equivalent) is required for research in some project areas.

Applicants for whom English is not their first language must demonstrate proficiency by obtaining an IELTS score of at least 6.5 overall, with a minimum of 6.0 in each skills component.

This project is available for students worldwide.

How to apply:

Please contact the supervisors of the project prior to submitting your application to discuss and develop an individual research proposal that builds on the information provided in this advert. Once you have developed the proposal with the supervisors' support, please submit your application following the instructions provided below.

This project is accepting applications all year round, for self-funded candidates via 

In order to be considered, candidates must submit the following information:

  • Supporting statement 
  • CV 
  • In the ‘Research Proposal’ section of the application, enter the name of the project you are applying to and upload your individual research proposal, as described above
  • Qualification certificates and Transcripts
  • Proof of Funding. For example, a letter of intent from your sponsor or confirmation of self-funded status (In the funding field of your application, insert Self-Funded)
  • References x 2 
  • Proof of English language (if applicable)

If you have any questions or need more information, please contact 

Funding Notes

This project is offered for self-funded students only, or those with their own sponsorship or scholarship award.


[1] Hernández, J. D., Sobti, S., Sciola, A., Moll, M., & Kavraki, L. E. (2020). Increasing robot autonomy via motion planning and an augmented reality interface. IEEE Robotics and Automation Letters, 5(2), 1017-1023.
[2] Sidenmark, L. and Gellersen, H. 2019. Eye, Head and Torso Coordination During Gaze Shifts in Virtual Reality. ACM Trans. Comput.-Hum. Interact. 27, 1, Article 4 (Dec. 2019), 40 pages.
[3] Ramirez Gomez, A., Clarke, C., Sidenmark, L. and Gellersen, H. 2021. Gaze+Hold: Eyes-only Direct Manipulation with Continuous Gaze Modulated by Closure of One Eye. In Proceedings of the 2021 ACM Symposium on Eye Tracking Research & Applications.

