Despite much progress, developing a brain-actuated robotic arm that allows patients with motor impairments to perform activities of daily living through a brain-computer interface (BCI) remains an ambitious target. One key problem is the poor decoding performance of BCIs, particularly non-invasive ones. In this project, we aim to develop a shared control strategy that enables flexible robotic arm control for reaching and grasping multiple objects. With intelligent assistance provided by robot vision, users only need to complete a reaching movement and target selection using a simple motor imagery based BCI with binary output. Alongside user control, the robotic arm, which identifies and localizes potential targets within the workspace in the background, provides both trajectory correction in the reaching phase, to reduce trajectory redundancy, and autonomous grasping assistance in the grasping phase.
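As a rough illustration of how a binary motor imagery decoder's output could be turned into a continuous endpoint command, the Python sketch below maps a class probability to a velocity along one axis. The function name, axis choice, gain, and dead zone are assumptions introduced purely for illustration, not the project's actual pipeline.

    import numpy as np

    def mi_probability_to_velocity(p_right,
                                   axis=np.array([0.0, 1.0, 0.0]),
                                   gain=0.1, dead_zone=0.1):
        """Map the decoder's P(right-hand imagery) in [0, 1] to an
        endpoint velocity (m/s). Values near 0.5 are ambiguous and
        yield no motion; confident outputs drive the endpoint along
        +/- axis. All parameters are illustrative assumptions."""
        drive = 2.0 * p_right - 1.0    # rescale probability to [-1, 1]
        if abs(drive) < dead_zone:     # suppress low-confidence output
            return np.zeros(3)
        return gain * drive * axis     # scaled endpoint velocity

A dead zone of this kind is one common way to keep an unreliable decoder from producing jittery motion, at the cost of ignoring weak but genuine intent.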
The ambition of this project is to merge artificial and human intelligence in the control of brain-actuated robotic arms. BCIs can recognize human motion intention, so human intelligence enters the system through the BCI commands that drive the robotic arm. The robotic arm itself is an autonomous system whose machine intelligence is based on visual servo control. For people with severe neuromuscular disorders or accident injuries, a brain-controlled robotic arm is expected to provide assistance in daily life. A primary bottleneck is that the information transfer rate of current BCIs is too low to produce multiple reliable commands during online robotic control. In this project, machine autonomy is infused into a BCI-controlled robotic arm system, so that the user and the machine work together to reach and grasp multiple objects in a given task. The intelligent robotic system autonomously localizes the potential targets and provides trajectory correction and grasping assistance accordingly, while the user only needs to complete a rough reaching movement and target selection with a basic binary motor imagery based BCI. This reduces task difficulty while retaining the user's volitional involvement.
The shared control system consists of three subsystems: the BCI system, the robotic arm system, and the arbitrator. A depth camera will be mounted on the gripper. Human subjects convey their intent by performing motor imagery tasks, and advanced BCI algorithms will be used to recognize the motion intent; the probability values obtained from BCI decoding will be used to generate the user velocity commands. The depth camera records point clouds of the scene, from which the poses of the potential target blocks are estimated using deep learning methods for robot vision. During movement, the endpoint position of the robotic arm and the estimated locations of the potential targets are monitored to identify the user's intent and determine the type of assistance. In an object-grasping task, two types of intelligent assistance are available: trajectory correction and grasping assistance. The arbitrator decides which kind of assistance to provide according to a set of predefined rules for user intent identification. Once the type of assistance is determined, the velocity commands derived separately from the human and the robotic arm are blended into a final velocity command and sent to the controller.
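The sketch below shows one plausible form of such arbitration logic in Python: infer the intended target from the alignment between the user's commanded velocity and the candidate target directions, blend the human and robot commands during reaching, and hand over to autonomous grasping near the target. Every function, threshold, and rule here is an assumption for illustration; the project's actual predefined rules are not specified in this description.

    import numpy as np

    def infer_target(endpoint, v_human, targets):
        """Pick the candidate target whose direction from the current
        endpoint best aligns with the user's commanded velocity; the
        cosine similarity doubles as a confidence score."""
        directions = targets - endpoint                      # (N, 3)
        norms = np.linalg.norm(directions, axis=1, keepdims=True)
        directions = directions / np.maximum(norms, 1e-9)
        v = v_human / max(np.linalg.norm(v_human), 1e-9)
        alignment = directions @ v                           # cosines
        idx = int(np.argmax(alignment))
        return idx, float(alignment[idx])

    def arbitrate(endpoint, v_human, targets,
                  grasp_radius=0.05, max_speed=0.1):
        """Blend human and robot velocity commands (illustrative rules).
        Reaching phase: correct the trajectory toward the inferred
        target, weighting the correction by intent confidence.
        Grasping phase: within grasp_radius of a target, hand over
        to autonomous grasping assistance."""
        idx, confidence = infer_target(endpoint, v_human, targets)
        to_target = targets[idx] - endpoint
        dist = np.linalg.norm(to_target)

        if dist < grasp_radius:
            # Grasping assistance: the robot completes the approach.
            return "grasp", to_target / max(dist, 1e-9) * max_speed

        # Trajectory correction: the robot's command points at the
        # target with the same speed the user requested.
        v_robot = to_target / dist * np.linalg.norm(v_human)
        alpha = np.clip(confidence, 0.0, 1.0)  # gentle when ambiguous
        return "reach", (1.0 - alpha) * v_human + alpha * v_robot

Here the blend weight simply reuses the intent confidence, so the correction stays weak when the user's direction is ambiguous; a deployed system would tune this weighting from user studies rather than fixing it by hand.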
This research project will be carried out as part of an interdisciplinary integrated PhD in the UKRI Centre for Doctoral Training in Accountable, Responsible and Transparent AI (ART-AI). The ART-AI CDT aims to produce interdisciplinary graduates who can act as leaders and innovators with the knowledge to make the right decisions on what is possible, what is desirable, and how AI can be ethically, safely and effectively deployed.
Candidates are expected to hold, or be close to completing, an MSc or MEng in Electrical Engineering, Control Engineering, Robotics, Mechatronics, Computer Science, Mathematics, Physics, or a related area.
Informal enquiries about the project should be directed to Dr Dingguo Zhang.
Enquiries about the application process should be sent to art-ai-applications@bath.ac.uk.
Formal applications should be made via the University of Bath’s online application form: https://samis.bath.ac.uk/urd/sits.urd/run/siw_ipp_lgn.login?process=siw_ipp_app&code1=RDUCM-FP02&code2=0003
Start date: 4 October 2021.