Unmanned aerial vehicles (UAVs) have been extensively applied in practical tasks that are too dangerous or unsuitable for humans, such as environmental monitoring, security surveillance, and search-and-rescue. These systems, however, consist of many interdependent components, from sensors to motors, operating in highly uncertain environments and exhibiting complex dynamics. This interdependency introduces new vulnerabilities in UAV systems that are sometimes impossible to predict. As a result, a single disturbance in the actuators or sensors can lead to catastrophic events such as collision with obstacles. Hence, it is imperative to guarantee that both the UAV and the humans around it remain safe during operation, even in the face of unforeseen and unpredictable events.
Unfortunately, there are at present very few solutions that can guarantee safety at runtime while a UAV explores. It is therefore necessary to develop a new theory of safety-critical control for UAVs that guarantees resilience and safety against errors, uncertainties, and disturbances. In this project, a new theory of learning-based safety-critical control will be developed to mitigate or even eliminate the effects of errors and disturbances during autonomous operation. The safety-critical controller will guarantee that the UAV always operates within the designated safe zones during manoeuvring. Meanwhile, learning algorithms (meta-learning or reinforcement learning) will be explored to model the sources of uncertainty and integrate them into the safety-critical controller, enhancing the precision of the tracking control system.
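As a concrete illustration of the safety-critical control idea, one common approach is a control barrier function (CBF) safety filter, which minimally modifies a desired control input so that the state never leaves a designated safe zone. The sketch below is for a simplified single-integrator model with a circular safe zone; the model, function names, and parameters are illustrative assumptions only, not the project's actual method.

```python
def cbf_safety_filter(x, u_des, center, radius, alpha=1.0):
    """Minimally modify u_des so a single-integrator state (x' = u)
    stays inside the circular safe zone h(x) = r^2 - |x - c|^2 >= 0.

    The CBF condition grad_h(x) . u >= -alpha * h(x) is enforced by a
    closed-form projection, since the underlying QP has only a single
    linear constraint.
    """
    dx = [xi - ci for xi, ci in zip(x, center)]
    h = radius**2 - sum(d * d for d in dx)        # barrier value (>= 0 inside)
    a = [-2.0 * d for d in dx]                    # gradient of h
    b = -alpha * h                                # required lower bound on a . u
    a_dot_u = sum(ai * ui for ai, ui in zip(a, u_des))
    if a_dot_u >= b:
        return list(u_des)                        # desired input is already safe
    # Project u_des onto the half-space {u : a . u >= b}
    scale = (b - a_dot_u) / sum(ai * ai for ai in a)
    return [ui + scale * ai for ui, ai in zip(u_des, a)]
```

For example, near the boundary at x = (0.9, 0) with a desired input (1, 0) pushing outward, the filter scales back the outward component, while inputs that already satisfy the barrier condition pass through unchanged.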
The objectives of the project are:
- To design state estimators able to estimate uncertainties and disturbances in the system.
- To design a safety-critical controller that ensures the UAV always operates within the designated safe zones during manoeuvring.
- To develop learning algorithms based on meta-learning and/or reinforcement learning to enhance the precision of the safety-critical controller.
- To implement and validate the proposed algorithms in simulation (Gazebo or MATLAB/Simulink).
- To implement the developed algorithms on a physical platform, i.e., an unmanned aerial or unmanned ground vehicle.
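As an illustration of the first objective, a simple first-order disturbance observer can estimate an unknown disturbance d acting on a single-integrator model x' = u + d from measured state changes. The sketch below is a minimal toy simulation; the model, gain, and all names are illustrative assumptions, not the estimator the project will actually design.

```python
def simulate_disturbance_observer(d_true=0.5, u=0.0, gain=5.0, dt=0.01, steps=400):
    """Estimate a constant disturbance d in x' = u + d using the update
    d_hat += gain * dt * (xdot - u - d_hat), where xdot is obtained by
    finite-differencing the measured state. Returns the final estimate."""
    x, d_hat = 0.0, 0.0
    for _ in range(steps):
        x_new = x + dt * (u + d_true)            # plant step (ground truth)
        xdot = (x_new - x) / dt                  # measured state derivative
        d_hat += gain * dt * (xdot - u - d_hat)  # observer update
        x = x_new
    return d_hat
```

With these settings the estimate converges geometrically to the true disturbance; in practice, the observer gain trades convergence speed against amplification of measurement noise.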
Research on ‘trustworthy autonomous systems’ is a hot topic at the moment. By completing the objectives, the student will gain a range of skills (mathematical, hands-on experimental, and programming skills) as well as strong knowledge of robotics, computer vision, and machine learning.
Academic Requirements:
A minimum 2.1 honours degree, or equivalent, in Computer Science, Electrical and Electronic Engineering, or a relevant discipline is required.
Applicants should apply electronically through the Queen’s online application portal at: https://dap.qub.ac.uk/portal/
Further information available at: https://www.qub.ac.uk/schools/eeecs/Research/PhDStudy/
The closing date for applications is Monday 22nd March.