The goal of this project is to build a high-quality, closed-loop semantic map of real-world environments using an RGB camera together with multiple advanced sensors, including event cameras, infrared cameras, an IMU and LiDAR.
Visual SLAM (Simultaneous Localisation and Mapping) is a core problem in spatial AI research. It concerns the incremental estimation of the geometry and semantics of the scene around a mobile robot, the recovery of the robot's pose, and real-time performance. A typical visual SLAM algorithm assumes a slowly moving robot in a static scene, but these assumptions are inadequate for real-world scenarios such as Formula One cars and racing drones, where the environment is large and highly dynamic and the vehicle moves quickly and unpredictably. These conditions introduce extra noise (e.g. motion blur) into the visual channel and further challenge traditional visual SLAM systems. With advances in deep learning and sensor technology, a single visual channel need no longer be the only source of information for visual SLAM. Investigating additional sensing channels, and learning-based fusion of them, is promising because they provide a rich set of complementary perspectives on the scene.
This project aims to develop a deep-learning-based multi-sensory SLAM framework which (1) enables robust and accurate localisation and dense mapping in difficult real-world environments containing dynamic objects, featureless ground planes, etc.; (2) efficiently fuses data from multiple advanced sensors, including event cameras, infrared cameras, an IMU and LiDAR, mounted on a very fast-moving vehicle, e.g. a Formula One car or racing drone; and (3) is power-efficient and runs on-board typical computational platforms for autonomous vehicle research (e.g. NVIDIA Jetson TX2).
The proposed project has natural links to engineering applications, in particular autonomous driving and flight, as visual SLAM is an essential perception approach for most autonomous systems. The project and its applications in autonomous driving also raise important questions for public policy, e.g. licensing and road traffic regulation, revenue and security.
This project is associated with the UKRI CDT in Accountable, Responsible and Transparent AI (ART-AI), which is looking for its first cohort of at least 10 students to start in September 2019. Students will be fully funded for 4 years (stipend, UK/EU tuition fees and research support budget). Further details can be found at: www.bath.ac.uk/research-centres/ukri-centre-for-doctoral-training-in-accountable-responsible-and-transparent-ai/.
Desirable qualities in candidates include intellectual curiosity, a strong mathematical background and programming experience.
Applicants should hold, or expect to receive, a First Class or good Upper Second Class Honours degree. A master’s level qualification would also be advantageous.
Informal enquiries about the project should be directed to Dr Wenbin Li: [Email Address Removed].
Enquiries about the application process should be sent to [Email Address Removed].
Formal applications should be made via the University of Bath’s online application form for a PhD in Computer Science: https://samis.bath.ac.uk/urd/sits.urd/run/siw_ipp_lgn.login?process=siw_ipp_app&code1=RDUCM-FP01&code2=0013
Start date: 23 September 2019.
ART-AI CDT studentships are available on a competition basis for UK and EU students for up to 4 years. Funding will cover UK/EU tuition fees as well as providing maintenance at the UKRI doctoral stipend rate (£15,009 per annum for 2019/20) and a training support fee of £1,000 per annum.
We also welcome all-year-round applications from self-funded candidates and candidates who can source their own funding.
Binbin Xu, Wenbin Li, Dimos Tzoumanikas, Michael Bloesch, Andrew J Davison, Stefan Leutenegger. MID-Fusion: Octree-based Object-Level Multi-Instance Dynamic SLAM. IEEE International Conference on Robotics and Automation (ICRA), 2019.
Dimos Tzoumanikas, Wenbin Li, Marius Grimm, Ketao Zhang, Mirko Kovac, Stefan Leutenegger. Fully autonomous micro air vehicle flight and landing on a moving target using visual-inertial estimation and model-predictive control. Journal of Field Robotics, 2019;36:49-77.
Tristan Laidlow, Michael Bloesch, Wenbin Li, Stefan Leutenegger. Dense RGB-D-Inertial SLAM with Map Deformations. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017.