Visual Simultaneous Localization and Mapping (vSLAM) is a widely used technique in robotic vision that enables robots to orient themselves in a new environment by mapping features extracted from captured images of their surroundings. Owing to its success, vSLAM has become a state-of-the-art method for robot navigation, coordination, and environment mapping.
However, vSLAM has its limitations and cannot efficiently handle complex environments or low-quality captured images. One of its main obstacles is operating in low-visibility or low-lighting conditions. Since the deployment of robots in such conditions is becoming essential in many applications where the presence of humans is impossible or highly dangerous, the capability to navigate under low-lighting constraints is gaining importance.
The goal of this work is to investigate the potential of, design novel techniques for, and evaluate the performance of vSLAM augmented with additional sensor types (i.e. Time-of-Flight and thermal/infrared cameras) that will complement classical visual-spectrum cameras and enable robot navigation in poor visibility. This constraint will be tackled by exploiting the thermal/night and active-vision properties of the additional sensors and by fusing their signals with those captured by the visual-spectrum cameras.
The prospective candidate should hold a Bachelor's or Master's degree in computer science, electronic engineering, or another related discipline. Experience in visual signal processing, camera/sensor technologies, or related algorithm development is beneficial.