
AI to enhance navigation for blind and partially blind persons by exploiting the combination of available senses


   Centre for Accountable, Responsible and Transparent AI

Supervisors: Dr Karin Petrini, Dr Vinay Namboodiri, Dr Christopher Clarke
Applications accepted all year round
Competition Funded PhD Project (UK Students Only)

About the Project

Research question:

Can AI be used to exploit multisensory integration of non-visual signals (e.g., sound, touch, and self-motion) to provide the blind and partially blind with a rich representation of the environment for effective spatial navigation?

Background and aim:

Recent reports show that at least 2.2 billion people worldwide are blind or have a vision impairment (Bourne et al., 2017; Fricke et al., 2018). In Britain alone, it is estimated that nearly 4 million people will be partially sighted or blind by the year 2050 (https://www.rnib.org.uk/sites/default/files/FSUK_Report.pdf). Independent mobility and navigation are among the most pressing issues for blind individuals (Marston & Golledge, 2003). Limited mobility restricts their acquisition of new and stable spatial knowledge, with negative consequences for self-confidence, social interaction, productivity, and employment (Marston & Golledge, 2003; Walker & Lindsay, 2006).

Sighted adults can improve their performance in tasks such as object recognition, localization, and wayfinding by integrating different senses, thereby reducing sensory uncertainty (e.g., Alais & Burr, 2004; Ernst & Banks, 2002; Nardini et al., 2008; Gori et al., 2008). This multisensory advantage has been shown to occur even when vision is absent (i.e., for touch and sound in sighted and blind individuals; Petrini et al., 2014; Scheller et al., 2020). We have recently shown that blind and partially blind participants benefit from combining available senses, such as sound (through the vOICe sensory substitution device) and self-motion, when encoding spatial information, especially during allocentric tasks (Jicol et al., 2020). Similarly, a recent study demonstrated that visually impaired individuals can build a more accurate allocentric spatial representation when using audio-haptic maps compared with audio-only maps (Papadopoulos et al., 2018).
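For readers less familiar with the cue-combination literature cited above, the standard maximum-likelihood model (Ernst & Banks, 2002) predicts that two independent noisy estimates are fused by inverse-variance weighting, giving a combined estimate that is less variable than either cue alone. The short Python sketch below illustrates this with made-up numbers; it is not drawn from any of the cited studies.

```python
import numpy as np

# Maximum-likelihood cue combination (Ernst & Banks, 2002): two independent,
# noisy estimates of the same location (e.g., an auditory and a haptic
# estimate, in degrees) are fused by inverse-variance weighting.
# All values below are illustrative only.
sigma_audio, sigma_haptic = 4.0, 3.0     # single-cue noise (standard deviation)
est_audio, est_haptic = 12.0, 10.0       # single-cue position estimates

w_audio = sigma_haptic**2 / (sigma_audio**2 + sigma_haptic**2)
w_haptic = 1.0 - w_audio

fused_estimate = w_audio * est_audio + w_haptic * est_haptic
fused_sd = np.sqrt((sigma_audio**2 * sigma_haptic**2) /
                   (sigma_audio**2 + sigma_haptic**2))

print(f"fused estimate: {fused_estimate:.2f}")
print(f"fused sd: {fused_sd:.2f} (best single cue: {min(sigma_audio, sigma_haptic):.2f})")
```

The fused standard deviation (2.40 here) is smaller than that of the most reliable single cue (3.00), which is the multisensory advantage referred to above.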

This project will investigate how AI can be used in a multisensory system to help blind and partially blind people navigate their environments confidently. Currently, only unimodal AI systems that provide sound feedback or voice description are available (e.g., Seeing AI, Envision Glasses, Horus, Google's AI-powered audio navigation tool). The proposed system will leverage AI at both the input and output stages. At the input stage, state-of-the-art computer vision techniques, such as SLAM and depth estimation, will be used to obtain visual cues suitable for guiding navigation. At the output stage, the AI system will generate lower-dimensional features that can drive a congruent multi-modal sensory signal to enhance navigation. The project will explore and evaluate how orthogonal dimensionality reduction techniques, both with and without supervision, can be used to generate a rich, interpretable feature set for multi-sensory outputs that improve navigation.
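Purely as an illustration of the output-stage idea, and not a specification of the system to be developed, the sketch below compresses a stand-in visual feature vector (e.g., a binned depth map) into a few orthogonal components with unsupervised PCA and maps them onto hypothetical audio and haptic parameters. The feature shapes, component count, and channel mappings are assumptions made for this example only.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
depth_features = rng.random((500, 64))     # stand-in: 500 frames x 64 depth bins

# Orthogonal, unsupervised dimensionality reduction of the visual features.
pca = PCA(n_components=3)
codes = pca.fit_transform(depth_features)  # shape (500, 3)

def to_unit_range(x):
    """Rescale a component to [0, 1] so it can drive an output device."""
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

# Hypothetical mapping of components onto non-visual channels.
audio_pitch_hz = 200 + 600 * to_unit_range(codes[:, 0])   # component 1 -> pitch
haptic_intensity = to_unit_range(codes[:, 1])             # component 2 -> vibration
# A third component could drive, e.g., spatialised audio panning.

print(audio_pitch_hz[:3], haptic_intensity[:3])
```

A supervised variant might instead learn components that predict navigation-relevant quantities, one of the comparisons the project could explore.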

Fundamental to incorporating AI into a navigational system is ensuring that it is interpretable, transparent, and accountable to the user. To be interpretable, there must be a mapping from input to output based on object-based primitives (semantic or geometric). The system must also convey the level of uncertainty introduced by its inputs, which are imperfect in how accurately they map the world, so that users can make an informed choice about how much to rely on the system and how to navigate. This is non-trivial when developing for users with little or no vision, and we will explore how it can be achieved using non-visual stimuli in a multi-sensory fashion.
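Again as an illustration only, one hypothetical way to convey the system's own uncertainty non-visually is to let a per-frame confidence score modulate a haptic signal, so that low confidence is felt as a slower, more irregular vibration. The function and parameter names below are invented for this sketch and are not part of the project specification.

```python
import numpy as np

def uncertainty_to_vibration(confidence, base_rate_hz=150.0):
    """Map a confidence score in [0, 1] to a vibration (rate, jitter) pair.

    Higher confidence -> faster, steadier vibration; lower confidence ->
    slower, more irregular vibration the user can feel and act on.
    """
    confidence = float(np.clip(confidence, 0.0, 1.0))
    rate_hz = base_rate_hz * (0.5 + 0.5 * confidence)  # steadier when confident
    jitter = 1.0 - confidence                          # rougher when uncertain
    return rate_hz, jitter

for c in (0.95, 0.5, 0.1):
    print(c, uncertainty_to_vibration(c))
```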

Applicants should hold, or expect to receive, a first or upper-second class honours degree or a postgraduate degree in a relevant discipline. Applicants should have taken a mathematics unit or a quantitative methods course at university or have at least grade B in A level maths or international equivalent.

This project is associated with the UKRI Centre for Doctoral Training (CDT) in Accountable, Responsible and Transparent AI (ART-AI). We value people from different life experiences with a passion for research. The CDT's mission is to graduate diverse specialists whose perspectives enable them to go out into the world and make a difference.

Informal enquiries about the research should be directed to Dr Petrini.

Formal applications should be accompanied by a research proposal and made via the University of Bath’s online application form. Enquiries about the application process should be sent to .

Start date: 3 October 2022.


Funding Notes

ART-AI CDT studentships are available on a competition basis and applicants are advised to apply early as offers are made from January onwards. Funding will cover tuition fees and maintenance at the UKRI doctoral stipend rate (£16,062 per annum in 2022/23, increased annually in line with the GDP deflator) for up to 4 years.
We also welcome applications from candidates who can source their own funding.

References

Petrini, K., Remark, A., Smith, L., & Nardini, M. (2014). When vision is not an option: Children’s integration of auditory and haptic information is suboptimal. Developmental Science, 17(3), 376–387. https://doi.org/10.1111/desc.12127
Petrini, K., Caradonna, A., Foster, C., Burgess, N., & Nardini, M. (2016). How vision and self-motion combine or compete during path reproduction changes with age. Scientific Reports, 6, 29163. https://doi.org/10.1038/srep29163
Jicol, C., Lloyd-Esenkaya, T., Proulx, M. J., Lange-Smith, S., Scheller, M., O'Neill, E., & Petrini, K. (2020). Efficiency of sensory substitution devices alone and in combination with self-motion for spatial navigation in sighted and visually impaired. Frontiers in Psychology, 11, 1443. https://doi.org/10.3389/fpsyg.2020.01443
Scheller, M., Proulx, M. J., de Haan, M., Dahlmann-Noor, A., & Petrini, K. (2020). Late- but not early-onset blindness impairs the development of audio-haptic multisensory integration. Developmental Science. https://doi.org/10.1111/desc.13001
Patro, B. N., Anupriy, & Namboodiri, V. P. (2021). Probabilistic framework for solving visual dialog. Pattern Recognition, 110, 107586.
Patro, B. N., Lunayach, M., & Namboodiri, V. P. (2020). Uncertainty class activation map (U-CAM) using gradient certainty method. IEEE Transactions on Image Processing.
Yadav, R., Sardana, A., Namboodiri, V. P., & Hegde, R. M. (2020). Bridged variational autoencoders for joint modeling of images and attributes. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (pp. 1479–1487).