
Audio Target Calibration and Localisation in Augmented Reality (AR) [Self-Funded Students Only]

Cardiff School of Computer Science & Informatics

Supervisors: Dr P Eslambolchilar, Prof D Marshall, Dr Daniel Finnegan
Applications accepted all year round
Self-Funded PhD Students Only

About the Project

Beyond the initial benefit to the general idea of ‘immersion’, the uses of audio within AR technology are many and varied. There are cognitive, user-experience, and user-interface benefits to utilising audio more deeply in AR. Including audio in moving cues (e.g. skill-based or shooting games) can reduce reaction times by up to 50% (Barde, Ward, Helton & Billinghurst, 2016). Localising targets that emit sounds as users travel through space in games, whether a whirring engine, a shuffling footstep, or a smooth servo, adds to the immersive experience; audio cues help users understand the world around them before they have time to turn their head towards the source of the sound.

For highly immersive spatial audio, the spectral cues that a listener's anatomy imposes on incoming sound can be mimicked by calibrating headphones to head-related transfer functions (HRTFs). Since we all have different-sized heads and torsos, delivering audio through head-tracked headphones, calibrated to our unique, custom-measured HRTFs, is a timely challenge for AR and VR technologies.
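To make the role of the HRTF concrete, the sketch below renders a mono source binaurally by convolving it with a pair of head-related impulse responses (HRIRs, the time-domain form of an HRTF). This is a minimal illustration, not part of any particular AR SDK; the function name and the placeholder HRIRs are assumptions for the example.

```python
# Minimal binaural rendering sketch: filter a mono signal with left/right
# HRIRs for one source direction. Placeholder HRIRs only; a real system
# would use measured responses and update them as the head moves.
import numpy as np
from scipy.signal import fftconvolve

def binauralise(mono, hrir_left, hrir_right):
    """Return a (n_samples, 2) stereo array for headphone playback."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

fs = 44100
mono = np.random.randn(fs // 2)            # 0.5 s of noise as a test source
hrir_l = np.zeros(64); hrir_l[0] = 1.0     # sound arrives at the left ear first
hrir_r = np.zeros(64); hrir_r[30] = 0.7    # ~0.7 ms later, and quieter, at the right
stereo = binauralise(mono, hrir_l, hrir_r)
```

Even these crude interaural time and level differences shift the perceived source to the listener's left; a measured HRTF adds the spectral detail needed for elevation and front/back discrimination.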

One approach to customising an HRTF for an individual AR headset wearer is direct measurement: microphones are placed inside the user's ears, and data is recorded on how sound waves hit the body as audio is played from all directions around the user. Because this process is too onerous to do at scale, “generalist” HRTFs are encoded into the algorithms of many spatial audio technologies. (The generalist HRTF data comes from publicly available datasets, obtained from a representative set of people's heads and torsos.) However, emerging AR technologies are equipped with sensors designed to support a custom “HRTF anatomy calibration” process.
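As a sketch of how such a dataset might be queried at runtime: given HRIRs measured on a sparse grid of directions around the head, a renderer can pick the measured pair closest to the virtual source's direction. The array layout below is a hypothetical simplification of how public datasets organise their measurements, not any specific dataset's schema.

```python
# Nearest-neighbour HRIR lookup over a hypothetical measurement grid.
import numpy as np

def direction_to_unit(az_deg, el_deg):
    """Convert azimuth/elevation in degrees to a unit direction vector."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def nearest_hrir(dataset_dirs, dataset_hrirs, az_deg, el_deg):
    """dataset_dirs : (N, 2) measured (azimuth, elevation) in degrees
       dataset_hrirs: (N, 2, L) left/right impulse responses
       Returns the HRIR pair measured closest to the requested direction."""
    target = direction_to_unit(az_deg, el_deg)
    units = np.array([direction_to_unit(a, e) for a, e in dataset_dirs])
    idx = int(np.argmax(units @ target))   # max dot product = smallest angle
    return dataset_hrirs[idx]

# Example with stand-in random data in place of real measurements.
dirs = np.array([[az, el] for el in (-30, 0, 30) for az in range(0, 360, 15)])
hrirs = np.random.randn(len(dirs), 2, 128)
hrir_l, hrir_r = nearest_hrir(dirs, hrirs, az_deg=42.0, el_deg=10.0)
```

Nearest-neighbour selection is the simplest option; practical spatialisers typically interpolate between neighbouring measurements to avoid audible jumps as the head turns.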

This proposal aims to investigate the ability to acoustically conform the sound in an AR experience to the environment in which the device places it. All currently available AR technologies, handheld and head-worn, are restricted in the field of view over which they can sense the room the device is in, whether through inside-out tracking or stereo cameras capable of depth perception. Mobile AR can detect surfaces and allow movement around AR objects using the dual-camera array included on the iPhone X and AR-ready iPads (2017). HoloLens, Meta, and other AR headsets can detect the extent and depth of a room. Lower-end mobile devices with only one camera can still detect surfaces, but do not offer the same level of depth detection.

This work will investigate the efficiency and efficacy of capturing or calibrating the space and using this data for advanced acoustical rendering (similar to how a user must calibrate a VR setup so that the external cameras ‘know’ where the headset is in space). This type of calibration can either be stored on a mobile device to be used for processing, or a real-time geometry-based rendering algorithm can be executed on untethered devices. Currently, physical rendering is only possible with a small number of software spatialisers (such as Steam Audio and NVIDIA's).
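As an illustration of what geometry-based rendering from a room scan could involve, the sketch below assumes the scan has been reduced to an axis-aligned “shoebox” room and uses the classic image-source method to derive the direct path and the six first-order wall reflections. The function and parameter names are assumptions for illustration, not any spatialiser's API.

```python
# First-order image-source sketch: each wall reflection behaves like a
# mirrored copy of the source. Returns (delay, gain) taps that could
# drive a multi-tap delay line.
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second at room temperature

def first_order_taps(src, listener, room, absorption=0.3):
    """src, listener: (3,) positions inside the room; room: (3,) dimensions.
    Returns a list of (delay_seconds, gain) taps, direct path first."""
    src, listener, room = map(np.asarray, (src, listener, room))
    images = [src.astype(float)]                 # direct sound
    for axis in range(3):
        for wall in (0.0, float(room[axis])):    # mirror across each wall
            img = src.astype(float)
            img[axis] = 2.0 * wall - img[axis]
            images.append(img)
    taps = []
    for i, img in enumerate(images):
        dist = float(np.linalg.norm(img - listener))
        gain = 1.0 / max(dist, 1e-3)             # 1/r spreading loss
        if i > 0:
            gain *= 1.0 - absorption             # one wall bounce
        taps.append((dist / SPEED_OF_SOUND, gain))
    return taps

# Example: a 5 m x 4 m x 3 m room recovered from a device scan.
print(first_order_taps(src=(1.0, 1.0, 1.5), listener=(4.0, 3.0, 1.5), room=(5.0, 4.0, 3.0)))
```

Higher-order reflections, frequency-dependent absorption, and non-shoebox geometry are where the real computational cost, and the efficiency questions this project raises, come in.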

This idea could be expanded to:
Sonification of the world visible through the AR window
Haptic feedback in AR / Mobile AR
Understanding and modelling the contents in the AR window

Supervisory team:
Dr Parisa Eslambolchilar (School of Computer Science and Informatics), [Email Address Removed]
Dr Daniel Finnegan (School of Computer Science and Informatics), [Email Address Removed]
Prof Dave Marshall (School of Computer Science and Informatics), [Email Address Removed]

Funding Notes

Self-Funding Students Only.

Academic Criteria:
A 2:1 Honours undergraduate degree or a master's degree in computing or a related subject. Applicants with appropriate professional experience are also considered. Degree-level mathematics (or equivalent) is required for research in some project areas.
Applicants for whom English is not their first language must demonstrate proficiency by obtaining an IELTS score of at least 6.5 overall, with a minimum of 6.0 in each skills component.

How to Apply:

To apply, please complete the online application at https://www.cardiff.ac.uk/study/postgraduate/research/programmes/programme/computer-science-and-informatics, select the ‘self-funding’ option, and state the project title and supervisor names.
