Lag-free audio communication is a pressing issue for developers of multiplayer environments. Audio plays a vital role in creating a sense of immersion within a virtual reality (VR) environment, and interactive audio systems now allow a user's voice to be heard as part of the virtual environment. When users interact vocally within a virtual world, the issue of latency becomes even more complex, as both the latency of the network and that of the audio system itself must be considered.
A robust, purpose-designed approach is clearly needed for vocal interaction in virtual reality, not only to define the latency thresholds but also to ascertain what is feasible with current and emerging technologies.
This project will explore the margin of error, i.e. the maximum audio delay that can be tolerated within a multi-user virtual audio system while vocal interaction still feels natural and voice production is perceived as synchronous. Working with New Moon Studios, the student will implement and test multi-user networked versions of an existing system designed in the AudioLab, which allows a single user to sing as part of a pre-recorded choir, interacting with the acoustic as though they were in the original performance venue: the user's voice signal is convolved with impulse responses recorded in the venue, and head tracking enables sound source rotation within the virtual space. Whilst designed for a group singing experience, the technology is applicable across multi-user VR applications, improving the sense of realism and immersion for a speaker in the given environment.
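The core of the existing system described above is convolution of the live voice signal with a room impulse response. As a minimal sketch (not the AudioLab implementation itself, and with illustrative signal lengths), FFT-based convolution of a dry signal with an impulse response looks like this:

```python
# Minimal sketch of convolution reverb: applying a room impulse
# response (IR) to a dry voice signal. All names and lengths here
# are illustrative, not taken from the project's actual system.
import numpy as np

def convolve_with_ir(dry: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Apply a room impulse response to a dry signal via FFT convolution."""
    n = len(dry) + len(ir) - 1              # length of the full linear convolution
    nfft = 1 << (n - 1).bit_length()        # next power of two >= n for the FFT
    wet = np.fft.irfft(np.fft.rfft(dry, nfft) * np.fft.rfft(ir, nfft), nfft)
    return wet[:n]

# Sanity check: a unit impulse passed through the IR returns the IR itself.
ir = np.array([1.0, 0.5, 0.25])
dry = np.array([1.0, 0.0, 0.0, 0.0])
wet = convolve_with_ir(dry, ir)
```

A real-time system would instead process the signal block by block (e.g. partitioned convolution), since the full signal is not available in advance.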
The student will work to optimise the system so that two users in remote locations can interact in real time, singing together. If this can be achieved, other vocal interactions applicable across multi-user gaming applications, such as speech, will fall well within the thresholds for naturalistic interaction.
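To make the real-time goal concrete, the one-way delay between two remote singers is roughly the sum of the audio buffering at each end plus the network transit time. The figures below are assumptions for illustration only, not measurements from this project:

```python
# Illustrative one-way latency budget for two remote singers.
# Buffer sizes and the network figure are assumptions, chosen only
# to show how the components add up.
SAMPLE_RATE = 48_000          # Hz
BUFFER_SIZE = 128             # samples per audio callback

def buffer_latency_ms(buffer_size: int, sample_rate: int) -> float:
    """One buffer's worth of delay, in milliseconds."""
    return 1000.0 * buffer_size / sample_rate

# A plausible one-way path: capture buffer + playback buffer + network.
capture = buffer_latency_ms(BUFFER_SIZE, SAMPLE_RATE)   # ~2.7 ms
playback = buffer_latency_ms(BUFFER_SIZE, SAMPLE_RATE)  # ~2.7 ms
network_one_way = 15.0                                  # ms, assumed
total = capture + playback + network_one_way            # ~20.3 ms
```

Even with small audio buffers, the network term dominates, which is why the maximum tolerable delay for perceived-synchronous singing is the critical figure this project sets out to establish.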
The project is partnered with New Moon Studios (https://newmoonstudios.co.uk/) through the XR Stories AHRC Creative Cluster R&D Partnership (https://xrstories.co.uk). The successful candidate will be supervised by Dr Helena Daffern and Dr Gavin Kearney in the Department of Electronic Engineering AudioLab (https://audiolab.york.ac.uk/) at the University of York, with additional supervision from Andrew Nye at New Moon Studios, and will spend short periods of time with the XR Stories team in York. There will also be a two-week placement at New Moon Studios during the 12-month study period.
The successful applicant should have a strong interest in sound, music, and immersive technology; excellent programming skills; and an understanding of digital signal processing (DSP), communication networks, and hardware for immersive experiences.
Academic entry requirements:
Candidates must have (or expect to obtain) a minimum of a UK upper second-class honours degree (2:1) or equivalent in Computer Science, Electronic Engineering, Music Technology, or a related subject.
The MSc is due to start 1st October 2019.
How to apply:
Applicants must apply via the University’s online application system at https://www.york.ac.uk/study/postgraduate-research/apply/. Please read the application guidance first so that you understand the various steps in the application process. To apply, please select the MSc by Research in Electronic Engineering or the MSc by Research in Music Technology for October 2019 entry. Please specify in your application that you would like to be considered for this studentship.