Acquired hearing loss affects over 250 million adults worldwide. Far fewer use hearing aids (HAs), and fewer still report successful outcomes in terms of regular use and satisfaction, a problem traced mostly to insufficient benefit, especially in noisy environments. The economic cost of hearing loss to the EU has been estimated at no less than £30 billion per year.
Signal processing research has not significantly improved speech intelligibility for HA users in noise over the last 30 years, but recently novel deep neural network (DNN) methods have been shown to substantially outperform conventional approaches and promise breakthrough solutions in the coming years.
In this project we will improve DNNs for speech enhancement by using input features derived from our models of human auditory perception, and we will adapt them to hearing aids for the first time. We have already demonstrated, in a collaboration between the Institute of Sound and Vibration Research (in our Audiology clinic) and the School of Computer Science, that this approach can improve speech intelligibility for hearing-impaired listeners in the laboratory by up to 15%.
Several aspects of this project are particularly challenging: the complexity of the input features of natural speech is extremely high when mimicking the desired human capabilities. For the first time, we aim to use features that reflect neuronal models of the human brain for pitch perception and computational auditory scene analysis. Special tools need to be developed to reduce the dimensionality of the input features before they are fed into the DNN. Furthermore, training the required deep neural networks on noisy speech demands substantial computing power.
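To illustrate the kind of pipeline described above, here is a minimal, self-contained sketch of mask-based DNN speech enhancement with dimensionality reduction. Everything in it is an assumption for illustration only: the random stand-in features (a real system would use auditory-model outputs such as gammatone filterbank channels), the PCA reduction, the network size, and the ratio-mask target are all hypothetical choices, not the project's actual methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: "noisy" feature frames and target time-frequency gain masks.
# In a real system the features would come from an auditory perception model.
n_frames, n_features, n_reduced = 200, 64, 16
X = rng.normal(size=(n_frames, n_features))
# Hypothetical training target: a ratio-mask-style gain in [0, 1] per channel.
T = 1.0 / (1.0 + np.exp(-(X @ rng.normal(size=(n_features, n_features))) * 0.1))

# Dimensionality reduction via PCA -- one simple example of the kind of
# tool needed to shrink the high-dimensional input before the DNN.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:n_reduced].T  # reduced features fed to the network

# One-hidden-layer network predicting the mask from the reduced features.
W1 = rng.normal(scale=0.1, size=(n_reduced, 32))
W2 = rng.normal(scale=0.1, size=(32, n_features))

def forward(Z, W1, W2):
    H = np.maximum(Z @ W1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(H @ W2)))   # sigmoid -> gains in [0, 1]

# A few steps of plain gradient descent on mean squared error.
lr = 0.05
for _ in range(200):
    H = np.maximum(Z @ W1, 0.0)
    Y = 1.0 / (1.0 + np.exp(-(H @ W2)))
    dY = (Y - T) * Y * (1.0 - Y) / n_frames  # MSE + sigmoid gradient
    dH = (dY @ W2.T) * (H > 0)               # backprop through ReLU
    W2 -= lr * H.T @ dY
    W1 -= lr * Z.T @ dH

mask = forward(Z, W1, W2)
# Multiplying the noisy spectrum by this mask would attenuate noisy channels.
```

In a deployed hearing aid, the same mask would be applied frame by frame to the noisy signal's spectral representation before resynthesis; the sketch only shows the feature-reduction and mask-prediction steps.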
The specific challenge of this project is to apply next-generation computing techniques to the real world of audiological hearing research, helping to develop the hearing aids of the future.
The results of this project are of considerable interest to hearing researchers as well as speech researchers (e.g. in automatic speech recognition), and we will publish all algorithms to ensure reproducibility and enable collaboration.
Our project is highly interdisciplinary. Its impact lies in speech research and audiology, where the results can directly help people communicate better.
If you wish to discuss any details of the project informally, please contact Stefan Bleeck, Institute of Sound and Vibration Research, Email: [email protected], Tel: +44 (0) 2380 59 6682.
This project is run through participation in the EPSRC Centre for Doctoral Training in Next Generation Computational Modelling (http://ngcm.soton.ac.uk). For details of our 4-year PhD programme, please see http://www.findaphd.com/search/PhDDetails.aspx?CAID=331&LID=2652
For details of available projects, see http://www.ngcm.soton.ac.uk/projects/index.html
Visit our Postgraduate Research Opportunities Afternoon to find out more about Postgraduate Research study within the Faculty of Engineering and the Environment: http://www.southampton.ac.uk/engineering/news/events/2016/02/03-discover-your-future.page