For the one in six people who struggle to hear, and thus to communicate, seeing a talker's face provides additional information that boosts their ability to understand speech, particularly in noisy environments where hearing aids often provide little benefit. Surprisingly, little account is taken of this visual information when treating hearing impairments.
Part of the difficulty is that the benefits are variable: lip-reading ability differs between individuals. Different profiles of hearing loss, through age or noise exposure, reduce the available auditory information in different ways. Less obviously, people may differ in how they combine the auditory and visual information, and individual variability in cognition and language may influence this; these factors are themselves influenced by age and hearing loss. Thus, if we are to help individuals get the most benefit from seeing a face, one size is unlikely to fit all.
This project will study how individuals vary in this process, with a view to characterising and understanding the benefits that individuals derive from visual cues to speech: with normal hearing, with impaired hearing, and across ages. We have been developing new statistical models that characterise how audiovisual speech perception depends both on the information from the individual senses and on how that information is combined. Applying these models to detailed experimental measures from individual participants will allow us to characterise these differences and the circumstances under which they vary. Ultimately, this may facilitate individualised treatment of hearing problems which accounts for, and maximises, communication as a multisensory process.
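The project's own models are not specified here, but a common baseline in the multisensory-integration literature is maximum-likelihood cue combination, in which each sense's estimate is weighted by its reliability (inverse variance). A minimal sketch, with all numerical values purely illustrative:

```python
import numpy as np

def combine(est_a, var_a, est_v, var_v):
    """Reliability-weighted (maximum-likelihood) combination of an
    auditory estimate and a visual estimate of the same quantity."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # auditory weight
    w_v = 1 - w_a                                # visual weight
    est_av = w_a * est_a + w_v * est_v           # combined estimate
    var_av = 1 / (1 / var_a + 1 / var_v)         # combined variance
    return est_av, var_av

# A listener with an unreliable auditory cue (high variance) leans
# more heavily on the visual cue, and the combined variance is lower
# than either single-cue variance.
est, var = combine(est_a=0.2, var_a=4.0, est_v=0.8, var_v=1.0)
# est -> 0.68, var -> 0.8
```

Individual differences could then be framed as differences in the single-cue variances, or as departures from this optimal weighting rule.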
We are looking for a motivated student to apply experimental psychology methods and theoretical models of multisensory integration to a clinical problem. Based in NTU Psychology, this interdisciplinary project would involve lab and online data collection, working with hearing-impaired participants, and applying advanced analytical methods to the resulting data. It would suit graduates of a wide range of quantitative disciplines (psychology, maths, physics) who are interested in applying rigorous scientific methods to a real-world problem which affects millions in the UK alone.
Stacey, P. C., Kitterick, P. T., & Sumner, C. J. (2016). The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure. Hearing Research, 336, 17–28.
Tye-Murray, N., Spehar, B., Myerson, J., Hale, S., & Sommers, M. (2016). Lipreading and audiovisual speech recognition across the adult lifespan: Implications for audiovisual integration. Psychology and Aging, 31(4), 380–389. https://doi.org/10.1037/pag0000094