For individuals whose speech is impaired or lost entirely as a result of neurodegenerative diseases such as motor neurone disease (MND) and amyotrophic lateral sclerosis (ALS), there is an urgent need for revolutionary technologies built on the latest advances in automatic speech recognition (ASR) and brain signal processing (BSP). This project is an exciting opportunity to spearhead the future of brain-computer interfaces (BCIs) by developing the first ‘true’ brain-to-text technologies that require only imagined speech. State-of-the-art invasive BCIs have opened a new window for decoding speech directly from intracranial brain signals, such as electrocorticography (ECoG), which records from sensor arrays placed directly on the surface of the cortex, and stereo-electroencephalography (SEEG), which uses depth electrodes implanted within the brain. Research using sensors placed on the scalp (surface electroencephalography, EEG) produces data of much lower resolution and fidelity, so speech decoding rapidly falls to chance-level accuracy once classification tasks extend beyond a simple binary selection. The future of BCIs therefore lies in intracranial devices, and this project will focus on ECoG and SEEG signals to decode covert (imagined) speech.
This project will leverage cutting-edge machine learning and deep learning technologies for speech decoding. This remains an excitingly nascent and under-explored field, which the project aims to address by developing novel feature representations of the brain signal, new prediction coefficients, and sequence-to-sequence models for decoding imagined natural speech.
Advanced machine learning methods and novel AI paradigms will be explored to process intracranial brain signals (SEEG/ECoG) and enhance the efficacy of BCIs for speech decoding. Deep learning-based methods may also be used to improve performance.
In addition, the ethical and safety issues of collecting intracranial brain data from patients, including epilepsy patients, will require research into how medical devices are regulated and how designers should, or should not, be held accountable for speech decoding devices.
This research project will be carried out as part of an interdisciplinary integrated PhD in the UKRI Centre for Doctoral Training in Accountable, Responsible and Transparent AI (ART-AI). The ART-AI CDT aims to produce interdisciplinary graduates who can act as leaders and innovators with the knowledge to make the right decisions on what is possible, what is desirable, and how AI can be ethically, safely and effectively deployed. We value people from different life experiences with a passion for research. The CDT's mission is to graduate diverse specialists with the perspectives needed to go out into the world and make a difference.
Candidates are expected to have, or be near completion of, a first or upper second-class honours degree or a master's degree in a relevant subject.
Informal enquiries about the project should be directed to Dr Dingguo Zhang.
Formal applications should include a research proposal and be made via the University of Bath’s online application form. Enquiries about the application process should be sent to firstname.lastname@example.org.
Start date: 2 October 2023.