
Decoding Speech using Invasive Brain-Computer Interfaces


   Centre for Accountable, Responsible and Transparent AI

Applications accepted all year round | Competition Funded PhD Project (Students Worldwide)

About the Project

For individuals who have impaired speech, or have lost speech entirely, as a result of neurodegenerative diseases such as motor neurone disease (MND) or amyotrophic lateral sclerosis (ALS), there is an urgent need for revolutionary technologies built on the latest advances in automatic speech recognition (ASR) and brain signal processing (BSP). This project is an exciting opportunity to spearhead the future of brain-computer interfaces (BCIs) by developing the first 'true' brain-to-text technologies, which require only imagined speech. State-of-the-art invasive BCIs have opened a new window for decoding speech directly from intracranial brain signals: electrocorticography (ECoG) collects data from sensor arrays placed directly on the surface of the cortex, while stereo-electroencephalography (SEEG) uses depth electrodes within the brain. By contrast, sensors placed on the scalp (surface electroencephalography, EEG) produce data of very low resolution and fidelity, so speech-decoding accuracy falls rapidly to chance level once classification tasks extend beyond a simple binary selection. The future of speech BCIs therefore lies in intracranial devices, and this project focuses on ECoG and SEEG signals to decode covert (imagined) speech.

This project will leverage cutting-edge machine learning and deep learning for speech decoding. This remains an excitingly nascent and under-explored field, which the project aims to address by developing novel feature representations of the brain signal, new prediction coefficients, and sequence-to-sequence models for decoding imagined natural speech.

Advanced machine learning methods and novel AI paradigms will be explored to process intracranial brain signals (SEEG/ECoG) and enhance the efficacy of BCIs for speech decoding; deep learning-based methods may also be used to improve performance further.
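To give a flavour of what processing intracranial signals involves, the sketch below extracts high-gamma band power, a feature commonly used in ECoG/SEEG speech decoding, from a multi-channel recording. This is purely illustrative and not the project's actual pipeline; the function name, filter order, band edges and window length are all assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def high_gamma_features(signal, fs, win_s=0.05):
    """Average high-gamma (70-150 Hz) power in non-overlapping windows.

    `signal` is a (channels, samples) array and `fs` the sampling rate in Hz.
    A hypothetical first stage of a speech-decoding pipeline: the resulting
    feature frames could feed a sequence model for classification.
    """
    # Zero-phase band-pass filter in the high-gamma band.
    sos = butter(4, [70, 150], btype="bandpass", fs=fs, output="sos")
    hg = sosfiltfilt(sos, signal, axis=-1)
    # Mean power in windows of win_s seconds.
    win = int(win_s * fs)
    n = signal.shape[-1] // win
    power = (hg[..., : n * win] ** 2).reshape(*signal.shape[:-1], n, win)
    return power.mean(axis=-1)

# Demo on simulated data: 8 channels, 10 s at 1 kHz -> (8, 200) feature frames.
rng = np.random.default_rng(0)
ecog = rng.standard_normal((8, 10_000))
features = high_gamma_features(ecog, fs=1000)
print(features.shape)
```

In a full decoder, frames like these would typically be the input to a sequence-to-sequence model mapping neural activity to phonemes or text.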

In addition, the ethical and safety issues raised by collecting intracranial brain data from patients, including epilepsy patients, will require research into how medical devices are regulated and into how the designers of speech-decoding devices should, or should not, be held accountable.

This research project will be carried out as part of an interdisciplinary integrated PhD in the UKRI Centre for Doctoral Training in Accountable, Responsible and Transparent AI (ART-AI). The ART-AI CDT aims to produce interdisciplinary graduates who can act as leaders and innovators with the knowledge to make the right decisions on what is possible, what is desirable, and how AI can be ethically, safely and effectively deployed. We value people from different life experiences with a passion for research. The CDT's mission is to graduate specialists with diverse perspectives who can go out into the world and make a difference.

Candidates are expected to have, or be near completion of, either a first or upper-second class honours degree in a relevant subject, or a master's degree in a relevant subject.

Informal enquiries about the project should be directed to Dr Dingguo Zhang.

Formal applications should include a research proposal and be made via the University of Bath’s online application form. Enquiries about the application process should be sent to .

Start date: 2 October 2023.


Funding Notes

ART-AI CDT studentships are available on a competition basis and applicants are advised to apply early as offers are made from January onwards. Funding will cover tuition fees and maintenance at the UKRI doctoral stipend rate (£17,668 per annum in 2022/23, increased annually in line with the GDP deflator) for up to 4 years.
We also welcome applications from candidates who can source their own funding.

