The UKRI CDT in Artificial Intelligence, Machine Learning and Advanced Computing (AIMLAC) aims to form the next generation of AI innovators across a broad range of STEMM disciplines. The CDT provides advanced multi-disciplinary training in an inclusive, caring and open environment that nurtures each individual student to achieve their full potential. Applications are encouraged from candidates from diverse backgrounds who can positively contribute to the future of our society.
The UK Research and Innovation (UKRI) fully funded scholarships cover the full cost of tuition fees, a UKRI standard stipend of £15,921 per annum, and additional funding for training, research and conference expenses. The scholarships are open to both UK and international candidates.
The closing date for applications is 12 February 2022. For further information on how to apply, please click here and select the "UKRI CDT Scholarship in AIMLAC" tab.
It is well known that, when speaking with their asthmatic patients, healthcare professionals can hear abnormalities in the respiratory tract and judge their severity. However, it is difficult to automate this process because of the challenge of differentiating speech and breathing automatically in recorded voice data. Very little work has been done on speech breathing, and traditional statistical methods are insufficient to automate the process.
Previously, we designed a threshold-based mechanism to separate speech and breathing in 323 voice recordings from 26 subjects (healthy controls, asthmatics, smokers) collected in a controlled environment, and developed three machine learning (ML) models: a regression model to predict lung function (percentage predicted forced expiratory volume in 1 second, FEV1%; mean square error = 10.86), a multi-class classification model to predict the severity of lung function abnormality (accuracy = 73%), and a binary classification model to predict lung function abnormality (accuracy = 85%).
(doi: https://doi.org/10.1101/2021.05.11.21256997; under review at Frontiers in Digital Health)
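As an illustration of the threshold idea, the following is a minimal Python sketch of frame-level speech/breath labelling by short-time energy. The frame sizes, energy measure and threshold ratio here are illustrative assumptions, not the published mechanism:

```python
import numpy as np

def separate_speech_breath(signal, sr, frame_ms=25, hop_ms=10, threshold_ratio=0.1):
    """Label each frame of a mono signal as 'speech' or 'breath' by comparing
    its short-time energy to a fraction of the peak frame energy.
    A hypothetical stand-in for the threshold-based mechanism described above.
    """
    frame = int(sr * frame_ms / 1000)   # samples per analysis frame
    hop = int(sr * hop_ms / 1000)       # samples between frame starts
    energies = np.array([
        np.sum(signal[i:i + frame] ** 2)
        for i in range(0, len(signal) - frame + 1, hop)
    ])
    threshold = threshold_ratio * energies.max()
    # High-energy frames are assumed to be voiced speech; low-energy frames breath.
    return np.where(energies >= threshold, "speech", "breath")
```

A fixed energy threshold of this kind works when recording conditions are stable, which is consistent with its failure on independently collected samples noted below.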
However, when the threshold method was applied to independent samples, it proved ineffective at separating the speech and breathing parts. This leaves the challenge of automatically detecting speech and breathing in recorded voice samples irrespective of the surrounding environment and recording device. A more robust mechanism is needed to generalise the separation of speech and breath in recorded voice, one that can handle these variations as well as differences in accent, language and culture, and whose output can then be used to predict respiratory conditions. ML can play an important role in developing both the separation method and the prediction models. Therefore, the aim of this project is to develop an automated and stable system, using ML methods, to detect speech and breathing in recorded voice samples and predict respiratory conditions with high accuracy.
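One possible direction for a learned separator, sketched here as a toy example: extract simple per-frame cues (log-energy and zero-crossing rate) and let a classifier trained on labelled frames draw the boundary, instead of a fixed threshold. The feature choice and the nearest-centroid classifier are illustrative assumptions only, not the project's method:

```python
import numpy as np

def frame_features(signal, sr, frame_ms=25):
    """Per-frame log-energy and zero-crossing rate: two simple cues that
    tend to differ between voiced speech and breath noise."""
    n = int(sr * frame_ms / 1000)
    feats = []
    for i in range(0, len(signal) - n + 1, n):
        f = signal[i:i + n]
        log_e = np.log(np.sum(f ** 2) + 1e-10)
        zcr = np.mean(np.abs(np.diff(np.sign(f)))) / 2  # sign-flip rate
        feats.append([log_e, zcr])
    return np.array(feats)

class NearestCentroid:
    """Tiny stand-in for a learned separator: classify each frame by the
    nearer class centroid in standardised feature space."""
    def fit(self, X, y):
        self.mu, self.sd = X.mean(0), X.std(0) + 1e-10
        Xs = (X - self.mu) / self.sd
        self.classes = np.unique(y)
        self.centroids = np.array([Xs[y == c].mean(0) for c in self.classes])
        return self

    def predict(self, X):
        Xs = (X - self.mu) / self.sd
        d = np.linalg.norm(Xs[:, None, :] - self.centroids[None], axis=2)
        return self.classes[np.argmin(d, axis=1)]
```

Because the decision boundary is learned from labelled data rather than fixed in advance, a model of this shape could in principle be retrained or adapted for new devices, environments, accents and languages.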
• To explore different ML methods capable of separating speech and breathing in recorded voice samples, whether collected in controlled or uncontrolled environments. These methods will be validated on 200 independent samples (recorded using mobile phones in uncontrolled environments) to assess their general usability and robustness to background noise.
• Once a suitable separation method is selected, to perform feature extraction and develop models to predict lung function and/or the severity of lung abnormalities.
• To apply the established workflow to recorded voice samples generated by our partners in the UK, Australia, Brazil and Sri Lanka to observe the effect of accent, language and culture.
• To apply the developed ML framework to predicting related pulmonary function and diseases (such as lung function and asthma from Australian and Brazilian samples, COPD from UK samples, and COVID-19 from Sri Lankan samples).
• To develop an online platform and/or a mobile app prototype based on the ML framework and make it available to the wider community. This will help to collect voice recordings independently from the public, together with appropriate questionnaires relating to respiratory health, and pave the way for further analysis.
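The prediction-model objective above can be sketched at its simplest as a least-squares regression from acoustic features to FEV1%. The features, coefficients and target values below are synthetic placeholders, not project data:

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with a bias term: a minimal stand-in for the
    FEV1% regression model described in the text (inputs are synthetic)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_linear(w, X):
    """Apply the fitted weights (last entry is the intercept)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w
```

In the project itself this slot would be filled by the chosen ML regressors and classifiers, evaluated with the same metrics quoted earlier (mean square error for FEV1%, accuracy for the severity and abnormality classifiers).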
The findings from this project will not only address the challenge of appropriately separating speech and breathing in voice recordings, so that features can be extracted to build highly accurate prediction models, but also have the potential to be implemented as a smartphone application, offering a convenient and straightforward way for individuals to assess their respiratory health.