Artificial intelligence (AI) has been applied successfully across many domains following the breakthrough of deep learning (DL) in recent years. Despite impressive DL performance in applications such as face recognition and target tracking, AI research still faces critical challenges, including a lack of transparency, poor model generalisation, and an absence of causal reasoning. Some of these difficulties are caused by the black-box nature of DL models, which often contain millions of parameters.
Explainable AI (XAI) aims to address these problems by developing approaches for understanding black-box models so that reliable estimates/predictions can be made. One main strand of state-of-the-art XAI research focuses on analysing the weights of DL models to evaluate the contribution of input features to model predictions. This project aims to extend that research direction from a Bayesian modelling perspective, exploring whether desired model outputs, with quantified uncertainty, can be produced by adjusting input feature patterns/values based on joint multivariate distributions and Bayesian inference.
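To give a flavour of the Bayesian inference and uncertainty quantification mentioned above, here is a minimal, self-contained sketch (illustrative only, not the project's method): a grid-approximation posterior for a single Bernoulli parameter, summarised by a posterior mean and a central credible interval. All function names and the toy data are hypothetical.

```python
def posterior_grid(successes, trials, grid_size=1001):
    """Grid-approximate the posterior of a Bernoulli parameter theta,
    assuming a uniform prior over [0, 1]."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    # Unnormalised posterior: binomial likelihood times a flat prior
    post = [t ** successes * (1 - t) ** (trials - successes) for t in grid]
    total = sum(post)
    return grid, [p / total for p in post]


def summarise(grid, post, mass=0.95):
    """Return the posterior mean and a central credible interval
    covering `mass` of the posterior probability."""
    mean = sum(t * p for t, p in zip(grid, post))
    cdf, lo, hi = 0.0, None, None
    for t, p in zip(grid, post):
        cdf += p
        if lo is None and cdf >= (1 - mass) / 2:
            lo = t
        if hi is None and cdf >= 1 - (1 - mass) / 2:
            hi = t
    return mean, (lo, hi)


# Toy data: 7 successes out of 10 trials
grid, post = posterior_grid(successes=7, trials=10)
mean, (lo, hi) = summarise(grid, post)
print(f"posterior mean = {mean:.3f}, 95% credible interval = [{lo:.3f}, {hi:.3f}]")
```

The credible interval is the uncertainty quantification: rather than a single point prediction, inference yields a distribution over plausible parameter values, which is the kind of output the project would attach to black-box model predictions.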
The project is expected to contribute to the algorithmic and theoretical development of explainable and trustworthy AI, as well as to improve the reliability and efficiency of statistical machine learning applications such as biomedical diagnostics, environmental seismic inversion, and smart cities. Outcomes of this project may include: (a) research papers that improve our understanding of machine learning black-box systems from a Bayesian statistical learning perspective; and (b) application-based software prototypes that both identify the influence (and/or sensitivity) of input control variables and simulate potential outcomes in black-box systems.
This project is associated with the UKRI Centre for Doctoral Training (CDT) in Accountable, Responsible and Transparent AI (ART-AI). We value people with different life experiences and a passion for research. The CDT's mission is to graduate diverse specialists whose perspectives enable them to go out into the world and make a difference.
Applicants should hold, or expect to receive, a first or upper-second class honours degree in statistics, computer science, mathematics, information engineering, or a closely related discipline. A master's-level qualification, a strong mathematical or statistical academic profile, or good coding skills would be an advantage. Prior knowledge of machine learning is desirable but not required.
Informal enquiries about the research should be directed to Dr Xi Chen: [Email Address Removed].
Formal applications should be made via the University of Bath’s online application form and be accompanied by a research proposal. Enquiries about the application process should be sent to [Email Address Removed].
Start date: 3 October 2022.