
  Explainable artificial intelligence in Bayesian machine learning research and applications

   Centre for Accountable, Responsible and Transparent AI

This project is no longer listed and may not be available.

Supervisor: Dr Xi Chen. No more applications being accepted. Self-funded PhD students only.

About the Project

Artificial intelligence (AI) has been applied successfully across many domains following the breakthrough of deep learning (DL) in recent years. Despite DL's impressive performance in applications such as face recognition and target tracking, AI research still faces critical challenges, including a lack of transparency, poor model generalisation, and an absence of causal reasoning. Part of the difficulty stems from the black-box nature of DL models, which often contain millions of parameters.

Explainable AI (XAI) [1] aims to address these problems by developing approaches for understanding black-box models so that they produce reliable estimates and predictions. One main strand of state-of-the-art XAI research [2] focuses on analysing the weights of DL models to evaluate the contribution of input features to model predictions. This project aims to advance that research direction from a Bayesian modelling perspective, exploring whether desired model outputs, with quantified uncertainty, can be produced by adjusting input feature patterns/values based on a joint multivariate distribution and Bayesian inference.
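To make the idea of Bayesian uncertainty quantification concrete, the following is a minimal sketch, not the project's actual method: a conjugate Bayesian linear regression on toy data, where the posterior over weights gives both a point estimate of each input feature's contribution and a measure of uncertainty in that contribution. All data and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2*x1 - 1*x2 + noise (illustrative only).
X = rng.normal(size=(100, 2))
w_true = np.array([2.0, -1.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

# Conjugate Bayesian linear regression:
#   prior       w ~ N(0, alpha^-1 I)
#   likelihood  y | x, w ~ N(w^T x, beta^-1)
alpha, beta = 1.0, 100.0
S_inv = alpha * np.eye(2) + beta * X.T @ X   # posterior precision
S = np.linalg.inv(S_inv)                     # posterior covariance
m = beta * S @ X.T @ y                       # posterior mean of the weights

# The posterior mean approximates each feature's contribution, while the
# posterior covariance quantifies how certain we are about it.
x_new = np.array([1.0, 1.0])
pred_mean = m @ x_new
pred_var = 1.0 / beta + x_new @ S @ x_new    # predictive variance at x_new
print(m, pred_mean, np.sqrt(pred_var))
```

In a full DL setting the posterior is intractable and must be approximated (e.g. by variational inference or MCMC), but the same structure applies: predictions come with calibrated uncertainty rather than a single point value.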

The project is expected to contribute to the algorithmic and theoretical development of explainable and trustworthy AI, and to improve the reliability and efficiency of statistical machine learning applications such as biomedical diagnostics, environmental seismic inversion, and smart cities. Outcomes may include: (a) research papers that improve our understanding of machine learning black-box systems from a Bayesian statistical learning perspective; and (b) application-based software prototypes that both identify the influence (and/or sensitivity) of input control variables and simulate potential outcomes in black-box systems.

Applicants should hold, or expect to receive, a first or upper-second class honours degree in statistics, computer science, mathematics, information engineering, or a closely related discipline. A master's-level qualification, a strong mathematical or statistical academic profile, or good coding skills would be an advantage. Prior knowledge of machine learning is desirable but not required.

Informal enquiries about the research should be directed to Dr Xi Chen: [Email Address Removed].

Formal applications should be accompanied by a research proposal and made via the University of Bath’s online application form. Further information about the application process can be found here.

Start date: Between 8 January and 30 September 2024.


Funding Notes

We welcome applications from candidates who can source their own funding. Tuition fees for the 2023/24 academic year are £4,700 (full-time) for Home students and £26,600 (full-time) for International students. Information about eligibility for Home fee status is available from the University.


[1] Gunning, David. "Explainable Artificial Intelligence (XAI)." Defense Advanced Research Projects Agency (DARPA), 2017.
[2] Marcus, Gary. "The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence." arXiv preprint arXiv:2002.06177 (2020).
[3] Articles and resources at the Google Explainable AI site.
