
  StatXAI - Statistical approaches for explainable machine learning and artificial intelligence


   Department of Mathematics

This project is no longer listed on FindAPhD.com and may not be available.

Supervisor: Prof K Strimmer
No more applications being accepted
Competition Funded PhD Project (European/UK Students Only)

About the Project

Machine learning (ML) and artificial intelligence (AI) methodologies now permeate many parts of science, technology, health and society. At their core these methods rely on highly complex, high-dimensional mathematical and statistical models. Unfortunately, such models are generally hard to interpret, and their internal decision-making mechanisms are not transparent. The objective of the StatXAI project is to develop statistical tools to understand, and help create, explainable ML/AI methodologies and models.

Research in StatXAI will focus on four lines of work: i) investigating advanced nonparametric and algorithmic models such as neural networks and ensemble approaches; ii) exploring diverse strategies for explainable ML/AI using statistical approaches such as LIME/Anchors [1,2], LRP [3] and explainable embeddings [4]; iii) developing corresponding effective algorithms and implementing them in open-source software (R and Python); and iv) deploying and testing explainable models in biological and medical settings.
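To illustrate the flavour of line of work ii), the sketch below shows the core idea behind LIME-style local surrogate explanations [1]: perturb an instance, query the black-box model on the perturbations, weight the samples by proximity, and fit a weighted linear model whose coefficients serve as local feature importances. This is a minimal pedagogical sketch in plain NumPy, not the LIME library's API; all function and parameter names here are illustrative.

```python
import numpy as np

def local_surrogate(predict_fn, x, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate to predict_fn near the instance x.

    Returns per-feature local importance coefficients
    (a LIME-style sketch, not the actual LIME implementation).
    """
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise around x
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict_fn(Z)  # black-box predictions on the perturbations
    # Proximity weights: perturbations closer to x count more
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / kernel_width ** 2)
    # Weighted least squares via rescaling: solve sqrt(W) A beta = sqrt(W) y
    A = np.hstack([np.ones((n_samples, 1)), Z])  # intercept + features
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return beta[1:]  # drop the intercept; per-feature local importances

# Toy black box: f(z) = 3*z0 - 2*z1. Since it is already linear,
# the surrogate should recover the coefficients (3, -2) near any point.
f = lambda Z: 3 * Z[:, 0] - 2 * Z[:, 1]
coefs = local_surrogate(f, np.array([1.0, 1.0]))
```

For a genuinely nonlinear black box the recovered coefficients would instead approximate the local gradient around `x`, which is what makes such surrogates useful as explanations.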

The University of Manchester is a partner university of the Alan Turing Institute (ATI, the UK national institute for data science and artificial intelligence). PhD students will have the opportunity to interact and engage with the ATI.

Prerequisites:

Interest in modern machine learning methods and computational statistics, knowledge of multivariate statistics, and experience in programming in R and Python. Note that the focus of the PhD project is on methods and algorithms rather than on pure theory.


References

[1] Ribeiro et al. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. https://arxiv.org/abs/1602.04938
[2] Ribeiro et al. 2018. Anchors: High-Precision Model-Agnostic Explanations. https://homes.cs.washington.edu/~marcotcr/aaai18.pdf
[3] Montavon et al. 2018. Methods for Interpreting and Understanding Deep Neural Networks. Digital Signal Processing, 73:1-15. https://doi.org/10.1016/j.dsp.2017.10.011
[4] Qi and Li. 2017. Learning Explainable Embeddings for Deep Networks. http://www.interpretable-ml.org/nips2017workshop/papers/04.pdf
