Supervised machine learning models, such as artificial neural networks, are unstructured. They can be trained from data, but there is no way to include additional knowledge, e.g. that two features are independent conditioned on knowing a third. As a specific scenario, air pressure affects rain, which in turn causes your weather station to measure rainfall. But air pressure has no direct effect on the rain measurement, and the two are independent once you know whether it is raining. This knowledge cannot be encoded in traditional supervised machine learning.
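The weather scenario above can be checked empirically. The sketch below (all numbers are illustrative assumptions, not from the project) simulates the chain pressure → rain → measurement and shows that pressure and measurement are correlated marginally, but become uncorrelated once rain is fixed:

```python
import math
import random

random.seed(0)

def sample():
    # Hypothetical generative chain (illustrative numbers only):
    # air pressure -> rain -> rainfall measurement.
    pressure = random.gauss(1000.0, 10.0)                        # hPa
    p_rain = 1.0 / (1.0 + math.exp((pressure - 1000.0) / 5.0))   # low pressure => rain more likely
    rain = random.random() < p_rain
    measurement = random.gauss(5.0, 1.0) if rain else random.gauss(0.0, 0.1)
    return pressure, rain, measurement

def corr(xs, ys):
    # Pearson correlation, computed directly.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / math.sqrt(vx * vy)

data = [sample() for _ in range(20000)]

# Marginally, pressure and the measurement are strongly correlated...
marginal = corr([p for p, r, m in data], [m for p, r, m in data])
# ...but conditioned on observing rain, the dependence vanishes.
conditional = corr([p for p, r, m in data if r], [m for p, r, m in data if r])
print(marginal, conditional)
```

An unstructured model has no mechanism to be told this; a graphical model encodes it directly in its edges.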
Structure can be modelled using probabilistic graphical models. These are, however, limited by their inference algorithms (primarily belief propagation, Gibbs sampling and mean field variational methods), which require simple probability distributions that are often a poor fit to reality.
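For intuition, belief propagation on a chain of discrete variables reduces to passing small probability tables along the chain; those tables (or simple parametric densities in the continuous case) are exactly the restriction described above. A minimal sketch with made-up tables, verified against brute-force enumeration:

```python
import itertools

# Hypothetical 3-node chain A -> B -> C, binary states; all tables are made up.
pA = [0.6, 0.4]
pB_A = [[0.7, 0.3], [0.2, 0.8]]   # pB_A[a][b] = p(B=b | A=a)
pC_B = [[0.9, 0.1], [0.4, 0.6]]   # pC_B[b][c] = p(C=c | B=b)

# On a chain, the sum-product algorithm is just forward message passing:
msg_B = [sum(pA[a] * pB_A[a][b] for a in range(2)) for b in range(2)]      # p(B)
msg_C = [sum(msg_B[b] * pC_B[b][c] for b in range(2)) for c in range(2)]   # p(C)

# Brute-force check by enumerating the full joint distribution.
brute = [0.0, 0.0]
for a, b, c in itertools.product(range(2), repeat=3):
    brute[c] += pA[a] * pB_A[a][b] * pC_B[b][c]

print(msg_C, brute)
```

This is exact only because the messages here are tiny discrete tables; for continuous variables with arbitrary densities the messages have no such closed form, which is where this project begins.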
This PhD is about exploring more general representations, specifically arbitrary density estimates, that can represent any distribution. Prior work has almost entirely been particle based [1,2,3], and suffers from approximations and/or inefficient search that compromise performance. There are many possible improvements to the particle approaches; additionally, new alternatives can be explored.
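As a taste of the particle-based representations the prior work builds on, a distribution can be held as a set of samples and evaluated anywhere with a kernel density estimate, capturing shapes (here a bimodal one) that no single simple parametric family fits. All numbers below are illustrative:

```python
import math
import random

random.seed(1)

# A particle-based belief: samples approximating a bimodal continuous density.
particles = [random.gauss(-2.0, 0.5) for _ in range(500)] + \
            [random.gauss(3.0, 1.0) for _ in range(500)]

def kde(x, samples, bandwidth=0.3):
    """Gaussian kernel density estimate: represents an arbitrary density
    shape, at the cost of storing particles rather than a few parameters."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)

# High density at a mode, low density in the gap between the modes.
print(kde(-2.0, particles), kde(0.5, particles))
```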
Graphical models are a kind of explainable AI, as their structure can be human understandable. Unfortunately their underperformance relative to other models limits their usage, particularly in industry. Generalising their representative capabilities to match better known models is one step towards wider usage. Given the 'right to an explanation' requirement of the GDPR this may become legally necessary. Additionally, a graphical model introduces a modular structure that can be debugged. As AI makes its way into safety critical scenarios, such as self driving cars, graphical models may prove necessary for quality assurance.
This project is associated with the UKRI CDT in Accountable, Responsible and Transparent AI (ART-AI), which is looking for its first cohort of at least 10 students to start in September 2019. Students will be fully funded for 4 years (stipend, UK/EU tuition fees and research support budget). Further details can be found at: http://www.bath.ac.uk/research-centres/ukri-centre-for-doctoral-training-in-accountable-responsible-and-transparent-ai/
Desirable qualities in candidates include intellectual curiosity, a strong background in maths and programming experience.
Applicants should hold, or expect to receive, a First Class or good Upper Second Class Honours degree. A master’s level qualification would also be advantageous.
Informal enquiries about the project are welcome and should be directed to Dr. Tom SF Haines ([email protected]).
Enquiries about the application process should be sent to [email protected]
Formal applications should be made via the University of Bath’s online application form for a PhD in Computer Science: https://samis.bath.ac.uk/urd/sits.urd/run/siw_ipp_lgn.login?process=siw_ipp_app&code1=RDUCM-FP01&code2=0013
Start date: 23 September 2019.
[1] "Nonparametric Belief Propagation", by Sudderth et al., 2003.
[2] "Proteins, Particles, and Pseudo-Max-Marginals: A Submodular Approach", by Pacheco & Sudderth, 2015.
[3] "Stein Variational Message Passing for Continuous Graphical Models", by Wang et al., 2017.