
Transparency in neural ordinary differential equations


Centre for Accountable, Responsible and Transparent AI

Dr Pranav Singh
Applications accepted all year round
Competition Funded PhD Project (Students Worldwide)

About the Project

Neural ordinary differential equations (neural ODEs) are a relatively recent development (https://arxiv.org/abs/1806.07366) in machine learning, in which the hidden-state dynamics of certain neural network architectures are reformulated as the numerical solution of a differential equation. Techniques based on this approach promise several advantages over earlier deep learning implementations, such as greater memory efficiency and better extrapolation. Moreover, ODEs have been studied mathematically for many decades, so a wide variety of modern computational ODE solvers exist that are efficient, powerful, and backed by theoretical guarantees. These factors may confer a substantial benefit on neural-ODE-based approaches compared with “normal” deep neural networks.

The most relevant applications for neural ODEs are problems involving time-series data, for example machine translation, text generation, or object detection and tracking in videos. Chemical problems involving time-series data that have previously been modelled with recurrent neural networks include generating novel chemical reactions (https://www.nature.com/articles/s41598-021-81889-y) and predicting atomic trajectories in molecular dynamics simulations (https://www.nature.com/articles/s41467-020-18959-8). Given these previous successes in chemistry, and the promise of neural ODEs, applying them to these problems seems apt.
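To make the reformulation concrete, here is a minimal sketch of a neural ODE in PyTorch, using the torchdiffeq library released alongside the paper cited above. The layer sizes, integration interval, and dimensions are illustrative choices only, not part of the project specification.

```python
# A minimal neural ODE sketch (assumes PyTorch and the torchdiffeq
# library accompanying https://arxiv.org/abs/1806.07366).
# The hidden state h(t) evolves according to dh/dt = f(h(t), t; theta),
# where f is a small neural network; the forward pass is an ODE solve.
import torch
import torch.nn as nn
from torchdiffeq import odeint


class ODEFunc(nn.Module):
    """The learned vector field f(h, t) defining the hidden dynamics."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        # t is accepted for solver compatibility; this field is autonomous.
        return self.net(h)


class NeuralODE(nn.Module):
    """Maps an initial hidden state h(0) to h(1) by integrating the ODE."""

    def __init__(self, dim):
        super().__init__()
        self.func = ODEFunc(dim)

    def forward(self, h0):
        t = torch.tensor([0.0, 1.0])
        # odeint returns the solution at every requested time point;
        # keep only the final state h(1).
        return odeint(self.func, h0, t)[-1]


model = NeuralODE(dim=8)
h0 = torch.randn(16, 8)  # a batch of 16 initial hidden states
h1 = model(h0)           # hidden states after integrating to t = 1
print(h1.shape)          # torch.Size([16, 8])
```

Because the solver and time grid are explicit, the same trained vector field can be queried at arbitrary intermediate times, which is one route into the per-time-step interpretation discussed below.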

However, there is a well-known concern that neural networks are not readily interpretable: their predictions are not easily understood by human users, which presents a major barrier to the further uptake of these models. Recently, though, there has been significant progress in this direction, and there is now a wealth of literature on techniques and algorithms that aim to bring explainability to neural networks. Unfortunately, neural ODEs suffer from the same lack of transparency. The primary aim of this project will therefore be to explore how neural ODEs may be understood, and what insights might be gained in doing so, both into a model's internal workings and into the problems for which the neural ODEs are trained. The project will examine how current methods of neural network interpretation may be adapted and applied to neural ODEs; examples could include the popular saliency mapping (https://arxiv.org/abs/1312.6034), layer-wise relevance propagation (https://doi.org/10.1371/journal.pone.0130140) and Local Interpretable Model-Agnostic Explanations (LIME) (https://doi.org/10.1145/2939672.2939778) techniques. Fully understanding the workings of a neural ODE may require interpreting the network not only in a global sense, that is, how the input leads to the overall output, but also at the level of individual time steps, that is, how the internal state of the network changes from one time interval to the next. Interpretive methods suitable for both of these levels will be investigated during the course of this project; a minimal sketch of both views follows below. The context for neural ODE interpretation in this work will be chemical problems such as predicting chemical reactions and molecular dynamics. However, the neural ODE framework is highly general, so the developments made here will allow these promising models to be used in many more domains.
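As a hedged illustration rather than a prescription of the project's methodology, the sketch below continues from the NeuralODE model above and shows what the two levels of interpretation might look like in code: a gradient-based saliency map in the sense of Simonyan et al. for the global input-to-output view, and direct inspection of the hidden-state trajectory at intermediate solver times for the per-time-step view. The scalar score being differentiated is an arbitrary illustrative choice.

```python
# Two complementary views of the NeuralODE sketched above
# (`model`, `model.func`, and `odeint` are reused from that sketch).
import torch
from torchdiffeq import odeint

# (1) Global view: gradient-based saliency (https://arxiv.org/abs/1312.6034).
# How sensitive is the model's output to each input feature?
x = torch.randn(1, 8, requires_grad=True)
model(x).sum().backward()   # sum() is an illustrative scalar score
saliency = x.grad.abs()     # per-feature attribution, shape (1, 8)

# (2) Per-time-step view: query the solver at intermediate times to see
# how the hidden state evolves across the integration interval.
with torch.no_grad():
    t = torch.linspace(0.0, 1.0, steps=11)
    trajectory = odeint(model.func, x.detach(), t)  # shape (11, 1, 8)

print(saliency.shape, trajectory.shape)
```

Layer-wise relevance propagation and LIME would need more machinery than fits here (LRP requires access to the layer structure; LIME fits a local surrogate model around each prediction), so they are left to the project itself.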

This project is associated with the UKRI Centre for Doctoral Training (CDT) in Accountable, Responsible and Transparent AI (ART-AI). We value people from different life experiences with a passion for research. The CDT's mission is to graduate diverse specialists with the perspectives needed to go out into the world and make a difference.

Applicants should hold, or expect to receive, a first or upper-second class honours degree in a relevant subject. They should have taken a mathematics unit or a quantitative methods course at university, or hold at least a grade B in A-level maths (or international equivalent). Experience with coding (in any language) is desirable.

Formal applications should be accompanied by a research proposal and made via the University of Bath’s online application form. Enquiries about the application process should be sent to .

Start date: 2 October 2023.


Funding Notes

ART-AI CDT studentships are available on a competition basis and applicants are advised to apply early as offers are made from January onwards. Funding will cover tuition fees and maintenance at the UKRI doctoral stipend rate (£17,668 per annum in 2022/23, increased annually in line with the GDP deflator) for up to 4 years.
We also welcome applications from candidates who can source their own funding.
