Supervisor: Dr Nicolas Wu
Applications accepted all year round
Funded PhD Project (UK Students Only)

About the Project

This project is part of a unique, cohort-based, 4-year, fully funded PhD programme at the UKRI Centre for Doctoral Training in Safe and Trusted AI.

The UKRI Centre for Doctoral Training in Safe and Trusted AI brings together world-leading experts from King’s College London and Imperial College London to train a new generation of researchers, and is focused on the use of symbolic artificial intelligence for ensuring the safety and trustworthiness of AI systems.

Project Description:

Deep learning has shown huge potential for delivering AI with real-world impact. Most current projects are built with PyTorch, TensorFlow, or similar platforms, which tend to be written in languages where the correct description of neural networks relies on programmer skill.

However, as such systems become more powerful, more useful, and closer to deployment, the associated risks become more apparent. While much of the development will involve theoretical work and testing on standard benchmark datasets (e.g. MNIST), we are specifically interested in applications in the medical domain. DL-based image analysis has shown significant promise in medicine, especially in medical imaging and slide interpretation. Clinical deployments of these systems are developing rapidly, so the need for robust verification is becoming increasingly important. Current approaches to assuring medical AI are based around the development process and testing, rather than formal models of software development.

The models described by typical deep learning libraries usually involve complex neural networks composed of many connected layers, trained by forward and backward propagation of data to learn the network parameters. These models inherently contain a great deal of structure, but in libraries such as PyTorch this structure is not described in a type system that can be verified statically at compile time.

Our goal is to explore the use of dependent types to describe the underlying neural network structure, in order to improve the safety and trustworthiness of model-based AI systems. Describing this structure brings three major benefits: first, it becomes possible to ensure, statically at compile time, that networks have been constructed correctly; second, it becomes possible to rectify misfits between neural network input sizes and data sizes by applying appropriate coercions; third, exposing the structure of the network enables parallel optimisations to be specialised. While structure verification with dependent types for neural networks has already been accomplished [1], our focus will be on generating, through type-driven development and generic programming [2], the coercions that rectify input-size misfits. Such rectification will allow neural networks to be applied automatically to a wider range of inputs than traditional methods allow, increasing their applicability whilst ensuring that the underlying system is reliable and robust.
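As a rough illustration of the first benefit, the sketch below uses Haskell's type-level natural numbers (a lightweight stand-in for full dependent types) to index layers and networks by their input and output sizes, so that a mis-wired network is rejected at compile time. The module and names (Layer, Network, mnistNet) are illustrative assumptions for this advert, not the project's actual code.

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- A minimal sketch of dimension-indexed network description: layer and network
-- types carry their input/output sizes at the type level, so an ill-formed
-- composition fails to compile.
module TypedNet where

import GHC.TypeNats (Nat)

-- A layer transforming inputs of length i into outputs of length o.
data Layer (i :: Nat) (o :: Nat) where
  Dense :: Layer i o   -- weights omitted in this sketch
  ReLU  :: Layer i i

-- A network is a chain of layers whose dimensions must line up.
data Network (i :: Nat) (o :: Nat) where
  Output :: Network i i
  (:>>)  :: Layer i h -> Network h o -> Network i o
infixr 5 :>>

-- Well-typed: 784 -> 128 -> 128 -> 10 (e.g. an MNIST classifier).
mnistNet :: Network 784 10
mnistNet = (Dense :: Layer 784 128) :>> ReLU :>> (Dense :: Layer 128 10) :>> Output

-- A mis-wired network such as
--   bad = (Dense :: Layer 784 128) :>> (Dense :: Layer 64 10) :>> Output
-- is rejected by the type checker rather than failing during training.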

At the core of this project is the application of state-of-the-art formal methods to the verification of model-based machine learning. Formal methods such as dependent types are beginning to be employed in safety-critical systems, but have yet to be extensively deployed in the context of machine learning. Models whose inputs have been specified using dependent types will not only carry additional assurances of their correctness, but will also be more flexible, in that they will be able to accept a wider range of input training data, which improves the accuracy of the model parameters and the trustworthiness of the ensuing systems.
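As a rough illustration of the coercions mentioned above, the sketch below shows how a size misfit between data and model input might be rectified automatically, with the target size read off the type rather than supplied by hand. Again, this is a Haskell sketch with assumed names (Vec, coerceSize), not the project's API; in a fully dependently typed setting the choice of coercion could itself be derived generically, in the style referenced in [2].

{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}

-- A minimal sketch of a type-driven size coercion: an input vector of one
-- statically known length is padded or truncated to the length expected by
-- the model, instead of being rejected outright.
module Coerce where

import Data.Proxy (Proxy (..))
import GHC.TypeNats (KnownNat, Nat, natVal)

-- A vector tagged with its length at the type level.
newtype Vec (n :: Nat) a = Vec [a]

-- Pad with a default value, or truncate, so that a Vec n can be supplied
-- where a Vec m is expected; the target size m is recovered from the type.
coerceSize :: forall n m a. KnownNat m => a -> Vec n a -> Vec m a
coerceSize pad (Vec xs) =
  let m = fromIntegral (natVal (Proxy :: Proxy m))
  in Vec (take m (xs ++ repeat pad))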

As the final part of the project, the student will reimplement an existing clinical DL system (written in TensorFlow and Python) [3] to explore the real-world use of formally sound DL [4]. This will also give the student the chance to talk to staff and patients about their expectations of such systems, as part of the Patient and Public Involvement and Engagement (PPIE) training embedded within the PhD.

How to Apply:

The deadline for Round A for entry in October 2023 is Monday 28th November. See here for further round deadlines.

We will also be holding an online information session on Tuesday 15 November 2022, 1-2pm; tickets are available via Eventbrite.

Committed to providing an inclusive environment in which diverse students can thrive, we particularly encourage applications from women, disabled and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector.

We encourage you to contact Dr Nicolas Wu ([Email Address Removed]) to discuss your interest before you apply, quoting the project code: STAI-CDT-2022-IC-5.

When you are ready to apply, please follow the application steps on our website here. Our 'How to Apply' page offers further guidance on the PhD application process.

Subject area: Computer Science

Funding Notes

See Fees and Funding section for more information - https://safeandtrustedai.org/apply-now/