
Stable deep neural network architectures

  • Full or part time
  • Application Deadline
    Applications accepted all year round
  • Self-Funded PhD Students Only

Project Description

The School of Mathematical Sciences of Queen Mary University of London invites applications for a PhD project commencing either in September 2019 for students seeking funding, or at any point in the academic year for self-funded students. The deadline for funded applications was 31 January 2019.

This project will be supervised by Dr. Martin Benning.

Deep neural networks are computing systems that have outperformed traditional machine learning methods across a wide range of applications in recent years. Even research areas in the applied sciences that machine learning had originally left untouched have been transformed substantially.

The success of deep learning techniques and the opportunities they pose make it difficult for mathematicians to ignore them. However, a key disadvantage compared to more traditional machine learning techniques is the lack of mathematical theory and provable guarantees.

The aim of this PhD project is to reduce this gap and to develop neural networks with provable stability guarantees and superior generalisation properties compared to their traditional counterparts. A particular focus lies on the connections between deep neural networks and incremental gradient methods, iterative regularisation methods, and discretised systems of Ordinary Differential Equations (ODEs) and Partial Differential Equations (PDEs). Variational networks -- a class of deep neural network architectures -- have been linked to a special form of incremental gradient descent. An integral part of this project is to establish a similar link for more general architectures.
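To make the link between network architectures and incremental gradient methods concrete, here is a minimal sketch (not part of the project description) of an "unrolled" incremental gradient method, the structure that variational networks generalise. The problem setup, the function name incremental_gradient_layers, and all parameter choices are illustrative assumptions; in a variational network the step sizes and the operators inside each gradient would be learned from data.

```python
import numpy as np

# Illustrative sketch: minimise a sum of quadratics
#   f_i(x) = 0.5 * ||A_i x - b_i||^2,  with  grad f_i(x) = A_i.T @ (A_i x - b_i).
# Each "layer" applies one incremental gradient step on a single summand;
# unrolling a fixed number of such steps yields a feed-forward architecture.

rng = np.random.default_rng(0)
n, m, num_terms = 5, 20, 4
A = [rng.standard_normal((m, n)) for _ in range(num_terms)]
x_true = rng.standard_normal(n)
b = [Ai @ x_true for Ai in A]          # consistent data: all terms share a minimiser

def incremental_gradient_layers(x0, step_sizes):
    """One pass of incremental gradient steps; one summand per 'layer'."""
    x = x0.copy()
    for k, tau in enumerate(step_sizes):
        i = k % num_terms                      # cycle through the summands
        grad_i = A[i].T @ (A[i] @ x - b[i])    # gradient of the i-th term only
        x = x - tau * grad_i                   # one layer = one update
    return x

x = np.zeros(n)
for _ in range(200):                           # repeated passes drive x to x_true
    x = incremental_gradient_layers(x, [0.02] * num_terms)

print(np.linalg.norm(x - x_true))              # residual shrinks towards zero
```

Because every update touches only one summand, each layer has the same cheap structure; replacing the fixed quadratic gradients with learned filter responses recovers the variational-network viewpoint.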

Linking neural networks to the world of inverse problems and differential equations enables the re-use and evolution of existing theory in order to develop novel stability and generalisability results. If, for example, a deep neural network architecture can be interpreted as a discretisation of a differential equation, other, potentially more stable, discretisations backed by extensive mathematical theory can be applied in order to improve the architecture. Similarly, if a neural network is constrained to be an incremental gradient method, an associated energy can be computed and used to decide whether new data samples match the prior assumptions of the original training data samples.
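The discretisation argument can be illustrated on a toy problem (an assumption-laden sketch, not part of the project description): a residual block x_{k+1} = x_k + h*f(x_k) is a forward-Euler step of the ODE x'(t) = f(x(t)). For the stiff linear test problem f(x) = -lam*x, forward Euler is only stable when h*lam < 2, whereas the implicit (backward) Euler step x_{k+1} = x_k / (1 + h*lam) is unconditionally stable. The parameter values below are chosen purely to exhibit the instability.

```python
# Toy comparison of two discretisations of x'(t) = -lam * x(t).
lam, h, depth = 30.0, 0.1, 50       # h * lam = 3 > 2: forward Euler is unstable

def forward_euler(x0):
    """Residual-network-style explicit updates: x <- x + h * f(x)."""
    x = x0
    for _ in range(depth):
        x = x + h * (-lam * x)       # per-step factor (1 - h*lam) = -2
    return x

def backward_euler(x0):
    """Implicit update, stable for any positive step size."""
    x = x0
    for _ in range(depth):
        x = x / (1.0 + h * lam)      # per-step factor 1/4
    return x

print(abs(forward_euler(1.0)))       # grows like 2**depth
print(abs(backward_euler(1.0)))      # decays towards zero
```

The same forward map, discretised differently, goes from exponential blow-up to exponential decay, which is precisely the kind of improvement a stability-aware redesign of an architecture aims for.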

The ideal candidate will hold an MSc (or an equivalent degree) in applied mathematics, will have a strong background in numerical analysis, optimisation, inverse problems or numerical methods for differential equations, and should have excellent programming skills, preferably in Python. Prior knowledge of statistical analysis and experience in programming with TensorFlow or PyTorch are desirable but not mandatory. The successful candidate is expected to publish research outcomes in top-ranked journals, present their results at selected conferences and workshops, and contribute their findings to multidisciplinary research collaborations.

The application procedure is described on the School website. For further inquiries, please contact Dr Martin Benning.

Funding Notes

This project can be undertaken as a self-funded project, either through your own funds or through a body external to Queen Mary University of London. Self-funded applications are accepted year-round.

The School of Mathematical Sciences is committed to equality of opportunity and to advancing women’s careers. As holders of a Bronze Athena SWAN award, we offer family-friendly benefits and support part-time study. We strongly encourage applications from women, as they are underrepresented within the School.

We particularly welcome applicants through the China Scholarship Council Scheme.


[1] M. Benning and M. Burger. Modern regularization methods for inverse problems. Acta Numerica, 27:1–111, 2018.

[2] D. P. Bertsekas. Incremental gradient, subgradient, and proximal methods for convex optimization: A survey. Optimization for Machine Learning, 2010(1-38):3, 2011.

[3] Y. Chen, W. Yu, and T. Pock. On learning optimized reaction diffusion processes for effective image restoration. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5261–5269, 2015.

[4] E. Haber and L. Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34(1):014004, 2017.

[5] E. Kobler, T. Klatzer, K. Hammernik, and T. Pock. Variational networks: connecting variational methods and deep learning. In German Conference on Pattern Recognition, pages 281–293. Springer, 2017.

[6] L. Ruthotto and E. Haber. Deep neural networks motivated by partial differential equations. arXiv preprint arXiv:1804.04272, 2018.

[7] S. Sra. Scalable nonconvex inexact proximal splitting. In Advances in Neural Information Processing Systems, pages 530–538, 2012.


