The School of Mathematical Sciences of Queen Mary University of London invites applications for a PhD project commencing either in September 2019 for students seeking funding, or at any point in the academic year for self-funded students. The deadline for funded applications is 31 January 2019.
This project will be supervised by Dr. Martin Benning.
Deep neural networks are computing systems that have outperformed traditional machine learning methods in a wide range of applications in recent years. Even research areas in the applied sciences that were initially untouched by machine learning have been transformed substantially.
The success of deep learning techniques and the opportunities they present make them difficult for mathematicians to ignore. However, a key disadvantage compared to more traditional machine learning techniques is the lack of mathematical theory and provable guarantees.
The aim of this PhD project is to narrow this gap, and to develop neural networks with provable stability guarantees and superior generalisation properties compared to their traditional counterparts. Particular focus will be placed on the connections between deep neural networks and incremental gradient methods, iterative regularisation methods, and discretised systems of Ordinary Differential Equations (ODEs) and Partial Differential Equations (PDEs). Variational networks, a class of deep neural network architectures, have been linked to a special form of incremental gradient descent (Kobler et al., 2017). An integral part of this project is the establishment of a similar link for more general architectures.
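To make the incremental-gradient connection concrete, here is a minimal, purely illustrative sketch (not the project's method): minimising a sum of component functions by cycling through them one at a time, which is the update pattern that variational-network layers have been linked to. The least-squares objective and all parameter choices are assumptions for illustration.

```python
import numpy as np

# Toy objective: f(x) = sum_i 0.5 * (a_i @ x - b_i)**2.
# Incremental gradient descent updates x using ONE component f_i per step,
# analogous to one layer of a network applying a (learned) gradient step.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))   # rows a_i
b = rng.standard_normal(20)        # targets b_i

x = np.zeros(5)
step = 0.05                        # fixed step size (assumed, for illustration)
for epoch in range(200):
    for i in range(len(b)):        # cycle through the components
        grad_i = (A[i] @ x - b[i]) * A[i]   # gradient of f_i only
        x = x - step * grad_i

# Compare against the full least-squares minimiser.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print("distance to least-squares solution:", np.linalg.norm(x - x_ls))
```

With a constant step size the iterates settle in a neighbourhood of the minimiser; a diminishing step size would give exact convergence, which is the kind of trade-off the cited incremental-gradient literature analyses.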
Linking neural networks to the world of inverse problems and differential equations enables the re-use and evolution of existing theory in order to develop novel stability and generalisability results. If, for example, a deep network architecture can be interpreted as a discretisation of a differential equation, other, potentially more stable, discretisations, backed by extensive mathematical theory, can be substituted in order to improve the architecture. Similarly, if a neural network is constrained to be an incremental gradient method, an associated energy can be computed and used to decide whether new data samples match the prior assumptions of the original training data samples.
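The discretisation viewpoint can be sketched in a few lines. The residual update x + h * f(x) is a forward Euler step for the ODE x'(t) = f(x(t)), so replacing it with another scheme, e.g. the midpoint rule, yields a different architecture for the same layer function. The layer function, widths, and step size below are assumptions for illustration, not the project's architecture.

```python
import numpy as np

def f(x, W, b):
    """Toy layer function: tanh nonlinearity after an affine map."""
    return np.tanh(W @ x + b)

def resnet_forward_euler(x, weights, biases, h=0.1):
    # Residual block x_{k+1} = x_k + h * f(x_k): forward Euler for x' = f(x).
    for W, b in zip(weights, biases):
        x = x + h * f(x, W, b)
    return x

def resnet_midpoint(x, weights, biases, h=0.1):
    # Midpoint (RK2) discretisation of the same ODE: a different, often
    # more stable architecture built from the same layer function f.
    for W, b in zip(weights, biases):
        x_half = x + 0.5 * h * f(x, W, b)
        x = x + h * f(x_half, W, b)
    return x

rng = np.random.default_rng(1)
weights = [0.1 * rng.standard_normal((4, 4)) for _ in range(10)]
biases = [0.1 * rng.standard_normal(4) for _ in range(10)]
x0 = rng.standard_normal(4)
print("Euler:   ", resnet_forward_euler(x0, weights, biases))
print("midpoint:", resnet_midpoint(x0, weights, biases))
```

Both networks share the same parameters; only the numerical integrator, and hence the stability behaviour, differs, which is precisely the kind of substitution the paragraph above describes.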
The ideal candidate will hold an MSc (or an equivalent degree) in applied mathematics, will have a strong background in numerical analysis, optimisation, inverse problems or numerical methods for differential equations, and should have excellent programming skills, preferably in Python. Prior knowledge of statistical analysis and experience with TensorFlow or PyTorch are desirable but not mandatory. The successful candidate is expected to publish research outcomes in top-ranked journals, to present their results at selected conferences and workshops, and to contribute their findings to multidisciplinary research collaborations.
The application procedure is described on the School website. For further enquiries please contact Dr Martin Benning at [email protected]. This project is eligible for full funding, including support for 3.5 years’ study, additional funds for conference and research visits, and funding for relevant IT needs. Applicants interested in full funding will have to participate in a highly competitive selection process.
 M. Benning and M. Burger. Modern regularization methods for inverse problems. Acta Numerica, 27:1–111, 2018.
 D. P. Bertsekas. Incremental gradient, subgradient, and proximal methods for convex optimization: A survey. In Optimization for Machine Learning. MIT Press, 2011.
 Y. Chen, W. Yu, and T. Pock. On learning optimized reaction diffusion processes for effective image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5261–5269, 2015.
 E. Haber and L. Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34(1):014004, 2017.
 E. Kobler, T. Klatzer, K. Hammernik, and T. Pock. Variational networks: connecting variational methods and deep learning. In German Conference on Pattern Recognition, pages 281–293. Springer, 2017.
 L. Ruthotto and E. Haber. Deep neural networks motivated by partial differential equations. arXiv preprint arXiv:1804.04272, 2018.
 S. Sra. Scalable nonconvex inexact proximal splitting. In Advances in Neural Information Processing Systems, pages 530–538, 2012.