About the Project
Adaptive finite element methods are an excellent choice for multiscale problems. They rely on effective a posteriori error estimates to drive the adaptive refinement, are a form of nonlinear approximation, and can achieve exponential convergence rates in some situations.
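To make the refinement loop concrete, here is a minimal, self-contained sketch in plain Julia (illustrative only, not the project's method, and not using Gridap): a 1D piecewise-linear approximation is adaptively refined wherever a simple midpoint error indicator is largest, so points concentrate around a sharp layer. The names (`adapt`, `indicator`) and the 50% marking threshold are assumptions made for the example.

```julia
function adapt(f; steps = 15)
    # Local a posteriori-style indicator: midpoint interpolation error on [a, b]
    indicator(a, b) = abs(f((a + b) / 2) - (f(a) + f(b)) / 2)
    nodes = collect(range(0.0, 1.0, length = 5))        # coarse initial mesh
    for step in 1:steps
        etas = [indicator(nodes[i], nodes[i+1]) for i in 1:length(nodes)-1]
        threshold = 0.5 * maximum(etas)                 # mark the worst intervals
        marked = findall(e -> e >= threshold, etas)
        newpts = [(nodes[i] + nodes[i+1]) / 2 for i in marked]
        nodes = sort(vcat(nodes, newpts))               # refine by bisection
        println("step $step: $(length(nodes)) nodes, max indicator $(maximum(etas))")
    end
    return nodes
end

adapt(x -> atan(100 * (x - 0.5)))   # sharp internal layer near x = 0.5 attracts most points
```

The same solve-estimate-mark-refine structure underlies adaptive finite element methods in higher dimensions.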
On the other hand, ReLU deep neural networks have transformed artificial intelligence. New approximation theory results, and their relation to finite elements, are starting to explain their excellent properties. They generate piecewise linear functions on meshes defined by the intersection of hyperplanes that depend on the network weights, which can be optimised via stochastic gradient descent. These techniques also fall into the category of nonlinear approximation and have the potential to become a breakthrough in numerical PDEs, as an alternative to adaptive finite elements. However, it is unclear how to deal with complex geometries in such settings: the networks produce approximations on n-cubes, whereas PDE applications usually involve complex domains. Moreover, many of the ingredients required in the simulation pipeline (e.g., efficient and scalable (non)linear solvers) have not even been considered yet for these discretisations.
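As a small illustration of this piecewise-linear structure, the sketch below (plain Julia, no Flux; all variable names are assumptions for the example) builds a one-hidden-layer ReLU network in 1D and recovers the induced "mesh": the breakpoints where each hidden unit's hyperplane W1[i]*x + b1[i] = 0 is crossed. In practice the weights would be optimised with stochastic gradient descent rather than left random.

```julia
relu(z) = max(z, 0.0)

W1 = randn(8); b1 = randn(8)      # 8 hidden units: hyperplanes (in 1D, points) W1[i]*x + b1[i] = 0
W2 = randn(8); b2 = randn()

u(x) = sum(W2 .* relu.(W1 .* x .+ b1)) + b2    # network output: piecewise linear in x

kinks = sort(-b1 ./ W1)                        # vertices of the induced 1D "mesh"
println("breakpoints of u in [0, 1]: ", filter(p -> 0 <= p <= 1, kinks))
```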
This project aims to combine ReLU deep neural network discretisations of the n-cube (n = 3 in space, n = 4 for space-time problems) with unfitted techniques (weak imposition of boundary conditions, integration on cut cells, etc.) to generate PDE solvers on general geometries. To compute the solution of the resulting discrete problems, the student will work on the combination of stochastic gradient descent and (non)linear iterative solvers, and will analyse the well-posedness of the discrete problems.
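The following sketch gives a flavour of this kind of discrete problem, under strong simplifying assumptions (1D model problem, tiny hand-written ReLU network, finite-difference derivatives, plain Julia without Gridap or Flux): a physical domain Ω = (0.3, 0.9) is embedded in the unit interval, the residual of -u'' = f is sampled at interior points, and the homogeneous Dirichlet condition is imposed weakly through a penalty term with weight γ. All names and values are illustrative, not the project's formulation.

```julia
relu(z) = max(z, 0.0)
u(x, θ) = sum(θ.W2 .* relu.(θ.W1 .* x .+ θ.b1)) + θ.b2      # tiny ReLU network

# Finite-difference Laplacian of the network (autodiff would be used in practice)
lap(x, θ; h = 1e-3) = (u(x + h, θ) - 2 * u(x, θ) + u(x - h, θ)) / h^2

f(x) = 1.0                                  # right-hand side of -u'' = f on Ω = (0.3, 0.9)
xs  = range(0.31, 0.89, length = 50)        # sample points inside the physical domain
γ   = 100.0                                 # penalty weight: weak imposition of u = 0 on ∂Ω

loss(θ) = sum(abs2, lap.(xs, Ref(θ)) .+ f.(xs)) / length(xs) +
          γ * (u(0.3, θ)^2 + u(0.9, θ)^2)

θ0 = (W1 = randn(16), b1 = randn(16), W2 = randn(16), b2 = randn())
println("initial loss = ", loss(θ0))        # SGD / nonlinear solvers would drive this down
```

Minimising a loss of this kind with stochastic gradient descent, possibly combined with (non)linear iterative solvers, is the type of computation the project targets on realistic geometries.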
The project involves the numerical analysis of the proposed algorithms and the implementation of the formulations in the Gridap project (a Julia library for the discretisation of PDEs using advanced numerical methods) together with Flux (a machine learning library for Julia).
The project provides the opportunity to be part of an active group on numerical PDEs and high-performance scientific computing. The student will be trained in scientific computing, numerical analysis of PDEs, and advanced software implementation.
Entry requirements
We are looking for self-motivated students with excellent academic performance and a strong interest in scientific computing for PDEs. Applicants must have:
* A 1st class honours or Master's degree in mathematics, physics, engineering, computer science, or related disciplines;
* A solid background in numerical PDEs;
* Programming experience in at least one of the following languages: Julia, Python, C, C++, Fortran (2003/2008);
* Strong oral and written communication skills in English.
Additionally, experience in high-performance scientific computing and machine learning would be an asset.
How to apply
Candidates can contact [Email Address Removed], providing their academic record, to find out whether they satisfy the 1st class honours equivalence (which is a must to be eligible for the position).
Eligible candidates can make informal inquiries by sending an email to Prof Santiago Badia [Email Address Removed] with the subject "Inquiries - PhD MLPDEs".
Eligible candidates can apply for this project by sending their CV (including academic records and two references) to Prof Santiago Badia [Email Address Removed] with the subject "Application - PhD MLPDEs - STUDENT_NAME".