Neural networks are powerful machine learning models that can achieve outstanding performance on many problems, such as classification and regression. Since the rise of deep neural networks, especially in computer vision, several open questions remain about what the optimal network architecture should be. Most state-of-the-art deep architectures are not optimised and typically result in over-parametrised models, as several recent works have demonstrated. We want to investigate how deep neural network architectures can be optimised by reducing the number of layers – and thus the number of parameters to be optimised – and their interconnections. In particular, we want to study neural network optimisation using graph and network theory.
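As a minimal illustration of the graph view underlying this project (an assumption for exposition, not the project's prescribed method): a dense layer's weight matrix can be read as a weighted bipartite graph between input and output neurons, and the layer can be sparsified by keeping only the largest-magnitude edges, in the spirit of the sparse-connectivity work listed in the references.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weight matrix of a dense layer: a bipartite graph between
# 8 input neurons and 4 output neurons (each weight = one edge).
W = rng.normal(size=(8, 4))

def sparsify(W, keep_fraction=0.25):
    """Keep only the largest-magnitude weights (edges), zeroing the rest."""
    k = int(np.ceil(keep_fraction * W.size))
    # Threshold at the k-th largest absolute weight over the whole matrix.
    threshold = np.sort(np.abs(W), axis=None)[-k]
    mask = np.abs(W) >= threshold
    return W * mask, mask

W_sparse, mask = sparsify(W, keep_fraction=0.25)
print(mask.sum(), "of", W.size, "connections kept")  # 8 of 32
```

The surviving mask is exactly the adjacency structure of the pruned bipartite graph, which is where graph- and network-theoretic measures (degree distributions, connectivity) can be brought to bear on architecture design.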
Academic qualifications: A first degree (at least a 2.1), ideally in computer science or maths, with good foundational knowledge of neural networks and graph theory.
English language requirement: IELTS score must be at least 6.5 (with not less than 6.0 in each of the four components). Other equivalent qualifications will be accepted. Full details of the University’s policy are available online.
Essential attributes:
• Experience with fundamental neural networks
• Competence in graph theory
• Knowledge of Python and at least one neural network framework (e.g., Keras, TensorFlow, PyTorch)
• Good written and oral communication skills
• Strong motivation, with evidence of independent research skills relevant to the project
• Good time management
Desirable attributes: Applicants should motivate their willingness to obtain a PhD degree by attaching a research proposal (max 1 A4 page) describing their ideas and how these align with the scholarship’s aims and objectives.
This PhD call is fully funded for 3 years and covers the tuition fees of UK/EU applicants.
References:
• Li et al., “Federated Learning: Challenges, Methods, and Future Directions”, arXiv preprint, https://arxiv.org/abs/1908.07873
• Peterson et al., “Private Federated Learning with Domain Adaptation”, NeurIPS 2019, https://arxiv.org/pdf/1912.06733.pdf
• Csurka, “Domain adaptation for visual applications: A comprehensive survey”, Advances in Computer Vision and Pattern Recognition, 2017
• Mocanu et al., “Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science”, Nature Communications, 2018, https://doi.org/10.1038/s41467-018-04316-3