Intelligence is one of the pillars that allow animals to master their environment. Recently proposed scientific approaches can simulate networked systems that reproduce emergent manifestations of behaviour similar to those observed in the brain. This broad scientific area has been named Artificial Intelligence (AI), and our society is today intrinsically connected to it. Yet why and how a neural network can be trained to process information and produce a logically intelligent output remains a great mystery, despite the explosive growth of the field: its success in solving tasks cannot yet be fully explained in physical or mathematical terms.

Contributing to this challenge is the grand goal of this PhD project: the creation of a general theory describing the fundamental mathematical rules, physical laws, and relevant properties and features behind the “intelligent” functioning of trained neural networks. To this end, the project will focus on a simpler but also successful type of machine learning approach, named Reservoir Computing (RC), although other, more popular approaches will also be considered. In RC, training a dynamical neural network to process information about an input signal involves only the much easier task of learning how the network needs to be observed, without the more difficult task of making structural changes to it (as in deep learning). Recent evidence has linked RC to some of the computation our brain does.

We aim to show with mathematical rigour how the graph configuration and the emergent collective behaviour of a dynamical network contribute to the processing of an input signal, leading to an intelligent response to it. We also want to show why chaos in neural networks can enhance the smart behaviour of trained networks.
Another goal will be to determine how “intelligence” (i.e., optimal minimisation of the error between the trained output and the target function) depends on the particular way a network is observed to construct the output functions. The outputs of this project will potentially contribute to a better understanding of how our own brain computes, but they will also pave the way towards industrial exploitation of neural networks, by developing a mathematical formalism to create simpler but smarter neural networks that process more information more quickly with fewer computational resources.
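The RC principle described above — training only the way the network is observed, while leaving the network itself untouched — can be illustrated with a minimal sketch. This is a generic echo state network on a toy task (all parameter values and the sine-prediction task are illustrative choices, not part of the project description): a fixed random recurrent network is driven by an input signal, and only a linear readout is fitted by ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (illustrative): predict sin(t + dt) from the input sin(t).
dt = 0.1
t = np.arange(0, 200, dt)
u = np.sin(t)                      # input signal
y_target = np.sin(t + dt)          # target function

N = 200                            # reservoir size (arbitrary choice)
W_in = rng.uniform(-0.5, 0.5, N)   # fixed input weights -- never trained
W = rng.normal(0.0, 1.0, (N, N))   # fixed recurrent weights -- never trained
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

# Drive the fixed dynamical network with the input and record its states.
x = np.zeros(N)
states = np.empty((len(u), N))
for k, uk in enumerate(u):
    x = np.tanh(W @ x + W_in * uk)
    states[k] = x

# Training touches ONLY the linear readout (how the network is observed):
# a ridge regression from reservoir states to the target output.
washout = 100                      # discard initial transient
S, Y = states[washout:], y_target[washout:]
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ Y)

y_pred = S @ W_out
mse = np.mean((y_pred - Y) ** 2)
print(f"readout MSE: {mse:.2e}")
```

The “intelligence” of the trained system, in the sense used above, is measured by how small this readout error is for a given way of observing the reservoir; the reservoir’s internal structure and dynamics are never modified during training.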
Candidates should have (or expect to achieve) a UK honours degree at 2.1 or above (or equivalent) in Natural and Computing Sciences, Mathematics, or Engineering.
Preference will be given to students who also have expertise in one or more of the following topics: machine learning methods, the theory of dynamical systems, complex networks, information theory, synchronisation.
• Apply for Degree of Doctor of Philosophy in Physics
• State name of the lead supervisor as the Name of Proposed Supervisor
• State ‘Self-funded’ as Intended Source of Funding
• State the exact project title on the application form
When applying please ensure all required documents are attached:
• All degree certificates and transcripts (Undergraduate AND Postgraduate MSc), officially translated into English where necessary
• Detailed CV
Informal inquiries can be made to Dr M Baptista ([email protected]), with a copy of your curriculum vitae and cover letter. All general enquiries should be directed to the Postgraduate Research School ([email protected]).