Intelligence is one of the pillars that allow animals to master their environment. Recently proposed scientific approaches can simulate networked systems that reproduce emergent manifestations of behaviour similar to those observed in the brain. This broad scientific area has been coined Artificial Intelligence (AI), and our society is today intrinsically connected to it. Yet why and how a neural network can be trained to process information and produce a logically intelligent output remains a great mystery, despite the explosive growth of the field: its success in solving tasks cannot yet be fully explained in physical or mathematical terms. Contributing to this challenge is the grand goal of this PhD project: the creation of a general theory describing the fundamental mathematical rules, physical laws, and relevant properties and features behind the “intelligent” functioning of trained neural networks.

To this end, the project will focus on a simpler but also successful type of machine learning approach, named Reservoir Computing (RC), although other, more popular approaches will also be considered. In RC, training a dynamical neural network to process information about an input signal involves only the much easier task of learning how the network needs to be observed, without the more difficult task of making structural changes to it (as in, e.g., deep learning). We aim to show with mathematical rigour how the configuration and emergent behaviour of a dynamical network contribute to the processing of information about an input signal, leading to an intelligent response to it. We want to show why chaos in neural networks can enhance the smart behaviour of trained networks. A further goal is to determine how “intelligence” depends on the particular way a network is observed to construct the output functions.
Today, output functions are constructed from the randomly chosen observation of some neurons in the network. The outcomes of this project will potentially contribute to a better understanding of how our own brain computes, but they will also support the industrial exploitation of neural networks, by developing a mathematical formalism to create simpler but smarter neural networks that can process more information faster, with fewer computational resources.
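To illustrate the RC idea described above — a fixed dynamical network whose training touches only the way it is observed — here is a minimal sketch of an echo state network in Python. All sizes, parameters, the toy sine-prediction task, and the ridge-regression readout are illustrative assumptions, not the specific methods of this project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the project description)
N, T = 200, 1000            # reservoir neurons, training time steps

# Fixed random reservoir: its weights are never retrained.
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1
W_in = rng.uniform(-0.5, 0.5, size=N)

# Toy task: predict the next value of a sine input
u = np.sin(0.1 * np.arange(T + 1))
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])   # reservoir dynamics, left untouched by training
    states[t] = x

# Training = choosing how to observe the network: a linear readout
# fitted by ridge regression from the recorded states to the targets.
y = u[1:]                               # one-step-ahead target
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ y)

pred = states @ W_out
print(f"training MSE: {np.mean((pred - y) ** 2):.2e}")
```

Only `W_out` is learned; the reservoir `W` and input weights `W_in` stay random and fixed, which is what makes the RC learning phase so much cheaper than structural training such as deep learning.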
Candidates should have (or expect to achieve) a UK honours degree at 2.1 or above (or equivalent) in Natural and Computing Sciences, Mathematics or Engineering.
Preference will be given to students who also have expertise in one or more of the following topics: machine learning methods, the theory of dynamical systems, complex networks, information theory, and synchronisation.
• Apply for Degree of Doctor of Philosophy in Physics
• State name of the lead supervisor as the Name of Proposed Supervisor
• State ‘Self-funded’ as Intended Source of Funding
• State the exact project title on the application form
When applying please ensure all required documents are attached:
• All degree certificates and transcripts (Undergraduate AND Postgraduate MSc-officially translated into English where necessary)
• Detailed CV
• Details of 2 academic referees
Informal inquiries can be made to Dr Murilo S Baptista ([email protected]) with a copy of your curriculum vitae and cover letter. All general enquiries should be directed to the Postgraduate Research School ([email protected]).
Additional research costs of £300 per annum are required to allow the successful candidate to attend conferences and workshops; these costs must be met by the candidate.
• Hai-Peng Ren, Chao Bai, M. S. Baptista, and C. Grebogi, "Weak connections form an infinite number of patterns in the brain", Scientific Reports 7, 46472 (2017).
• M. Lukoševičius et al., KI - Künstliche Intelligenz 26, 365 (2012).
• Y. LeCun et al., Nature 521, 436 (2015).
• M. C. Soriano et al., Frontiers in Computational Neuroscience 9, 68 (2015).
• Z. Lu et al., Chaos 27, 041102 (2017).