Supervisor: Prof Andrew Brown
Scaling simulations of biological and chemical systems is reaching its limits: the computational cost of simulating a physical system grows at least as fast as the cube of its linear dimension. Simply simulating atomic systems for longer does not help, because they exhibit correlations spanning many scales in space and time. New forces emerge on micron length scales that are largely independent of the molecules' precise atomic structure. A mesoscale simulation technique called Dissipative Particle Dynamics (DPD) predicts how specific molecular details influence the behaviour of much larger (micron-scale) volumes of matter without tracking each individual atom. This enables simulations of important biological phenomena such as bacterial toxin entry into cells during infectious disease.
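DPD replaces groups of atoms with soft beads interacting through three pairwise forces: a conservative repulsion, a dissipative (friction) force, and a random force tied to the friction by the fluctuation-dissipation theorem. The sketch below shows these standard DPD forces in Python; the parameter values (a_ij, gamma, kT, r_c) are illustrative defaults, not values taken from this project.

```python
import numpy as np

def dpd_pair_force(r_i, r_j, v_i, v_j,
                   a_ij=25.0, gamma=4.5, kT=1.0, r_c=1.0, dt=0.01, rng=None):
    """Total DPD force on bead i from bead j: conservative + dissipative + random.

    The noise amplitude sigma is fixed by the fluctuation-dissipation
    theorem: sigma^2 = 2 * gamma * kT.
    """
    rng = rng or np.random.default_rng()
    r_ij = r_i - r_j
    r = np.linalg.norm(r_ij)
    if r >= r_c or r == 0.0:
        return np.zeros(3)              # all forces vanish beyond the cutoff
    e = r_ij / r                        # unit vector from j to i
    w = 1.0 - r / r_c                   # soft weight w^R(r); w^D(r) = w^2
    v_ij = v_i - v_j
    sigma = np.sqrt(2.0 * gamma * kT)
    f_c = a_ij * w * e                          # soft conservative repulsion
    f_d = -gamma * w**2 * np.dot(e, v_ij) * e   # dissipative (friction) force
    f_r = sigma * w * rng.standard_normal() / np.sqrt(dt) * e  # random force
    return f_c + f_d + f_r
```

With the random force switched off (kT = 0), the pair forces obey Newton's third law exactly, which is what makes DPD momentum-conserving and able to reproduce hydrodynamics.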
The power of mesoscale simulations lies in their ability to accurately capture effects such as thermal fluctuations and long-range correlations that are not apparent in atomic-scale simulations. Simulating such systems on a single CPU core is prohibitively slow, and existing parallel implementations require all cores to run in lock step, creating load-balancing problems. An alternative approach is to simulate as nature does -- asynchronously and in parallel -- using an event-based computing engine.
By modelling the physical system under study as a network (graph) and assigning a dedicated core to each vertex in the graph, we get a computing system with remarkable properties:
- Each core only needs knowledge of the part of the network it represents.
- Each core only communicates with its immediate neighbours.
- Calculations performed on an individual core are small, simple and fast.
These three properties make the system scalable: for more accuracy, or to analyse a bigger system, we simply add more cores.
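The vertex-per-core model above can be caricatured in a few lines of Python. This is a toy sketch only (all names are hypothetical and bear no relation to the actual POETS API), but it shows the essential properties: each vertex holds purely local state, decides its outgoing messages from its neighbour list alone, and the event loop delivers messages asynchronously, one at a time.

```python
from collections import deque

class Vertex:
    """One core in the event-driven model: knows only its own state and neighbours."""
    def __init__(self, vid, neighbours):
        self.vid = vid
        self.neighbours = neighbours   # ids of adjacent vertices only
        self.state = 0.0

    def on_receive(self, payload):
        """Handle one incoming message; return outgoing (dest, payload) pairs."""
        self.state += payload          # toy update: accumulate contributions
        # forward a damped contribution to each neighbour -- a purely local decision
        if payload > 1e-3:
            return [(n, payload * 0.5) for n in self.neighbours]
        return []

def run(vertices, initial):
    """Asynchronous event loop: deliver messages one at a time, in arrival order."""
    queue = deque(initial)             # pending (dest_id, payload) events
    while queue:
        dest, payload = queue.popleft()
        queue.extend(vertices[dest].on_receive(payload))

# A 4-vertex ring: each vertex communicates only with its two neighbours.
verts = {i: Vertex(i, [(i - 1) % 4, (i + 1) % 4]) for i in range(4)}
run(verts, [(0, 1.0)])
```

Enlarging the graph adds vertices without changing any vertex's local logic, which is the scaling property listed above: no vertex ever needs a global view of the network.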
Practical simulations typically divide 3D space into a grid of small cubes and assign each cube to one core: each core integrates the equations of motion for all particles in its cube. Cubes communicate by passing messages containing particle positions, velocities and forces. In a conventional supercomputer simulation this method is limited by the high latency of message passing, not just the compute time, so system speedup is bounded by the compute/messaging ratio of the processors. POETS overcomes this limitation in two ways: (1) reducing each message to a small fixed size with very low message-passing latency, and (2) dividing the simulated volume so finely that each cube contains only a few atoms or molecules. The local computational cost per core is thus small, and only one or two messages need be exchanged per core-pair to advance the simulation, with all cores operating in parallel. By exploiting high-speed messaging we reduce compute costs, allowing much larger systems to be simulated for longer periods in shorter runtimes.
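The cube decomposition can be sketched as follows. This is a hypothetical illustration (the function names and box dimensions are invented for the example); the point is that each cube's owner can determine both its resident particles and its 26 message partners from its own grid coordinates alone, with no global coordination.

```python
import numpy as np

def build_cells(positions, box_len, cell_len):
    """Assign each particle to a cube in a regular 3D grid (one cube per core)."""
    n = int(box_len // cell_len)       # cubes per box edge
    cells = {}
    for idx, p in enumerate(positions):
        key = tuple((p // cell_len).astype(int) % n)   # periodic grid coordinate
        cells.setdefault(key, []).append(idx)
    return cells, n

def neighbour_cells(key, n):
    """The 26 adjacent cubes (periodic) -- the only cores this cube messages."""
    x, y, z = key
    return [((x + dx) % n, (y + dy) % n, (z + dz) % n)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
            if (dx, dy, dz) != (0, 0, 0)]
```

Because each cube holds only a handful of particles, the per-core integration step is tiny, and the neighbour list above is the complete set of cores it ever needs to exchange messages with.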
This 3.5-year studentship covers UK tuition fees and provides an annual tax-free stipend at the standard EPSRC rate, which is £15,009 for 2019/20.
Applicants must be UK residents with no restrictions on how long they can stay in the UK, and must have lived in the UK for at least 3 years prior to the start of the studentship. This residence must not have been mainly for the purpose of receiving full-time education.
For further guidance on funding, please contact [email protected]