Human experts have opinions that are the product of enviable experience and pronounced cognitive biases. Calibrated computer simulations can make impressively precise predictions, albeit with questionable accuracy. Given that experts and computers each answer queries with different, unknown biases and accuracies, it is unclear how best to aggregate the information to assess what we know and what we would gain from further input from either a computer or an expert.
Techniques (e.g., Gaussian Processes) exist to use data from either an expert or a computer and to interpolate between previously measured data-points. These techniques rely on some notion of the extent to which fluctuations in input parameters can result in fluctuations in output data. This notion can be captured mathematically in the “kernel”. It is possible to use recently-developed distributed numerical Bayesian techniques (Sequential Monte Carlo samplers) to efficiently search the huge space of kernels: these techniques are better able to exploit distributed hardware than pre-existing alternatives (e.g., Markov chain Monte Carlo). This ability to search efficiently is of paramount importance when the kernel also captures the extent to which biases can exist between the output of an expert and a computer.
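As a toy illustration of the interpolation idea above (not part of the project specification), the sketch below conditions a zero-mean Gaussian Process with an RBF kernel on a few observed points; the data, lengthscale and noise level are all invented for illustration:

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """RBF (squared-exponential) kernel: encodes how much the output is
    expected to fluctuate for a given fluctuation in the inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6, **kern):
    """Posterior mean of a zero-mean GP conditioned on (x_train, y_train)."""
    K = rbf_kernel(x_train, x_train, **kern) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train, **kern)
    return K_s @ np.linalg.solve(K, y_train)

# Interpolate between previously measured data points (hypothetical data).
x_obs = np.array([0.0, 1.0, 2.0, 3.0])
y_obs = np.sin(x_obs)
print(gp_posterior_mean(x_obs, y_obs, np.array([1.5])))
```

Choosing the kernel (its form and hyperparameters) is exactly the search problem the SMC samplers mentioned above are intended to make tractable.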
The problem of aggregating computer outputs and expert judgement is pertinent to Unilever’s ability to accelerate the development of new products. Such aggregation would make it possible to: ascertain the utility of requesting additional expert input and/or running additional computer simulations; estimate the biases present; identify the optimal compromise between trusting the experts and relying on the apparent fidelity of the computer simulations; and, ultimately, improve the acceptance and adoption of such simulations by scientists in their product design activities. Unilever will work with the student to identify and access data pertinent to a specific instance of this challenge, as well as to understand how the technology being developed could be deployed in the context of formulating new products.
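One deliberately simplified instance of the bias estimation described above: assume the expert’s answers differ from the simulator’s by a constant offset with a Gaussian prior, so the posterior over the bias is available in closed form. The data and noise levels below are invented for illustration:

```python
import numpy as np

# Hypothetical data: a simulator and an expert answer the same queries,
# with the expert systematically offset (biased) from the simulator.
rng = np.random.default_rng(0)
simulator = np.array([2.0, 3.1, 4.2, 5.0, 6.3])
true_bias = 0.8
expert = simulator + true_bias + rng.normal(0.0, 0.1, size=simulator.size)

# Conjugate Bayesian update for a constant bias b with a N(0, tau^2)
# prior and known observation noise sigma^2: the posterior over b is
# Gaussian with the mean and variance computed below.
tau2, sigma2 = 10.0, 0.1 ** 2
resid = expert - simulator
post_var = 1.0 / (1.0 / tau2 + resid.size / sigma2)
post_mean = post_var * resid.sum() / sigma2
print(f"estimated bias: {post_mean:.3f} +/- {np.sqrt(post_var):.3f}")
```

In the project itself the bias would be captured within the kernel and need not be constant; this closed-form special case simply shows how observed disagreements translate into a posterior over the bias.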
This project is part of the EPSRC Funded CDT in Distributed Algorithms: The What, How and where of Next-Generation Data Science. https://www.liverpool.ac.uk/research/research-at-liverpool/research-themes/digital/cdt-distributed-algorithms/
The University of Liverpool is working in partnership with the STFC Hartree Centre and other industrial partners from the manufacturing, defence and security sectors to provide an innovative 4-year PhD training course that will equip over 60 students with the essential skills needed to become future leaders in data science, be it in academia or industry.
Every project within the centre is offered in collaboration with an industrial partner who, as well as providing co-supervision, will offer students the unique opportunity to access state-of-the-art computing platforms and to work on real-world problems, benchmarks and data. Our graduates will gain unparalleled experience working across academic disciplines in highly sought-after topic areas, answering industry needs.
As well as learning from academic and industrial world leaders, the centre has a dedicated programme of interdisciplinary research training, including the opportunity to undertake modules at the global pinnacle of data science teaching. Many events and training sessions are undertaken as a cohort of PhD students, allowing you to build personal and professional relationships that we hope will lead to research collaboration, now or in the future.
The learning nurtured at this centre will be based upon anticipation of the hardware resources that will be arriving on students’ desks after they graduate, rather than the hardware available today.
For informal enquiries please contact Dr Xinping Yi ([email protected]) or [email protected]
To apply for this studentship please submit an application for an Electrical Engineering PhD via our online platform (https://www.liverpool.ac.uk/study/postgraduate-research/how-to-apply/) and provide the studentship title and supervisor details when prompted. Should you wish to apply for more than one project, please provide a ranked list of those you are interested in.
For a full list of the entry criteria and a recruitment timeline (including interview dates, etc.), please see our website: https://www.liverpool.ac.uk/research/research-at-liverpool/research-themes/digital/cdt-distributed-algorithms/
Xinping Yi received his PhD in telecommunications from the École Nationale Supérieure des Télécommunications (now Télécom ParisTech), Paris, France, in 2014.
Since 2017, he has been a Lecturer in the Department of Electrical Engineering and Electronics at the University of Liverpool, Liverpool, UK. Prior to Liverpool, he was a research associate at Technische Universität Berlin, Berlin, Germany (2014-2017), a research assistant at EURECOM, Sophia Antipolis, France (2011-2014), and a research engineer at Huawei Technologies, Shenzhen, China (2009-2011).
His main research interests include information theory, graph theory, optimization and machine learning, and their applications in wireless communications and data science. In particular, his recent research activities lie in the theoretical understanding of deep learning, deep Gaussian processes, and Bayesian inference.