Summary
You will (i) develop and apply a data-driven approach to building statistical models that estimate the epicentre location and felt area of historical earthquakes, (ii) use modern data to calibrate the results against instrumentally recorded magnitudes in the UK, and (iii) quantify the associated uncertainties.
For information on how to apply, please read the guidance provided at https://www.ed.ac.uk/geosciences/study/degrees/research-degrees/phd-projects/physical-sciences?item=1485
Project background
The analysis of seismic hazard depends critically on the quality of the catalogue of events on which it is based. In the modern digital era, since around 1980, we can estimate earthquake source parameters very accurately using digital broadband recordings of the ground shaking. Charles Richter introduced the earthquake magnitude scale in 1935, based on continuous analogue recordings of local earthquakes, following the invention of the seismograph a few decades earlier. This means we have at most just over a century of direct physical recordings of ground shaking. Before that, we have only the historical archive, for example contemporary newspaper reports and early photographs. These sources can define the intensity of the felt and damaging effects as a proxy for ground motion.
The question is: how do we translate this archive into an estimate of the earthquake source parameters? This is important because the largest and often most damaging events occur very rarely, and hence often go unreported in the instrumental catalogue. The problem is particularly acute in the UK, an area of intermediate seismicity, where small earthquakes occur relatively commonly but larger ones recur on average on timescales greater than 100 years or so.
Recently, it has become possible to apply a data-driven approach to estimating the parameters of historical earthquakes from the archive. We use a Bayesian, data-driven approach to find the optimal elliptical boundary of the felt area at a given intensity of ground shaking, along with a family of possible solutions representing the uncertainty. The epicentre is defined by the centroid of the ellipse, and the magnitude is estimated by calibration against modern data.
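To make the geometry concrete, the sketch below shows one simple (non-Bayesian) way to summarise a set of sites reporting a given intensity as an ellipse: the centroid of the site coordinates acts as an epicentre estimate, and the covariance eigendecomposition gives the ellipse axes and hence a felt area. The function name, the scaling factor `k`, and the use of raw coordinate covariance are all illustrative assumptions; the project's actual software (in R) fits the boundary in a Bayesian framework with full uncertainty quantification.

```python
import numpy as np

def fit_felt_ellipse(lons, lats, k=2.0):
    """Summarise felt-intensity sites as an ellipse (illustrative sketch).

    Centroid = mean of site coordinates (crude epicentre estimate).
    Semi-axes = k standard deviations along the principal axes of the
    site scatter. Returns (centroid, (semi_major, semi_minor), area).
    Coordinates are treated as planar, which is only reasonable over
    the small distances typical of a UK felt area.
    """
    pts = np.column_stack([lons, lats])
    centroid = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    eigvals, _ = np.linalg.eigh(cov)       # ascending eigenvalues
    b, a = k * np.sqrt(eigvals)            # semi-minor, semi-major
    area = np.pi * a * b                   # felt area in degrees^2
    return centroid, (a, b), area
```

In practice the Bayesian fit replaces the single "best" ellipse with a family of candidate boundaries weighted by their posterior probability, which is what delivers the uncertainty estimates described above.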
Research questions
Can we improve the accuracy of felt area as a proxy for earthquake magnitude?
Can we improve our estimates of epicentre location?
What are the uncertainties involved?
Does this change our estimates of the likelihood of large events in the UK?
Methodology
You will complete a literature review, and familiarise yourself with software in the statistical package 'R' already developed in a previous project funded by the NERC 'Probability and Uncertainty in Risk Estimation' programme. You will develop and apply the method to raw data provided by the British Geological Survey, and hence determine the optimal felt area, epicentre and depth for historical events in the UK, along with their uncertainties. From there you will use the felt area as a proxy for magnitude by developing an appropriate calibration against modern data. You will use a data-driven approach, informed by prior knowledge and experience in a Bayesian framework, and critically analyse the effect of prior assumptions on the results. Specifically, you will calculate the aleatory uncertainty (random or statistical error) using Markov chain Monte Carlo (MCMC) simulation, and the epistemic uncertainty (due to lack of knowledge or data) by varying the choice of prior probability. While the emphasis is on UK seismicity, you will also calibrate the results with data from a region with good instrumental and historical data, e.g. Italy. You will interact with an engaged supervisory team of seismologists and statisticians, and become a member of a growing group of statistical seismologists hosted in the Schools of GeoSciences and Mathematics (see e.g. complementary work in ref 1). As such you will be plugged into a global movement galvanised by the possibilities that have opened up in data-driven science.
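The calibration and MCMC steps described above can be sketched as follows. A hypothetical linear model M = alpha + beta * log10(A) + noise relates magnitude to felt area A; a Metropolis sampler draws from the posterior (aleatory uncertainty), and re-running with a different prior width probes the epistemic sensitivity. The model form, parameter names, and Gaussian prior are illustrative assumptions, not the project's actual calibration.

```python
import numpy as np

def mcmc_calibration(logA, M, n_iter=20000, prior_sd=10.0, seed=0):
    """Metropolis sampler for M = alpha + beta*log10(A) + N(0, sigma^2).

    Illustrative sketch: theta = [alpha, beta, log_sigma], with an
    independent Gaussian prior of width prior_sd on each component.
    Varying prior_sd across runs probes epistemic uncertainty; the
    spread of the returned samples reflects aleatory uncertainty.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(3)

    def log_post(t):
        alpha, beta, log_sigma = t
        sigma = np.exp(log_sigma)
        resid = M - (alpha + beta * logA)
        loglik = -len(M) * log_sigma - 0.5 * np.sum(resid**2) / sigma**2
        logprior = -0.5 * np.sum(t**2) / prior_sd**2
        return loglik + logprior

    lp = log_post(theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, 0.05, size=3)  # random-walk step
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples[n_iter // 2:])            # discard burn-in
```

A fitted model of this kind would then be inverted: given the felt area of a historical event, the posterior samples of alpha and beta yield a distribution of plausible magnitudes rather than a single point estimate.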
Training
A comprehensive training programme will be provided, comprising both specialist scientific training and generic transferable and professional skills, through attendance at relevant University-level and School courses. Key project-specific training will be given in handling the existing software in terms of theory, methods, algorithms and visualisation. The data analysis will be carried out using Bayesian techniques that account formally for prior information and constraints, after appropriate training. The time spent at BGS will provide the student with an opportunity to become familiar with the nature of the primary data and the present state of the art, and to receive guidance and insight into the practical significance of the results obtained.
Requirements
This is a fully funded project (fees, stipend and research expenses), but the funding only covers UK/Home fees. We may accept applications from Overseas students who are able to provide evidence, at the point of application, of additional funding to cover the Overseas fees. You will have a first degree in statistics, and be comfortable with the analysis of uncertainty in complex systems. You will also be competent in computer programming, and willing to learn more, especially in 'R'. Evidence of good communication skills in oral and written presentations would be a significant advantage.
Please read more at https://www.ed.ac.uk/geosciences/study/degrees/research-degrees/phd-projects/physical-sciences?item=1485