Osteoporosis is a common skeletal disorder that increases the risk of fractures and causes significant morbidity and mortality. Vertebral fractures (VFs) are often an early manifestation of osteoporosis but are significantly under-diagnosed in clinical practice. The University of Manchester is collaborating with Optasia Medical Ltd. and our local NHS trust to develop ASPIRE™, an outsourced radiology reporting service for VFs. ASPIRE™ uses appearance-modelling technology to assist radiologists by identifying VFs in computed tomography (CT) images. A first step in this process is to localise the overall path of the spine through the image, so that individual vertebrae can be identified and assessed.
Recently, convolutional neural networks (CNNs) have shown superior performance to appearance-modelling algorithms in many image analysis tasks. However, they do not yet equal the performance of human radiologists, so it is vital that they also provide a reliable measure of confidence, allowing errors to be identified and referred for manual correction; otherwise, extensive manual checking is required, adding to radiologist workloads. This project therefore has two aims: 1) to develop CNNs that identify the spine in CT image volumes; 2) to develop a novel, statistical understanding of CNNs that supports reliable error estimation.
During the ASPIRE™ project we have assembled a database of over 2000 CT images with vertebral annotations. The VerSe’19 Grand Challenge (verse2019.grand-challenge.org) has also made 120 CT volumes with detailed vertebral annotations available for the comparison of different segmentation approaches. The algorithms will be trained and tested on these images, allowing their accuracy to be compared with that of our current technology and of a variety of algorithms from other groups. The overall aim is to improve the accuracy and efficiency of ASPIRE™, reducing the cost to the NHS. However, successful development of error estimation techniques for CNNs would have broader implications for the entire computer vision research community.
Training/techniques to be provided:
The student will join a well-established, interdisciplinary team including academic, clinical and industrial partners. They will gain extensive knowledge of state-of-the-art computer vision algorithm development for clinical imaging problems. In particular, they will receive training in computer vision and medical image analysis, C/C++ and Python programming, and will gain experience of CNNs and of the statistical foundations of algorithm development necessary to develop error estimation methodologies for computer vision algorithms. The student will have the opportunity to gain experience of working with an industrial partner, and the translational skills required to transfer research results between the academic and industrial environments. Finally, they will work with our clinical partners from the Manchester Royal Infirmary and gain extensive knowledge of osteoporosis, particularly the clinical imaging, diagnosis and epidemiology of vertebral fractures and other spinal pathologies.
Candidates are expected to hold (or be about to obtain) a minimum upper second class honours degree (or equivalent) in a related area / subject. Candidates should be competent in C/C++ and Python programming. Previous experience of medical image analysis would be an advantage.
For international students we also offer a unique 4-year PhD programme that gives you the opportunity to undertake an accredited Teaching Certificate whilst carrying out an independent research project across a range of biological, medical and health sciences. For more information please visit http://www.internationalphd.manchester.ac.uk
Applications are invited from self-funded students. This project has a Band 1 fee. Details of our different fee bands can be found on our website. For information on how to apply for this project, please visit the Faculty of Biology, Medicine and Health Doctoral Academy website.
As an equal opportunities institution we welcome applicants from all sections of the community regardless of gender, ethnicity, disability, sexual orientation and transgender status. All appointments are made on merit.