
Explainable AI: Improving Understanding of Automated Medical Image Classification

   School of Computing, Engineering & the Built Environment

This project is no longer listed and may not be available.

Supervisors: Dr Sajid Nazir, Dr Diane Dickson, Prof Mike Mannion
Applications accepted all year round
Self-Funded PhD Students Only

About the Project

Project Reference Number: SCEBE-20-025-SNSF


Many imaging modalities, such as conventional imaging, Ultrasound (US), Magnetic Resonance Imaging (MRI), and Computed Tomography (CT), are routinely used by clinicians to determine the presence and nature of a disease. Artificial Intelligence (AI) applications in medical imaging are based on complex models that provide state-of-the-art results in image classification, positively identifying the onset of a disease. Recent AI techniques have transformed the classification process and can provide results comparable to, or better than, a human expert. However, unlike earlier machine learning models that offered explainability, it is no longer possible to observe and trace the steps a model follows.

The complexity of these models and their ‘black box’ nature make it difficult to understand why a model made a particular decision, which in turn undermines trust. This matters because a correct diagnosis is critical: an incorrect diagnosis can be fatal, or can result in unnecessary or inappropriate treatment that affects patient outcomes. Explainable AI aims to make it easier for the human in the loop to understand not only ‘what’ decision was made but ‘why’ it was made. This makes the classification process understandable and increases the degree of trust and confidence in the output of an AI-based system.
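To illustrate the kind of ‘why’ answer explainable AI can give, the sketch below implements occlusion sensitivity, one simple model-agnostic explanation technique: hide each region of an image in turn and measure how much the classifier's score drops, so large drops mark the regions the model relied on. This is only a minimal NumPy sketch under stated assumptions; the `toy_score` stand-in classifier, the patch size, and the function names are illustrative inventions, not the project's actual models or methods.

```python
import numpy as np

def occlusion_map(image, predict_fn, patch=4):
    """Occlusion sensitivity: slide a masking patch over the image and
    record how much the model's score drops when each region is hidden.
    Large drops mark the regions the classifier relied on."""
    base = predict_fn(image)            # score on the unmodified image
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0   # mask one region
            heat[i // patch, j // patch] = base - predict_fn(occluded)
    return heat

# Hypothetical stand-in for a black-box classifier: scores an 8x8 image
# by the mean intensity of its top-left quadrant.
def toy_score(img):
    return img[:4, :4].mean()

img = np.zeros((8, 8))
img[:4, :4] = 1.0                       # the bright region the 'model' responds to
heat = occlusion_map(img, toy_score, patch=4)
print(heat)                             # largest drop appears in the top-left cell
```

The resulting heatmap can be overlaid on the input image so a clinician can see which regions drove the decision, rather than receiving a bare class label.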


This project will develop algorithms and models for making the medical image classification process understandable for medical professionals. The major objectives are as follows:

(i) Critically analysing the state of the art and the knowledge gap in medical image classification and explainable AI in medical applications;

(ii) Designing a medical image analysis framework with justifiable explainability levels for medical experts;

(iii) Experimentally evaluating the effectiveness and performance of an explainable medical image classification system.

There is also a possibility of working with National Health Service (NHS) Scotland for this project.

Funding Notes

Applicants are expected to find external funding sources to cover the tuition fees and living expenses. Alumni and International students new to GCU who are self-funding are eligible for fee discounts. See more on fees and funding.


For further information on this project, please contact:
Director of Studies: Dr Sajid Nazir
2nd Supervisor: Dr Diane Dickson
3rd Supervisor: Professor Mike Mannion