
Accountability and transparency in AI: Application to Health Care Diagnostics


   School of Medicine, Medical Sciences & Nutrition

Supervisors: Prof C Black, Dr N Oren, Prof L Locock
Status: No more applications being accepted
Funding: Competition Funded PhD Project (Students Worldwide)

About the Project

The burden of multiple long-term conditions and an ageing population is placing ever-growing pressure on health care systems globally. Health care systems struggle to coordinate care, to identify those most in need and to diagnose early enough in the disease pathway to prevent illness or optimise treatment. Health care is rapidly digitalising through the use of electronic health records, digital diagnostics and, increasingly, mobile and sensor-based technologies, creating new opportunities for early diagnosis and person-centred care.

Artificial Intelligence (AI) is set to transform health care. Innovation projects are underway in Aberdeen to develop computer-assisted technology that detects abnormalities in X-ray and CT scan radiology images, assisting in the early diagnosis of breast cancer [iCAIRD North - Industrial Centre for Artificial Intelligence Research in Digital Diagnostics]. We are also developing clinical tools that use AI to better predict the risk of readmission to hospital by drawing together information from across electronic health records and laboratory test results [Aberdeen Centre for Health Data Science].

However, there is increasing recognition that AI systems should not be treated as a “black box”. For successful translation into clinical practice, trust, transparency and accountability of AI systems are emerging as key components [1]. A clear definition of an AI system's strengths and weaknesses, accuracy, bias and reliability is important. The optimal methods for explaining AI systems in health care remain uncertain, but there is growing evidence from other domains, including the automotive industry; Aberdeen has a programme of work developing and evaluating methods for AI accountability in driverless car technology.

This PhD offers a mixed-methods approach, bringing together a multidisciplinary supervisory team with expertise in computing science, qualitative methodology and health informatics within the Centre for Health Data Science. It comprises three phases:

1. Evidence review: using systematic literature review methodology to synthesise the growing published discussion of trust in AI for healthcare and to identify the key emerging themes.
2. Explanation and simulation: building on the computing science methodologies of argumentation [2], dialogue [3] and natural language generation [4], the PhD will apply and advance techniques for explaining why diagnostic actions are, or are not, taken in an AI diagnostic care pathway (workflow); a minimal illustrative sketch follows this list. The PhD will use 'desktop' simulations, both theoretical and through a simple AI 'model' of a diagnostic process built on synthetic health care data, allowing a care pathway to be simulated. Explanatory methods will then be applied to real-world exemplars drawn from existing AI projects within the Aberdeen Centre for Health Data Science, including, for example, iCAIRD, which is developing AI diagnostics with a focus on radiology-driven diagnosis in cancer, and the Modernising Outpatients programme, which is developing AI to support patient-centred care at the interface between hospitals and general practitioners.

3. Qualitative interviews: the PhD will use qualitative interviews to help understand what would support users to trust AI in clinical decision-making, and will use co-design techniques to work with clinicians, patients and healthcare planners to refine the AI system explanations for different audiences.
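To give a concrete flavour of phase 2, the sketch below uses Python to encode a toy, entirely hypothetical breast-screening decision as a Dung-style abstract argumentation framework [2], computes which actions survive scrutiny under grounded semantics, and renders a simple template-based explanation in the spirit of natural language generation [4]. The argument names ('order_biopsy', 'skip_biopsy', 'pattern_unreliable') and the attack rules are assumptions made for illustration only; they are not drawn from the iCAIRD or Centre systems.

```python
# Minimal sketch: explaining a diagnostic action with a Dung-style
# abstract argumentation framework [2] and a template-based explanation
# in the spirit of natural language generation [4].
# All argument names and attack rules below are hypothetical, invented
# for illustration; they do not describe any actual iCAIRD system.

# Arguments: a name mapped to a human-readable reason.
ARGUMENTS = {
    "order_biopsy": "the imaging model flagged a suspicious lesion",
    "skip_biopsy": "the lesion matches a known benign pattern",
    "pattern_unreliable": "the benign-pattern match is unreliable "
                          "for dense breast tissue",
}

# attacks[x] = the set of arguments that x attacks.
ATTACKS = {
    "skip_biopsy": {"order_biopsy"},
    "pattern_unreliable": {"skip_biopsy"},
}


def grounded_extension(arguments, attacks):
    """Compute the grounded extension (Dung 1995): repeatedly accept
    every argument all of whose attackers are already defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for arg in arguments:
            if arg in accepted or arg in defeated:
                continue
            attackers = {a for a, hit in attacks.items() if arg in hit}
            if attackers <= defeated:  # every attacker is defeated
                accepted.add(arg)
                defeated |= attacks.get(arg, set())
                changed = True
    return accepted


def explain(action, arguments, attacks):
    """Render a short natural-language explanation of why an action
    is, or is not, justified under grounded semantics."""
    accepted = grounded_extension(arguments, attacks)
    rebuttals = [a for a, hit in attacks.items() if action in hit]
    if action in accepted:
        if rebuttals:
            reasons = "; ".join(arguments[a] for a in rebuttals)
            return (f"'{action}' is justified: although {reasons}, "
                    "each counter-argument is itself defeated.")
        return f"'{action}' is justified: {arguments[action]}."
    reasons = "; ".join(arguments[a] for a in rebuttals if a in accepted)
    return f"'{action}' is not justified: {reasons}."


print(explain("order_biopsy", ARGUMENTS, ATTACKS))
print(explain("skip_biopsy", ARGUMENTS, ATTACKS))
```

Grounded semantics is deliberately sceptical: an action is reported as justified only when every counter-argument against it is itself defeated, a cautious standard of proof that fits clinical decision-making.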

The PhD also offers the opportunity for an industrial placement with Canon Medical (Dominic Ashmole, Aberdeen Office) to understand how the methodology developed in the PhD could be introduced into state-of-the-art AI-assisted digital health technologies. The PhD will include public engagement activity as part of the Aberdeen Centre for Health Data Science engagement programme.

As the diagnostic process becomes increasingly automated, the techniques developed in this PhD will be fundamental to improving acceptance by regulatory bodies, professionals and society at large. The opportunity for impact is high, and the PhD student will be able to engage with the UK health data science landscape through our Health Data Research UK (HDR UK) partner site status.

APPLICATION PROCEDURE:
This project is advertised in relation to the research areas of APPLIED HEALTH SCIENCE. Formal applications can be completed online: https://www.abdn.ac.uk/pgap/login.php. You should apply for Degree of Doctor of Philosophy in Applied Health Science, to ensure that your application is passed to the correct person for processing.

NOTE CLEARLY THE NAME OF THE SUPERVISOR AND EXACT PROJECT TITLE ON THE APPLICATION FORM.

Candidates should contact the lead supervisor to discuss the project in advance of submitting an application, as supervisors will be expected to provide a letter of support for suitable applicants. Candidates will be informed after the application deadline if they have been shortlisted for interview.

Funding Notes

This project is part of a competition funded by the Institute of Applied Health Sciences. Full funding is available to UK/EU candidates only. Overseas candidates can apply for this studentship but will have to find additional funding to cover the difference between overseas and home fees (approximately £15,680 per annum).

Candidates should have (or expect to achieve) a minimum of a First Class Honours degree in a relevant subject. Applicants with a minimum of a 2:1 Honours degree may be considered provided they have a Distinction at Masters level.

References

[1] Academy of Medical Sciences. 2018. Our data-driven future in healthcare. https://acmedsci.ac.uk/file-download/45020825
[2] Phan Minh Dung. 1995. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell. 77, 2 (September 1995), 321-357.
[3] Henry Prakken, Chris Reed, and Douglas Walton. 2005. Dialogues about the burden of proof. In Proceedings of the 10th international conference on Artificial intelligence and law (ICAIL '05). ACM, New York, NY, USA, 115-124.
[4] Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Cambridge University Press, New York, NY, USA.
