
  Rendering AI more transparent and explainable: Which explanations do humans prefer?


   Centre for Accountable, Responsible and Transparent AI

Supervisors: Dr Janina Hoffmann, Dr Özgür Şimşek
Application status: No more applications being accepted
Funding: Self-funded PhD students only

About the Project

Despite successful applications of artificial intelligence (AI) to tasks such as object labelling and face recognition, individuals, companies, and governmental agencies remain wary of deploying AI widely. Individuals often expect other people to provide more competent judgments than AI, especially in human-centred decisions such as hiring, even though those human decisions do not necessarily rest on more transparent or explainable procedures than an AI black box. To harness the collaborative benefits of human-AI teams, decision makers may therefore need a deeper understanding of when and why AI succeeds and humans fail, or vice versa.

In the search for more explainable and transparent AI, machine learning research has proposed a variety of techniques to enhance explainability, such as imposing causal structures or regularization. Some of these techniques increase transparency by generating simpler, rule-based explanations; others seek to render AI more transparent by providing examples or illustrations. Yet it is rarely tested which features make an AI explainable, and explainability does not, per se, enhance humans' trust in AI. Instead, human preferences for rule-based explanations or example-based learning may moderate the degree to which people trust AI advice. The current project aims to evaluate which features render AI more transparent and explainable by studying human preferences for distinct types of explanation. We experimentally explore the hypothesis that a match between humans' explanation preferences and the explanations provided about and by the AI enhances transparency and ultimately allows humans to better calibrate their trust in AI.
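For illustration only, the two explanation styles mentioned above can be sketched in a few lines of Python (a hypothetical example using scikit-learn, not material from the project itself): a shallow decision tree acts as a rule-based surrogate for a black-box classifier, while the nearest training case acts as an example-based explanation.

# Illustrative sketch only: two explanation styles for the same black-box model.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neighbors import NearestNeighbors

X, y = load_iris(return_X_y=True)

# Black-box model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# (a) Rule-based explanation: fit a shallow decision tree to the black box's
# predictions and print its rules as a global surrogate.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, black_box.predict(X))
print(export_text(surrogate))

# (b) Example-based explanation: justify one prediction by showing the most
# similar training case.
query = X[0:1]
_, idx = NearestNeighbors(n_neighbors=1).fit(X).kneighbors(query)
print("Prediction:", black_box.predict(query)[0], "- nearest training example:", X[idx[0][0]])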

Outcomes of the project may consist of (a) research papers that improve our understanding of how humans build trust in AI and (b) novel techniques to bridge the explainability divide between human decision makers and AI and to better calibrate trust in AI.

Applicants should hold, or expect to receive, a First or Upper Second Class Honours degree or a Master's degree in a relevant subject. You will also need to have taken a mathematics or quantitative methods course at university, or to have at least a grade B in A-level Mathematics or an international equivalent. Programming experience is also desirable.

Enquiries about the project should be sent to Dr Hoffmann.

Formal applications should be accompanied by a research proposal and made via the University of Bath's online application form. Further information about the application process is available on the University of Bath website.

Start date: Between 8 January and 30 September 2024.


Subject areas: Computer Science, Mathematics, Psychology

Funding Notes

We welcome applications from candidates who can source their own funding. Tuition fees for the 2023/24 academic year are £4,700 (full-time) for Home students and £26,600 (full-time) for International students. For information about eligibility for Home fee status, see https://www.bath.ac.uk/guides/understanding-your-tuition-fee-status/.
