
Rendering AI more transparent and explainable: Which explanations do humans prefer?


   Centre for Accountable, Responsible and Transparent AI

  Supervisor: Dr Janina Hoffmann | Applications accepted all year round | Competition Funded PhD Project (Students Worldwide)

About the Project

Despite successful applications of artificial intelligence (AI) in tasks such as object labelling and face recognition, individuals, companies, and governmental agencies remain wary of deploying AI widely. People often expect fellow humans to provide more competent judgments than AI, especially in human-centred decisions such as hiring, even though those human judgments do not necessarily rest on more transparent or explainable procedures than the AI black box. To harness the collaborative benefits of human-AI teams, decision makers may therefore need a deeper understanding of when and why AI succeeds where humans fail, and vice versa.

In the search for more explainable and transparent AI, machine learning and AI researchers have proposed a variety of techniques to enhance the explainability of AI systems, such as imposing causal structures or regularization. Some of these techniques increase transparency by generating simpler, rule-based explanations; others seek to render AI more transparent by providing examples or illustrations. Yet it is rarely tested which features actually make an AI explainable, and explainability does not enhance humans' trust in AI per se. Instead, human preferences for rule-based explanations or example-based learning may moderate the degree to which people trust AI advice. The current project aims to evaluate which features render AI more transparent and explainable by studying human preferences for distinct types of explanations. We will experimentally explore the hypothesis that a match between humans' explanation preferences and the explanations provided about and by the AI enhances transparency and ultimately allows humans to better calibrate their trust in AI.
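For illustration only, the contrast between the two explanation styles named above can be sketched in a few lines of Python, assuming scikit-learn and a toy dataset; none of these choices are part of the project itself.

# Minimal sketch (illustrative assumption, not project code): rule-based vs
# example-based explanations of a classifier's behaviour.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neighbors import NearestNeighbors

X, y = load_iris(return_X_y=True)

# Rule-based explanation: a shallow decision tree yields human-readable if-then rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))

# Example-based explanation: justify a judgment about one case by retrieving
# its most similar training instances.
nn = NearestNeighbors(n_neighbors=3).fit(X)
_, neighbour_idx = nn.kneighbors(X[:1])
print("Most similar training cases:", neighbour_idx[0])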

Outcomes of the project may consist of (a) research papers that improve our understanding of how humans build trust in AI and (b) novel techniques to bridge the explainability divide between human decision makers and AI and to better calibrate trust in AI.

This project is associated with the UKRI Centre for Doctoral Training (CDT) in Accountable, Responsible, and Transparent AI (ART-AI).

Applicants should hold, or expect to receive, a First or Upper Second Class Honours degree or a Master's degree in a relevant subject. You will also need to have taken a mathematics course or a quantitative methods course at university or have at least grade B in A level maths or international equivalent. Programming experience would also be desirable. 

Enquiries about the project should be sent to Dr Hoffmann.

Formal applications should be accompanied by a research proposal and made via the University of Bath’s online application form. Enquiries about the application process should be sent to .

Start date: 2 October 2023.


Funding Notes

ART-AI CDT studentships are available on a competition basis and applicants are advised to apply early as offers are made from January onwards. Funding will cover tuition fees and maintenance at the UKRI doctoral stipend rate (£17,668 per annum in 2022/23, increased annually in line with the GDP deflator) for up to 4 years.
We also welcome applications from candidates who can source their own funding.
