About the Project
Despite successful applications of artificial intelligence (AI) in tasks such as object labelling and face recognition, individuals, companies, and governmental agencies remain wary of deploying AI widely. Individuals often expect other people to provide more competent judgments than AI, especially in human-centred decisions such as hiring, even though those human judgments do not necessarily rest on more transparent or explainable procedures than the AI black box. To harness the collaborative benefits of human-AI teams, decision makers therefore need a deeper understanding of when and why AI succeeds and humans fail, or vice versa.
In the search for more explainable and transparent AI, machine learning research has proposed a variety of techniques, such as imposing causal structures or regularization, which increase transparency by generating simpler rule-based explanations; other techniques seek to render AI more transparent by providing examples or illustrations. Yet it is rarely tested which features actually make an AI explainable, and explainability does not, per se, enhance humans' trust in AI. Instead, human preferences for rule-based explanations or for example-based learning may moderate the degree to which people trust AI advice. The current project aims to evaluate which features render AI more transparent and explainable by studying human preferences for distinct types of explanations. We experimentally explore the hypothesis that a match between humans' explanation preferences and the explanations provided about and by the AI enhances transparency and ultimately allows humans to better calibrate their trust in AI.
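To make the contrast between the two explanation styles concrete, the following is a minimal Python sketch, not part of the project description itself: it pairs a rule-based surrogate explanation (a shallow decision tree mimicking a black-box classifier) with an example-based explanation (the most similar training cases). The dataset, model, and all parameter choices are illustrative assumptions only.

# Minimal sketch contrasting rule-based and example-based explanations.
# Dataset, model, and parameters are placeholder assumptions, not the
# project's actual setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# "Black box" model whose advice a human decision maker would receive.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rule-based explanation: fit a shallow tree to mimic the black box,
# then print its human-readable decision rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=list(X.columns)))

# Example-based explanation: retrieve the training cases most similar to
# the instance being explained, along with the model's advice for them.
instance = X.iloc[[0]]
nn = NearestNeighbors(n_neighbors=3).fit(X)
_, idx = nn.kneighbors(instance)
print("Advice for instance:", black_box.predict(instance)[0])
print("Similar cases' advice:", black_box.predict(X.iloc[idx[0]]))

In a study of the kind described above, outputs like these could stand in for the rule-based and example-based explanation conditions shown to participants.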
Outcomes of the project may consist of (a) research papers that improve our understanding of how humans build trust in AI and (b) novel techniques to bridge the explainability divide between human decision makers and AI and to better calibrate trust in AI.
This project is associated with the UKRI Centre for Doctoral Training (CDT) in Accountable, Responsible, and Transparent AI (ART-AI).
Applicants should hold, or expect to receive, a First or Upper Second Class Honours degree or a Master's degree in a relevant subject. You will also need to have taken a mathematics course or a quantitative methods course at university or have at least grade B in A level maths or international equivalent. Programming experience would also be desirable.
Enquiries about the project should be sent to Dr Hoffmann.
Formal applications should be accompanied by a research proposal and made via the University of Bath’s online application form. Enquiries about the application process should be sent to art-ai-applications@bath.ac.uk.
Start date: 2 October 2023.
Funding Notes
We also welcome applications from candidates who can source their own funding.