
  Natural language explanations for artificial intelligence


   Centre for Accountable, Responsible and Transparent AI

Supervisor: Dr Zheng Yuan
Competition Funded PhD Project (Students Worldwide)
No more applications being accepted

About the Project

In recent years, artificial intelligence (AI) has been applied successfully across a wide range of applications, driven by breakthroughs in deep learning (DL). Despite impressive performance, the decision-making processes of DL models generally remain opaque and uninterpretable to humans due to their ‘black-box’ nature.

Explainability is becoming an essential part of machine learning systems. This is especially important in domains such as healthcare, education, finance and law, where it is crucial to understand the decisions made by AI systems and to build trust in AI. Several directions for explainable artificial intelligence (XAI) have been explored, and the majority of existing methods provide explanations at the input feature level, assessing the importance or contribution of each input feature after the model has been trained and fixed. However, these methods may 1) fail to provide human-readable explanations, as the underlying features used by AI models can be hard to comprehend even for expert users (e.g. tokens for text and pixels for images); and 2) only detect incorrectly learned behaviour, without offering any general means of improvement.
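
To make that limitation concrete, the sketch below computes a feature-level explanation for a toy text classifier by occlusion: each token is deleted in turn and the change in the model's predicted probability is recorded. This is a minimal illustration only, assuming scikit-learn is available; the tiny training set is made up for the example and is not part of the project.

```python
# Minimal occlusion-based feature importance for a toy text classifier.
# The training data below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "great acting and a moving story",
    "wonderful film, loved it",
    "dull plot and wooden acting",
    "a boring, forgettable movie",
]
train_labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# Train a fixed 'black-box' model; the explanation is computed post hoc.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def occlusion_importance(text):
    """Score each token by the drop in P(positive) when that token is removed."""
    tokens = text.split()
    base = model.predict_proba([text])[0, 1]
    scores = []
    for i, token in enumerate(tokens):
        occluded = " ".join(tokens[:i] + tokens[i + 1:])
        scores.append((token, base - model.predict_proba([occluded])[0, 1]))
    return scores

# The 'explanation' is a list of per-token scores, not a human-readable argument.
for token, score in occlusion_importance("a moving story with dull acting"):
    print(f"{token:>8}  {score:+.3f}")
```

Even on this toy model, the output is a table of token scores rather than an argument a non-expert could follow, which is precisely the gap that natural language explanations aim to close.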

As an appealing new research direction, this project will focus on generating human-friendly, comprehensive natural language explanations (NLEs) for AI. NLEs typically consist of natural language sentences that give human-like arguments in support of a decision or prediction. In particular, the aim of this project is to develop AI models that use NLEs to achieve better performance, counteract existing biases in the data, and provide human-readable explanations for their decisions. The resulting models will both learn from explanations and produce them, in much the same way that humans learn from explanations and explain their own decisions.

The project will focus on natural language processing as the primary application area and will start from publicly available NLE datasets. However, the models and techniques produced will be sufficiently generic to be applied to other areas, such as computer vision, speech processing, policy learning, and planning.
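
For instance, e-SNLI (Camburu et al., 2018) is one publicly available NLE dataset. The sketch below shows one way such data might be cast into input/output text pairs for a sequence-to-sequence model that predicts a label together with an explanation. It is illustrative only: it assumes the Hugging Face datasets library, the esnli dataset identifier and its field names (premise, hypothesis, label, explanation_1), and the prompt format is an arbitrary choice rather than one prescribed by the project.

```python
# Sketch: turning an NLE dataset (e-SNLI) into text-to-text pairs for a model
# that predicts a label together with a natural language explanation.
from datasets import load_dataset

LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

# e-SNLI pairs each premise/hypothesis with a human-written explanation.
esnli = load_dataset("esnli", split="train")

def to_text_pair(example):
    """Map one e-SNLI example to a (source, target) pair for a seq2seq model."""
    source = (f"explain nli premise: {example['premise']} "
              f"hypothesis: {example['hypothesis']}")
    target = f"{LABELS[example['label']]} because {example['explanation_1']}"
    return {"source": source, "target": target}

pairs = esnli.map(to_text_pair)
print(pairs[0]["source"])
print(pairs[0]["target"])
```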

This project is associated with the UKRI Centre for Doctoral Training (CDT) in Accountable, Responsible and Transparent AI (ART-AI).

Applicants should hold, or expect to receive, a First or Upper Second Class Honours degree. A master’s level qualification would also be advantageous. Desirable qualities in candidates include intellectual curiosity, a strong background in mathematics, and programming experience.

Informal enquiries about the research or project should be directed to Dr Zheng Yuan.

Formal applications should be accompanied by a research proposal and made via the University of Bath’s online application form. Enquiries about the application process should be sent to [Email Address Removed].

Start date: 3 October 2022.



Funding Notes

ART-AI CDT studentships are available on a competition basis and applicants are advised to apply early as offers are made from January onwards. Funding will cover tuition fees and maintenance at the UKRI doctoral stipend rate (£15,609 per annum in 2021/22, increased annually in line with the GDP deflator) for up to 4 years.
We also welcome applications from candidates who can source their own funding.

References

• Arrieta et al. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 2020.
• Camburu et al. e-SNLI: Natural Language Inference with Natural Language Explanations. NeurIPS 2018.
• Liu et al. Towards Explainable NLP: A Generative Explanation Framework for Text Classification. ACL 2019.

Where will I study?