
  Automatic Testing and Fixing Learning-based Conversational Agents with Knowledge Graphs


   UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence

This project is no longer listed on FindAPhD.com and may not be available.

Supervisor: Dr Jie Zhang
Applications accepted all year round
Funded PhD Project (UK Students Only)

About the Project

This project is part of a unique cohort-based, 4-year, fully funded PhD programme at the UKRI Centre for Doctoral Training in Safe and Trusted AI.

The UKRI Centre for Doctoral Training in Safe and Trusted AI brings together world-leading experts from King’s College London and Imperial College London to train a new generation of researchers, and focuses on the use of symbolic artificial intelligence to ensure the safety and trustworthiness of AI systems.

Project Description:

Background: Learning-based conversational agents can generate conversations that violate basic logical rules and common sense, which seriously harms user experience and leads to mistrust and frustration. To create accurate, smart, and trustworthy conversational agents, it is essential to evaluate them adequately. Existing evaluation methods rely mainly on goal completion rate, customer satisfaction, or automatic similarity comparison between agent responses and human responses. These methods are labour-intensive, slow, and not scalable, and few existing works automatically and systematically test learning-based conversational agents.

Proposal: This project aims to automatically test and improve learning-based conversational agents via knowledge graphs. We will develop knowledge-graph-based test input generation techniques for conversational agents, and design test coverage criteria based on graph coverage criteria. Test oracles will be derived from knowledge graphs as well as from metamorphic testing techniques. The automatically generated test inputs and oracles will also be used to augment training data or fine-tune the model, improving the performance of conversational agents.

In addition, the test-oracle derivation approach can also support real-time testing and fixing of agent conversations. Unlike the offline testing described above, the agent's responses to user inputs are compared against test oracles derived from the knowledge graphs. Once an oracle is violated, we use it to guide response editing, producing more correct and trustworthy agent responses that obey logical rules and common sense.

WP1: Knowledge-graph-based test input generation for learning-based conversational agents. This work package focuses on testing conversational agents offline. Its primary contents are knowledge-graph-based coverage criteria, graph node mutation for test input generation, and automatic test oracles derived through metamorphic relations.
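To make the WP1 ideas concrete, here is a minimal, purely illustrative sketch of how a toy knowledge graph of triples could yield test questions with oracles: each fact becomes a positive test, node mutation (swapping the object for another graph node) produces negative tests, and a metamorphic relation (rephrasing must not change the answer) supplies an additional oracle. The triple format, question templates, and function names are assumptions for illustration, not part of the project.

```python
# Toy knowledge graph as (subject, relation, object) triples.
KG = [
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("Rome", "capital_of", "Italy"),
]

def generate_tests(kg):
    """Turn each triple into a test question with a ground-truth oracle,
    plus mutated variants whose expected answer flips."""
    tests = []
    for subj, rel, obj in kg:
        if rel != "capital_of":
            continue
        # Original fact -> expected answer "yes".
        tests.append((f"Is {subj} the capital of {obj}?", "yes"))
        # Node mutation: replace the object with another graph node,
        # producing a negative test whose expected answer is "no".
        for _, _, other in kg:
            if other != obj:
                tests.append((f"Is {subj} the capital of {other}?", "no"))
    return tests

def metamorphic_pair(question):
    """Metamorphic relation: a rephrased question must receive
    the same answer as the original."""
    body = question[len("Is "):-1]          # strip "Is " and trailing "?"
    subj, obj = body.split(" the capital of ")
    return f"Is {obj}'s capital {subj}?"

tests = generate_tests(KG)
```

A coverage criterion in this setting could then be defined over the graph itself, e.g. the fraction of nodes or edges exercised by the generated questions.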

WP2: Training data augmentation for more logical and trustworthy conversational agents. This work package uses the test inputs and oracles generated in WP1 to either augment the training data or fine-tune the learning model, improving the performance of conversational agents.
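One plausible reading of WP2, sketched below under assumed names: run the generated tests against the current model and fold each failing (question, expected-answer) pair back in as a supervised training example. The `agent` stub stands in for any learned conversational model; a real pipeline would fine-tune on the resulting examples rather than just collect them.

```python
def agent(question):
    """Placeholder model under test; always answers 'yes',
    so every negative test will fail."""
    return "yes"

def augment_training_data(tests, model, base_data=()):
    """Run the generated tests; each failing (question, expected) pair
    becomes a new supervised training example."""
    new_examples = []
    for question, expected in tests:
        if model(question) != expected:
            new_examples.append({"input": question, "target": expected})
    return list(base_data) + new_examples

sample_tests = [
    ("Is Paris the capital of France?", "yes"),
    ("Is Paris the capital of Germany?", "no"),
]
augmented = augment_training_data(sample_tests, agent)
```

Targeting only the failing cases keeps the augmented data focused on the logical errors the knowledge graph exposed.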

WP3: Testing and fixing chatbot conversations on the fly. This work package targets the testing and fixing of agent conversations with real users. Instead of generating test inputs to test conversational agents systematically, it checks whether the chatbots under test respond logically to real user inputs. To enable real-time testing, test oracles will be generated automatically through knowledge graph retrieval. Once an illogical response is detected, it will be fixed under the guidance of the test oracles, and the repaired response will replace the original buggy response before being handed to the end user.
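The on-the-fly check-and-repair loop of WP3 might look roughly like the following sketch. The knowledge-graph lookup, response templates, and containment check are toy stand-ins: a real system would retrieve from a large knowledge graph, extract the queried relation and entity from the user's utterance, and edit the agent's response rather than replace it wholesale.

```python
# Toy knowledge graph keyed by (relation, entity).
KG = {("capital_of", "France"): "Paris", ("capital_of", "Germany"): "Berlin"}

def derive_oracle(relation, entity):
    """Retrieve the expected answer for a user question from the KG."""
    return KG.get((relation, entity))

def check_and_fix(agent_response, relation, entity):
    """If the agent's response contradicts the oracle, return a repaired
    response; otherwise hand the original response through unchanged."""
    oracle = derive_oracle(relation, entity)
    if oracle is None or oracle.lower() in agent_response.lower():
        return agent_response  # no oracle available, or response agrees
    # Oracle violated: rewrite the response guided by the expected answer.
    return f"The capital of {entity} is {oracle}."

fixed = check_and_fix("The capital of France is Lyon.",
                      "capital_of", "France")
```

Because the repair happens before the response reaches the user, the buggy answer is never shown, at the cost of one knowledge-graph retrieval per turn.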

How to Apply:

The deadline for Round A for entry in October 2023 is Monday 28th November. See here for further round deadlines.

We will also be holding an online information session on Tuesday 15 November 2022, 1-2pm: UKRI Centre for Doctoral Training in Safe and Trusted AI - Info Session (tickets via Eventbrite).

Committed to providing an inclusive environment in which diverse students can thrive, we particularly encourage applications from women, disabled and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector.

We encourage you to contact Dr Jie Zhang ([Email Address Removed]) to discuss your interest before you apply, quoting the project code: STAI-CDT-2023-KCL-10.

When you are ready to apply, please follow the application steps on our website here. Our 'How to Apply' page offers further guidance on the PhD application process.


Funding Notes

See Fees and Funding section for more information - https://safeandtrustedai.org/apply-now/

References

Maroengsit, Wari, Thanarath Piyakulpinyo, Korawat Phonyiam, Suporn Pongnumkul, Pimwadee Chaovalit, and Thanaruk Theeramunkong. “A survey on evaluation methods for chatbots.” In Proceedings of the 2019 7th International conference on information and education technology, pp. 111-119. 2019.
Liu, Haochen, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, and Jiliang Tang. “Does Gender Matter? Towards Fairness in Dialogue Systems.” In Proceedings of the 28th International Conference on Computational Linguistics, pp. 4403-4416. 2020.
Radziwill, Nicole, and Morgan Benton. “Evaluating Quality of Chatbots and Intelligent Conversational Agents.” Software Quality Professional 19, no. 3 (2017).