Prof Carmine Ventre
Applications accepted all year round
Funded PhD Project (UK Students Only)

About the Project

This project is part of a unique cohort-based, 4-year, fully funded PhD programme at the UKRI Centre for Doctoral Training in Safe and Trusted AI.

The UKRI Centre for Doctoral Training in Safe and Trusted AI brings together world-leading experts from King’s College London and Imperial College London to train a new generation of researchers. It focuses on the use of symbolic artificial intelligence to ensure the safety and trustworthiness of AI systems.

Project Description:

AI systems often collect their input from humans. For example, parents are asked to input their preferences over primary schools before a centralised algorithm allocates children to schools. Should the AI trust the input provided by parents who may try to game the system? Should the parents trust that the AI system has optimised their interests? Would it be safe to run the algorithm with a potentially misleading input?
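For intuition, here is a minimal Python sketch of student-proposing deferred acceptance, the kind of centralised algorithm behind many school allocation systems. This is purely illustrative: the advert does not name a specific mechanism, and all students, schools, preferences and capacities below are made up.

```python
# A minimal sketch of student-proposing deferred acceptance (Gale-Shapley),
# the kind of mechanism behind many centralised school allocation systems.
# Illustrative only: the advert does not specify which algorithm is used.

def deferred_acceptance(student_prefs, school_prefs, capacities):
    """student_prefs: dict student -> list of schools, best first.
    school_prefs: dict school -> list of students, best first.
    capacities: dict school -> number of seats."""
    rank = {s: {stu: i for i, stu in enumerate(prefs)}
            for s, prefs in school_prefs.items()}
    next_choice = {stu: 0 for stu in student_prefs}  # next school to propose to
    held = {s: [] for s in school_prefs}             # tentatively admitted students
    free = list(student_prefs)
    while free:
        stu = free.pop()
        if next_choice[stu] >= len(student_prefs[stu]):
            continue                                 # list exhausted: stays unmatched
        school = student_prefs[stu][next_choice[stu]]
        next_choice[stu] += 1
        held[school].append(stu)
        held[school].sort(key=lambda x: rank[school][x])
        while len(held[school]) > capacities[school]:
            free.append(held[school].pop())          # reject the least preferred
    return held

students = {"ann": ["north", "south"], "bob": ["north", "south"],
            "cat": ["south", "north"]}
schools = {"north": ["bob", "ann", "cat"], "south": ["ann", "cat", "bob"]}
print(deferred_acceptance(students, schools, {"north": 1, "south": 2}))
# -> {'north': ['bob'], 'south': ['ann', 'cat']}
```

A classical result is that the student-proposing variant is strategyproof for the students: reporting true preferences is a dominant strategy, which bears directly on the trust questions above.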

Algorithmic Game Theory (AGT) is a research field that aims to make AI systems safe and trustworthy in the face of strategic reasoning. With its set of symbolic tools, one aims to align the goals of the AI system (e.g., the allocation algorithm above) with those of the agents involved (e.g., the parents above). The AI will then be safe, in that we can analytically predict the end states of the system, and trustworthy, since no rational agent will attempt to mislead the system, which can therefore operate on truthful inputs.
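As a toy illustration of such alignment (our own example, not taken from the advert), consider the sealed-bid second-price (Vickrey) auction: the highest bidder wins but pays the second-highest bid, which makes bidding one's true value a dominant strategy. All values and bids below are illustrative assumptions.

```python
# Sealed-bid second-price (Vickrey) auction: truthful bidding is dominant,
# so the mechanism can safely trust the inputs it receives.

def second_price_auction(bids):
    """bids: dict bidder -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

def utility(value, my_bid, rival_bids):
    winner, price = second_price_auction({"me": my_bid, **rival_bids})
    return value - price if winner == "me" else 0.0

value = 10.0  # my true value for the item
for rivals in [{"a": 7.0, "b": 8.0}, {"a": 7.0, "b": 12.0}]:
    # utilities from under-bidding, truthful bidding and over-bidding
    print([utility(value, bid, rivals) for bid in (5.0, 10.0, 15.0)])
# [0.0, 2.0, 2.0] and [0.0, 0.0, -2.0]: the truthful bid is never beaten.
```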

One assumption underlying much of the work in AGT is, however, quite limiting: agents are required to be fully rational. This is unrealistic in many real-life scenarios; indeed, there is empirical evidence that people often misunderstand the incentives and try to game the system even when doing so is against their own interests. Moreover, modern software agents, often built on top of AI tools, are seldom able to perfectly optimise their rewards.

This project will explore novel approaches to dealing with imperfect rationality, including the analysis of known AI systems and the design of new ones. The work will be theoretical, building on recent advances in mechanism design for imperfectly rational agents (namely, obvious strategyproofness and not obvious manipulability) to cover richer domains and to model further behavioural biases in mechanism design.
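To give a flavour of obvious strategyproofness (a notion due to Li, 2017), here is a hedged sketch of an ascending clock auction: it yields essentially the Vickrey outcome, but the dominant strategy ("stay in while the price is below your value") is obvious in the technical sense that, at every point, its worst case is at least as good as the best case of any deviation. The values and price step below are illustrative assumptions.

```python
# Ascending clock auction: an "obviously strategyproof" implementation of
# the second-price outcome. Values and price step are illustrative.

def ascending_clock_auction(values, step=1.0):
    """values: dict bidder -> private value. Each bidder follows the
    obviously dominant strategy: stay in while the clock price is
    below their value. Returns (winner, price paid)."""
    price = 0.0
    active = set(values)
    while len(active) > 1:
        price += step
        drop = {b for b in active if values[b] < price}
        if drop == active:        # all remaining would quit: break the tie
            break
        active -= drop
    winner = max(active, key=values.get)
    return winner, price

print(ascending_clock_auction({"ann": 7.0, "bob": 10.0, "cat": 4.0}))
# -> ('bob', 8.0): bob wins at roughly the second-highest value, as in Vickrey.
```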

How to Apply:

The deadline for Round A, for entry in October 2023, is Monday 28 November 2022. See here for further round deadlines.

We will also be holding an online information session on Tuesday 15 November 2022, 1-2pm; tickets are available via Eventbrite.

Committed to providing an inclusive environment in which diverse students can thrive, we particularly encourage applications from women, disabled and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector.

We encourage you to contact Prof Carmine Ventre to discuss your interest before you apply, quoting the project code: STAI-CDT-2023-KCL-13.

When you are ready to apply, please follow the application steps on our website here. Our 'How to Apply' page offers further guidance on the PhD application process.


Funding Notes

For more information on funding, please see the Fees and Funding section of our website.

References

https://kclpure.kcl.ac.uk/portal/files/174629484/OSP_MOR.pdf
https://link.springer.com/article/10.1007/s00224-022-10071-2
https://arxiv.org/pdf/2202.06660.pdf
https://arxiv.org/pdf/2007.11868.pdf
