The UKRI Centre for Doctoral Training (CDT) in Safe and Trusted Artificial Intelligence (STAI) brings together world-leading experts from King's College London and Imperial College London to train a new generation of researchers in safe and trusted artificial intelligence (AI).
AI technologies are increasingly ubiquitous in modern society, with the potential to fundamentally change all aspects of our lives. While there is great interest in deploying AI in existing and new applications, serious concerns remain about the safety and trustworthiness of current AI technologies. These concerns are well-founded: there is now ample evidence in several application domains (autonomous vehicles, image recognition, etc.) that AI systems may currently be unsafe because of the lack of assurance over their behaviour.
Even in areas where AI methods function to high standards of correctness, there remain challenges. AI decisions are often not explained to users, do not always appear to adhere to social norms and conventions, can be distorted by bias in their data or algorithms and, at times, cannot even be understood by their engineers.
The overarching aim of the CDT is to train the first generation of AI scientists and engineers in methods of safe and trusted AI. An AI system is considered to be safe when we can provide some assurance about the correctness of its behaviour, and it is considered to be trusted if the average user can have confidence in the system and its decision making.
The CDT offers a unique four-year PhD programme, focussed on the use of model-based AI techniques for ensuring the safety and trustworthiness of AI systems. Model-based AI techniques provide an explicit language for representing, analysing and reasoning about systems and their behaviours.
Students will engage in various training activities (e.g. technical training in model-based techniques for safe and trusted AI, philosophy and ethics of AI, entrepreneurial mindset training) alongside their individual PhD project, ensuring that not only are they trained in state-of-the-art AI techniques, but also that they acquire a deep understanding of ethical, societal, and legal implications of AI in a research and industrial setting.
Through engagement with the STAI CDT's diverse range of industrial partners, students will be exposed to the different experiences, challenges, and technical problems involved in both start-ups and large corporations. Students will graduate as experts in safe and trusted AI, able to consider the implications of AI systems in a deep and serious fashion, to recognise safety and trust as a key part of the AI development process, and equipped to meet the needs of industrial and public sector organisations.
The CDT will fund approximately 12 students to join the programme in September 2020. Studentships are four-year awards; fully-funded studentships include tuition fees, a competitive tax-free stipend set at the UKRI rate plus London weighting, and a generous allowance for research consumables and conference travel.
Applications are now open for September 2020 entry. The CDT will consider applications in several rounds, until all CDT places have been filled. Application deadlines for each round are indicated here; the next application deadline is 20 April 2020.
Committed to providing an inclusive environment in which diverse students can thrive, the CDT particularly encourages applications from women, disabled and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector.
For further details on how to apply, please follow this link:
For any queries, please contact the CDT office: