About the Project
This project is part of a unique cohort-based, 4-year, fully-funded PhD programme at the UKRI Centre for Doctoral Training in Safe and Trusted AI.
The UKRI Centre for Doctoral Training in Safe and Trusted AI brings together world-leading experts from King’s College London and Imperial College London to train a new generation of researchers, and is focused on the use of symbolic artificial intelligence for ensuring the safety and trustworthiness of AI systems.
Project Description:
Data-driven approaches have proven powerful for security detection tasks, e.g., malware detection [a]. However, the arms race between attackers and defenders causes an ever-growing distribution shift in the main characteristics of the detection task, even though the underlying expert knowledge changes only slowly.
This project will define a new symbolic AI framework, based on logic-based reasoning over expert knowledge in the security domain, to devise techniques for understanding the root causes of distribution shift [b,c,d]. A central question is how to adapt reasoning frameworks so that they embed expert knowledge and can verify which type of drift is occurring (covariate shift, label shift, or concept shift [b]). This reasoning framework will also rely on knowledge extracted from threat reports.
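As an illustration only (this sketch is not part of the project description), the three drift types in [b] can be made concrete on a toy detection task: covariate shift changes P(x), label shift changes P(y), and concept shift changes P(y|x). The hypothetical Python snippet below simulates a shifted test set for a one-dimensional "malware feature" and flags the change in the feature marginal with a two-sample Kolmogorov–Smirnov test.

```python
# Illustrative only: toy 1-D "malware detection" data and simple drift checks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sample(n, p_malware=0.3, feature_shift=0.0, flip_rate=0.0):
    """Draw (feature, label) pairs from a toy detection task.

    p_malware     controls P(y)      -> changing it simulates label shift
    feature_shift moves P(x | y)     -> changing it simulates covariate shift
    flip_rate     perturbs P(y | x)  -> changing it simulates concept shift
    """
    y = (rng.random(n) < p_malware).astype(int)
    x = rng.normal(np.where(y == 1, 2.0, 0.0) + feature_shift, 1.0, n)
    y = np.where(rng.random(n) < flip_rate, 1 - y, y)
    return x, y

x_tr, y_tr = sample(5000)
x_te, y_te = sample(5000, feature_shift=1.0)   # simulate covariate shift at test time

ks_stat, p_value = stats.ks_2samp(x_tr, x_te)  # two-sample test on the feature marginal P(x)
print(f"P(x) drift:  KS={ks_stat:.3f}, p={p_value:.1e}")
print(f"P(y) drift:  P(y=1) train={y_tr.mean():.2f}, test={y_te.mean():.2f}")
```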
The final objective of this symbolic AI framework is to gain a deeper understanding of the phenomenon and its root causes, as well as to identify logic-based constraints that could later be embedded in data-driven algorithms to improve their resilience against distribution shift.
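To make "embedding logic-based constraints in data-driven algorithms" concrete, the hedged sketch below (my illustration, not the project's method; the feature names and the rule are hypothetical) adds one expert rule as a soft penalty to a logistic-regression loss trained by gradient descent.

```python
# Illustrative only: one expert rule embedded as a soft constraint in a classifier's loss.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical binary features: [writes_to_system_dir, contacts_known_c2, uses_crypto_api]
X = rng.integers(0, 2, size=(2000, 3)).astype(float)
y = ((X[:, 0] + X[:, 1] + 0.3 * rng.standard_normal(2000)) > 1.0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, lr, lam = np.zeros(3), 0.0, 0.1, 2.0
# Hypothetical expert rule: samples that both write to system dirs and contact a known C2
# server should be classified as malware.
rule_mask = (X[:, 0] == 1) & (X[:, 1] == 1)

for _ in range(500):
    p = sigmoid(X @ w + b)
    grad_logit = (p - y) / len(y)                               # cross-entropy gradient
    # Soft constraint: on rule-matching samples, also push the prediction towards 1.
    grad_logit += lam * rule_mask * (p - 1.0) / max(rule_mask.sum(), 1)
    w -= lr * (X.T @ grad_logit)
    b -= lr * grad_logit.sum()

print("mean P(malware | rule fires):", sigmoid(X[rule_mask] @ w + b).mean())
```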
How to Apply
The deadline for Round A (for entry in October 2023) is Monday 28 November 2022. See here for further round deadlines.
We will also be holding an online information session on Tuesday 15 November 2022, 1-2pm: see the Eventbrite page "UKRI Centre for Doctoral Training in Safe and Trusted AI - Info Session".
Committed to providing an inclusive environment in which diverse students can thrive, we particularly encourage applications from women, disabled and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector.
We encourage you to contact Dr Fabio Pierazzi (fabio.pierazzi@kcl.ac.uk) to discuss your interest before you apply, quoting the project code: STAI-CDT-2023-KCL-17.
When you are ready to apply, please follow the application steps on our website here. Our 'How to Apply' page offers further guidance on the PhD application process.
References
[b] Moreno-Torres, Jose G., et al. “A unifying view on dataset shift in classification.” Pattern Recognition, 2012.
[c] Jajodia, Sushil, et al. “A probabilistic logic of cyber deception.” IEEE Transactions on Information Forensics and Security, 2017.
[d] Poon, Hoifung, and Pedro Domingos. “Sound and efficient inference with probabilistic and deterministic dependencies.” AAAI, 2006.