
Supervisor: Dr Mohammad Mousavi
Applications accepted all year round
Funded PhD Project (UK Students Only)

About the Project

This project is part of a unique cohort-based, fully funded 4-year PhD programme at the UKRI Centre for Doctoral Training in Safe and Trusted AI.

The UKRI Centre for Doctoral Training in Safe and Trusted AI brings together world-leading experts from King’s College London and Imperial College London to train a new generation of researchers. It focuses on the use of symbolic artificial intelligence to ensure the safety and trustworthiness of AI systems.

Project Description:

The main objective of this project is to develop AI techniques to analyse the behaviour recorded in past Stop and Search (S&S) operations. The AI system will be used to inform future operations, avoid unnecessary escalations that jeopardise the safety of officers (and of the people searched), and increase trust in S&S operations.

To this end, we will develop our techniques based on the causal theories elaborated by Halpern [1] and Pearl [2], and on later extensions by Mousavi [3]. Our theories will build upon behavioural models provided by the Mayor’s Office for Policing And Crime (MOPAC). These models are the result of coding a rich dataset of videos of past Stop and Search operations by the Metropolitan Police; coding enables analysis and extraction of rules of behaviour to inform model development. We will develop intuitive visualisations of the identified rules and causal relations, explaining the role of different events in a potential (de-)escalation of a Stop and Search operation. We will quantify the responsibility of these events, building upon the theory of responsibility developed by Chockler and Halpern [4,5].
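To give a flavour of the kind of analysis involved, the sketch below illustrates the Chockler-Halpern notion of degree of responsibility (1/(k+1), where k is the size of the smallest contingency under which an event becomes a but-for cause [4,5]) on a toy structural model. The variables, the outcome function, and the scenario are all hypothetical illustrations, not part of the project or the MOPAC models.

```python
from itertools import combinations

# Hypothetical toy model of an encounter (illustration only): escalation
# occurs if the officer raises their voice AND either physical contact
# occurs or the person searched refuses to comply.
VARS = ["raised_voice", "contact", "refusal"]

def escalation(state):
    return state["raised_voice"] and (state["contact"] or state["refusal"])

def is_but_for_cause(var, state):
    """X is a but-for cause of the outcome if flipping X alone changes it."""
    flipped = dict(state, **{var: not state[var]})
    return escalation(state) != escalation(flipped)

def responsibility(var, state):
    """Chockler-Halpern degree of responsibility: 1/(k+1), where k is the
    size of the smallest set of other variables that must be flipped
    (without changing the outcome) before flipping `var` changes the
    outcome; 0 if no such contingency exists."""
    others = [v for v in VARS if v != var]
    for k in range(len(others) + 1):
        for contingency in combinations(others, k):
            witness = dict(state, **{w: not state[w] for w in contingency})
            if escalation(witness) == escalation(state) and is_but_for_cause(var, witness):
                return 1 / (k + 1)
    return 0.0

state = {"raised_voice": True, "contact": True, "refusal": True}
for v in VARS:
    print(v, responsibility(v, state))
```

In this scenario `raised_voice` gets responsibility 1.0 (flipping it alone prevents escalation), while `contact` and `refusal` each get 0.5: either becomes a but-for cause only under a contingency of size one, because each alone is redundant given the other. The project's causal analysis operates on far richer behavioural models, but the underlying responsibility measure is this one.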

The theories and tools developed in this project will initially be developed and evaluated for the specific abstractions (e.g., events and durations) of the particular domain of Stop and Search operations. However, we will ensure that they are transferable to other systems, e.g., using case studies from the TAS Hub and Verifiability Node, ensuring the transformative value of the research for the broader domain of Safe and Trusted AI. In particular, our project will lead to a set of novel techniques for detecting causation and generating explanations. This will both enhance the general understanding of explainability and lead to new algorithms for generating explanations for AI models.

How to Apply:

The deadline for Round A applications, for entry in October 2023, is Monday 28 November 2022. See our website for further round deadlines.

We will also be holding an online information session on Tuesday 15 November 2022, 13:00-14:00; tickets are available via Eventbrite.

Committed to providing an inclusive environment in which diverse students can thrive, we particularly encourage applications from women, disabled candidates, and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector.

We encourage you to contact Dr Mohammad Mousavi ([Email Address Removed]) to discuss your interest before you apply, quoting the project code: STAI-CDT-2022-KCL-9.

When you are ready to apply, please follow the application steps on our website here. Our 'How to Apply' page offers further guidance on the PhD application process.


Funding Notes

See Fees and Funding section for more information - https://safeandtrustedai.org/apply-now/

References

[1] J. Y. Halpern. Actual Causality. MIT Press, 2019.
[2] J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2009.
[3] G. Caltais, M. R. Mousavi and H. Singh. Causal Reasoning for Safety in Hennessy-Milner Logic. Fundamenta Informaticae, 173(2-3): 217-251, 2020.
[4] H. Chockler, N. E. Fenton, J. Keppens and D. A. Lagnado. Causal Analysis for Attributing Responsibility in Legal Cases. ICAIL 2015: 33-42.
[5] H. Chockler and J. Y. Halpern. Responsibility and Blame: A Structural-Model Approach. Journal of Artificial Intelligence Research, 22: 93-115, 2004.