
  Explainable AI for Safety Critical Engineering Systems

   School of Physics, Engineering and Technology

   Applications accepted all year round · Self-Funded PhD Students Only

About the Project

The University of York is at the forefront of transformative research in data-centric engineering, digital twins, and AI. Our pioneering work is revolutionizing the design of complex systems by harnessing the power of data. By combining these approaches, we are gaining new insights into future systems, enabling us to model them with greater realism.

We are currently inviting applications for a Ph.D. degree in Artificial Intelligence with a specific focus on Model Explainability. In today's world, AI models are deployed to either provide decision support for users or autonomously execute tasks without user intervention. However, the impact of these models on system performance and safety can be profound. Therefore, there is a pressing need to develop transparent, interpretable, and explainable AI models for users of safety-critical engineering systems.

Over the past decade, AI techniques, including machine learning, have made significant advancements and found applications across various engineering domains such as aircraft design, operation, production, and maintenance. However, it is crucial to ensure the behaviour and output of AI models can be reliably ascertained and validated in real-time environments. Explainable AI (XAI), a sub-field of AI, is actively exploring methods to expose complex AI models to human users and operators in interpretable and understandable ways. This entails enhancing and optimizing XAI techniques such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) to improve transparency in model construction, interpretability of important features, and understandability of model results. By improving XAI techniques, we aim to generate a high level of confidence in the system and foster trust in the collaboration between AI and users.
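To illustrate the kind of feature-attribution question that SHAP and LIME answer, the minimal sketch below implements permutation importance, a simpler model-agnostic attribution method in the same spirit: it asks how much a "black-box" model's error grows when one input feature is shuffled. The synthetic data, linear model, and function names here are illustrative assumptions, not part of the project.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # four synthetic input features
# The response depends only on features 0 and 1.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit an ordinary least-squares model to act as the "black box".
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda A: A @ w

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic attribution: the rise in mean-squared error
    when a single feature column is randomly shuffled."""
    shuffler = np.random.default_rng(seed)
    base_err = np.mean((predict(X) - y) ** 2)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            shuffler.shuffle(Xp[:, j])  # break feature j's link to y
            scores[j] += np.mean((predict(Xp) - y) ** 2) - base_err
    return scores / n_repeats

importances = permutation_importance(predict, X, y)
# Features 0 and 1 drive the response, so their scores dominate.
print(np.round(importances, 3))
```

SHAP and LIME refine this idea by attributing individual predictions, rather than overall error, to input features, which is what makes them candidates for explaining safety-critical decisions to operators.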

This research project aims to verify the principles of Explainable AI and develop an adaptable XAI framework that accurately represents explainable AI models from the users' perspective, emphasizing transparency, interpretability, and understandability for safety-critical engineering systems.

The University of York takes immense pride in its placement in the top ten UK universities in the REF, affirming our commitment to research excellence with social impact. As a University for the Public Good, we strive to establish strong partnerships and share knowledge to create local and global benefits. The overarching ambition of this project and its potential impact on Safety Critical Engineering Systems align perfectly with our principles of inclusion, internationalism, and collaboration.

Entry requirements:

Candidates should have (or expect to obtain) a minimum of a UK upper second class honours degree (2.1) or equivalent in Engineering, Physics, Computer Science, Mathematics or a closely related subject.

How to apply:

Applicants should apply via the University's online application system. Please read the application guidance first so that you understand the various steps in the application process.


Funding Notes

This is a self-funded project and you will need to have sufficient funds in place (eg from scholarships, personal funds and/or other sources) to cover the tuition fees and living expenses for the duration of the research degree programme. Please check the School of Physics, Engineering and Technology website for details about funding opportunities at York.

How good is research at University of York in Engineering?

Research output data provided by the Research Excellence Framework (REF)
