About the Project
Deep Neural Networks (DNNs) have demonstrated human-level capabilities in several challenging machine learning tasks, including image classification, natural language processing and speech recognition. Despite the anticipated benefits of their widespread adoption, the deployment of DNNs in safety-critical systems must be characterized by a high degree of dependability: deviations from the expected behaviour or correct operation can endanger human lives or cause significant financial loss. Preliminary reports on recent accidents involving autonomous vehicles [1] underline not only the challenges of engineering deep learning (DL) systems but also the urgent need for improved assurance evaluation practices. Hence, there is a need for systematic and effective safety assurance of DNNs.
This PhD project will contribute significantly to addressing this challenge by devising a theoretical foundation for the engineering of assured DL systems and their quality evaluation. In particular, the project will develop techniques for testing and verifying DNNs as a means of identifying safety violations and performing risk assessment, i.e., using testing and verification results as explanations of safety violations. Building on state-of-the-art research from our team [2,3], the project will
- explore ways to adapt testing strategies from traditional software testing to DNN testing (a minimal coverage-metric sketch follows this list)
- investigate methods to verify that DNNs are robust to adversarial inputs [6] (e.g., local adversarial robustness; see the falsification sketch below)
- investigate methods to generate inputs, both benign and adversarial, that increase the robustness of DNNs (see the adversarial-training sketch below)
- apply the proposed testing and verification strategies to DNNs built with state-of-the-art frameworks (e.g., TensorFlow, Keras) and validate their feasibility using both image datasets (MNIST, CIFAR) and robotic/game simulators (e.g., Apollo, CARLA) [4, 5]
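To make the first direction concrete, the sketch below computes neuron coverage, a test adequacy criterion adapted from structural coverage in traditional software testing, over a Keras model. This is a minimal illustration under stated assumptions, not the project's actual metric: the function name, the min-max activation scaling, the 0.5 threshold and the restriction to hidden Dense layers are all illustrative choices.

```python
import numpy as np
import tensorflow as tf

def neuron_coverage(model, inputs, threshold=0.5):
    """Fraction of hidden Dense-layer neurons whose min-max-scaled
    activation exceeds `threshold` on at least one input."""
    hidden = [l for l in model.layers
              if isinstance(l, tf.keras.layers.Dense)][:-1]  # drop output layer
    probe = tf.keras.Model(model.inputs, [l.output for l in hidden])
    outs = probe(inputs)
    outs = outs if isinstance(outs, list) else [outs]
    covered = total = 0
    for acts in outs:
        a = acts.numpy()
        a = (a - a.min()) / (a.max() - a.min() + 1e-8)  # scale per layer
        covered += int((a.max(axis=0) > threshold).sum())
        total += a.shape[1]
    return covered / total

# Usage on a small (untrained, for illustration) MNIST model:
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x = x_train[:256].reshape(-1, 784).astype("float32") / 255.0
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
print("neuron coverage:", neuron_coverage(model, x))
```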
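For local adversarial robustness, complete verification requires dedicated reasoning engines such as the one described in [6]; the complementary, cheaper view is falsification, sketched below under the assumption of a Keras classifier whose inputs are scaled to [0, 1]. A single FGSM step searches the L-infinity ball of radius eps around one input for a label-changing perturbation.

```python
import tensorflow as tf

def fgsm_check(model, x, eps):
    """Search the L-inf eps-ball around a single input `x` with one FGSM step.
    Returns (robust_to_this_attack, counterexample_or_None)."""
    x = tf.convert_to_tensor(x[None, ...], dtype=tf.float32)
    label = tf.argmax(model(x), axis=1)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(label, model(x))
    # Step to the boundary of the eps-ball in the loss-increasing direction.
    x_adv = tf.clip_by_value(x + eps * tf.sign(tape.gradient(loss, x)), 0.0, 1.0)
    if int(tf.argmax(model(x_adv), axis=1)) != int(label):
        return False, x_adv.numpy()[0]  # counterexample: robustness violated
    return True, None                   # no violation found (not a proof)
```

A False result is a concrete local-robustness violation, and the returned counterexample can serve as an explanation during risk assessment; a True result only means this particular attack failed, not that the network is verified.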
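Finally, one standard way to use generated inputs to improve robustness is adversarial training: each benign batch is augmented with its adversarial counterparts before the gradient update. The sketch below is an assumed, minimal variant that reuses the FGSM perturbation from above; the function name and the 50/50 benign/adversarial mix are illustrative.

```python
import tensorflow as tf

def adversarial_train_step(model, optimizer, x, y, eps=0.1):
    """One training step on a benign batch plus its FGSM counterparts."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    # Craft adversarial versions of the benign inputs.
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    x_adv = tf.clip_by_value(x + eps * tf.sign(tape.gradient(loss, x)), 0.0, 1.0)
    # Update the model on the mixed benign/adversarial batch.
    x_mix = tf.concat([x, x_adv], axis=0)
    y_mix = tf.concat([y, y], axis=0)
    with tf.GradientTape() as tape:
        loss = loss_fn(y_mix, model(x_mix))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return float(loss)
```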
References
[2] Gerasimou, S., Eniser, H. F., Sen, A., & Cakan, A. (2020). DeepImportance: Importance-Driven Deep Learning System Testing. In 42nd International Conference on Software Engineering (ICSE).
[3] Eniser, H. F., Gerasimou, S., & Sen, A. (2019, April). DeepFault: Fault localization for deep neural networks. In International Conference on Fundamental Approaches to Software Engineering (pp. 171-191). Springer, Cham.
[4] CARLA simulator. https://carla.readthedocs.io/en/latest/getting_started/
[5] The Udacity open-source self-driving car project. https://github.com/udacity/self-driving-car
[6] Huang, X., Kwiatkowska, M., Wang, S., & Wu, M. (2017, July). Safety verification of deep neural networks. In International Conference on Computer Aided Verification (pp. 3-29). Springer, Cham.