
  Intelligent and Efficient Testing and Verification of Deep Neural Networks


   Department of Computer Science, University of York

   Applications accepted all year round | Self-Funded PhD Students Only

About the Project

Deep Neural Networks (DNNs) have demonstrated human-level capabilities in several challenging machine learning tasks, including image classification, natural language processing and speech recognition. Despite the anticipated benefits of their widespread adoption, the deployment of DNNs in safety-critical systems must be characterised by a high degree of dependability: deviations from the expected behaviour or correct operation can endanger human lives or cause significant financial loss. Preliminary reports on recent accidents involving autonomous vehicles [1] underline not only the challenges of using deep learning (DL) systems but also the urgent need for improved assurance evaluation practices. Hence, there is a need for systematic and effective safety assurance of DNNs.

This PhD project will contribute significantly to addressing this challenge by devising a theoretical foundation for the engineering of assured DL systems and their quality evaluation. In particular, the project will develop techniques for testing and verifying DNNs as a means of identifying safety violations and performing risk assessment, i.e., using testing and verification results as explanations of safety violations. Building on state-of-the-art research from our team [2,3], the project will:

  • explore ways to adapt testing strategies from traditional software testing to DNN testing (a coverage sketch follows this list)
  • investigate methods to verify DNNs and ensure that they are robust to adversarial inputs [6] (e.g., local adversarial robustness)
  • investigate methods to generate inputs, both benign and adversarial, that increase the robustness of DNNs (see the FGSM sketch below)
  • apply the proposed testing and verification strategies to state-of-the-art DNN frameworks (e.g., TensorFlow, Keras) and validate their feasibility using both image datasets (MNIST, CIFAR) and robotic/game simulators (e.g., Apollo, CARLA) [4, 5]
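
To make the first bullet concrete, here is a rough sketch of neuron coverage, a test adequacy criterion adapted from structural coverage in traditional software testing. It is illustrative only: the 0.5 activation threshold, the toy model and the helper name neuron_coverage are assumptions for this sketch, not artefacts of the project or of [2].

```python
import tensorflow as tf

def neuron_coverage(model, inputs, threshold=0.5):
    """Fraction of hidden neurons activated above `threshold` by at
    least one input in the test suite (an assumed, simple criterion)."""
    # Probe every hidden layer that applies an activation function.
    hidden = [l.output for l in model.layers[:-1] if hasattr(l, "activation")]
    probe = tf.keras.Model(inputs=model.input, outputs=hidden)
    acts = probe.predict(inputs, verbose=0)
    acts = acts if isinstance(acts, list) else [acts]
    covered = total = 0
    for a in acts:
        a = a.reshape(len(inputs), -1)              # one column per neuron
        covered += int((a.max(axis=0) > threshold).sum())
        total += a.shape[1]
    return covered / total

# Example: coverage of a small (untrained) MNIST classifier.
(_, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x = x_test[:100].reshape(-1, 784).astype("float32") / 255.0
inp = tf.keras.Input(shape=(784,))
out = tf.keras.layers.Dense(10, activation="softmax")(
    tf.keras.layers.Dense(128, activation="relu")(inp))
model = tf.keras.Model(inp, out)
print(f"neuron coverage: {neuron_coverage(model, x):.1%}")
```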

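For the second and third bullets, one mechanism is often reused: the Fast Gradient Sign Method (FGSM) both probes local adversarial robustness empirically and generates adversarial inputs that can be mixed into training to harden the network. The sketch below is such an empirical check under an assumed L-infinity budget epsilon; it is not the formal verification approach of [6], and the helper names fgsm and locally_robust are illustrative.

```python
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def fgsm(model, x, y, epsilon=0.1):
    """Perturb x by epsilon * sign(dLoss/dx) (Goodfellow et al.'s FGSM)."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)
    x_adv = x + epsilon * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0)     # keep pixels in valid range

def locally_robust(model, x, y, epsilon=0.1):
    """True if every prediction survives the FGSM perturbation.
    An empirical check only, not a formal proof of robustness."""
    preds = tf.argmax(model(x), axis=1)
    preds_adv = tf.argmax(model(fgsm(model, x, y, epsilon)), axis=1)
    return bool(tf.reduce_all(preds == preds_adv))
```

Mixing the resulting adversarial batches back into ordinary training (adversarial training) is the simplest way such robustness-increasing inputs are typically used; stronger attacks and the formal methods of [6] tighten the guarantee.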


References

[1] National Transportation Safety Board. Preliminary Report: Highway HWY18MH010. https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf
[2] Gerasimou, S., Eniser, H. F., Sen, A., & Çakan, A. (2020). DeepImportance: Importance-Driven Deep Learning System Testing. In 42nd International Conference on Software Engineering (ICSE).
[3] Eniser, H. F., Gerasimou, S., & Sen, A. (2019, April). DeepFault: Fault localization for deep neural networks. In International Conference on Fundamental Approaches to Software Engineering (pp. 171-191). Springer, Cham.
[4] CARLA simulator. https://carla.readthedocs.io/en/latest/getting_started/
[5] The Udacity open-source self-driving car project. https://github.com/udacity/self-driving-car
[6] Huang, X., Kwiatkowska, M., Wang, S., & Wu, M. (2017, July). Safety verification of deep neural networks. In International Conference on Computer Aided Verification (pp. 3-29). Springer, Cham.
