Deep Neural Networks (DNNs) have demonstrated human-level capabilities in several challenging machine learning tasks, including image classification, natural language processing and speech recognition. Despite the anticipated benefits from the widespread adoption of DNNs, their deployment in safety-critical systems must be characterized by a high degree of dependability: deviations from the expected behaviour or correct operation can endanger human lives or cause significant financial loss. Preliminary reports on recent accidents involving autonomous vehicles underline not only the challenges associated with using DL systems but also the urgent need for improved assurance evaluation practices. Hence, there is a need for systematic and effective safety assurance of DNNs.
This PhD project will contribute significantly to addressing this challenge by devising a theoretical foundation for the engineering of assured DL systems and their quality evaluation. In particular, the project will develop techniques for testing and verifying DNNs as a means of identifying safety violations and performing risk assessment, i.e., using testing and verification results as explanations of safety violations. Building on state-of-the-art research from our research team [2,3], the project will:
- explore ways to adapt testing strategies from traditional software testing to DNN testing
- investigate methods to verify DNNs and ensure that they are robust to adversarial inputs (e.g., local adversarial robustness)
- investigate methods to generate inputs that increase the robustness of DNNs (using both benign and adversarial inputs)
- apply the proposed testing and verification strategies in state-of-the-art DNN software frameworks (e.g., TensorFlow, Keras) and validate their feasibility using both image datasets (e.g., MNIST, CIFAR) and robotics/game simulators (e.g., Apollo, CARLA) [4, 5]
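To illustrate the first direction, one widely studied adaptation of classical coverage criteria to DNNs is neuron coverage: the fraction of hidden units activated above a threshold by at least one test input. The sketch below is only an illustrative assumption, not part of the project plan; the tiny randomly initialised ReLU network, the threshold, and the synthetic inputs are all toy stand-ins.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def neuron_coverage(weights, biases, inputs, threshold=0.0):
    """Fraction of hidden neurons whose activation exceeds `threshold`
    on at least one input -- a DNN analogue of statement coverage."""
    covered = []
    acts = inputs
    for W, b in zip(weights, biases):
        acts = relu(acts @ W + b)
        # Per-neuron flag: activated by any of the test inputs?
        covered.append((acts > threshold).any(axis=0))
    return np.concatenate(covered).mean()

rng = np.random.default_rng(0)
# Toy 2-layer network, 4 -> 8 -> 6 (random weights stand in for a trained model).
Ws = [rng.normal(size=(4, 8)), rng.normal(size=(8, 6))]
bs = [np.zeros(8), np.zeros(6)]
X = rng.normal(size=(32, 4))  # 32 synthetic test inputs
cov = neuron_coverage(Ws, bs, X, threshold=0.0)
print(f"neuron coverage: {cov:.2f}")
```

A test-generation strategy adapted from traditional software testing would then try to synthesise inputs that raise this coverage figure, by analogy with branch-coverage-guided test generation.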
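For the second direction, local adversarial robustness asks whether a perturbation of bounded size can change a model's prediction on a given input. As a hedged sketch (a toy linear softmax model and the standard Fast Gradient Sign Method attack, chosen here for brevity rather than drawn from the project), the code below empirically probes robustness within an L-infinity epsilon-ball; note that a falsifying attack is not a proof, and sound verification would require, e.g., SMT-based or abstract-interpretation tools.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, b, eps):
    """Fast Gradient Sign Method on a linear softmax model (toy sketch):
    step eps in the sign of the loss gradient w.r.t. the input."""
    p = softmax(W @ x + b)
    grad_z = p.copy()
    grad_z[y] -= 1.0            # d(cross-entropy)/d(logits)
    grad_x = W.T @ grad_z       # chain rule back to the input
    return x + eps * np.sign(grad_x)

def locally_robust(x, y, W, b, eps):
    """Empirical check: does the FGSM perturbation inside the L-inf
    eps-ball leave the predicted class unchanged?"""
    x_adv = fgsm(x, y, W, b, eps)
    return np.argmax(W @ x_adv + b) == np.argmax(W @ x + b)

rng = np.random.default_rng(1)
W, b = rng.normal(size=(3, 5)), np.zeros(3)  # toy 5-input, 3-class model
x = rng.normal(size=5)
y = int(np.argmax(W @ x + b))
print("robust at eps=0.01:", locally_robust(x, y, W, b, 0.01))
```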
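The third direction, generating inputs that increase robustness, is commonly realised as adversarial training: at each optimisation step the training set is augmented with adversarially perturbed copies of the benign inputs. The sketch below is a minimal assumption-laden illustration on a toy linear classifier and synthetic, linearly separable data; the learning rate, epsilon, and step count are arbitrary choices.

```python
import numpy as np

def softmax(Z):
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def loss_and_grads(W, b, X, Y):
    """Mean cross-entropy and its gradients for a linear softmax model."""
    P = softmax(X @ W.T + b)
    n = len(X)
    loss = -np.log(P[np.arange(n), Y] + 1e-12).mean()
    dZ = P.copy()
    dZ[np.arange(n), Y] -= 1.0
    return loss, dZ.T @ X / n, dZ.mean(axis=0)

def fgsm_batch(W, b, X, Y, eps):
    """FGSM perturbation of a whole batch (input-gradient sign step)."""
    P = softmax(X @ W.T + b)
    dZ = P.copy()
    dZ[np.arange(len(X)), Y] -= 1.0
    return X + eps * np.sign(dZ @ W)

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 4))
Y = (X[:, 0] > 0).astype(int)      # synthetic linearly separable labels
W, b = np.zeros((2, 4)), np.zeros(2)
# Adversarial training: each step trains on clean plus FGSM-perturbed inputs.
for _ in range(200):
    X_adv = fgsm_batch(W, b, X, Y, eps=0.1)
    X_aug = np.vstack([X, X_adv])
    Y_aug = np.concatenate([Y, Y])
    loss, dW, db = loss_and_grads(W, b, X_aug, Y_aug)
    W -= 0.5 * dW
    b -= 0.5 * db
acc = (np.argmax(X @ W.T + b, axis=1) == Y).mean()
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

In the project itself, the same loop shape would apply to a deep model in TensorFlow/Keras, with the attack and the augmentation policy as the objects of study.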