This project is open to students worldwide, but has no funding attached. The successful applicant will therefore be expected to fund tuition fees at the relevant level (home or international) and any applicable additional research costs. Please consider this before applying.
This project proposes the development of an AI-based security fuzzer aimed at enhancing the security of blackbox software, hardware, or combinations of the two. Traditional fuzzing techniques, although effective, are limited in their ability to adapt and learn from each test iteration. Our AI fuzzer addresses these challenges by incorporating machine learning algorithms that continuously evaluate and refine its testing strategies.
By leveraging advances in AI, particularly large language models (LLMs), the fuzzer can interpret and exploit internal system behaviour, such as memory accesses, network traffic, and CPU execution cycles, providing deeper insight into system vulnerabilities during a fuzzing campaign. This approach increases the efficiency of vulnerability discovery and broadens the scope of testing, potentially uncovering previously undetected security flaws.
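To illustrate the kind of feedback loop described above, the sketch below (a hypothetical, simplified outline in Python, not the project's actual tool) mutates inputs, runs them against a blackbox target, and keeps the ones a scoring model deems interesting. Here ./target_binary is a placeholder for an arbitrary blackbox target, and score is a stub standing in for the learned component, such as an LLM, that would judge the observed runtime signals.

    # Hypothetical sketch of a feedback-driven blackbox fuzzing loop.
    # score() stands in for a learned component (e.g. an LLM) that would
    # rank observed runtime behaviour; TARGET_CMD is a placeholder.
    import random
    import subprocess
    import time

    TARGET_CMD = ["./target_binary"]  # placeholder blackbox target

    def mutate(data: bytes) -> bytes:
        # Flip one random byte to produce a new candidate input.
        buf = bytearray(data or b"\x00")
        buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    def run_target(data: bytes) -> dict:
        # Execute the target and collect coarse external signals only
        # (no source access needed): exit code, wall time, stderr volume.
        start = time.monotonic()
        try:
            proc = subprocess.run(TARGET_CMD, input=data,
                                  capture_output=True, timeout=5)
            return {"returncode": proc.returncode,
                    "runtime": time.monotonic() - start,
                    "stderr_len": len(proc.stderr)}
        except subprocess.TimeoutExpired:
            return {"returncode": 0, "runtime": 5.0, "stderr_len": 0}

    def score(signals: dict) -> float:
        # Stub for the learned scorer: a negative return code on POSIX
        # means the process was killed by a signal (a likely crash).
        crash_bonus = 10.0 if signals["returncode"] < 0 else 0.0
        return crash_bonus + signals["runtime"] + 0.01 * signals["stderr_len"]

    def fuzz(seed: bytes, iterations: int = 1000) -> None:
        corpus = [seed]
        for _ in range(iterations):
            candidate = mutate(random.choice(corpus))
            signals = run_target(candidate)
            if signals["returncode"] < 0:
                print("Potential crash:", candidate[:32], signals)
            # Retaining 'interesting' inputs is the feedback step: later
            # mutations build on behaviours the scorer has already rewarded.
            if score(signals) > 1.0:
                corpus.append(candidate)

    if __name__ == "__main__":
        fuzz(b"hello")

In the project itself, the fixed heuristic in score would be replaced by a model updated between iterations, and the runtime signals would be richer, for example traced memory accesses or captured network traffic.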
The significance of this project lies in its potential to revolutionise how we approach software and hardware security. With a growing reliance on digital systems, ensuring their security is paramount. This AI-driven fuzzing tool aims to enhance the robustness of these systems against evolving cybersecurity threats, benefiting a wide range of industries and users.
Recent advancements in AI-powered fuzzing, such as Google's integration of LLMs with OSS-Fuzz, have shown promising results, including increased code coverage and the rediscovery of known vulnerabilities in areas previously not covered by fuzzing [1]. This project aims to build upon these pioneering efforts, pushing the boundaries of automated vulnerability detection.
Essential Background:
Decisions will be based on academic merit. The successful applicant should have, or expect to obtain, a UK Honours Degree at 2.1 (or equivalent) in computer science, with a focus on cybersecurity and artificial intelligence. Essential skills include:
Desirable knowledge:
Prior experience in AI, security research, or the development of security tools would be advantageous but is not essential for this project. The student should be motivated to contribute to cutting-edge research in AI-enhanced security and be able to work both independently and collaboratively.
Application Procedure:
Formal applications can be completed online: https://www.abdn.ac.uk/pgap/login.php.
You should apply for Computing Science (PhD) to ensure your application is passed to the correct team for processing.
Please clearly note the name of the lead supervisor and the project title on the application form. If you do not include these details, your application may not be considered for the studentship.
Your application must include: A personal statement, an up-to-date copy of your academic CV, and clear copies of your educational certificates and transcripts.
Please note: you DO NOT need to provide a research proposal with this application.
If you require any additional assistance in submitting your application or have any queries about the application process, please don't hesitate to contact us at [Email Address Removed].
This is a self-funding project open to students worldwide. Our typical start dates for this programme are February or October.
Fees for this programme can be found on the Finance and Funding pages of the University of Aberdeen website (abdn.ac.uk).
Additional research costs / bench fees may also apply and will be discussed prior to any offer being made.