or
Looking to list your PhD opportunities? Log in here.
The surge in the adoption of Large Language Models (LLMs) across diverse industries has highlighted significant cybersecurity concerns. These AI-driven tools, exemplified by models like OpenAI's GPT series, are pivotal in shaping advancements in natural language processing, content generation, and automated decision-making. However, the inherent complexities of LLMs, combined with their reliance on extensive datasets for training, have exposed a multitude of security vulnerabilities. These range from the generation of biased or manipulative content to the risk of personal data exposure, thus posing severe risks to user privacy, data integrity, and the ethical deployment of AI technologies.
This PhD proposal aims to delve deep into the cybersecurity challenges posed by LLMs, focusing on identifying and addressing the multifaceted vulnerabilities inherent in these systems. The research will begin with an exhaustive literature review to consolidate existing knowledge on LLM security, covering academic papers, industry reports, and documented incidents of security breaches involving LLMs. This will provide a comprehensive overview of the current threat landscape, underscoring the critical need for dedicated security mechanisms tailored to LLM architectures.
Building on this foundation, the PhD will focus on empirical research to uncover and document new vulnerabilities specific to LLMs. By using theoretical exploration and practical experimentation, including adversarial testing and analysis of model responses to malicious inputs, the study will investigate LLMs' susceptibility to various cyber threats. These include data poisoning, model inversion attacks that aim to reconstruct sensitive training data, and the potential for LLMs to inadvertently generate or propagate harmful content.
An important element of this research will be the development of a novel security framework designed specifically for the protection of LLMs. This framework will aim to safeguard against identified threats, ensuring the confidentiality and integrity of data processed by LLMs, while also maintaining the reliability and ethical standards of generated outputs. The efficacy of proposed security interventions will be rigorously evaluated through simulation and, where feasible, deployment in real-world scenarios.
Furthermore, the project will contribute to the ethical discourse surrounding AI by proposing actionable guidelines for the secure and responsible implementation of LLMs. It aims to foster a safer digital environment where LLMs can be leveraged for positive societal impact without compromising security or ethical principles.
Contact for information on the project: WilliamsL10@cardiff.ac.uk
Academic criteria: A 2:1 Honours undergraduate degree or a master's degree, in computing or a related subject. Applicants with appropriate professional experience are also considered. Degree-level mathematics (or equivalent) is required for research in some project areas.
Applicants for whom English is not their first language must demonstrate proficiency by obtaining an IELTS score of at least 6.5 overall, with a minimum of 6.0 in each skills component.
How to apply:
This project is accepting applications all year round, for self-funded candidates.
Please contact the supervisors of the project prior to submitting your application to discuss and develop an individual research proposal that builds on the information provided in this advert. Once you have developed the proposal with support from the supervisors, please submit your application following the instructions provided below.
Please submit your application via Computer Science and Informatics - Study - Cardiff University
In order to be considered candidates must submit the following information:
Interview - If the application meets the entrance requirements, you will be invited to an interview.
If you have any additional questions or need more information, please contact: COMSC-PGR@cardiff.ac.uk
The university will respond to you directly. You will have a FindAPhD account to view your sent enquiries and receive email alerts with new PhD opportunities and guidance to help you choose the right programme.
Log in to save time sending your enquiry and view previously sent enquiries
The information you submit to Cardiff University will only be used by them or their data partners to deal with your enquiry, according to their privacy notice. For more information on how we use and store your data, please read our privacy statement.
Research output data provided by the Research Excellence Framework (REF)
Click here to see the results for all UK universitiesBased on your current searches we recommend the following search filters.
Check out our other PhDs in Cardiff, United Kingdom
Start a New search with our database of over 4,000 PhDs
Based on your current search criteria we thought you might be interested in these.
Developing the next generation of pedestrian behaviour models for revival of high streets and sustainable transport [Self-Funded Students Only]
Cardiff University
Interpretation and Explanation of Language Model Outputs [Self-funded students only]
Cardiff University
Data-Centric Solutions for Earth Observation Challenges: Developing Cutting-Edge Techniques for Remote Sensing Analysis [SELF-FUNDED STUDENTS ONLY]
Cardiff University