Don't miss our weekly PhD newsletter | Sign up now Don't miss our weekly PhD newsletter | Sign up now

  Addressing Cybersecurity Challenges in Large Language Models (LLMs) [SELF-FUNDED STUDENTS ONLY]


   Cardiff School of Computer Science & Informatics

  , ,  Applications accepted all year round  Self-Funded PhD Students Only

About the Project

The surge in the adoption of Large Language Models (LLMs) across diverse industries has highlighted significant cybersecurity concerns. These AI-driven tools, exemplified by models like OpenAI's GPT series, are pivotal in shaping advancements in natural language processing, content generation, and automated decision-making. However, the inherent complexities of LLMs, combined with their reliance on extensive datasets for training, have exposed a multitude of security vulnerabilities. These range from the generation of biased or manipulative content to the risk of personal data exposure, thus posing severe risks to user privacy, data integrity, and the ethical deployment of AI technologies.

This PhD proposal aims to delve deep into the cybersecurity challenges posed by LLMs, focusing on identifying and addressing the multifaceted vulnerabilities inherent in these systems. The research will begin with an exhaustive literature review to consolidate existing knowledge on LLM security, covering academic papers, industry reports, and documented incidents of security breaches involving LLMs. This will provide a comprehensive overview of the current threat landscape, underscoring the critical need for dedicated security mechanisms tailored to LLM architectures.

Building on this foundation, the PhD will focus on empirical research to uncover and document new vulnerabilities specific to LLMs. By using theoretical exploration and practical experimentation, including adversarial testing and analysis of model responses to malicious inputs, the study will investigate LLMs' susceptibility to various cyber threats. These include data poisoning, model inversion attacks that aim to reconstruct sensitive training data, and the potential for LLMs to inadvertently generate or propagate harmful content.

An important element of this research will be the development of a novel security framework designed specifically for the protection of LLMs. This framework will aim to safeguard against identified threats, ensuring the confidentiality and integrity of data processed by LLMs, while also maintaining the reliability and ethical standards of generated outputs. The efficacy of proposed security interventions will be rigorously evaluated through simulation and, where feasible, deployment in real-world scenarios.

Furthermore, the project will contribute to the ethical discourse surrounding AI by proposing actionable guidelines for the secure and responsible implementation of LLMs. It aims to foster a safer digital environment where LLMs can be leveraged for positive societal impact without compromising security or ethical principles.

Contact for information on the project:

Academic criteria: A 2:1 Honours undergraduate degree or a master's degree, in computing or a related subject.  Applicants with appropriate professional experience are also considered. Degree-level mathematics (or equivalent) is required for research in some project areas. 

Applicants for whom English is not their first language must demonstrate proficiency by obtaining an IELTS score of at least 6.5 overall, with a minimum of 6.0 in each skills component. 

How to apply:  

This project is accepting applications all year round, for self-funded candidates.

Please contact the supervisors of the project prior to submitting your application to discuss and develop an individual research proposal that builds on the information provided in this advert. Once you have developed the proposal with support from the supervisors, please submit your application following the instructions provided below.

Please submit your application via Computer Science and Informatics - Study - Cardiff University 

In order to be considered candidates must submit the following information:  

  • Supporting statement  
  • CV  
  • In the ‘Research Proposal’ section of the application enter the name of the project you are applying to and upload your Individual research proposal, as mentioned above in BOLD 
  • In the funding field of your application, please provide details of your funding source.  
  • Qualification certificates and Transcripts 
  • References x 2  
  • Proof of English language (if applicable) 

Interview - If the application meets the entrance requirements, you will be invited to an interview.   

If you have any additional questions or need more information, please contact:  

Computer Science (8)

Funding Notes

This project is offered for self-funded students only, or those with their own sponsorship or scholarship award.
Please note that a PhD Scholarship may also available for this PhD project. If you are interested in applying for a PhD Scholarship, please search FindAPhD for this specific project title, supervisor or School within its Scholarships category.

References

Yao, Yifan, et al. "A survey on large language model (llm) security and privacy: The good, the bad, and the ugly." High-Confidence Computing (2024): 100211. https://www.sciencedirect.com/science/article/pii/S266729522400014X
Wu, Fangzhou, et al. "A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems." arXiv preprint arXiv:2402.18649 (2024). https://arxiv.org/pdf/2402.18649
Glukhov, David, et al. "Llm censorship: A machine learning challenge or a computer security problem?." arXiv preprint arXiv:2307.10719 (2023).
https://arxiv.org/pdf/2307.10719

Register your interest for this project



How good is research at Cardiff University in Computer Science and Informatics?


Research output data provided by the Research Excellence Framework (REF)

Click here to see the results for all UK universities

Where will I study?