Machine unlearning for privacy-preserving applications


   School of Natural and Computing Sciences

  Dr Mingjun Zhong  ·  Applications accepted all year round  ·  Self-Funded PhD Students Only

About the Project

This project is open to students worldwide but has no funding attached. The successful applicant will therefore be expected to fund tuition fees at the relevant level (home or international) and any applicable additional research costs. Please consider this before applying.

Deep learning has made tremendous progress in a wide variety of areas, with applications ranging from scene understanding to medical imaging, to name a few. Much of this progress can be attributed to the availability of large-scale training datasets that are either collected in controlled environments or crowdsourced. When working with such datasets, we must exercise caution to protect users' privacy and to ensure that biases are not amplified or propagated.

While deleting selected users' data from a database is simple, removing the influence of that training data (the forget set) from an already trained machine learning (ML) model is not straightforward. Naive removal can lead to undesirable behaviour, such as reduced performance on the rest of the training set and poor generalisation on held-out examples. To mitigate this issue, machine unlearning (MU) [1,2] has emerged as a subfield of ML whose goal is to carefully remove the influence of the forget set from a trained model while maintaining its beneficial properties on the rest of the dataset.

A straightforward approach to MU is to retrain the model from scratch with the forget set removed, but this is computationally expensive. This project will therefore seek more principled and efficient approaches to MU, where the key challenges are forgetting the requested data, preserving the utility of the model, and improving efficiency.
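To make the efficiency trade-off concrete, the sketch below contrasts the retraining baseline with one simple approximate strategy (brief gradient ascent on the forget set followed by fine-tuning on the retain set). It is a minimal illustration only: the toy PyTorch model, synthetic data, and hyperparameters are hypothetical and not part of the project description, and the approximate method shown is just one of many studied in the MU literature [1].

```python
# Illustrative sketch: "exact" unlearning by retraining on the retain set,
# versus a cheap approximate alternative. All names and numbers are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def make_loader(X, y, batch_size=64):
    return DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)

def train(model, loader, epochs=5, lr=1e-3, ascent=False):
    """Standard training loop; with ascent=True the loss is maximised instead,
    pushing the model away from fitting the given (forget) examples."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            (-loss if ascent else loss).backward()
            opt.step()
    return model

def exact_unlearn(model_fn, retain_loader):
    """Gold-standard baseline: discard the old model and retrain from scratch
    on the retain set only. Exact, but costly for large models and datasets."""
    return train(model_fn(), retain_loader, epochs=5)

def approximate_unlearn(trained_model, forget_loader, retain_loader):
    """Cheap approximation: briefly ascend on the forget set, then fine-tune
    on the retain set to repair any damage to overall utility."""
    train(trained_model, forget_loader, epochs=1, lr=1e-4, ascent=True)
    train(trained_model, retain_loader, epochs=1, lr=1e-4)
    return trained_model

if __name__ == "__main__":
    # Synthetic stand-in data: 1000 retain examples, 100 forget examples.
    model_fn = lambda: nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    X, y = torch.randn(1100, 20), torch.randint(0, 2, (1100,))
    retain_loader = make_loader(X[:1000], y[:1000])
    forget_loader = make_loader(X[1000:], y[1000:])

    original = train(model_fn(), make_loader(X, y))      # trained on all data
    retrained = exact_unlearn(model_fn, retain_loader)    # exact but expensive
    unlearned = approximate_unlearn(original, forget_loader, retain_loader)
```

The retraining baseline is what more efficient methods are typically evaluated against: an approximate unlearning method should behave as closely as possible to a model that never saw the forget set, at a fraction of the cost.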

The field of MU is also closely related to research directions such as differential privacy [3], fairness [4], and lifelong learning [5]. In this project, the PhD student will develop MU methods that connect to these research areas.

Essential Background:

Decisions will be based on academic merit. The successful applicant should have, or expect to obtain, a UK Honours Degree at 2.1 (or equivalent) in Computing Science.

Application Procedure:

Formal applications can be completed online: https://www.abdn.ac.uk/pgap/login.php.

You should apply for Computing Science (PhD) to ensure your application is passed to the correct team for processing.

Please clearly note the name of the lead supervisor and the project title on the application form. If you do not include these details, your application may not be considered for the studentship.

Your application must include: A personal statement, an up-to-date copy of your academic CV, and clear copies of your educational certificates and transcripts.

Please note: you DO NOT need to provide a research proposal with this application.

If you require any additional assistance in submitting your application or have any queries about the application process, please don't hesitate to contact us at


Funding Notes

This is a self-funding project open to students worldwide. Our typical start dates for this programme are February or October.

Fees for this programme can be found on the University of Aberdeen's Finance and Funding pages (abdn.ac.uk).

Additional research costs / bench fees may also apply and will be discussed prior to any offer being made.


References

[1] Nguyen et al., A Survey of Machine Unlearning, 2022.
[2] Bourtoule et al., Machine Unlearning, 2021.
[3] Baraheem et al., A Survey on Differential Privacy with Machine Learning and Future Outlook, 2022.
[4] Mehrabi et al., A Survey on Bias and Fairness in Machine Learning, 2019.
[5] Masana et al., Class-incremental learning: survey and performance evaluation on image classification, TPAMI, 2022.

