Distributed Machine Learning (DML) is envisioned as a promising candidate to assist network management and mobility support in 6G and beyond networks by exploiting Edge Computing resources to enable intelligent mobility prediction, handover management, resource allocation/reservation, content caching, proactive energy saving, and more. In this regard, Federated Learning (FL) has been introduced, in which the collected data is processed locally and only model updates are communicated through the network. This provides better information privacy and requires fewer communication resources for model transmission. However, the success of this service provisioning is highly dependent on the security and resilience of such enabling technologies.
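The local-training-plus-update-sharing pattern described above can be sketched with a minimal Federated Averaging (FedAvg) round. Everything here is illustrative: the linear model, the synthetic per-client data, and the learning rate are assumptions standing in for real mobility data and models.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    """One step of local gradient descent on a linear model.
    The raw data (X, y) never leaves the client; only the
    resulting weights are sent to the server."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Synthetic per-client datasets (stand-ins for locally collected data).
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]

weights = np.zeros(3)
for _ in range(10):
    # Each client computes an update from its private data...
    updates = [local_update(weights, X, y) for X, y in clients]
    # ...and the server aggregates only these updates.
    weights = np.mean(updates, axis=0)
```

Only the `updates` list crosses the network in each round, which is what gives FL its privacy and bandwidth advantages over centralised training.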
Unfortunately, the methods underpinning ML/FL systems are inherently vulnerable to adversarial attacks, which can significantly degrade the performance of ML/FL-assisted network and mobility management. Attackers can introduce manipulated data at the learning stage and skew the system’s outcome decisions, rendering the network service unstable, malfunctioning, or unavailable. This can endanger people’s lives in critical application scenarios such as self-driving cars. In this project, you will work with a supervisory team of domain experts to develop novel protection solutions against such ML/FL adversarial attacks, in addition to self-healing and recovery procedures in case of successful exploits.
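To make the attack surface concrete, the toy example below shows how a single malicious client scaling its update can skew plain averaging, and how coordinate-wise median aggregation (one well-known robust-aggregation baseline; the project targets novel defenses beyond this) limits the damage. All values are illustrative.

```python
import numpy as np

# Nine honest clients report updates close to [1, 1]...
honest = [np.array([1.0, 1.0]) + 0.01 * i for i in range(9)]
# ...while one attacker submits a heavily scaled, manipulated update.
poisoned = [np.array([100.0, -100.0])]
updates = honest + poisoned

mean_agg = np.mean(updates, axis=0)      # badly skewed by the one attacker
median_agg = np.median(updates, axis=0)  # stays close to the honest updates
```

A single adversary moves the mean by an order of magnitude, whereas the median remains near the honest consensus — illustrating why aggregation-level defenses are a natural starting point for protecting FL-based network management.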
Aims and objectives
This project aims to develop novel protection mechanisms against adversarial attacks in Distributed/Federated ML systems for network management and mobility support in 6G and beyond networks. It will help provide secure and reliable communication during movement in smart cities, ultimately increasing safety in application scenarios such as self-driving cars and drones.