With continuing advances in mobile vision and sensing technologies, augmented, mixed, and virtual reality (AR/MR/VR, which we collectively refer to as MR) is becoming increasingly popular.
From face filters on social networks to games that place virtual pets or monsters seemingly inhabiting the physical world, a wide range of MR applications is now accessible to most mobile users.
MR platforms require spatial understanding to detect objects or surfaces, often including their structural (i.e. spatial geometry) and photometric (e.g. colour and texture) attributes, so that applications can place virtual or synthetic objects that appear "anchored" to real-world objects and, in some cases, even allow interaction between physical and virtual objects.
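To make "spatial understanding" concrete, the minimal sketch below shows one common way a surface can be recovered from a raw 3D point cloud: fitting a dominant plane (e.g. a floor or table top) with a basic RANSAC loop. This is a purely illustrative NumPy example under our own assumptions; the function name, parameters, and toy data are ours, and it is not the API of any particular MR platform.

    import numpy as np

    def ransac_plane(points, n_iters=500, threshold=0.02, rng=None):
        """Fit a dominant plane to an (N, 3) point cloud with a basic RANSAC loop.

        Returns the plane as (normal, d) with normal . p + d = 0, plus inlier indices.
        """
        rng = np.random.default_rng(rng)
        best_inliers = np.array([], dtype=int)
        best_plane = None
        for _ in range(n_iters):
            # Sample three distinct points and form a candidate plane.
            idx = rng.choice(len(points), size=3, replace=False)
            p0, p1, p2 = points[idx]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-12:      # degenerate (collinear) sample, skip
                continue
            normal /= norm
            d = -normal.dot(p0)
            # Count points close to the candidate plane.
            dist = np.abs(points @ normal + d)
            inliers = np.flatnonzero(dist < threshold)
            if len(inliers) > len(best_inliers):
                best_inliers, best_plane = inliers, (normal, d)
        return best_plane, best_inliers

    # Toy example: a noisy horizontal surface (e.g. a table top) plus clutter.
    rng = np.random.default_rng(0)
    table = np.column_stack([rng.uniform(0, 1, 2000),
                             rng.uniform(0, 1, 2000),
                             0.75 + rng.normal(0, 0.005, 2000)])
    clutter = rng.uniform(0, 1, (500, 3))
    (normal, offset), inliers = ransac_plane(np.vstack([table, clutter]))
    print("plane normal:", normal, "offset:", offset, "inliers:", len(inliers))

Production MR frameworks use far more sophisticated, temporally fused pipelines; the point of the sketch is only that dense depth data on its own is enough to recover surfaces in the user's environment.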
These functionalities require MR platforms to capture 3D spatial information at high resolution and high frequency, which poses unprecedented risks to user privacy.
Compared to images and videos, spatial data poses additional and potentially latent risks to MR users. Beyond the objects it captures, spatial information reveals the user's location with high specificity, e.g. which part of the house the user is in, and can even expose the user's pose, movement, or changes in their environment that they did not intend to share.
Adversaries range from a background service that delivers unsolicited ads based on objects detected in the user's surroundings, to burglars who can map the user's house, and perhaps the locations and dimensions of specific objects within it, from the released 3D data.
Furthermore, turning off GPS tracking for location privacy may no longer be sufficient once the user starts using MR applications that can expose their location.
This project focuses on (i) thorough experimental validation of the privacy risks associated with 3D point cloud data, and (ii) the development of privacy-preserving, user-configurable transformation mechanisms for 3D point cloud data, resulting in novel 3D data regeneration models and frameworks.
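As a rough illustration of what a user-configurable transformation could look like, the sketch below coarsens a raw point cloud by voxel-grid averaging and adds bounded noise, with the voxel size and noise scale acting as user-tunable privacy knobs. The function name, parameters, and pipeline are hypothetical placeholders chosen for this example; they are not the project's actual mechanism.

    import numpy as np

    def generalize_point_cloud(points, voxel=0.05, noise_scale=0.01, rng=None):
        """Illustrative generalization of an (N, 3) point cloud.

        Coarsens geometry by voxel-grid averaging, then perturbs the surviving
        points with bounded Gaussian noise; larger `voxel` / `noise_scale`
        trade utility for privacy.
        """
        rng = np.random.default_rng(rng)
        # Assign each point to a voxel and keep one averaged point per voxel.
        keys = np.floor(points / voxel).astype(np.int64)
        keys -= keys.min(axis=0)                 # make voxel indices non-negative
        dims = keys.max(axis=0) + 1
        flat = (keys[:, 0] * dims[1] + keys[:, 1]) * dims[2] + keys[:, 2]
        _, inverse = np.unique(flat, return_inverse=True)
        counts = np.bincount(inverse)
        sums = np.zeros((counts.size, 3))
        np.add.at(sums, inverse, points)
        centroids = sums / counts[:, None]
        # Perturb the centroids, clipping the noise so points cannot drift far.
        noise = np.clip(rng.normal(0.0, noise_scale, centroids.shape),
                        -2 * noise_scale, 2 * noise_scale)
        return centroids + noise

    # Example: a 10k-point cloud shrinks to far fewer, coarser points.
    cloud = np.random.default_rng(1).uniform(0, 2, (10_000, 3))
    released = generalize_point_cloud(cloud, voxel=0.1, noise_scale=0.02)
    print(cloud.shape, "->", released.shape)

The research itself would explore much richer transformations (e.g. generative regeneration of 3D data), but even this simple sketch shows the kind of utility/privacy trade-off that a user-configurable mechanism needs to expose.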
We are looking for a PhD student who satisfies the following criteria:
• A first-class honours bachelor's degree, MSc, or equivalent degree in computer science and engineering, telecommunications, or electrical engineering.
• Excellent knowledge of applied machine learning, or of computer networking and mobile systems.
• Hands-on experience in programming and software development.
This is a fully funded scholarship for 3.5 years, covering tuition fees and a stipend for living expenses.
To apply, submit your resume to Dr Kanchana Thilakarathna - [Email Address Removed]