PhD Studentship - Advanced Robotic Telepresence: Efficient Encoding Based on Intelligent Scene Identification
Telepresence offers the benefits of being physically present at a remote location without incurring the cost or time required to travel there. Furthermore, in hazardous environments such as radioactive sites, telepresence keeps the operator at a safe distance, free from radiation dose and the inherent risks of being on-site. We recently deployed a telepresence robot at the Sellafield site for remote inspection of a vessel critical to Magnox reprocessing. We are currently working on a number of EPSRC and Innovate UK projects on 3D scene reconstruction, including http://gtr.rcuk.ac.uk/projects?ref=102067 (£830k) with NNL and Sellafield, and http://gow.epsrc.ac.uk/NGBOViewGrant.aspx?GrantRef=EP/N018427/1 (£2M) with the University of Sheffield.
You will join a team of ten PhD students and post-docs based in the Institute of Sensors, Signals and Communications (InstSCC) at the University of Strathclyde, working on image reconstruction, 3D modelling and remote access.
This specific application concerns teleworking for international companies. Although video conferencing tools such as Skype, Zoom and GoToMeeting offer a reasonable substitute for remote face-to-face meetings and, to a lesser extent, group meetings, they have serious limitations when it comes to collaborative working.
This PhD will investigate knowledge-based data compression for effective telepresence. In practice this will involve remote scene capture, possibly using multiple high-resolution, wide-field-of-view cameras; data compression; and the human-machine interface. The information captured at the remote location will far exceed what can be reliably transmitted to the remote user over realistic network bandwidths. The PhD will therefore develop image processing algorithms that selectively send the most appropriate information to the remote user.
1. Through the application of state-of-the-art machine learning techniques (e.g. deep learning), the developed system will be endowed with the ability to classify objects in the scene. Given this fundamental information, bandwidth can be allocated accordingly: for example, people's faces need to be transmitted at high resolution and a rapid frame rate, whereas their surroundings do not.
2. In a typical session, the vast majority of the information in the scene will be static. Algorithms will be developed to identify and extract these components so that retransmission of redundant information is reduced.
3. Video encoding and the human-machine interface. This part of the PhD will consider the software development aspects of implementing a prototype. The candidate will investigate methods of controlling the system, comparing manual control, where the remote user moves the field of view, with both hybrid and fully automated solutions.
4. The use of multiple cameras with overlapping fields of view gives rise to the possibility of measuring the physical dimensions of objects from image data in real time. In an engineering context, this could be used in an augmented reality application whereby dimensions are superimposed onto the video of a component being transmitted to the remote worker.
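To give a flavour of objective 1, the sketch below shows one simple way class labels could drive bandwidth allocation: a fixed link budget is split across classified regions in proportion to an importance weight times region area, so faces receive far more bits per pixel than background. The class names, weights and region format are illustrative assumptions, not part of the project specification.

```python
# Illustrative sketch: split a fixed link budget across classified scene
# regions, giving high-priority classes (e.g. faces) more bits per pixel.
# Class labels, weights and the Region format are assumptions for this example.

from dataclasses import dataclass

# Relative importance per detected class (hypothetical values).
CLASS_WEIGHT = {"face": 10.0, "document": 5.0, "background": 1.0}

@dataclass
class Region:
    label: str   # class assigned by the scene classifier
    pixels: int  # region size in pixels

def allocate_bitrate(regions, budget_kbps):
    """Split budget_kbps across regions in proportion to weight * area."""
    scores = [CLASS_WEIGHT.get(r.label, 1.0) * r.pixels for r in regions]
    total = sum(scores)
    return [budget_kbps * s / total for s in scores]

regions = [Region("face", 20_000), Region("background", 480_000)]
rates = allocate_bitrate(regions, budget_kbps=2_000)
# The face region ends up with roughly 10x the bits per pixel of the background.
```

In a real system the weights would come from the learned classifier's output and the encoder's rate control, but the principle — spend the channel where the semantics matter — is the same.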
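Objective 2 can be sketched with simple block-based change detection: divide each frame into blocks and flag for retransmission only those blocks whose mean absolute difference from the previous frame exceeds a threshold. Plain Python lists stand in for real frames here, and the block size and threshold are illustrative assumptions.

```python
# Illustrative sketch of static-scene suppression: retransmit only the
# blocks of a frame that changed beyond a threshold since the last frame.

def changed_blocks(prev, curr, block=4, threshold=8.0):
    """Return (row, col) indices of blocks whose mean absolute pixel
    difference from the previous frame exceeds the threshold."""
    h, w = len(curr), len(curr[0])
    out = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            diff = sum(abs(curr[y][x] - prev[y][x]) for y in ys for x in xs)
            n = len(ys) * len(xs)
            if diff / n > threshold:
                out.append((by // block, bx // block))
    return out

# A static 8x8 scene with one moving object in the top-left block.
prev = [[0] * 8 for _ in range(8)]
curr = [row[:] for row in prev]
for y in range(4):
    for x in range(4):
        curr[y][x] = 200  # the only region that changed

print(changed_blocks(prev, curr))  # -> [(0, 0)]: one block to resend
```

Modern video codecs already exploit temporal redundancy at the block level; the research interest here is doing it with scene-level knowledge of which regions are genuinely static.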
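For objective 4, the standard pinhole stereo relations show how overlapping cameras yield physical dimensions: depth follows from disparity as Z = f·B/d, and a pixel extent back-projects to metres as W = Z·w_px/f. The camera parameters below are assumed values for the example, not a calibrated rig.

```python
# Illustrative sketch of dimensioning from two overlapping cameras using
# the pinhole stereo relations. Focal length and baseline are assumptions.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth Z = f * B / d for a rectified stereo pair."""
    return focal_px * baseline_m / disparity_px

def object_width(focal_px, depth_m, width_px):
    """Back-project a pixel extent to metres: W = Z * w_px / f."""
    return depth_m * width_px / focal_px

# Hypothetical rig: 800 px focal length, 10 cm baseline.
z = depth_from_disparity(focal_px=800, baseline_m=0.10, disparity_px=40)
w = object_width(focal_px=800, depth_m=z, width_px=160)
print(z, w)  # component is 2.0 m away and 0.4 m wide
```

These computed dimensions are exactly what an augmented reality overlay would superimpose onto the video stream sent to the remote worker.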
EPSRC DTP Eligibility: https://www.epsrc.ac.uk/skills/students/help/eligibility/
Co-funded by Pressure Profile Systems, Los Angeles (www.pressureprofile.com)
Opportunity of a paid internship with PPS UK, which can provide up to £9k per annum of additional income.
How good is research at University of Strathclyde in Electrical and Electronic Engineering, Metallurgy and Materials?
FTE Category A staff submitted: 59.20
Research output data provided by the Research Excellence Framework (REF)