Among all video content, one of the areas that has grown most significantly in recent years is content based on augmented and virtual reality (AR and VR) technologies. These have the potential for major growth, and developments in displays, interactive equipment, mobile networks, edge computing, and compression are likely to facilitate this in the coming years.
A key new format that underpins the development of these technologies is referred to as volumetric video, with commonly used representations including point clouds, multi-view + depth, and equirectangular formats. Various volumetric video codecs have been developed to compress these formats for transmission or storage. To present volumetric video content, the compressed data is decoded and post-processed using a synthesiser/renderer, which enables 3DoF/6DoF viewing on AR or VR devices.
In this context, this four-year PhD project will focus on the two essential stages within this workflow: volumetric video compression and post-processing. Inspired by recent advances in deep video compression and rendering, we will research novel AI-based production workflows for volumetric video content to significantly improve the coding efficiency and the perceptual quality of the final rendered content.
This project is funded by the EPSRC iCASE scheme (sponsored by BT) and aligns with the MyWorld UKRI Strength in Places Programme at the University of Bristol, in which BT is a key collaboration partner. The project is highly relevant to the immersive video production R&D activities within BT and fits well within one of the core research areas outlined in the MyWorld programme: video production and communications for immersive content. The student working on this project will have the opportunity to work with experts at BT, gaining experience of immersive video production workflows, from capture and contribution to live editorial production and delivery at scale to a growing variety of XR-capable devices.
URL for further information: http://www.myworld-creates.com/
Applicants must hold/achieve a minimum of a master's degree (or international equivalent) in a relevant discipline. Applicants without a master's qualification may be considered on an exceptional basis, provided they hold a first-class undergraduate degree. Please note that acceptance will also depend on evidence of readiness to pursue a research degree.
If English is not your first language, you will need to meet the required English language profile level.
Further information about English language requirements and profile levels is available from the University.
Basic skills and knowledge required
Essential: Excellent analytical skills and experimental acumen.
Desirable: A background understanding in one or more of the following:
3D computer vision
Artificial intelligence / Machine Learning / Deep Learning
· All candidates should submit a full CV and covering letter to [Email Address Removed] (FAO: Professor David R. Bull) by the deadline.
· Formal applications for the PhD are not essential at this stage but can be submitted (clearly marked as MyWorld funded) via the University of Bristol homepage: https://www.bristol.ac.uk/study/postgraduate/apply/
· A selection panel will be established to review all applications and to interview short-listed candidates.
· Candidates will be invited to give a presentation prior to their formal interview, as part of the final selection process. Shortlisting is expected to commence on 7 April 2023, with interviews to follow.
· The initial closing date for applications is 31 March 2023. The positions will, however, remain open until all scholarships are awarded.
For questions about eligibility and the application process, please contact SCEEM Postgraduate Research Admissions: [Email Address Removed]