
University of Bristol / EPSRC/BBC R&D iCASE award: AI methods to optimise future immersive video formats

Faculty of Engineering

This project is no longer listed on FindAPhD and may not be available.

Supervisors: Prof D Bull, Prof I Gilchrist
No more applications being accepted. Funded PhD Project (UK Students Only).

About the Project

Background: The University of Bristol, through Bristol Vision Institute (BVI), is a world leader in vision science and vision engineering. Bristol has a long and rich tradition at the forefront of the study of visual communications and image understanding, artificial vision systems, and human and animal vision, all based on interdisciplinary collaborations across engineering, science, medicine and the creative arts. This project involves two BVI research groups: the Visual Information Laboratory (VI-Lab) and the Experimental Psychology Vision Group (EPVG). VI-Lab undertakes innovative, collaborative and interdisciplinary research resulting in world-leading technology in the areas of computer vision, image and video communications, content analysis and distributed sensor systems, and is one of the largest groupings of its type in the UK. The group’s research studios provide an excellently equipped facility for undertaking this research, containing state-of-the-art cameras, displays and measurement equipment, together with a comprehensive suite for psychophysical experimentation and subjective testing. EPVG has an international reputation for its work on human visual behaviour, with particular strengths in understanding basic perceptual processes, visual attention, visual memory, eye movements and natural scene statistics. This work includes understanding human interactions with real and virtual environments.

Project Description: There is a strong appetite among consumers, producers and network operators for new, more immersive video content. Efforts in this area have focused on extending the video parameter space: greater dynamic range, increased spatial and temporal resolutions, wider colour gamut, enhanced interactivity through 360-degree content, larger displays and, of course, the full stimulation of peripheral vision through head-mounted displays, which provide a greater sense of immersion for many users. There is, however, very limited understanding of the interactions between the dimensions of this extended parameter space and their relationship to content statistics, visual engagement and delivery methods. The way we represent these immersive video formats is therefore key to ensuring that content is delivered at an appropriate quality, one that retains the intended immersive properties of the format while remaining compatible with the bandwidth and variable nature of the transmission channel. Major research innovations are needed to solve this problem.

The research challenges to be addressed are based on the hypothesis that, by exploiting AI methods to capture the perceptual properties of the Human Visual System, and its content-dependent performance, we can obtain step changes in visual engagement while also managing bit rate. Deep learning methods have made significant advances in recent years and are now able to analyse and classify visual scenes with a performance approaching that of a human. Our objectives are therefore: i) to understand the perceptual relationships between video parameters and content type; and ii) to develop new visual content representations that adapt to content statistics and their immersive requirements. To meet these objectives, we will develop new machine (deep) learning methods to classify scene content, optimised using perceptual loss functions, and relate this to the extended video parameter space.
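The idea behind a perceptual loss function can be illustrated with a toy sketch: instead of comparing a reference and a distorted image pixel by pixel, compare them in a feature space that reflects what the visual system is sensitive to, such as edge structure. The sketch below is purely illustrative and is not part of the project; it uses fixed Sobel edge filters as a stand-in for the learned deep-network features a real perceptual loss would use.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2-D 'valid' cross-correlation, for illustration only."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Fixed "feature extractor": Sobel edge filters. A real perceptual loss
# would instead use feature maps from a trained deep network.
FILTERS = [
    np.array([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]]),  # vertical edges
    np.array([[1., 2., 1.], [0., 0., 0.], [-1., -2., -1.]]),  # horizontal edges
]

def pixel_loss(ref, test):
    """Plain mean-squared error in pixel space."""
    return float(np.mean((ref - test) ** 2))

def feature_loss(ref, test):
    """MSE between edge-filter responses: a crude 'perceptual' loss."""
    losses = [np.mean((conv2d_valid(ref, k) - conv2d_valid(test, k)) ** 2)
              for k in FILTERS]
    return float(np.mean(losses))

# A uniform brightness offset changes every pixel but leaves edge
# structure intact, so the feature-space loss ignores it while the
# pixel-space loss does not.
ref = np.zeros((8, 8))
ref[:, 4:] = 1.0               # image with one vertical edge
shifted = ref + 0.1            # same structure, brighter overall
print(pixel_loss(ref, shifted))    # nonzero
print(feature_loss(ref, shifted))  # (near) zero
```

The example shows why optimising in a feature space can better match perceived quality: distortions that preserve structure are penalised less than the same pixel error spent destroying edges.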

The person: We are seeking a candidate with an interest in video processing and scene understanding who can exploit an understanding of perceptual processes. The candidate would ideally have an undergraduate or Master’s degree in a relevant discipline such as Computer Science or Electronic Engineering, or could come from another numerate discipline or from psychology with a strong mathematical background. Due to the interdisciplinary nature of this work, applicants with different scientific backgrounds will also be considered.

Funding Notes

The post must start by September 2020 and will be funded over 3 years. The award will cover an enhanced EPSRC stipend, home student fees and a substantial budget for consumables, travel and subsistence. The successful candidate will have extensive opportunities to work closely with BBC R&D throughout, including secondments and training. Standard EPSRC eligibility rules apply. This advert will remain open until a suitable candidate is recruited, so applicants are encouraged to get in touch as soon as possible. For further details, or to discuss this or other relevant areas, contact Professor David Bull, email: [Email Address Removed], including a full CV and any relevant details.

FindAPhD. Copyright 2005-2021
All rights reserved.