About the Project
EPSRC iCASE award with BBC Research and Development
Background: The University of Bristol, through the Bristol Vision Institute (BVI), is a world leader in vision science. Bristol has a long and rich tradition at the forefront of the study of human and animal vision, artificial vision systems and imaging more generally, based on collaborations across science, engineering, medicine and the creative arts (www.bristol.ac.uk/vision-institute). Within BVI, this project will involve two research groups – the Visual Information Laboratory (VI-Lab) and the Experimental Psychology Vision Group (EPVG). VI-Lab (www.bristol.ac.uk/vi-lab) undertakes innovative, collaborative and interdisciplinary research resulting in world-leading technology in the areas of computer vision, image and video communications, content analysis and distributed sensor systems. VI-Lab is one of the largest groupings of its type in the UK. The group’s research studio provides an excellently equipped facility for undertaking this research, containing state-of-the-art cameras, displays and measurement equipment together with a comprehensive suite for psychophysical experimentation. EPVG has an international reputation for its work on human visual behaviour, with particular strengths in understanding basic perceptual processes, visual attention, visual memory, eye movements and natural scene statistics. This work includes understanding human interactions with real and virtual environments.
Description: There is a hunger for new, more immersive video content (UHDTV, 360° video, etc.) among users, producers and network operators. Efforts in this area have focused on extending the video parameter space with greater dynamic range, spatial resolution, temporal resolution, colour gamut, interactivity and display size/format. There is, however, very limited understanding of the interactions between these parameters and their relationship to content statistics, visual immersion and delivery methods. The way we represent these immersive video formats is thus key to ensuring that content is delivered at an appropriate quality – one that preserves the intended immersive properties of the format while remaining compatible with the bandwidth constraints and variable nature of the transmission channel. Major research innovations are needed to solve this problem.
The research challenges to be addressed are based on the hypothesis that, by exploiting the perceptual properties of the Human Visual System and its content-dependent performance, we can obtain step changes in visual engagement while also managing bit rate. We must therefore: i) understand the perceptual relationships between video parameters and content type; and ii) develop new visual content representations that adapt to content statistics and their immersive requirements. The solution to this problem will focus on the exploitation of machine learning methods to classify scene content and relate this to the extended video parameter space.
The person: We are seeking a person with an interest in technology for video communications linked to perceptual processes. The person would ideally have an undergraduate or Master’s degree in a relevant discipline such as Electrical and Electronic Engineering or Computer Science. Due to the interdisciplinary nature of this work, applicants from other scientific backgrounds will also be considered.
Further details and contact: The post will start in September 2018 and will be funded for 3.5 years. The award will cover an enhanced EPSRC stipend, home-based student fees and a budget for consumables, travel and subsistence. Standard EPSRC eligibility rules apply. This advert will remain open until a suitable candidate is recruited, so applicants are encouraged to make contact as soon as possible. For further details, or to discuss this (or other relevant) areas, please contact Professor David Bull, email: [Email Address Removed], or Professor Iain Gilchrist, email: [Email Address Removed], including a full CV and any other relevant details.