About the Project
The University of Bristol, through Bristol Vision Institute (BVI), is a world leader in vision science and vision engineering. Bristol has a long and rich tradition at the forefront of the study of visual communications and image understanding, artificial vision systems, and human and animal vision, all based on interdisciplinary collaborations across engineering, science, medicine and the creative arts (www.bristol.ac.uk/vision-institute).
This project involves two BVI research groups – the Visual Information Laboratory (VI-Lab) and the Experimental Psychology Vision Group (EPVG). VI-Lab (www.bristol.ac.uk/vi-lab) undertakes innovative, collaborative and interdisciplinary research resulting in world-leading technology in the areas of computer vision, image and video communications, content analysis and distributed sensor systems. VI-Lab is one of the largest groupings of its type in the UK. The group’s research studios provide an excellently equipped facility for undertaking this research, containing state-of-the-art cameras, displays and measurement equipment, together with a comprehensive suite for psychophysical experimentation and subjective testing. EPVG has an international reputation for its work on human visual behaviour, with particular strengths in understanding basic perceptual processes, visual attention, visual memory, eye movements and natural scene statistics. This work includes understanding human interactions with real and virtual environments.
There is a hunger for new, more immersive video content from consumers, producers and network operators. Efforts in this area have focused on extending the video parameter space with greater dynamic range, increased spatial and temporal resolutions, wider colour gamut, enhanced interactivity through 360-degree content, larger displays and, of course, the full stimulation of peripheral vision through head-mounted displays, which provide a greater sense of immersion for many users. There is, however, very limited understanding of the interactions between the dimensions in this extended parameter space and their relationship to content statistics, visual engagement and delivery methods. The way we represent these immersive video formats is thus key to ensuring that content is delivered at an appropriate quality, one which retains the intended immersive properties of the format while remaining compatible with the bandwidth and variable nature of the transmission channel. Major research innovations are needed to solve this problem.
The research challenges to be addressed are based on the hypothesis that, by exploiting the perceptual properties of the Human Visual System, and its content-dependent performance, we can obtain step changes in visual engagement while also managing bit rate. We must therefore:
i) understand the perceptual relationships between video parameters and content type; and
ii) develop new visual content representations that adapt to content statistics and their immersive requirements.

This project will address these challenges by exploiting machine (deep) learning methods to classify scene content and relate it to the extended video parameter space.
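To make the link between content statistics and the video parameter space concrete, the sketch below computes two standard content descriptors – spatial information (SI) and temporal information (TI), as defined in ITU-T Rec. P.910 – from raw luma frames, and then maps them to a coding-parameter suggestion. The SI/TI computation follows the standard; the mapping thresholds and the `suggest_parameters` function are purely illustrative assumptions, not part of the project description.

```python
import numpy as np

def _sobel_magnitude(f):
    """Gradient magnitude of a 2-D luma frame using 3x3 Sobel kernels
    (valid region only), implemented with array slicing."""
    f = f.astype(float)
    gx = ((f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:])
          - (f[:-2, :-2] + 2 * f[1:-1, :-2] + f[2:, :-2]))
    gy = ((f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:])
          - (f[:-2, :-2] + 2 * f[:-2, 1:-1] + f[:-2, 2:]))
    return np.hypot(gx, gy)

def spatial_information(frame):
    """SI per ITU-T P.910: std. dev. of the Sobel-filtered luma frame."""
    return float(np.std(_sobel_magnitude(frame)))

def temporal_information(prev_frame, frame):
    """TI per ITU-T P.910: std. dev. of the luma frame difference."""
    return float(np.std(frame.astype(float) - prev_frame.astype(float)))

def suggest_parameters(si, ti):
    """Hypothetical rule-based mapping from content statistics to
    coding parameters; a learned model would replace these thresholds."""
    frame_rate = 120 if ti > 20 else 60       # fast motion -> high frame rate
    resolution = "2160p" if si > 40 else "1080p"  # fine detail -> high resolution
    return {"frame_rate": frame_rate, "resolution": resolution}
```

In the project itself, the hand-tuned thresholds above would be replaced by a deep network trained to predict perceptually optimal parameter choices from richer scene features.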
We are seeking a person with an interest in technology for video communications that exploits perceptual processes. The ideal candidate will have an undergraduate or Master’s degree in a relevant discipline such as Computer Science or Electronic Engineering, or could be a Psychology graduate with a strong mathematical background. Given the interdisciplinary nature of this work, applicants with other scientific backgrounds will also be considered.
For further details, or to discuss this or other relevant areas, please contact Professor David Bull by email: [Email Address Removed], including a full CV and any other relevant details.