About the Project
The development of deep learning convolutional neural network (CNN) architectures has led to remarkable progress in computer vision, especially in object detection [1, 2]. However, the variability of an object’s appearance when it is manipulated, occluded or seen from different angles prevents consistent object recognition, which in turn hampers interpretation of the scene of interest. Consequently, an individual’s activities cannot be monitored with the required accuracy.
This project addresses the recognition of activities involving hand and object manipulation in videos captured by a wearable camera. It proposes a novel two-stream CNN-Profile HMM architecture, in which a successful deep learning CNN architecture designed to optimise inter-object discrimination is enhanced with intra-object variation models. Specifically, instances in which a given object is recognised will be used to create a canonical object model that integrates viewpoint variations. These models will be built using the novel ‘vide-omics’ paradigm recently developed at Kingston University [3]. Founded on the principles of genomics, where variability is the expected norm rather than an inconvenience to control, ‘vide-omics’ models variations of an object’s appearance by interpreting them as the product of mutations applied to a canonical model, the ‘common ancestor’ of all representations of that object. Just as genomics produces models of gene families by encoding the mutations between family members using Profile Hidden Markov Models (P-HMMs), novel individual descriptors based on P-HMMs will be developed to model quantitatively known variations. The suitability of P-HMMs for encoding variations in images has already been demonstrated in a recent publication [4].
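The idea of a canonical model whose variations are scored as mutations can be illustrated with a minimal Profile HMM. The sketch below is purely hypothetical and not the project’s actual implementation: the discrete alphabet of quantized appearance “symbols”, the `ProfileHMM` class and all transition/emission values are assumptions made for the example. It scores an observed symbol sequence against a canonical sequence via Viterbi decoding over match, insert and delete states, returning a log-odds score relative to a uniform background.

```python
import math

class ProfileHMM:
    """Sketch of a Profile HMM over a canonical ('common ancestor') symbol
    sequence. Match states encode the canonical appearance; substitutions,
    insertions and deletions play the role of mutations."""

    def __init__(self, consensus, alphabet, match_prob=0.9,
                 t_mm=0.85, t_mi=0.075, t_md=0.075):
        self.L = len(consensus)
        n = len(alphabet)
        # Match-state emissions: mass concentrated on the canonical symbol,
        # remainder spread over the others (a substitution-style mutation).
        self.match_emit = [
            {a: match_prob if a == sym else (1 - match_prob) / (n - 1)
             for a in alphabet}
            for sym in consensus
        ]
        self.bg = 1.0 / n  # uniform background model
        self.t = {'mm': math.log(t_mm), 'mi': math.log(t_mi),
                  'md': math.log(t_md), 'ii': math.log(0.5),
                  'im': math.log(0.5), 'dd': math.log(0.5),
                  'dm': math.log(0.5)}

    def log_odds(self, seq):
        """Viterbi log-odds of seq against the canonical model, relative to
        the uniform background (insert emissions cancel out)."""
        NEG = float('-inf')
        L, t, lbg = self.L, self.t, math.log(self.bg)
        m = [[NEG] * (L + 1) for _ in range(len(seq) + 1)]
        ins = [[NEG] * (L + 1) for _ in range(len(seq) + 1)]
        d = [[NEG] * (L + 1) for _ in range(len(seq) + 1)]
        m[0][0] = 0.0  # virtual begin state
        for i in range(len(seq) + 1):
            for j in range(L + 1):
                if i > 0 and j > 0:  # match state M_j emits seq[i-1]
                    em = math.log(self.match_emit[j - 1][seq[i - 1]]) - lbg
                    m[i][j] = em + max(m[i - 1][j - 1] + t['mm'],
                                       ins[i - 1][j - 1] + t['im'],
                                       d[i - 1][j - 1] + t['dm'])
                if i > 0:  # insert state I_j consumes a symbol, stays at j
                    ins[i][j] = max(m[i - 1][j] + t['mi'],
                                    ins[i - 1][j] + t['ii'])
                if j > 0:  # delete state D_j skips a model position
                    d[i][j] = max(m[i][j - 1] + t['md'],
                                  d[i][j - 1] + t['dd'])
        return max(m[len(seq)][L], ins[len(seq)][L], d[len(seq)][L])
```

Under this toy model, a sequence identical to the canonical one scores highest, while scrambled, substituted or shortened sequences are penalised in proportion to the “mutations” needed to explain them, which is the intuition behind using P-HMMs to encode known appearance variations.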
Applicants should hold at least an Honours Degree at 2:1 or above (or equivalent) in Computer Science or a related discipline. In addition, they should have excellent programming skills in MATLAB, Java and/or C++ and an interest in machine learning.
Qualified applicants are strongly encouraged to contact the supervising academic, Dr Nebel ([email protected]), informally to discuss their application. More about Dr Nebel’s research group and activities can be found on his personal website: https://sites.google.com/site/jeanchristophenebel
References
[1] Recognition of Activities of Daily Living with Egocentric Vision: A Review, T. Nguyen, J.-C. Nebel, F. Florez-Revuelta, Sensors, 16(1):72, 2016.
[2] Recognition of Activities of Daily Living from Egocentric Videos Using Hands Detected by a Deep Convolutional Network, T. Nguyen, J.-C. Nebel, F. Florez-Revuelta, International Conference on Image Analysis and Recognition, 2018.
[3] Vide-omics: A Genomics-inspired Paradigm for Video Analysis, I. Kazantzidis, F. Florez-Revuelta, M. Dequidt, N. Hill, J.-C. Nebel, Computer Vision and Image Understanding, 166:28-40, 2018.
[4] Profile Hidden Markov Models for Foreground Modelling, I. Kazantzidis, F. Florez-Revuelta, J.-C. Nebel, IEEE International Conference on Image Processing, 2018.