Deep Facial Micro-Expression Detection and Generation Framework
This project aims to develop an automated deep-learning framework for facial micro-expression (FME) detection and generation. It focuses on FMEs, which often occur when a person attempts to conceal their emotions. The objectives are:
- Detect FMEs using deep learning methods
- Establish large-scale dataset using generative framework
- Identify hidden malicious intent through automated facial micro-expression analysis
This project will produce a tool for end users to assess and monitor the behaviour of people with suspicious intent. A tool to detect and generate facial micro-expressions is a missing piece of the puzzle in current research.
***Aims and objectives***
This project develops an automated FME detection framework for security applications, which can be generalised to other domains. The objectives are:
1. Establish large-scale dataset using a generative framework.
Expanding existing datasets, or establishing new ones, is a direct way to accelerate research in this area. With our expert panel (owners of existing datasets and psychologists), we will design a standard procedure for maximising the number of spontaneously induced micro-movements. We will establish a large-scale, publicly available dataset of spontaneous behaviour that comprises annotated visual and depth face data, together with annotated spatial-temporal data. Our aim is to record a realistic dataset: in addition to collecting data with a rigid head pose (as in existing micro-expression datasets), we will also record participants under varying viewing conditions to mimic realistic situations. Moreover, we will develop a generative adversarial network (GAN) to increase the size of the dataset.
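The adversarial training loop behind GAN-based augmentation can be illustrated with a minimal sketch. Everything below is a toy assumption, not the project's actual model: one-dimensional Gaussian "samples" stand in for face data, the generator is linear, and the discriminator is a single logistic unit, so the alternating gradient updates are visible in a few lines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for real face data: samples drawn from N(4, 1).
real = rng.normal(4.0, 1.0, size=(512,))

# Linear generator G(z) = a*z + b and logistic discriminator D(x).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr = 0.05

def d(x, w, c):
    # Clip the logit to keep exp() numerically safe.
    return 1.0 / (1.0 + np.exp(-np.clip(w * x + c, -30.0, 30.0)))

for step in range(2000):
    z = rng.normal(size=(512,))
    fake = a * z + b

    # Discriminator: gradient ascent on  E[log D(real)] + E[log(1 - D(fake))].
    dw = np.mean((1 - d(real, w, c)) * real) - np.mean(d(fake, w, c) * fake)
    dc = np.mean(1 - d(real, w, c)) - np.mean(d(fake, w, c))
    w += lr * dw
    c += lr * dc

    # Generator: gradient ascent on the non-saturating loss  E[log D(fake)].
    da = np.mean((1 - d(fake, w, c)) * w * z)
    db = np.mean((1 - d(fake, w, c)) * w)
    a += lr * da
    b += lr * db

# "Augmented" samples drawn from the trained generator.
augmented = a * rng.normal(size=(4096,)) + b
```

In the project itself the generator would synthesise image sequences rather than scalars, but the alternating discriminator/generator updates follow the same pattern.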
2. Detect FMEs using deep learning approaches.
Dynamic face data has proven more effective for facial expression analysis, and state-of-the-art FME research is based on spatial-temporal visual face data recorded as video clips. We propose to improve the accuracy of FME detection by combining depth face data with deep learning approaches. Depth data provides surface geometry that complements the visual data, increasing both the robustness and the accuracy of the method.
3. Identify hidden malicious intent through automated FMEs analysis.
We will deploy a deep learning approach for spatial-temporal analysis of the face data. The dual visual and depth inputs will be fused and fed into an R-CNN combined with an LSTM to predict the expressions on the face.
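The dual-stream idea can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the proposed model: random vectors stand in for the per-frame R-CNN features of each modality, the two streams are fused by simple concatenation, and a hand-rolled, untrained LSTM cell plays the role of the temporal model; all dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame features: in the real pipeline these would come
# from an R-CNN applied to the visual and depth streams. Here: T frames,
# with d_vis- and d_dep-dimensional features per stream.
T, d_vis, d_dep, h = 16, 8, 4, 6
visual = rng.normal(size=(T, d_vis))
depth = rng.normal(size=(T, d_dep))

# Early fusion: concatenate the two modalities per frame.
fused = np.concatenate([visual, depth], axis=1)   # shape (T, d_vis + d_dep)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Minimal LSTM cell (randomly initialised, untrained; illustration only).
d_in = d_vis + d_dep
W = rng.normal(scale=0.1, size=(4 * h, d_in + h))  # input/forget/cell/output gates
bias = np.zeros(4 * h)

h_t, c_t = np.zeros(h), np.zeros(h)
for x_t in fused:
    z = W @ np.concatenate([x_t, h_t]) + bias
    i, f, g, o = z[:h], z[h:2*h], z[2*h:3*h], z[3*h:]
    c_t = sigmoid(f) * c_t + sigmoid(i) * np.tanh(g)   # cell state update
    h_t = sigmoid(o) * np.tanh(c_t)                    # hidden state update

# Linear read-out from the final hidden state to K expression logits.
K = 3
logits = rng.normal(scale=0.1, size=(K, h)) @ h_t
print(logits.shape)  # → (3,)
```

The key design choice shown is fusing the modalities before the recurrence, so the LSTM models the joint spatial-temporal dynamics; a late-fusion variant (separate recurrences merged at the classifier) would be an equally plausible reading of the proposal.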
For more information on the project, see the website: https://www2.mmu.ac.uk/research/research-study/scholarships/detail/scieng-mhy-2019-1-deep-facial-micro-expression-detection-and-generation-framework.php
This is a fully funded scholarship covering tuition fees at the Home/EU rate (£4,260) and an annual stipend of around £14,777. International applicants are welcome to apply, but will have to cover the difference between International and Home/EU fees (likely to be around £11k each year).