
Emotional and Facial communication recognition utilising an Intelligent Serious Game to improve brain function and mental agility

This project is no longer listed and may not be available.

  • Full or part time
  • Supervisors
    Dr S Sadeghi-Esfahlani
    Dr S Cirstea
    Dr G Wilson
  • Application Deadline
    Applications accepted all year round
  • Self-Funded PhD Students Only

About This PhD Project

Project Description

Research Group: Sound and Game Engineering (SAGE) Research Group

Proposed supervisory team: Dr Shabnam Sadeghi-Esfahlani,
([Email Address Removed]), Dr Silvia Cirstea ([Email Address Removed]),
Dr George Wilson ([Email Address Removed])

Theme: Using computer technology to enable social integration

Summary of the research project:

Human interaction via the interpretation of facial expressions is a fundamental social skill necessary for effective communication. Some people with a mild cognitive impairment have difficulty expressing facial signals, a condition that can be debilitating to the point of social exclusion.

This project will develop a novel IT system to help those with a mild cognitive impairment develop strategies and acquire techniques to improve their communication skills. Facial expression recognition will be implemented by training an AI (an intelligent humanoid character) using a Neural Network (NN) algorithm (Holden, et al., 2017). This will yield real-time, data-driven reactions in different circumstances (i.e. looking at images/events and reacting accordingly). A hardware system will be developed that integrates a game engine with motion capture (Kinect-v2, 32-Neuron Alum) and biofeedback electrical signals (electroencephalogram, surface electromyography). The framework learns from these data, which combine various facial, gesticulatory, and emotional stimuli. The system takes inputs from the player and automatically produces high-quality expressions to achieve the desired reaction. The entire network is trained end-to-end on a large dataset of facial expressions fitted into virtual environments, so that the system automatically produces expressions and the character adapts to different conditions.
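
To illustrate the end-to-end training idea described above, the following is a minimal, hypothetical sketch: a small feed-forward network mapping facial feature vectors to expression classes, trained with softmax cross-entropy. The feature dimensions, class labels, and synthetic data are all assumptions for illustration; the real system would use features derived from the motion-capture and biofeedback hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 16   # e.g. normalised facial-landmark distances (assumption)
N_CLASSES = 3     # e.g. neutral / happy / sad (assumption)
N_PER_CLASS = 300

# Synthetic stand-in dataset: each class clusters around a distinct mean.
means = rng.normal(0.0, 2.0, size=(N_CLASSES, N_FEATURES))
X = np.vstack([rng.normal(means[c], 1.0, size=(N_PER_CLASS, N_FEATURES))
               for c in range(N_CLASSES)])
y = np.repeat(np.arange(N_CLASSES), N_PER_CLASS)

# One hidden layer; softmax output; trained end-to-end by gradient descent.
W1 = rng.normal(0, 0.1, (N_FEATURES, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, N_CLASSES)); b2 = np.zeros(N_CLASSES)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)             # ReLU hidden layer
    logits = h @ W2 + b2
    logits = logits - logits.max(axis=1, keepdims=True)  # stability
    p = np.exp(logits)
    return h, p / p.sum(axis=1, keepdims=True)

lr = 0.05
onehot = np.eye(N_CLASSES)[y]
for _ in range(200):
    h, p = forward(X)
    grad_logits = (p - onehot) / len(X)          # cross-entropy gradient
    grad_h = grad_logits @ W2.T * (h > 0)        # backprop through ReLU
    W2 -= lr * h.T @ grad_logits
    b2 -= lr * grad_logits.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

_, p = forward(X)
accuracy = (p.argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In the proposed system, the same pipeline would be driven by live sensor input rather than synthetic vectors, with the trained network's outputs driving the humanoid character's expressions in real time.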

The study will have two phases. In the first, the NN will be trained on the control subjects' reactions. In the second, the NN teaches the impaired player, through a gamified set of challenges, to react in a socially meaningful way in various emotional scenarios, thereby improving their communication skills in real-world life.
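
The second phase could be sketched as a simple game loop in which the phase-one model scores the player's expression against each scenario's target emotion. The classifier stub, labels, and scoring below are purely illustrative assumptions, not the project's actual design.

```python
def classify_expression(features):
    """Stand-in for the phase-one network: pick the dominant channel."""
    labels = ["happy", "sad", "neutral"]  # assumed label set
    return labels[max(range(len(features)), key=features.__getitem__)]

def play_round(target, features):
    """Return (matched, feedback) for one gamified challenge."""
    predicted = classify_expression(features)
    matched = predicted == target
    feedback = "Well done!" if matched else f"Try again: show '{target}'"
    return matched, feedback

# Two illustrative rounds: target emotion plus the player's feature vector.
rounds = [("happy", [0.9, 0.1, 0.2]), ("sad", [0.2, 0.8, 0.1])]
score = 0
for target, features in rounds:
    matched, message = play_round(target, features)
    score += int(matched)
print(f"score: {score}/{len(rounds)}")
```

The real system would replace the stub with the trained network and draw feature vectors from the live sensors, but the feedback loop, classify, compare to target, reward, is the same shape.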

Where you'll study: Cambridge


This project is self-funded. Studentships for which funding is available are selected by a competitive process and advertised on our jobs website as they become available.

Next steps:

If you wish to be considered for this project, you will need to apply for our Sound Engineering PhD. In the section of the application form entitled 'Outline research proposal', please quote the above title and include a research proposal.

FindAPhD. Copyright 2005-2019
All rights reserved.