
  Emotional and Facial communication recognition utilising an Intelligent Serious Game to improve brain function and mental agility


   School of Computing and Information Science

This project is no longer listed on FindAPhD.com and may not be available.
Supervisors: Dr S Sadeghi-Esfahlani, Dr S Cirstea, Dr G Wilson
Applications accepted all year round
Self-Funded PhD Students Only

About the Project

Research Group: Sound and Game Engineering (SAGE) Research Group
https://www.anglia.ac.uk/science-and-technology/research/our-research-institutes-and-groups/sound-and-game-engineering

Proposed supervisory team: Dr Shabnam Sadeghi-Esfahlani,
([Email Address Removed]), Dr Silvia Cirstea ([Email Address Removed]),
Dr George Wilson ([Email Address Removed])
https://www.anglia.ac.uk/science-and-technology/about/computing-and-technology/our-staff/silvia-cirstea
https://www.anglia.ac.uk/science-and-technology/about/computing-and-technology/our-staff/george-wilson

Theme: Using computer technology to enable social integration

Summary of the research project:

Human interaction via the interpretation of facial expressions is a fundamental social skill necessary for effective communication. Some people with mild cognitive impairment have difficulty expressing facial signals, which can be debilitating to the point of social exclusion.

This project will develop a novel IT system to help those with mild cognitive impairment develop strategies and acquire techniques to improve their communication skills. Facial expression recognition will be implemented by training an intelligent humanoid character using a Neural Network (NN) algorithm (Holden et al., 2017), yielding real-time, data-driven reactions in different circumstances (i.e. viewing images/events and reacting accordingly). A hardware system will be developed that integrates a game engine with motion capture (Kinect-v2, 32-Neuron Alum) and biofeedback electrical signals (electroencephalogram, surface electromyography). The framework learns from these data, which combine various facial, gesticulatory and emotional stimuli. The system takes inputs from the player and automatically produces high-quality expressions to achieve the desired reaction. The entire network is trained end-to-end on a large dataset of facial expressions fitted into virtual environments, so that the character automatically produces expressions and adapts to different conditions.
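The project description does not specify an implementation, but the NN component it mentions amounts to a classifier mapping captured facial/biosignal features to expression classes. As a rough, hypothetical sketch only (plain NumPy, synthetic stand-in data; all names and dimensions are illustrative assumptions, not part of the project), training such a classifier might look like:

```python
import numpy as np

# Hypothetical setup: each sample is a feature vector (a stand-in for
# Kinect/sEMG-derived facial features); labels are expression classes.
rng = np.random.default_rng(0)
n_features, n_classes, n_hidden = 16, 4, 32

# Synthetic training data: class means pulled apart so the task is learnable.
X = rng.normal(size=(400, n_features))
y = rng.integers(0, n_classes, size=400)
X += 2.0 * np.eye(n_classes)[y] @ rng.normal(size=(n_classes, n_features))

# Two-layer network trained with softmax cross-entropy and full-batch
# gradient descent.
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_classes)); b2 = np.zeros(n_classes)
lr = 0.1
for _ in range(300):
    h = np.maximum(X @ W1 + b1, 0.0)              # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)             # softmax probabilities
    g = (p - np.eye(n_classes)[y]) / len(X)       # cross-entropy gradient
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (h > 0)                     # backprop through ReLU
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

accuracy = (p.argmax(axis=1) == y).mean()
```

In the project itself, the real inputs would come from the motion-capture and biofeedback hardware and the outputs would drive the humanoid character's expressions; this fragment shows only the supervised-learning core in miniature.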

The study will have two phases. In the first, the NN will be trained on the control subjects' reactions. In the second, the NN teaches the impaired player, through a gamified set of challenges, to react in a socially meaningful way in various emotional scenarios, thus improving their communication skills in everyday life.

Where you'll study: Cambridge
https://www.anglia.ac.uk/student-life/life-on-campus/cambridge-campus

Funding:

This project is self-funded. Studentships for which funding is available are awarded through a competitive process and advertised on our jobs website as they become available.

https://www24.i-grasp.com/fe/tpl_angliaruskin01.asp

Next steps:

If you wish to be considered for this project, you will need to apply for our Sound Engineering PhD. In the section of the application form entitled 'Outline research proposal', please quote the above title and include a research proposal.
