A number of commercial software packages exist that can infer users' emotions from their facial expressions (e.g. a smiling face is typically interpreted as a happy state, whereas a frowning face is often read as an angry or sad state). While body language is also widely held to reflect a person's mental or emotional state, the mapping between the two is less well understood, and no automatic system exists that can annotate movement sequences with their emotional meaning in real time. Yet such a system would be useful in many settings, such as mental health therapy sessions, sports and the performing arts, interpersonal communication, and security and forensic applications.
This project will address this gap by developing: a taxonomy describing movement quality in the context of mental health (in collaboration with dance-movement therapists); a tool for annotating movement sequences with the identified quality descriptors; a deep learning model that infers mental and emotional states from movement patterns based on those descriptors; and a mathematical model explaining the deep learning model's outputs in terms of biomechanics. The developed models, tools and materials will be used to demonstrate whether mental and emotional states can be objectively assessed through movement, and how movement can enable positive change in mental health and wellbeing.
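As a rough illustration of the kind of pipeline such a project might involve (all feature choices, labels and weights below are hypothetical, not the project's actual method), the sketch computes simple kinematic quality descriptors, speed and jerk, from a synthetic joint-position sequence and passes them through an untrained softmax layer over example emotional-state labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input: T frames x J joints x 3D positions (e.g. from motion capture).
T, J = 120, 15
positions = rng.normal(size=(T, J, 3)).cumsum(axis=0) * 0.01  # smooth-ish trajectories

def quality_descriptors(pos, fps=30.0):
    """Per-sequence kinematic descriptors (a stand-in for taxonomy-derived qualities)."""
    vel = np.diff(pos, axis=0) * fps      # velocity per joint per frame
    acc = np.diff(vel, axis=0) * fps      # acceleration
    jerk = np.diff(acc, axis=0) * fps     # jerk, often linked to movement smoothness
    speed = np.linalg.norm(vel, axis=-1)  # scalar speed per joint per frame
    return np.array([
        speed.mean(),                          # overall vigour
        speed.std(),                           # variability
        np.linalg.norm(jerk, axis=-1).mean(),  # (non-)smoothness
    ])

x = quality_descriptors(positions)

# Untrained linear "model" over hypothetical emotional-state labels.
labels = ["happy", "sad", "angry", "calm"]
W = rng.normal(size=(len(labels), x.size))
logits = W @ x
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(dict(zip(labels, probs.round(3))))
```

In the project itself, the hand-picked descriptors would come from the taxonomy developed with dance-movement therapists, and the untrained linear layer would be replaced by a deep sequence model trained on annotated movement data.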
You must have a good Bachelor's degree (2.1 or higher, or equivalent) or Master's degree in Computer Science, Mathematics or a similar relevant subject.
Experience with statistical analysis, deep learning and programming in Python is essential. Knowledge of movement and EEG analysis techniques is desirable.