
  CareBot: A Deep Learning based Multimodal Social Interaction Robot for Elderly Care

   Faculty of Computing, Engineering and the Built Environment

This project is no longer listed and may not be available.

Dr Philip Vance, Dr Nazmul Siddique, Dr Pratheepan Yogarajah
No more applications being accepted
Competition Funded PhD Project (Students Worldwide)

About the Project

Due to the ongoing healthcare crisis and staff shortages, alternative forms of assistance for an ageing population are becoming critical to support a depleted healthcare workforce. Social robots have the potential to support this community by completing some day-to-day tasks in domestic environments, including non-contact measurement of vital signs, assessment of mood, and keeping track of medication schedules. This would free caregivers to focus on more acute situations. The ability to communicate and interact seamlessly is integral to the cohabitation of humans and robots. Sociable robots should be capable of proactively engaging with people within accepted social norms to enhance the interaction process. Between humans, these social norms arise as a combination of pre-defined ‘rules’ and behaviour reinforced by other people. Robots currently struggle with this paradigm, specifically in recognising many of the paralinguistic (e.g. tone, nuance) and non-verbal (e.g. body language) cues of humans.

This project aims to address this shortcoming by developing sociable robots (using a Pepper robot) that utilise multimodal deep learning techniques to interact socially with humans in a more meaningful manner.

The objective is to develop a computational model for human-robot social interaction using paralinguistic and non-verbal cues. Using robotic sensory information, these cues will be identified by extracting the key features associated with each cue. A dataset of paralinguistic and non-verbal cues will be developed as part of this phase of the project, which can be disseminated to and utilised by the wider research community. A multimodal deep learning computational model will then be developed to analyse the extracted features for identification and classification of various paralinguistic and non-verbal social cues, so that appropriate onward action can be executed. Endowing robots with the ability to conduct human-robot social interactions will contribute to advancing the integration of robot systems in human-centric environments.
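To make the pipeline described above concrete, here is a minimal sketch of multimodal late fusion: per-modality features are extracted (toy stand-ins for paralinguistic and body-language features), concatenated, and passed to a softmax classifier. The cue labels, feature extractors, and weights are all illustrative assumptions, not the project's actual model; a real system would learn deep encoders (e.g. over audio spectrograms and skeletal pose) end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

CUES = ["neutral", "engaged", "disengaged"]  # hypothetical cue labels

def audio_features(waveform: np.ndarray) -> np.ndarray:
    """Toy paralinguistic features: signal energy and zero-crossing rate
    (crude proxies for loudness and pitch-related activity)."""
    energy = float(np.mean(waveform ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(waveform)))) / 2)
    return np.array([energy, zcr])

def pose_features(joints: np.ndarray) -> np.ndarray:
    """Toy non-verbal features from 2-D joint positions: mean joint
    height and overall spread (stand-ins for posture descriptors)."""
    return np.array([float(joints[:, 1].mean()), float(joints.std())])

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(waveform, joints, W, b):
    """Late fusion: concatenate per-modality feature vectors, then apply
    a single linear + softmax layer over the fused representation."""
    fused = np.concatenate([audio_features(waveform), pose_features(joints)])
    return softmax(W @ fused + b)

# Random stand-in weights; in practice these would be learned from the
# annotated cue dataset the project proposes to collect.
W = rng.normal(size=(len(CUES), 4))
b = np.zeros(len(CUES))

probs = classify(rng.normal(size=16000), rng.normal(size=(15, 2)), W, b)
print(CUES[int(np.argmax(probs))], probs.round(3))
```

Late (feature-level) fusion is only one design point; the model described in the project could equally use early fusion or cross-modal attention, where the modalities interact before classification.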


How good is research at Ulster University - Magee Campus in Computer Science and Informatics?
