Prediction of human activities is an essential task in practical human-centered robotics applications such as assisted living, healthcare, human-robot collaboration, and entertainment and immersive platforms.
Current robotic platforms and activity recognition methods make decisions using a stationary approach: they consider only the current state of the human activity and ignore the sequence of actions the human has previously performed. Understanding the sequence of actions a human carries out to complete an activity is essential for intelligent machines that can explain each decision the robot makes, as well as understand both human actions and their own. Some works attempt to develop cognitive architectures that store an episode for each action performed by a human; however, these works merely store and retrieve actions from a database without any further computational processing.
In this project, we are interested in developing a synthetic autobiographical memory that stores the episodes, or actions, a human performs to complete an activity. This synthetic memory will use artificial intelligence methods to find and link features across the stored episodes. Thus, the next time a human performs a sequence of actions, the robot will be able to retrieve, or remember, the activity most likely to follow from the observed sequence. Furthermore, the relations the synthetic memory finds between episodes will allow the robot to “imagine”, or predict, what new activities the user could perform when a certain sequence of actions is observed. This will make the robotic system not only intelligent but also safe in reacting to unexpected activities or situations.
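As a purely illustrative sketch (the project does not prescribe a method, and every class, activity, and action name below is invented for the example), the idea of retrieving the most probable activity from an observed action sequence can be caricatured as prefix matching over stored episodes:

```python
from collections import defaultdict


class EpisodicMemory:
    """Toy episodic memory: stores action sequences per activity and
    predicts the most probable activity given an observed action prefix.
    This is an assumption-laden sketch, not the project's actual method."""

    def __init__(self):
        # activity label -> list of stored episodes (action sequences)
        self.episodes = defaultdict(list)

    def store(self, activity, actions):
        """Store one episode: the sequence of actions that completed an activity."""
        self.episodes[activity].append(tuple(actions))

    def predict(self, observed):
        """Return the activity whose stored episodes best match the observed
        prefix, scored by the fraction of that activity's episodes matching."""
        observed = tuple(observed)
        best, best_score = None, 0.0
        for activity, episodes in self.episodes.items():
            matches = sum(1 for ep in episodes if ep[:len(observed)] == observed)
            score = matches / len(episodes)
            if score > best_score:
                best, best_score = activity, score
        return best, best_score


memory = EpisodicMemory()
memory.store("make_tea", ["boil_water", "fetch_cup", "add_teabag"])
memory.store("make_tea", ["boil_water", "fetch_cup", "pour_water"])
memory.store("make_coffee", ["boil_water", "fetch_mug", "add_coffee"])

# Both "make_tea" episodes share this prefix, so it is the best match.
activity, score = memory.predict(["boil_water", "fetch_cup"])
```

A real system would replace the frequency-based matching with learned feature links between episodes, but the retrieval-by-observed-sequence pattern is the same.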
This synthetic autobiographical memory will be part of a robotic cognitive architecture composed of layers responsible for receiving data from vision, tactile and audio sensors, pre-processing the multimodal data, updating the synthetic memory, and controlling the actions of the robotic system.
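The layered flow just described might be sketched as a minimal pipeline; the layer functions, placeholder sensor values, and the toy control rule here are all assumptions for illustration, not the architecture the project will build:

```python
def sense():
    """Low layer: return raw multimodal readings (placeholder values)."""
    return {"vision": [0.2, 0.8], "tactile": [0.1], "audio": [0.5]}


def preprocess(raw):
    """Middle layer: collapse each modality into a single feature (mean)."""
    return {modality: sum(values) / len(values) for modality, values in raw.items()}


def update_memory(memory, features):
    """Middle/high layer: append the fused observation to the memory."""
    memory.append(features)
    return memory


def control(memory):
    """High layer: pick an action from the latest observation (toy rule)."""
    latest = memory[-1]
    return "approach" if latest["vision"] > 0.3 else "wait"


memory = []
features = preprocess(sense())
memory = update_memory(memory, features)
action = control(memory)
```

The point of the sketch is only the separation of concerns: sensing, pre-processing, memory update, and control are distinct layers with narrow interfaces between them.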
This project is associated with the UKRI Centre for Doctoral Training (CDT) in Accountable, Responsible and Transparent AI (ART-AI). We value people from different life experiences with a passion for research. The CDT's mission is to graduate specialists with diverse perspectives who can go out into the world and make a difference.
The project involves the following key research tasks:
· Research on artificial intelligence methods for understanding human actions and activities.
· Research on machine learning methods for processing multimodal data from robotic platforms.
· Research and implementation of a multi-layer cognitive architecture composed of low, middle and high processing layers.
· Development and implementation of control algorithms for the robotic platform.
The research to be undertaken in this project has a strong multidisciplinary nature. Therefore, the student is expected to collaborate with students and researchers from ART-AI, Computer Science, Electronic and Electrical Engineering, Mechanical Engineering and Psychology. Furthermore, the student is expected to attend events such as conferences and workshops, and to publish the results of the research in international conferences and journals.
Candidates should have, or expect to receive, an MSc or MEng in Robotics, Computer Science, Electronics, Mechanics, Mathematics, Physics, Neuroscience or related areas.
Informal enquiries about the project should be directed to Dr Uriel Martinez Hernandez: [Email Address Removed].
Formal applications should be accompanied by a research proposal and made via the University of Bath’s online application form. Enquiries about the application process should be sent to [Email Address Removed].
Start date: 3 October 2022.