
  Multi-modal Intelligent Sensing and Recognition for Human-Robot Interaction/Collaboration


   School of Computing


Supervisors: Dr Zhaojie Ju, Dr C Yang, Dr Mohamed Bader-EL-Den
Application status: No more applications being accepted
Funding: Self-Funded PhD Students Only

About the Project

PROJECT REF: CCTS3390217

In human-robot interaction/collaboration, the robot is expected to detect, perceive and understand the corresponding human motions in its environment in order to interact, cooperate, imitate or learn in an intelligent manner. Sensory information about both the human motions and the environment is captured by various types of sensors, such as cameras, markers, accelerometers and tactile sensors [1]. Research applications of human motion analysis in human-robot interaction/collaboration include programming by demonstration, imitation, tele-operation, activity or context recognition, and humanoid design [2].

In addition, the extraction of meaningful information about the environment through perceptual systems plays a key role in scene representation and recognition, further enabling the robot to interact with humans in a more natural way [7]. The aim of scene representation for HRI is to describe the way in which human and robot tend to interact around a scene and to generate a representation tied to the spatial layout of the scene, indicating which types of motions might happen in which part of it. Such a representation enables a robot to respond efficiently to user commands that refer to spatial locations, object features or object labels, without re-performing a visual search each time. The objectives of this project are:

1. To develop a multimodal sensing platform for human-robot interaction and collaboration, using various types of sensors (e.g., depth cameras, markers, accelerometers, tactile sensors, force sensors and bio-signal sensors) to capture both human motions and the operating environment (a synchronisation sketch follows this list).

2. To investigate a more robust and less noisy representation of human action features, including local and global features, that accounts for a variety of uncertainties, e.g., image quality, individual action habits and differing environments.

3. To investigate an advanced motion analysis framework, including hierarchical data fusion strategies and off-the-shelf probabilistic recognition algorithms, that synchronises and fuses the sensory information for real-time analysis and automatic recognition of human actions with satisfactory accuracy and reliable fusion results; priority is given to balancing the effectiveness and the efficiency of the system (see the fusion sketch after this list).

4. To investigate effective methods for scene representation using dynamic neural fields, including transient detectors and temporal variation models; the scene representation will be incorporated into the motion analysis framework to achieve a more effective and stable system (see the field-dynamics sketch after this list).
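
To make the synchronisation requirement in objective 1 concrete, here is a minimal sketch that aligns asynchronously timestamped sensor streams by nearest-timestamp matching against a reference stream. The stream names, data layout and tolerance value are illustrative assumptions, not the project's actual platform design; a real platform might instead rely on hardware triggering or middleware-level time stamping.

```python
# Minimal sketch: nearest-timestamp alignment of asynchronous sensor
# streams. Stream names, data layout and tolerance are illustrative
# assumptions, not the project's actual design.
import numpy as np

def align_streams(streams, ref_name, tolerance=0.02):
    """streams: {name: (timestamps, samples)}, timestamps sorted, in
    seconds. Returns one fused record per usable reference timestamp."""
    ref_ts, ref_samples = streams[ref_name]
    fused = []
    for t, ref_sample in zip(ref_ts, ref_samples):
        record = {"t": t, ref_name: ref_sample}
        complete = True
        for name, (ts, samples) in streams.items():
            if name == ref_name:
                continue
            i = int(np.searchsorted(ts, t))    # insertion point in ts
            i = min(i, len(ts) - 1)
            if i > 0 and abs(ts[i - 1] - t) < abs(ts[i] - t):
                i -= 1                         # previous sample is closer
            if abs(ts[i] - t) > tolerance:     # no sample close enough
                complete = False               # drop this time step
                break
            record[name] = samples[i]
        if complete:
            fused.append(record)
    return fused
```

Dropping incomplete time steps keeps every fused record strictly multimodal; an alternative design would interpolate the slower streams instead.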
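
As one concrete reading of objective 3, the sketch below trains one off-the-shelf probabilistic recogniser per sensor and action (scikit-learn's GaussianMixture, standing in for the Fuzzy Gaussian Mixture Models of [1]) and fuses the per-sensor evidence by a weighted sum of log-likelihoods, a simple late-fusion rule. The sensor names, action labels and weights are assumptions for illustration only.

```python
# Late-fusion sketch: one Gaussian mixture model per (sensor, action),
# fused by a weighted sum of log-likelihoods. Sensor names, actions and
# weights are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

SENSORS = ["depth_camera", "accelerometer", "tactile"]  # hypothetical
ACTIONS = ["reach", "grasp", "hand_over"]               # hypothetical
WEIGHTS = {"depth_camera": 0.5, "accelerometer": 0.3, "tactile": 0.2}

def train_models(train_data):
    """train_data[sensor][action]: array of shape (n_samples, n_features)."""
    models = {}
    for sensor in SENSORS:
        for action in ACTIONS:
            gmm = GaussianMixture(n_components=3, covariance_type="diag")
            gmm.fit(train_data[sensor][action])
            models[(sensor, action)] = gmm
    return models

def recognise(models, observation):
    """observation[sensor]: feature window of shape (n_frames, n_features).
    Returns the best action label and the fused scores."""
    scores = {}
    for action in ACTIONS:
        # score() is the mean log-likelihood; weight and sum over sensors.
        scores[action] = sum(
            WEIGHTS[s] * models[(s, action)].score(observation[s])
            for s in SENSORS
        )
    return max(scores, key=scores.get), scores
```

Weighting the log-likelihoods amounts to a naive-Bayes-style independence assumption across sensors; the hierarchical fusion strategies the objective mentions would replace this flat weighted sum.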
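
Objective 4's dynamic neural field can be illustrated with a one-dimensional Amari field, in which the activation u(x, t) relaxes towards a resting level h under an external stimulus and lateral interaction through a difference-of-Gaussians ("Mexican hat") kernel. The discretisation and every parameter value below are illustrative assumptions; the project's transient detectors and temporal variation models would build on top of this basic dynamics.

```python
# One-dimensional Amari dynamic neural field:
#   tau * du/dt = -u + h + stimulus + (w * f(u))(x)
# with sigmoid firing rate f and Mexican-hat kernel w.
# All parameter values are illustrative assumptions.
import numpy as np

N, dx, dt, tau, h = 200, 0.1, 0.01, 1.0, -2.0
x = np.arange(N) * dx

def kernel(d, a_exc=2.0, s_exc=0.5, a_inh=1.0, s_inh=1.5):
    # Difference of Gaussians: short-range excitation, broader inhibition.
    return (a_exc * np.exp(-d**2 / (2 * s_exc**2))
            - a_inh * np.exp(-d**2 / (2 * s_inh**2)))

W = kernel(x[:, None] - x[None, :]) * dx    # discretised lateral interaction

def f(u, beta=4.0):
    return 1.0 / (1.0 + np.exp(-beta * u))  # sigmoid firing rate

def step(u, stimulus):
    # Forward-Euler update of the field dynamics.
    return u + (dt / tau) * (-u + h + stimulus + W @ f(u))

u = np.full(N, h)                               # start at resting level
stimulus = 4.0 * np.exp(-(x - 10.0)**2 / 0.5)   # localised input bump
for _ in range(1000):
    u = step(u, stimulus)
# With suitable parameters the field settles into a localised activation
# peak near x = 10: a stable representation of the stimulated location.
```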


Funding Notes

Please use our online application form and state the project code (CCTS3390217) and title in the personal statement section.

References

References to recently published articles:

[1] Ju Z. and Liu H., "Fuzzy Gaussian Mixture Models," Pattern Recognition, 45(3):1146-1158, 2012.

[2] Ju Z. and Liu H., "Human Hand Motion Analysis with Multisensory Information," IEEE/ASME Transactions on Mechatronics, 19(2):456-466, 2014.