About the Project
The ability to connect via the internet to a larger database or AI ensures that an expanding range of sounds can be explored, and allows an element of mimicry, either of humans or of other animate or inanimate sound sources within an environment. Source identification need not be confined to purely visual elements: the level of engagement with other identifiable objects can also be captured in order to refine a robot's communication. A robot can be a highly intuitive friend that always wants to put everyone at ease, in as efficient and effective an aural manner as possible. The robot listens and provides an auditory backdrop that conveys both its intentions and its own level of engagement. If the requirement is for an efficient assistant, then silence, with only essential simple sounds, can be expressed. If an engaging companion is preferred, then a fully adaptive and intuitive auditory interaction can be shared.
A well-designed robot has the potential for a frictionless, natural auditory interaction with those around it, so that a harmonious existence can be achieved irrespective of the context.
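As a purely illustrative sketch of the assistant/companion distinction described above, the short C++ fragment below maps a robot's estimated engagement level to a nonverbal sound cue. All of the names here (InteractionMode, selectCue, the cue labels) and the numeric thresholds are hypothetical, invented for this example only; they are not drawn from any existing robot framework.

// Minimal sketch: engagement-driven choice of nonverbal sound cue.
// All identifiers and thresholds are illustrative placeholders.
#include <iostream>
#include <string>

// Hypothetical interaction modes, mirroring the contrast drawn above:
// a terse "assistant" versus a fully expressive "companion".
enum class InteractionMode { Assistant, Companion };

// Pick a cue from the robot's estimated engagement with the person
// (0.0 = disengaged, 1.0 = fully engaged).
std::string selectCue(InteractionMode mode, double engagement) {
    if (mode == InteractionMode::Assistant) {
        // Efficient assistant: silence except for essential simple sounds.
        return engagement > 0.9 ? "confirmation-beep" : "silence";
    }
    // Engaging companion: expressiveness scales with engagement.
    if (engagement > 0.7) return "melodic-greeting";
    if (engagement > 0.3) return "soft-acknowledgement";
    return "ambient-hum";
}

int main() {
    std::cout << selectCue(InteractionMode::Assistant, 0.95) << '\n'; // confirmation-beep
    std::cout << selectCue(InteractionMode::Companion, 0.5)  << '\n'; // soft-acknowledgement
}

In practice the hard-coded thresholds would be replaced by a learned or adaptive policy, but the sketch shows the basic shape of the conditional behaviour the project envisages.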
Applications from potential part-time students are welcomed.
A first degree (at least a 2.1), ideally in Interactive Media or a similar subject, with a good fundamental knowledge of Sound Design and Programming.
English language requirement
IELTS score must be at least 6.5 (with not less than 6.0 in each of the four components). Other equivalent qualifications will be accepted.
• Experience of fundamental Sound Design
• Competence in Programming
• Knowledge of Psychoacoustics
• Good written and oral communication skills
• Strong motivation, with evidence of independent research skills relevant to the project
• Good time management
Familiarity with C++ and C#.