About the Project
Deep learning is receiving a lot of attention due to its ability to achieve unprecedented levels of accuracy and speed, to the point where deep learning algorithms can outperform humans at decision making and at tasks such as image classification and real-time detection.
Multi-modal data fusion is an important task for many machine learning applications, including human activity recognition, information retrieval, and real-time AI applications. For example, datasets may combine audio and visual data; image and sensor data; multi-sensor multi-modal data; and text and image data.
Multi-modal data can be complex, noisy, and imbalanced, particularly when collected from real environments. Creating deep learning models that can classify such multi-modal data is therefore a significant challenge.
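As a minimal illustration of one fusion strategy, the sketch below shows feature-level (early) fusion: per-modality feature matrices are standardised so that no single modality dominates by scale, then concatenated into one joint feature vector per sample. The feature dimensions and the random data are hypothetical, chosen only for demonstration; real projects would use learned or engineered features from each modality.

```python
import numpy as np

# Hypothetical per-modality features for the same 4 samples:
# e.g. 8-dim audio features and 16-dim image features.
rng = np.random.default_rng(0)
audio_features = rng.normal(size=(4, 8))
image_features = rng.normal(size=(4, 16))

def early_fusion(*modalities):
    """Feature-level (early) fusion: standardise each modality so no
    single one dominates by scale, then concatenate along features."""
    scaled = []
    for X in modalities:
        std = X.std(axis=0)
        std[std == 0] = 1.0  # guard against constant features
        scaled.append((X - X.mean(axis=0)) / std)
    return np.concatenate(scaled, axis=1)

fused = early_fusion(audio_features, image_features)
print(fused.shape)  # one joint (8 + 16)-dim feature vector per sample
```

Early fusion is only one option; late fusion (combining per-modality model outputs) or hybrid schemes are common alternatives when modalities differ greatly in noise or reliability.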
Machine learning models trained on imbalanced data are biased toward the more commonly occurring classes. Such bias arises naturally because the models learn classes containing more records better, much as more frequent examples dominate human learning.
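One common way to counter this bias is to weight each class inversely to its frequency, so that rare classes contribute as much to the training loss as common ones. The sketch below computes such "balanced" weights for a hypothetical imbalanced label set; the counts are illustrative only.

```python
import numpy as np

# Hypothetical imbalanced labels: class 0 is far more common than class 1.
labels = np.array([0] * 90 + [1] * 10)

def inverse_frequency_weights(y):
    """Weight each class inversely to its frequency, so that every
    class contributes equally to the loss in aggregate."""
    classes, counts = np.unique(y, return_counts=True)
    weights = len(y) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

print(inverse_frequency_weights(labels))
# the majority class is down-weighted, the minority class up-weighted
```

Weights computed this way can be passed to most classifiers (e.g. via a per-class loss weight); resampling the training set is a common alternative.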
The aim of this project is to: devise feature engineering approaches for multi-modal data; determine whether fusing multi-modal data improves results over using uni-modal datasets in specific machine learning tasks; and develop computational approaches and methods for fusing imbalanced multi-modal data.
A relevant Master's degree and/or experience in one or more of the following will be an advantage: data analytics, artificial intelligence, and machine learning.
If you are interested, please get in touch with Dr Cosma ([Email Address Removed]) to discuss the topic further.
International Fee band: Research Band 2 Laboratory Based (£21,100)