
Multimodal Deep Learning for Context Awareness (Application Ref: SF19/EE/CIS/WEI)

Faculty of Engineering and Environment

This project is no longer listed and may not be available.

Supervisors: Dr B Wei, Prof W L Woo
Applications accepted all year round
Self-Funded PhD Students Only

About the Project

Context awareness plays an essential role in smart buildings, underpinning applications such as indoor localisation, activity recognition, and identity recognition. With the proliferation of the Internet of Things (IoT) in smart buildings, we can take advantage of integrated multimodal sensors for better perception. Many sensors, such as cameras and microphones, are already deployed in smart buildings for data collection and are readily available for further data analysis, while wearable mobile devices can supplement the perception of the surrounding environment. Fusing data from these multimodal sensors to improve recognition performance is an active research topic.

Recent developments in deep learning have led to successful applications in areas such as computer vision and acoustic recognition, significantly outperforming traditional machine learning algorithms, and deep learning architectures also make it feasible to integrate data from different sensors. However, it remains challenging to design a general deep learning model that can be applied to any IoT sensor without explicit modification. Accuracy and efficiency are the two essential factors when designing context-awareness applications for the IoT: sophisticated deep learning algorithms can improve recognition dramatically, but they are difficult to deploy in IoT scenarios because of the devices' limited computational capability. This project therefore also considers computational efficiency, to ensure the proposed system can offer real-time recognition.
Suitable deep learning algorithms will be investigated in this project, drawing on multidisciplinary techniques from deep learning, the IoT, computer vision, embedded systems, data science, computer networking, electronic and electrical engineering, optimisation, and mobile computing. Multiple types of IoT device, such as Raspberry Pis, mobile phones, and smartwatches, will be used to implement a prototype.
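To illustrate the kind of multimodal fusion the project description refers to, the following is a minimal sketch of decision-level ("late") fusion, where per-modality classifiers each produce scores that are normalised and combined by a weighted average. The modality names, score values, fusion weights, and activity labels below are hypothetical placeholders for illustration only, not details of the advertised project.

```python
import math

def softmax(scores):
    """Normalise raw classifier scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def late_fusion(modality_scores, weights):
    """Weighted average of per-modality class probabilities.

    modality_scores: dict mapping modality name -> raw score list
                     (one score per activity class).
    weights:         dict mapping modality name -> fusion weight.
    """
    probs = {m: softmax(s) for m, s in modality_scores.items()}
    n_classes = len(next(iter(probs.values())))
    total_w = sum(weights.values())
    fused = [0.0] * n_classes
    for m, p in probs.items():
        w = weights[m] / total_w  # normalise weights to sum to 1
        for i in range(n_classes):
            fused[i] += w * p[i]
    return fused

# Example: camera and microphone classifiers voting on three activities
# ("walking", "talking", "idle") -- illustrative numbers only.
scores = {
    "camera":     [2.0, 0.5, 0.1],   # vision model favours "walking"
    "microphone": [0.2, 2.5, 0.3],   # audio model favours "talking"
}
weights = {"camera": 0.6, "microphone": 0.4}  # trust vision slightly more

fused = late_fusion(scores, weights)
activities = ["walking", "talking", "idle"]
prediction = activities[fused.index(max(fused))]
```

In practice, a deep learning approach would typically replace the hand-set weights with a learned fusion layer, and the per-modality scores with the outputs of modality-specific networks; this sketch only shows the combination step.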

This project is supervised by Dr Bo Wei.

Eligibility and How to Apply:
Please note eligibility requirement:
• Academic excellence of the proposed student, i.e. a 2:1 honours degree (or equivalent GPA from a non-UK university; preference for first-class honours); or a Master's degree (preference for Merit or above); or APEL evidence of substantial practitioner achievement.
• Appropriate IELTS score, if required.

For further details of how to apply, entry requirements and the application form, see

Please note: Applications that do not include a research proposal of approximately 1,000 words (not a copy of the advert), or that do not include the advert reference (e.g. SF19/EE/CIS/WEI) will not be considered.

Start Date: 1 March 2020 or 1 October 2020.

Northumbria University takes pride in, and values, the quality and diversity of our staff. We welcome applications from all members of the community. The University holds an Athena SWAN Bronze award in recognition of our commitment to improving employment practices for the advancement of gender equality and is a member of the Euraxess network, which delivers information and support to professional researchers.

Funding Notes

This is an unfunded research project.


