Image-to-description generation is a popular but difficult research area for deep learning. In this project, we will develop an automatic scene-to-speech conversion system using a novel feature extraction and selection algorithm. A system that can precisely label image regions and generate a whole-image description can benefit diverse applications such as news or medical image annotation and automatic script generation for movies. It can also serve as an assistive system for visually impaired and blind people.
Visually impaired people require constant assistance in their daily lives. This assistance can come from other people or from various Electronic Travel Aid (ETA) devices. Some people rely on guide dogs and white canes, which are dependable but cannot provide information about surrounding objects or activities; ETAs, in turn, commonly suffer from poor affordability and inefficiency. Considering these drawbacks of the assistance currently available to blind and visually impaired people, this project aims to develop a scene-to-speech conversion system embedded in smart glasses to assist visually impaired people during their outdoor visits. The system will be able to do the following tasks:
• Recognise various objects
• Recognise the activities of other people (e.g. walking, standing, running, sitting)
• Convert the above data into text and speech formats.
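The three tasks above form a pipeline: recognition outputs are composed into a textual description, which is then handed to a speech synthesiser. As a rough illustration only (the advert does not specify an implementation), the sketch below assumes the object and activity labels have already been produced by trained deep models and shows the intermediate text-composition step; the resulting string would be passed to any off-the-shelf text-to-speech engine.

```python
# Illustrative sketch of the scene-to-speech pipeline described above.
# The object/activity labels are assumed to come from trained deep
# recognition models (not shown); speech synthesis is left as a final
# hand-off of the returned string to a TTS engine.

def describe_scene(objects, activities):
    """Compose a natural-language description from detected labels."""
    parts = []
    if objects:
        # e.g. ["a person", "a car"] -> "I can see a person, a car"
        parts.append("I can see " + ", ".join(objects))
    if activities:
        # e.g. ["walking"] -> "someone is walking"
        parts.append("someone is " + " and ".join(activities))
    return ". ".join(parts) + "." if parts else "Nothing recognised."

description = describe_scene(["a person", "a car"], ["walking"])
print(description)  # -> "I can see a person, a car. someone is walking."
```

In a full system the string returned here would be converted to audio by a text-to-speech library, closing the scene-to-speech loop.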
This system will enable visually impaired people to navigate the world independently and with confidence. The project also has wider applications, as it focuses primarily on applying deep learning algorithms to scene-to-text description.
The principal supervisor for this project is Dr Kamlesh Mistry. The second supervisor will be Dr Hubert Shum.
Eligibility and How to Apply:
Please note the eligibility requirements:
• Academic excellence of the proposed student i.e. 2:1 (or equivalent GPA from non-UK universities [preference for 1st class honours]); or a Masters (preference for Merit or above); or APEL evidence of substantial practitioner achievement.
• Appropriate IELTS score, if required.
• Applicants cannot apply for this funding if currently engaged in Doctoral study at Northumbria or elsewhere.
For further details of how to apply, entry requirements and the application form, see https://www.northumbria.ac.uk/research/postgraduate-research-degrees/how-to-apply/
Please note: Applications that do not include a research proposal of approximately 1,000 words (not a copy of the advert), or that do not include the advert reference (e.g. RDF20/EE/CIS/MISTRY) will not be considered.
Deadline for applications: Friday 24 January 2020
Start Date: 1 October 2020
Northumbria University takes pride in, and values, the quality and diversity of our staff. We welcome applications from all members of the community. The University holds an Athena SWAN Bronze award in recognition of our commitment to improving employment practices for the advancement of gender equality.
• Kamlesh Mistry, Li Zhang, Siew Chin Neoh, Chee Peng Lim, Benjamin Fielding, A Micro-GA Embedded PSO Feature Selection Approach to Intelligent Facial Emotion Recognition, IEEE Transactions on Cybernetics, 2017.
• Siew Chin Neoh, Li Zhang, Kamlesh Mistry, Mohammed Alamgir Hossain, Chee Peng Lim, Nauman Aslam, Philip Kinghorn, Intelligent facial emotion recognition using a layered encoding cascade optimization model, Applied Soft Computing, 2015.
• Li Zhang, Kamlesh Mistry, Siew Chin Neoh, Chee Peng Lim, Intelligent Facial Emotion Recognition Using Moth-Firefly Optimization, Knowledge-Based Systems, 2017.
• Li Zhang, Kamlesh Mistry, Ming Jiang, Siew Chin Neoh, Mohammed Alamgir Hossain, Adaptive facial point detection and emotion recognition for a humanoid robot, Computer Vision and Image Understanding, 140 (2015), 93-114.