
PhD Studentship in HCI and AI: Designing User Interactions around Deep Learning


   UCL Interaction Centre (UCLIC)

Supervisors: Dr E Costanza, Prof M Musolesi
Funding: Funded PhD Project (European/UK Students Only)
Status: No more applications being accepted

About the Project

Applications are invited for a PhD studentship in the UCL Interaction Centre (UCLIC), funded by an EPSRC DTP grant for up to 4 years from October 2021.

Machine Learning (ML), and in particular Deep Learning (DL), has gained increasing popularity in recent years. As ML is applied to an ever-increasing number of domains (ranging from medical applications to crime prevention), there have been numerous calls to apply user-centred design approaches to this class of AI systems and to make these systems intelligible to users: a research area usually referred to as Explainable AI.

The specific focus of this PhD is on interactive applications based on ML, building on prior experience and tools developed by our team (see references 1-4 below). In particular, the research is expected to focus on one or two engaging interactive applications, which will serve as a basis for designing user studies and field deployments to investigate specific interaction features and explanations around AI. Examples could include camera-based tangible user interfaces for music [1] or applications based on wearable cameras [5,6]. However, the successful applicant will have the freedom to steer the direction towards their own interests and strengths.
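To give a concrete flavour of the "explanations around AI" mentioned above, the sketch below (not part of the original advert) computes a simple gradient-based saliency map for an image classifier, the style of explanation evaluated in a user study in reference [4]. The model, image path, and preprocessing constants are illustrative assumptions rather than project requirements, and any comparable deep learning framework could be used.

```python
# Minimal sketch: vanilla-gradient saliency map for a pretrained classifier.
# Assumptions: PyTorch + torchvision >= 0.13 (for the `weights` argument),
# and an input image "example.jpg" (hypothetical path).
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # standard ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")          # hypothetical input image
x = preprocess(img).unsqueeze(0).requires_grad_(True)   # track gradients w.r.t. pixels

logits = model(x)
score = logits[0, logits.argmax(dim=1).item()]          # score of the predicted class
score.backward()                                        # d(score)/d(pixels)

# Saliency: largest absolute gradient across colour channels, per pixel.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)    # shape: (224, 224)
print(saliency.shape)
```

In practice such a map would be rendered as a heatmap over the input image and shown to participants, which is the kind of interaction feature a user study in this project might probe.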

Person Specification

Applicants should be interested in Human-Computer Interaction (HCI), hold a strong bachelor's degree (1st or 2:1) or a Master's degree in a related discipline (e.g., Computer Science, HCI, Machine Learning, Artificial Intelligence), and have excellent communication and presentation skills. The ideal candidate should also have a strong interest in ML/AI and their application to interactive, real-world systems. Good programming skills, experience of developing interactive applications and/or analytical models, experience with image processing or computer vision, and relevant previous research experience are also desirable.

Eligibility

To be considered for this scholarship, applicants must meet the eligibility requirements defined by UK Research and Innovation (please see the linked document). In particular, applicants classed as a "home student" are eligible for funding; applicants classed as an "international student" may be eligible for funding in exceptional circumstances (for example, if a candidate has an outstanding track record of highly relevant research, including publications in world-leading venues). Please refer to the linked document for definitions of "home" and "international" student.

Application Procedure

Applicants should submit their applications via UCL Select by 5pm on Wednesday 30th June. NB: please notify Louise Gaynor of your application number when you apply. Applications must include:

  1. A personal statement (1 – 2 pages)
  2. A research proposal (1 – 4 pages) including a summary of some relevant literature and an outline of the type of research to be conducted (including ideas about which methods would be appropriate)
  3. Name and email contact details of 2 referees
  4. Academic transcripts
  5. A CV

Interview date TBC.

Questions about the studentship should be directed to Dr Enrico Costanza, while queries about the application process should be directed to Louise Gaynor.


Subject areas: Computer Science; Mathematics

Funding Notes

Minimum £17,609 per annum plus fees.

References

1. E. Costanza, M. Giaccone, O. Küng, S. Shelley, and J. Huang. 2010. Ubicomp to the masses: a large-scale study of two tangible interfaces for download. In Proc. ACM UbiComp '10. DOI: https://doi.org/10.1145/1864349.1864388 (video: https://www.youtube.com/watch?v=cKd8NXWwvKI)
2. J. Kittley-Davies, A. Alqaraawi, R. Yang, E. Costanza, A. Rogers, and S. Stein. 2019. Evaluating the Effect of Feedback from Different Computer Vision Processing Stages: A Comparative Lab Study. In Proc. ACM CHI '19. DOI: https://doi.org/10.1145/3290605.3300273
3. V. Darvariu, L. Convertino, A. Mehrotra, and M. Musolesi. 2020. Quantifying the Relationships between Everyday Objects and Emotional States through Deep Learning Based Image Analysis Using Smartphones. Proc. ACM IMWUT 4. DOI: https://doi.org/10.1145/3380997
4. A. Alqaraawi, M. Schuessler, P. Weiß, E. Costanza, and N. Berthouze. 2020. Evaluating saliency map explanations for convolutional neural networks: a user study. In Proc. IUI '20. DOI: https://doi.org/10.1145/3377325.3377519
5. The Microsoft SenseCam project: https://www.microsoft.com/en-us/research/project/sensecam/ (see also related research papers)
6. F. M. Li, D. L. Chen, M. Fan, and K. N. Truong. 2019. FMT: A Wearable Camera-Based Object Tracking Memory Aid for Older Adults. Proc. ACM IMWUT. DOI: https://doi.org/10.1145/3351253