The past year has seen some notable examples of the challenges associated with AI systems deployed in the ‘wild’. On 14 February 2016, a modified Lexus SUV operating as a Google self-driving car collided with a local bus in Mountain View, California, damaging both vehicles. Google’s subsequent report into the accident concluded that its vehicle lacked sufficient knowledge of how other road users might behave. Another recent case highlighted how classification models generated by machine learning algorithms can easily embed stereotyped biases towards race and gender, resulting in the potential for such systems to behave in a discriminatory fashion.
While all new technologies have the capacity to do harm, with AI systems it may be difficult or even impossible to know what went wrong or who should be held responsible. How can we benefit from the superhuman capacity and efficiency that such systems offer without giving up our desire for accountability, transparency and responsibility? How can we avoid a stark choice between forgoing the benefits of automated systems altogether and accepting a degree of arbitrariness that would be unthinkable in society’s usual human relationships?
A solution to this dilemma may be afforded by computational models of provenance, acting as a substrate for enabling trust: such a mechanism facilitates transparency and accountability by recording the processes, entities and agents associated with a system and its behaviours, thereby supporting verification and compliance monitoring (a brief sketch of such a record follows the research questions below). Some important research questions to be investigated by this PhD include:
What would a provenance-enabled infrastructure for accountable AIs look like, and how would it be framed by (and operate within) its wider socio-legal context?
How would such a framework go beyond simply documenting the static elements of an intelligent system (data, algorithm, learned model ...) to capture the dynamic interactions among system components that characterise how it operates?
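By way of illustration, the W3C PROV data model captures exactly these notions of entities, activities and agents. The short Python sketch below, using the open-source `prov` package, shows how the provenance of a learned model might be recorded; it is a minimal illustration only, not part of the project specification, and all identifiers are hypothetical:

    from prov.model import ProvDocument

    doc = ProvDocument()
    doc.add_namespace('ex', 'http://example.org/')  # hypothetical namespace

    # Static elements of the intelligent system, modelled as PROV entities
    training_data = doc.entity('ex:training-data')
    learned_model = doc.entity('ex:learned-model')

    # The training run itself, modelled as a PROV activity (a dynamic process)
    training = doc.activity('ex:model-training')

    # The responsible party, modelled as a PROV agent
    developer = doc.agent('ex:developer')

    # Relations linking them: who did what, with which inputs, producing what
    doc.used(training, training_data)              # training consumed the data
    doc.wasGeneratedBy(learned_model, training)    # the model was produced by training
    doc.wasAssociatedWith(training, developer)     # the developer ran the training
    doc.wasAttributedTo(learned_model, developer)  # and is accountable for the model

    print(doc.get_provn())  # serialise in PROV-N notation for inspection

A record of this kind makes a system's behaviour traceable back to the data, processes and agents behind it, providing the substrate for the verification and compliance monitoring described above.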
The successful candidate will have, or expect to obtain, a UK First Class Honours Degree (or equivalent) or a UK Honours Degree at 2.1 (or equivalent) in Computer Science or a related discipline, along with an MSc Degree (or equivalent) at Commendation or Distinction.
Essential background: AI principles; programming in Python or Java.
Desirable knowledge: blockchain/distributed ledger technologies, knowledge representation and reasoning.
This project is advertised in relation to the research areas of the discipline of Computing Science. Formal applications can be completed online: https://www.abdn.ac.uk/pgap/login.php
You should apply for the Degree of Doctor of Philosophy in Computing Science, to ensure that your application is passed to the correct person for processing. NOTE CLEARLY THE NAME OF THE SUPERVISOR and EXACT PROJECT TITLE ON THE APPLICATION FORM. IF YOU DO NOT MENTION ELPHINSTONE FUNDING ON YOUR APPLICATION THEN IT WILL NOT BE CONSIDERED FOR THE SCHOLARSHIP. Applicants are limited to applying for a maximum of 2 Elphinstone funded projects. Any further applications received will be automatically withdrawn.
Informal inquiries can be made to Professor P Edwards ([email protected]) with a copy of your curriculum vitae and cover letter indicating your interest in the project and why you wish to undertake it. All general enquiries should be directed to the Postgraduate Research School ([email protected]).