
Realising Accountable Artificial Intelligence

This project is no longer listed in the FindAPhD
database and may not be available.

  • Full or part time
  • Supervisors: Prof P Edwards, Dr W Pang, Dr C Cottrill
  • Application Deadline: Applications accepted all year round
  • Self-Funded PhD Students Only

Project Description

The past year has seen some notable examples of the challenges associated with AI systems when they are deployed in the ‘wild’. On February 14th 2016, a modified Lexus SUV operating as a Google self-driving car caused a crash with a local bus in Mountain View, California – damaging both vehicles. Google’s subsequent report into the accident concluded that their vehicle lacked sufficient knowledge of how other road users might behave. Another recent case highlighted how classification models generated by machine learning algorithms can easily embed stereotyped biases towards race and gender - resulting in the potential for such systems to behave in a discriminatory fashion.

While all new technologies have the capacity to do harm, with AI systems it may be difficult or even impossible to know what went wrong or who should be held responsible. How can we benefit from the superhuman capacity and efficiency that such systems offer without giving up our desire for accountability, transparency and responsibility? How can we avoid a stark choice between forgoing the benefits of automated systems altogether and accepting a degree of arbitrariness that would be unthinkable in society’s usual human relationships?

A solution to this dilemma may be afforded by computational models of provenance as a substrate for enabling trust. Such a mechanism facilitates transparency and accountability by recording the processes, entities and agents associated with a system and its behaviours, thereby supporting verification and compliance monitoring. Important research questions to be investigated by this PhD include:

What would a provenance-enabled infrastructure for accountable AIs look like, and how would it be framed by (and operate within) its wider socio-legal context?

How would such a framework go beyond simply documenting the static elements of an intelligent system (data, algorithm, learned model ...) to capture the dynamic interactions among system components that characterise how the system operates?
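The project does not prescribe a concrete provenance model, but the idea of recording processes, entities and agents to support accountability can be illustrated with a minimal PROV-style sketch (all class and instance names below are hypothetical, chosen only for illustration): tracing an output back through its derivation chain identifies every agent implicated in producing it.

```python
from dataclasses import dataclass, field
from typing import Optional

# Minimal sketch of a PROV-style provenance record: the W3C PROV data
# model distinguishes entities (things), activities (processes) and
# agents (responsible parties). All names here are hypothetical.
@dataclass
class Agent:
    name: str

@dataclass
class Activity:
    name: str
    agent: Agent  # the agent associated with this activity

@dataclass
class Entity:
    name: str
    generated_by: Optional[Activity] = None
    derived_from: list = field(default_factory=list)

def responsible_agents(entity: Entity) -> set:
    """Walk the derivation chain to find every agent implicated in an output."""
    agents = set()
    if entity.generated_by is not None:
        agents.add(entity.generated_by.agent.name)
    for source in entity.derived_from:
        agents |= responsible_agents(source)
    return agents

# Example: a model trained on a curated dataset, then used for a decision.
curator = Agent("data_curator")
engineer = Agent("ml_engineer")
dataset = Entity("training_data", generated_by=Activity("collect", curator))
model = Entity("learned_model", generated_by=Activity("train", engineer),
               derived_from=[dataset])
decision = Entity("loan_decision", generated_by=Activity("predict", engineer),
                  derived_from=[model])

print(sorted(responsible_agents(decision)))  # ['data_curator', 'ml_engineer']
```

Even this toy record makes the accountability question tractable: a contested decision can be traced to the data, the training activity and the agents behind them, which is what verification and compliance monitoring would build on.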
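On the second question, one plausible direction (again a sketch, not a method the project specifies) is to record the dynamic interactions among components as provenance events at call time, rather than only the static artefacts. A decorator-based trace in Python, with hypothetical component names, might look like:

```python
import functools
import time

# Hypothetical sketch: record each inter-component interaction as a
# provenance event, capturing how the system operates, not just what
# artefacts (data, model) it contains.
EVENTS = []

def traced(component):
    """Decorator that logs every call to a component as a provenance event."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            EVENTS.append({
                "component": component,
                "operation": fn.__name__,
                "inputs": args,
                "output": result,
                "time": time.time(),
            })
            return result
        return wrapper
    return decorator

# Toy autonomous-vehicle pipeline: perception feeds a planner.
@traced("perception")
def detect(obstacle_distance):
    return obstacle_distance < 5.0  # obstacle ahead?

@traced("planner")
def decide(obstacle_ahead):
    return "brake" if obstacle_ahead else "continue"

action = decide(detect(3.2))
print(action)                              # brake
print([e["component"] for e in EVENTS])    # ['perception', 'planner']
```

After an incident, the event log shows which component produced which intermediate result in which order, i.e. the dynamic behaviour that a static description of the system would miss.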

The successful candidate will have, or expect to obtain, a UK Honours degree at 2.1 (or equivalent) in Computer Science or a related discipline.

Essential background: AI principles; programming in Python or Java.
Desirable knowledge: blockchain/distributed ledger technologies; knowledge representation and reasoning.

This project is advertised in relation to the research areas of the discipline of Computing Science. Formal applications can be completed online; you should apply for the Degree of Doctor of Philosophy in Computing Science to ensure that your application is passed to the correct person for processing.

Informal enquiries can be made to Professor P Edwards ([email protected]); please include a copy of your curriculum vitae and a cover letter indicating your interest in the project and why you wish to undertake it. All general enquiries should be directed to the Postgraduate Research School ([email protected]).

Funding Notes

There is no funding attached to this project; it is available to self-funded students only.

FindAPhD. Copyright 2005-2018
All rights reserved.