About the Project
As intelligent computer-based systems increasingly make decisions for humans, or give advice that humans act upon, the provision of explanations for those decisions and that advice is an enduring concern for society at large. Such explanations need to be constructed from a balance of different explanation types (e.g. ontological, mechanistic, and operational) and need to be tailored to their audience. Effective explanation systems also need to be developed alongside appropriate policy and practice guidance for the deployment of these systems.

The research question is: for a particular type of intelligent computer-based system, what is an appropriate balance of explanation system design, and policy and practice guidance, that will enable humans to retain overarching control of the system’s decisions whilst enabling the system to perform its purposes effectively and efficiently?
Aims and scope of work
1. Explore the contexts of intelligent computer-based decision-making systems and the policy and practice guidance around their deployment.
2. Examine the design and implementation mechanisms for constructing explanations in intelligent computer-based decision-making systems.
3. Construct a prototype decision-making or recommender system with a tailor-made explanation function and a set of policy and practice guidance statements about its use (a minimal illustrative sketch follows this list).
4. Evaluate the prototype.
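By way of illustration only, and not as a prescribed design for the project, the following Python sketch shows one way a toy recommender could return explanation fragments of the three types named above (ontological, mechanistic, and operational), leaving a separate presentation layer to tailor them to a particular audience. All names, the tag-overlap scoring, and the example data are assumptions introduced solely for this sketch.

```python
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Explanation:
    kind: str   # "ontological", "mechanistic", or "operational"
    text: str


def recommend(user_tags: set[str],
              catalogue: dict[str, set[str]]) -> tuple[str, list[Explanation]]:
    """Rank items by tag overlap with the user's interests and explain the top choice."""
    scores = {item: len(tags & user_tags) for item, tags in catalogue.items()}
    best = max(scores, key=scores.get)
    shared = sorted(catalogue[best] & user_tags)
    return best, [
        Explanation("ontological",
                    f"'{best}' falls under the categories {shared}, which also describe your stated interests."),
        Explanation("mechanistic",
                    f"It shares {scores[best]} tag(s) with your profile, the highest overlap among {len(catalogue)} candidates."),
        Explanation("operational",
                    "You may accept this suggestion or ask for the next candidate; all items were scored by the same overlap measure."),
    ]


if __name__ == "__main__":
    item, notes = recommend(
        {"jazz", "vinyl"},
        {"Kind of Blue": {"jazz", "vinyl", "classic"},
         "Abbey Road": {"rock", "vinyl"}},
    )
    print("Recommended:", item)
    for note in notes:
        print(f"  [{note.kind}] {note.text}")
```

In the project itself, the balance and wording of such explanation fragments, and the policy and practice guidance governing their use, would be design questions to be investigated rather than fixed as in this toy example.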
The successful applicant will hold a minimum of a Bachelor’s degree in a relevant subject (UK 1.1 or 2.1 classification). Some prior understanding and/or experience of Artificial Intelligence will be an advantage.
Candidates are invited to make further enquiries with the Director of Studies.