Don't miss our weekly PhD newsletter | Sign up now

  Computational Social Choice and Machine Learning for Ethical Decision Making


   UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence

This project is no longer listed on FindAPhD.com and may not be available.

Supervisor: Dr Maria Polukarov
Applications accepted all year round
Funded PhD Project (UK Students Only)

About the Project

This project is part of a unique cohort-based, four-year, fully funded PhD programme at the UKRI Centre for Doctoral Training in Safe and Trusted AI.

The UKRI Centre for Doctoral Training in Safe and Trusted AI brings together world-leading experts from King’s College London and Imperial College London to train a new generation of researchers, and focuses on the use of symbolic artificial intelligence to ensure the safety and trustworthiness of AI systems.

Project Description:

The problem of ethical decision making presents a grand challenge for modern AI research. Arguably, the main obstacle to automating ethical decisions is the lack of a formal specification of ground-truth ethical principles, which have been debated by philosophers for centuries (consider, e.g., the trolley problem). A major approach to the ethics of AI is therefore to design systems that act according to the aggregate views of society: the so-called social choice approach. However, the normative basis of this approach appears weak, because there is no single aggregate ethical view of society. Instead, we face three sets of decisions: standing, concerning whose ethical views are included; measurement, concerning how those views are identified; and aggregation, concerning how individual views are combined into a single view that will guide AI behaviour. These decisions pose difficult ethical dilemmas with major consequences for AI behaviour, and they must be made up front, in the initial design of the AI system.

Against this background, recent research proposes to combine machine learning (ML) techniques with the formal methods of computational social choice (COMSOC) to learn a model of societal preferences and, when faced with a particular ethical dilemma at run time, efficiently aggregate those preferences to identify a desirable choice. Specifically, a voting-based algorithm, inspired by a theory of swap-dominance-efficient voting rules, has been implemented and successfully evaluated in the autonomous-vehicle domain. However, a systematic study of voting systems and their applicability to automated ethical decision making is still lacking.
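To make the aggregation step concrete, the sketch below applies a classical positional voting rule (Borda count) to a profile of individual preference rankings. This is an illustrative toy only, not the swap-dominance-based algorithm from the referenced work; the function and example data are hypothetical, chosen to show in miniature how individual rankings can be combined into one societal ranking.

```python
# Illustrative sketch: Borda-count aggregation of individual rankings.
# NOT the algorithm of Noothigattu et al. (2018); purely a toy example
# of combining individual preference orders into a single societal order.

from collections import defaultdict

def borda_aggregate(rankings):
    """Combine individual rankings (best-first lists of alternatives)
    into a single societal ranking via Borda scores."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, alternative in enumerate(ranking):
            # An alternative earns (n - 1 - position) points per voter.
            scores[alternative] += n - 1 - position
    # Sort by total score, highest first; break ties alphabetically.
    return sorted(scores, key=lambda a: (-scores[a], a))

# Hypothetical example: three voters rank outcomes of a moral dilemma.
profile = [
    ["swerve", "brake", "continue"],
    ["brake", "swerve", "continue"],
    ["swerve", "continue", "brake"],
]
print(borda_aggregate(profile))  # -> ['swerve', 'brake', 'continue']
```

In an ML-plus-COMSOC pipeline of the kind described above, the rankings fed into such a rule would come from a learned model of individual preferences rather than from explicit ballots; the choice of voting rule then determines which axiomatic guarantees the aggregated decision inherits.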

This PhD project will explore how the above approach extends to other application domains and social choice mechanisms, and will analyse different combinations of these, in order to create successful automated ethical decision-making systems.

How to Apply:

The deadline for Round A for entry in October 2023 is Monday 28 November 2022. See our website for further round deadlines.

We will also be holding an online information session on Tuesday 15 November 2022, 1-2pm; tickets are available via Eventbrite.

Committed to providing an inclusive environment in which diverse students can thrive, we particularly encourage applications from women, disabled and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector.

We encourage you to contact Dr Maria Polukarov ([Email Address Removed]) to discuss your interest before you apply, quoting the project code: STAI-CDT-2023-KCL-5.

When you are ready to apply, please follow the application steps on our website here. Our 'How to Apply' page offers further guidance on the PhD application process.


Funding Notes

For more information, see the Fees and Funding section at https://safeandtrustedai.org/apply-now/

References

Ritesh Noothigattu, Snehalkumar ‘Neil’ S. Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar and Ariel D. Procaccia. A Voting-Based System for Ethical Decision Making. AAAI 2018.
Seth D. Baum. Social choice ethics in artificial intelligence. AI & SOCIETY, volume 35, pages 165-176 (2020).