
This project is no longer listed on FindAPhD.com and may not be available.

Supervisor: Dr Frederik Mallmann-Trenn
Application deadline: Applications accepted all year round
Funding: Funded PhD Project (UK Students Only)

About the Project

This project is part of a unique cohort-based, fully funded four-year PhD programme at the UKRI Centre for Doctoral Training in Safe and Trusted AI.

The UKRI Centre for Doctoral Training in Safe and Trusted AI brings together world-leading experts from King’s College London and Imperial College London to train a new generation of researchers. It focuses on the use of symbolic artificial intelligence to ensure the safety and trustworthiness of AI systems.

Project Description:

The rise of fake news and misinformation is a threat to our societies. Even though we cannot always quantify its effect, it is clear that misinformation polarises society, often leads to violence, and promotes racism [1]. The results can be devastating, ranging from political instability to genocide [2]. Much of today's fake news detection relies on human intervention, which is often too slow to stop the spread; reliable automated fake news detection is needed. To make matters more difficult, a platform such as Facebook or Twitter cannot simply delete suspicious messages without providing an explanation to its users. This is what this project aims to facilitate.

This project aims to develop AI methods to ensure that intelligent algorithms that are used to identify fake news and the sources of fake news are themselves trustworthy and are able to provide human-understandable explanations for their decisions.

In particular, in the first part of the project, the student will design a model of how fake news propagates in social networks by modelling the social network as a graph, where the nodes are the agents and an edge represents that one agent follows another (Twitter) or that two agents are “friends” (Facebook).

Some of the agents are malicious, and their aim is to propagate fake news. To build a realistic model, the student will analyse existing data (e.g., the Twitter database [3]), read the relevant literature on the fake news mechanisms employed (e.g., [5]), and talk to engineers working on fake news detection. At this stage, Diego Saez-Trumper [4] from the Wikimedia Foundation has agreed to meet regularly and assist with this. We will also reach out to Twitter, Facebook and LinkedIn.
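As a concrete starting point, a propagation model of this kind can be prototyped in a few lines. The sketch below is purely illustrative, not the project's actual model: the graph, the spreading probability, and the independent-cascade dynamics are all invented assumptions. Agents are nodes, edges are follower links, and a single malicious seed agent spreads an item stochastically.

```python
import random

# Hypothetical toy network (invented for illustration): an edge u -> v
# means v follows u, so an item posted by u can reach v.
followers = {
    "A": ["B", "C"],   # B and C follow the (malicious) agent A
    "B": ["D"],
    "C": ["D", "E"],
    "D": [],
    "E": [],
}

def simulate_cascade(seeds, p=0.5, rng=None):
    """Independent-cascade spread of a (fake) news item.

    Each newly sharing agent gets one chance to pass the item to each
    of its followers with probability p. Returns the set of agents
    that ended up sharing the item.
    """
    rng = rng or random.Random(0)
    shared = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in followers[u]:
                if v not in shared and rng.random() < p:
                    shared.add(v)
                    nxt.append(v)
        frontier = nxt
    return shared

# Malicious agent "A" seeds the item; estimate its expected reach.
runs = [len(simulate_cascade({"A"}, p=0.5, rng=random.Random(i)))
        for i in range(1000)]
avg = sum(runs) / len(runs)
print(avg)
```

On this toy graph the analytical expected reach is 2.6875 agents, and the Monte Carlo estimate should land close to that; a realistic model would replace the hand-coded graph with one fitted to real follower data.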

The second part of the project aims to develop efficient AI methods that block fake news automatically and in an explainable way. The use of a graph/network of influences means that it will be possible to identify where in the network interventions will have the most impact on the diffusion of information, and thus where to focus resources for the greatest effect.
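One simple, hypothetical way to locate high-impact intervention points, in the spirit of the paragraph above, is to estimate by simulation how much blocking each single node reduces the expected spread. The graph, probabilities, and the notion of "blocking" below are invented for illustration and are not the project's method.

```python
import random

# Invented toy network: an edge u -> v means v follows u.
followers = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": [],
    "E": [],
}

def expected_reach(seed, blocked, p=0.5, runs=500):
    """Monte Carlo estimate of expected cascade size from `seed`,
    with the agents in `blocked` removed (e.g., fact-checked or
    suspended so they neither share nor relay the item)."""
    total = 0
    for i in range(runs):
        rng = random.Random(i)
        shared = {seed} - blocked
        frontier = list(shared)
        while frontier:
            nxt = []
            for u in frontier:
                for v in followers[u]:
                    if v not in shared and v not in blocked and rng.random() < p:
                        shared.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(shared)
    return total / runs

baseline = expected_reach("A", blocked=set())
# Score each non-seed node by how much blocking it shrinks the spread.
impact = {v: baseline - expected_reach("A", blocked={v})
          for v in followers if v != "A"}
best = max(impact, key=impact.get)
print(best)
```

Here blocking C should score highest, since it cuts off E entirely and one of the two paths to D; an explainable system would surface exactly this kind of justification ("blocking C severs these influence paths") alongside the intervention.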

For this it will be crucial to develop a form of symbolic knowledge representation that is human readable. This will be the third part of the project. More precisely, the project will choose a particular application domain where fake news occurs, such as epidemiology or interest-rate policies, and draw upon any existing computational representation available for the domain. The AI techniques to be used for knowledge representation will be drawn from Natural Language Processing (NLP) and Computational Argumentation, along with causal models, such as Bayesian Belief Networks (Bayesnets). A causal model of the application domain will enable automated reasoning over social influence graphs, for example about the impact of network interventions.
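To illustrate the causal-model component, the following is a deliberately tiny Bayesian-network-style calculation. All variables and probabilities are invented: a single cause (a coordinated misinformation campaign) influences whether an item goes viral, and Bayes' rule inverts the model to reason from the observation back to the cause.

```python
# Invented two-node causal model: Campaign -> Viral.
p_campaign = 0.1                  # prior P(coordinated campaign)
p_viral_given = {True: 0.8,       # P(item goes viral | campaign)
                 False: 0.2}      # P(item goes viral | no campaign)

# Observe that the item went viral; compute the posterior
# P(campaign | viral) by enumeration (Bayes' rule).
num = p_campaign * p_viral_given[True]
den = num + (1 - p_campaign) * p_viral_given[False]
posterior = num / den
print(round(posterior, 3))   # 0.08 / 0.26, roughly 0.308
```

A full Bayes net for a domain such as epidemiology would have many such conditional tables, but the same enumeration principle lets automated reasoning trace how an intervention on one variable propagates through the social influence graph, and the factored structure of the network is itself a human-readable explanation of that reasoning.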

How to Apply:

The deadline for Round A, for entry in October 2023, is Monday 28 November 2022. See here for further round deadlines.

We will also be holding an online information session on Tuesday 15 November 2022, 1-2pm (see the Eventbrite listing: UKRI Centre for Doctoral Training in Safe and Trusted AI - Info Session).

Committed to providing an inclusive environment in which diverse students can thrive, we particularly encourage applications from women, disabled and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector.

We encourage you to contact Dr Frederik Mallmann-Trenn ([Email Address Removed]) to discuss your interest before you apply, quoting the project code: STAI-CDT-2022-KCL-8.

When you are ready to apply, please follow the application steps on our website here. Our 'How to Apply' page offers further guidance on the PhD application process.

Subject areas: Computer Science, Philosophy

Funding Notes

See Fees and Funding section for more information on funding - https://safeandtrustedai.org/apply-now/

References

[1] https://www.britishcouncil.org/anyone-anywhere/explore/dark-side-web/fake-news
[2] https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
[3] https://blog.twitter.com/en_us/topics/company/2018/enabling-further-research-of-information-operations-on-twitter
[4] Diego Saez-Trumper, Senior Research Scientist at WMF Research, who leads his team's disinformation efforts.
[5] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0250419