University of Essex – School of Computer Science and Electronic Engineering
Qualification type: PhD
Closes: 30th August 2019
Contact: For further discussion email Dr Dimitri Ognibene ([email protected])
The PhD studentship aims to address the challenges posed by toxic content on social media by combining dynamic network modelling with automated content analysis (textual or multimedia) using modern machine learning methods, such as Deep Learning and Hierarchical Bayesian Models.
The student may extract relevant content features, topics and events from online discussions to (a) predict short- and long-term responses of multiple users, (b) estimate the different effects of diverse information suggestion strategies in this context, and (c) define different interventions to improve model accuracy.
As such, we are particularly interested in PhD candidates who would like to work on one or more of the following topics:
• Modelling the temporal dynamics of social media users' beliefs
• Model-Based Reinforcement Learning algorithms for the governance of social networks
• Semantic analysis of unstructured textual or multimodal data, including sentiment analysis and detection of biased or fake content, violent language and cyberbullying.
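To give a flavour of the third topic, the following is a minimal illustrative sketch (not COURAGE project code) of the kind of bag-of-words naive Bayes classifier that underlies many toxic-language detectors; the example messages, labels and the `train`/`classify` helpers are invented for illustration only.

```python
# Illustrative sketch: a tiny naive Bayes toxic-language flagger.
# All training examples and labels below are invented for this example.
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label) pairs.
    Returns per-label word counts and label frequencies (priors)."""
    counts = {"toxic": Counter(), "ok": Counter()}
    labels = Counter()
    for text, label in examples:
        labels[label] += 1
        counts[label].update(text.lower().split())
    return counts, labels

def classify(text, counts, labels):
    """Pick the label maximising log prior plus summed log word
    likelihoods, with add-one smoothing over the shared vocabulary."""
    vocab = set(counts["toxic"]) | set(counts["ok"])
    total = sum(labels.values())
    best, best_score = None, -math.inf
    for label in labels:
        score = math.log(labels[label] / total)
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

examples = [
    ("you are an idiot", "toxic"),
    ("idiot go away", "toxic"),
    ("have a nice day", "ok"),
    ("see you at school", "ok"),
]
counts, labels = train(examples)
print(classify("you idiot", counts, labels))  # → toxic
```

Real systems in this area replace the bag-of-words likelihoods with learned neural representations, but the decision structure — score each label, pick the maximum — is the same.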
The successful applicant will join the Essex COURAGE team, comprising Dr Ognibene (PI), Professor Scherp (Co-I), Dr Villavicencio (Co-I) and Visiting Professor Kruschwitz (Co-I).
AI for good: teaching young people to tackle biased news and hateful content.
Artificial Intelligence and Machine Learning have become powerful paradigms to address a wide range of everyday problems. Recent developments have also had a significant impact on the way computers process natural language automatically and make decisions, such as classifying an email as spam or flagging up a news article as something that might indicate an emerging story. The focus of this project is to advance the state of the art in understanding, predicting and ameliorating the diffusion and effects of toxic content on social media, which is quickly becoming a real threat to society.
This PhD scholarship is part of the research project “COURAGE: A Social Media Companion Safeguarding and Educating Students”, which is an international collaboration funded by VolkswagenStiftung (Volkswagen Foundation) as part of the Artificial Intelligence and the Society of the Future funding initiative. The project partners include the Universitat Pompeu Fabra (Spain), the Istituto per le Tecnologie Didattiche of the National Council of Research ITD-CNR (Italy), Hochschule Ruhr West (Germany) and the Rhine-Ruhr Institute for System Innovation (Germany). The project aims to develop a Virtual Social Media Companion that educates and supports teenage school students facing the threats of social media such as discrimination and biases as well as hate speech, bullying, fake news and other toxic content. The companion will raise awareness of potential threats in social media among students without being intrusive.
The Essex team will be involved in developing Bayesian computational models of the belief dynamics of social media users to support governance and educational strategies. These models will also be applied to evaluate socially relevant variables, such as trust and inclusion. We will build on and implement state-of-the-art AI methods to provide measurements of sentiment, bias, hatefulness, veracity, polarization, and sensationalism of social media content. In addition, we will drive forward the state of the art in detecting hate speech and biased content. The companion will actively counteract this kind of content, balancing it with opposite perspectives and proposing specifically themed challenges that adopt ideas from games.
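As a toy illustration of Bayesian belief-dynamics modelling (a deliberately simplified sketch, not the project's actual models), one can track a single user's belief in a claim as a beta-Bernoulli posterior that is updated each time the user's feed shows a post supporting or contradicting the claim; the `update_belief` and `belief_mean` helpers and the example feed are assumptions for this example.

```python
# Illustrative sketch: a beta-Bernoulli model of one user's belief in a
# claim, updated as supporting/contradicting posts appear in their feed.

def update_belief(alpha, beta, supports_claim):
    """One Bayesian update of a Beta(alpha, beta) belief after seeing a post."""
    if supports_claim:
        return alpha + 1, beta
    return alpha, beta + 1

def belief_mean(alpha, beta):
    """Posterior mean: the user's current probability that the claim is true."""
    return alpha / (alpha + beta)

# Start from an uninformative Beta(1, 1) prior, then expose the user to a
# feed dominated by posts supporting the claim (a simple filter bubble).
alpha, beta = 1.0, 1.0
feed = [True, True, True, False, True, True]
for post in feed:
    alpha, beta = update_belief(alpha, beta, post)

print(round(belief_mean(alpha, beta), 2))  # → 0.75: belief drifts towards the claim
```

Even this toy model makes the governance question concrete: an intervention that rebalances the feed changes the sequence of updates and hence where the user's belief settles.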
Funding for: UK and EU students
Funding amount: Home/EU fee waiver (£4,630 in 2019-20) plus a stipend equivalent to the RCUK rate (£15,990 in 2019-20)
The studentship includes a full Home/EU fee waiver, a doctoral stipend, and a £2,500 training bursary via Proficio funding.