
Explainable Natural Language Processing for the Analysis of Online Discourse


Centre for Accountable, Responsible and Transparent AI

Applications accepted all year round | Competition Funded PhD Project (Students Worldwide)

About the Project

Recent advances in Natural Language Processing (NLP) have relied, in large part, on deep neural models consisting of hundreds of millions (and often billions) of parameters. These models, trained on massive amounts of data (e.g., much of the web), perform extremely well (better than humans on some benchmarks) on a range of downstream tasks, including sentiment analysis, automatic question answering, and textual entailment [1]. While this increased reliance on deep learning has produced significant gains, it has also led to methods which are fundamentally opaque and nearly impossible to interpret [2].
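For illustration, the short sketch below shows how such a pre-trained model can be applied to one of these downstream tasks, sentiment analysis. It assumes the Hugging Face transformers library and its default checkpoint, neither of which is specified in this advert.

    # Illustrative only: sentiment analysis with a pre-trained deep neural
    # model via the Hugging Face `transformers` library (an assumption here,
    # not a tool prescribed by the project).
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # downloads a default checkpoint
    result = classifier("The reviewers found the methodology convincing.")[0]
    print(result["label"], round(result["score"], 3))  # e.g. POSITIVE 0.999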

Simultaneously, deep neural models have become popular as efficient and effective solutions for the analysis of data pertaining to online discourse. For example, they achieve impressive results on the identification of online fake news [7], propaganda [10] and author stance on topics [11]. The use of effective automated methods is now imperative given the significant increase in the volume, diversity and complexity of online fake news, propaganda and abuse in recent years [3, 4, 5, 6].

However, the inherent opacity of deep neural models poses significant challenges in these settings: recent research suggests that the best strategy for dealing with disinformation and propaganda is not removal, but flagging and the provision of additional information [8], which is impossible without adequate explanation. An ideal fake-news identification system, for example, would be characterised not just by high accuracy, but also by its ability to provide a convincing explanation that can be easily communicated to the public (see also [9]).

As such, this project focuses on generating explanations for model outputs, with applications primarily, but not exclusively, to social media data.

Model explanations are typically either faithful or post hoc. Faithful explanations reflect the actual reasoning by which a model arrived at a particular prediction; they remain rare in deep learning. Post hoc explanations, by contrast, are generated independently of the mechanism by which the model makes its predictions, and instead provide evidence supporting those predictions. Additionally, explanations can be generated entirely independently of the model's output, or jointly with it in a multi-task framework. This project will explore all of these avenues of research.
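To make the distinction concrete, the following sketch implements one simple member of the post hoc family: occlusion-based token attribution, in which each token is deleted in turn and the drop in the model's confidence in its original prediction is read as that token's importance. The transformers pipeline and its default sentiment checkpoint are illustrative assumptions, not methods prescribed by the project.

    # Illustrative sketch only: occlusion-based post hoc token attribution.
    # Assumes the Hugging Face `transformers` library and its default
    # sentiment-analysis checkpoint; neither is mandated by the project.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")

    def occlusion_attribution(text):
        """Score each token by how much deleting it reduces the model's
        confidence in its original prediction."""
        tokens = text.split()
        base = classifier(text)[0]  # original label and confidence
        scores = []
        for i, token in enumerate(tokens):
            occluded = " ".join(tokens[:i] + tokens[i + 1:])
            out = classifier(occluded)[0]
            # Confidence assigned to the ORIGINAL label after deletion.
            conf = out["score"] if out["label"] == base["label"] else 1.0 - out["score"]
            scores.append((token, base["score"] - conf))
        return base["label"], scores

    label, attributions = occlusion_attribution(
        "The article cites no sources and contradicts official records.")
    print(label)
    for token, importance in sorted(attributions, key=lambda x: -x[1]):
        print(f"{token}: {importance:+.3f}")

Such attributions support a prediction without necessarily being faithful to the model's internal reasoning, which is precisely the distinction drawn above.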

This project is associated with the UKRI Centre for Doctoral Training (CDT) in Accountable, Responsible and Transparent AI (ART-AI). We value people with different life experiences and a passion for research. The CDT's mission is to graduate diverse specialists with the perspectives needed to go out into the world and make a difference.

Informal enquiries are strongly encouraged and should be directed to Dr Harish Tayyar Madabushi () or to Dr Iulia Cioroianu (). 

Candidates should have a good first degree or a Master’s degree in computer science, maths, a related discipline, or equivalent industrial experience. Good programming skills are essential. A strong mathematical background and previous machine learning experience are highly desirable. Familiarity with Bash, Linux and the use of GPUs for high-performance computing is a plus.

Formal applications should be accompanied by a research proposal and made via the University of Bath’s online application form. Enquiries about the application process should be sent to .

Start date: 2 October 2023.


Funding Notes

ART-AI CDT studentships are available on a competition basis and applicants are advised to apply early as offers are made from January onwards. Funding will cover tuition fees and maintenance at the UKRI doctoral stipend rate (£17,668 per annum in 2022/23, increased annually in line with the GDP deflator) for up to 4 years.
We also welcome applications from candidates who can source their own funding.

References

[1] Zoph, B., Bello, I., Kumar, S., Du, N., Huang, Y., Dean, J., Shazeer, N. and Fedus, W., 2022. Designing effective sparse expert models. arXiv preprint arXiv:2202.08906.
[2] Wiegreffe, S., Marasović, A. and Smith, N.A., 2021, November. Measuring Association Between Labels and Free-Text Rationales. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 10266-10284).
[3] Ha, L., Andreu Perez, L., & Ray, R. (2021). Mapping Recent Development in Scholarship on Fake News and Misinformation, 2008 to 2017: Disciplinary Contribution, Topics, and Impact. American Behavioral Scientist, 65(2), 290–315. https://doi.org/10.1177/0002764219869402
[4] Persily, N., & Tucker, J. A. (2020). Social Media and Democracy: The State of the Field, Prospects for Reform. Cambridge University Press.
[5] Osmundsen, M., Bor, A., Vahlstrup, P. B., Bechmann, A., & Petersen, M. B. (2021). Partisan Polarization Is the Primary Psychological Motivation behind Political Fake News Sharing on Twitter. American Political Science Review, 115(3), 999–1015. https://doi.org/10.1017/S0003055421000290
[6] Alyukov, M. (2022). Propaganda, authoritarianism and Russia’s invasion of Ukraine. Nature Human Behaviour, 6(6), Article 6. https://doi.org/10.1038/s41562-022-01375-x
[7] Mridha, M. F., Keya, A. J., Hamid, Md. A., Monowar, M. M., & Rahman, Md. S. (2021). A Comprehensive Review on Fake News Detection With Deep Learning. IEEE Access, 9, 156151–156170. https://doi.org/10.1109/ACCESS.2021.3129329
[8] Lanius, C., Weber, R., & MacKenzie, W. I. (2021). Use of bot and content flags to limit the spread of misinformation among social networks: A behavior and attitude survey. Social Network Analysis and Mining, 11(1), 32. https://doi.org/10.1007/s13278-021-00739-x
[9] Tayyar Madabushi, Ramisch, Idiart and Villavicencio, 2022. Psychological, Cognitive and Linguistic BERTology (Part 1). COLING 2022 Tutorial. https://sites.google.com/view/coling2022tutorial/
[10] Tayyar Madabushi, H., Kochkina, E. and Castelle, M., 2019, November. Cost-Sensitive BERT for Generalisable Sentence Classification on Imbalanced Data. In Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda (pp. 125-134).
[11] Prakash, A. and Tayyar Madabushi, H., 2020, December. Incorporating Count-Based Features into Pre-Trained Models for Improved Stance Detection. In Proceedings of the 3rd NLP4IF Workshop on NLP for Internet Freedom: Censorship, Disinformation, and Propaganda (pp. 22-32).
