About the Project
The significant, and by some measures superhuman, performance of deep neural models has led to their wide adoption across many fields, including Natural Language Processing. This adoption has come at the cost of decades of progress in what is now referred to as “traditional” Natural Language Processing. In particular, deep learning completely ignores the significant body of research into symbolic AI, such as explicit knowledge representation and reasoning. Additionally, this dependence on deep learning has led to methods which are fundamentally opaque. Popular methods of explaining the decisions made by deep neural models are often not faithful: that is, the explanation generated by a model is not what the model itself uses to arrive at its decision.
To address these shortcomings, this project focuses on the area which spans the intersection of neural and symbolic AI: a fast-growing and increasingly important field known as neuro-symbolic artificial intelligence.
Neuro-symbolic methods have the potential to combine the advantages of deep neural models (i.e., performance) with those of symbolic methods (i.e., transparency and mutability); see also Tayyar Madabushi et al. (2022). This project will focus on developing methods that incorporate declarative knowledge into deep neural models, including through knowledge representation logics such as natural logic. For example, Krishna et al. (2022) use a sequence-to-sequence model to generate natural logic-based inferences as proofs, thus providing an inherently interpretable model for fact verification. Similarly, Févry et al. (2020) propose a method of infusing knowledge directly into pre-trained language models by enabling them to directly access information pertaining to entities mentioned in text. Other work in this regard includes that of Verga et al. (2021), who explore methods of incorporating mutable knowledge into neural models.
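To give a flavour of why natural logic proofs are inherently interpretable, the toy sketch below composes a sequence of natural logic relations into a fact-verification verdict. This is an illustration only, not the actual ProoFVer implementation: the relation symbols follow MacCartney-style natural logic, but the join table is a small, deliberately conservative fragment in which unknown combinations default to independence.

```python
# Toy sketch of natural logic proof checking, in the spirit of systems
# like ProoFVer (Krishna et al., 2022). Relations: '≡' equivalence,
# '⊑' forward entailment, '⊒' reverse entailment, '|' alternation
# (contradiction), '#' independence. The join table is a simplified,
# conservative fragment, NOT the full MacCartney join table.

JOIN = {
    ("⊑", "⊑"): "⊑",  # entailment composes with entailment
    ("⊒", "⊒"): "⊒",  # reverse entailment likewise
}

def join(r1: str, r2: str) -> str:
    """Compose two natural logic relations; '≡' acts as the identity."""
    if r1 == "≡":
        return r2
    if r2 == "≡":
        return r1
    # Conservative default: any combination we cannot resolve is
    # treated as independence ('#').
    return JOIN.get((r1, r2), "#")

def verify(proof: list[str]) -> str:
    """Fold a sequence of mutation relations into a verdict.

    Each step is the relation introduced by one edit taking the claim
    toward the evidence; the running join gives the overall relation,
    which maps onto a fact-verification label. The proof itself is the
    explanation - every step can be inspected by a human.
    """
    overall = "≡"
    for rel in proof:
        overall = join(overall, rel)
    if overall in ("≡", "⊑"):
        return "SUPPORTS"
    if overall in ("^", "|"):
        return "REFUTES"
    return "NOT ENOUGH INFO"

# Every edit preserves or strengthens entailment -> claim supported:
print(verify(["≡", "⊑", "⊑"]))  # SUPPORTS
# A contradictory substitution flips the verdict:
print(verify(["≡", "|"]))        # REFUTES
# Mixed directions cannot be resolved in this fragment:
print(verify(["⊑", "⊒"]))        # NOT ENOUGH INFO
```

Because the verdict is derived step by step from explicit symbolic relations, the proof sequence itself serves as a faithful explanation, in contrast to post-hoc rationales generated by an opaque model.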
In addition to the development of neuro-symbolic models which are inherently explainable and transparent, this project requires the application of these methods to social media data. The recent rise in hate, abuse, and fake news in online discourse [3, 4, 5, 6] has made it imperative that effective methods are developed, in particular those which are interpretable.
This project is associated with the UKRI Centre for Doctoral Training (CDT) in Accountable, Responsible and Transparent AI (ART-AI). We value people from different life experiences with a passion for research. The CDT's mission is to graduate diverse specialists who can go out into the world and make a difference.
Informal enquiries are strongly encouraged and should be directed to Dr Harish Tayyar Madabushi (email@example.com) or Dr Iulia Cioroianu (firstname.lastname@example.org).
Candidates should have a good first degree or a Master’s degree in computer science, maths, a related discipline, or equivalent industrial experience. Good programming skills are essential. A strong mathematical background and previous machine learning experience are highly desirable. Familiarity with Bash, Linux and the use of GPUs for high-performance computing is a plus.
Formal applications should be accompanied by a research proposal and made via the University of Bath’s online application form. Enquiries about the application process should be sent to email@example.com.
Start date: 2 October 2023.
[1] Zoph, B., Bello, I., Kumar, S., Du, N., Huang, Y., Dean, J., Shazeer, N. and Fedus, W., 2022. Designing effective sparse expert models. arXiv preprint arXiv:2202.08906.
[2] Wiegreffe, S., Marasović, A. and Smith, N.A., 2021. Measuring association between labels and free-text rationales. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 10266-10284.
[3] Ha, L., Andreu Perez, L. and Ray, R., 2021. Mapping recent development in scholarship on fake news and misinformation, 2008 to 2017: Disciplinary contribution, topics, and impact. American Behavioral Scientist, 65(2), pp. 290-315. https://doi.org/10.1177/0002764219869402
[4] Persily, N. and Tucker, J.A., 2020. Social Media and Democracy: The State of the Field, Prospects for Reform. Cambridge University Press.
[5] Alyukov, M., 2022. Propaganda, authoritarianism and Russia’s invasion of Ukraine. Nature Human Behaviour, 6(6). https://doi.org/10.1038/s41562-022-01375-x
[6] Carson, A. and Wright, S., 2022. Fake news and democracy: Definitions, impact and response. Australian Journal of Political Science, pp. 1-10. https://doi.org/10.1080/10361146.2022.2122778
[7] Lanius, C., Weber, R. and MacKenzie, W.I., 2021. Use of bot and content flags to limit the spread of misinformation among social networks: A behavior and attitude survey. Social Network Analysis and Mining, 11(1), 32. https://doi.org/10.1007/s13278-021-00739-x
[8] Krishna, A., Riedel, S. and Vlachos, A., 2022. ProoFVer: Natural logic theorem proving for fact verification. Transactions of the Association for Computational Linguistics, 10, pp. 1013-1030.
[9] Tayyar Madabushi, Ramisch, Idiart and Villavicencio, 2022. Psychological, Cognitive and Linguistic BERTology (Part 1). Tutorial at COLING 2022. https://sites.google.com/view/coling2022tutorial/
[10] Verga, P., Sun, H., Soares, L.B. and Cohen, W., 2021. Adaptable and interpretable neural memory over symbolic knowledge. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pp. 3678-3691.
[11] Févry, T., Soares, L.B., Fitzgerald, N., Choi, E. and Kwiatkowski, T., 2020. Entities as Experts: Sparse memory access with entity supervision. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4937-4951.