
Neuro-Symbolic Artificial Intelligence for Efficient and Interpretable Natural Language Understanding


Centre for Accountable, Responsible and Transparent AI

Applications accepted all year round | Competition Funded PhD Project (Students Worldwide)

About the Project

The significant, and by some measures superhuman [1], performance of deep neural models has led to their wide adoption across multiple fields, including Natural Language Processing (NLP). This adoption has come at the cost of decades of progress in what is now referred to as “traditional” NLP. In particular, deep learning largely ignores the substantial body of research on symbolic AI, including explicit knowledge representation and reasoning. This dependence on deep learning has also produced methods that are fundamentally opaque [2]. Popular techniques for explaining the decisions of deep neural models are often not faithful: that is, the explanation a model generates is not what the model actually uses to arrive at its decision.

To address these shortcomings, this project focuses on the area spanning the intersection of neural and symbolic AI: a fast-growing and increasingly important field known as neuro-symbolic artificial intelligence [3].

Neuro-symbolic methods have the potential to combine the advantages of deep neural models (i.e., performance) with those of symbolic methods (i.e., transparency and mutability); see also [9]. This project will focus on developing methods that incorporate declarative knowledge into deep neural models, including the use of knowledge representation logics such as natural logic. For example, [8] use a sequence-to-sequence model to generate natural-logic-based inferences as proofs, thus providing an inherently interpretable model for fact verification. Similarly, [11] propose a method of infusing knowledge directly into pre-trained language models by enabling them to directly access information pertaining to entities mentioned in text. Other work in this area includes that of [10], who explore methods of incorporating mutable knowledge into neural models.
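The proof-based approach described above can be illustrated with a toy sketch. This is our own drastic simplification, not the actual model of [8]: it assumes a proof has already been produced as a sequence of natural-logic relations between claim spans and evidence, and shows only how such relations compose into a verdict — the part that makes the method inherently interpretable.

```python
# A simplified subset of natural-logic relations (in the style of
# MacCartney & Manning): equivalence, forward/reverse entailment,
# negation, and independence.
EQUIV, FWD_ENT, REV_ENT, NEGATION, INDEPENDENT = "=", "<", ">", "!", "#"

# Toy composition table: how the running relation combines with the
# relation contributed by the next proof step. Pairs not listed here
# fall back to INDEPENDENT (i.e., nothing can be concluded).
COMPOSE = {
    (EQUIV, EQUIV): EQUIV,
    (EQUIV, FWD_ENT): FWD_ENT,
    (FWD_ENT, EQUIV): FWD_ENT,
    (FWD_ENT, FWD_ENT): FWD_ENT,
    (EQUIV, NEGATION): NEGATION,
    (FWD_ENT, NEGATION): NEGATION,
}

def verdict(relations):
    """Compose a proof (a sequence of relations) into a claim verdict."""
    state = EQUIV
    for rel in relations:
        state = COMPOSE.get((state, rel), INDEPENDENT)
    if state in (EQUIV, FWD_ENT):
        return "SUPPORTED"
    if state == NEGATION:
        return "REFUTED"
    return "NOT ENOUGH INFO"

print(verdict([EQUIV, FWD_ENT]))    # -> SUPPORTED
print(verdict([EQUIV, NEGATION]))   # -> REFUTED
print(verdict([INDEPENDENT]))       # -> NOT ENOUGH INFO
```

Because the verdict is derived by composing human-readable relations rather than from an opaque score, each step of the proof can be inspected and corrected, which is precisely the transparency and mutability the project targets.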

In addition to developing neuro-symbolic models that are inherently explainable and transparent, this project requires applying these methods to social media data. The recent rise of hate speech, abuse, and fake news in online discourse [3, 4, 5, 6] has made it imperative to develop effective detection methods, particularly interpretable ones [7].

This project is associated with the UKRI Centre for Doctoral Training (CDT) in Accountable, Responsible and Transparent AI (ART-AI). We value people from different life experiences with a passion for research. The CDT's mission is to graduate specialists with diverse perspectives who can go out into the world and make a difference.

Informal enquiries are strongly encouraged and should be directed to Dr Harish Tayyar Madabushi or Dr Iulia Cioroianu.

Candidates should have a good first degree or a Master’s degree in computer science, maths, a related discipline, or equivalent industrial experience. Good programming skills are essential. A strong mathematical background and previous machine learning experience are highly desirable. Familiarity with Bash, Linux and using GPUs for high-performance computing is a plus.

Formal applications should be accompanied by a research proposal and made via the University of Bath’s online application form. Enquiries about the application process should be sent to .

Start date: 2 October 2023.


Funding Notes

ART-AI CDT studentships are available on a competition basis and applicants are advised to apply early as offers are made from January onwards. Funding will cover tuition fees and maintenance at the UKRI doctoral stipend rate (£17,668 per annum in 2022/23, increased annually in line with the GDP deflator) for up to 4 years.
We also welcome applications from candidates who can source their own funding.

References

[1] Zoph, B., Bello, I., Kumar, S., Du, N., Huang, Y., Dean, J., Shazeer, N. and Fedus, W., 2022. Designing effective sparse expert models. arXiv preprint arXiv:2202.08906.
[2] Wiegreffe, S., Marasović, A. and Smith, N.A., 2021, November. Measuring Association Between Labels and Free-Text Rationales. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 10266-10284).
[3] Ha, L., Andreu Perez, L., & Ray, R. (2021). Mapping Recent Development in Scholarship on Fake News and Misinformation, 2008 to 2017: Disciplinary Contribution, Topics, and Impact. American Behavioral Scientist, 65(2), 290–315. https://doi.org/10.1177/0002764219869402
[4] Persily, N., & Tucker, J. A. (2020). Social Media and Democracy: The State of the Field, Prospects for Reform. Cambridge University Press.
[5] Alyukov, M. (2022). Propaganda, authoritarianism and Russia’s invasion of Ukraine. Nature Human Behaviour, 6(6), Article 6. https://doi.org/10.1038/s41562-022-01375-x
[6] Carson, A., & Wright, S. (2022). Fake news and democracy: Definitions, impact and response. Australian Journal of Political Science, 0(0), 1–10. https://doi.org/10.1080/10361146.2022.2122778
[7] Lanius, C., Weber, R., & MacKenzie, W. I. (2021). Use of bot and content flags to limit the spread of misinformation among social networks: A behavior and attitude survey. Social Network Analysis and Mining, 11(1), 32. https://doi.org/10.1007/s13278-021-00739-x
[8] Krishna, A., Riedel, S. and Vlachos, A., 2022. ProoFVer: Natural Logic Theorem Proving for Fact Verification. Transactions of the Association for Computational Linguistics, 10, pp.1013-1030.
[9] Tayyar Madabushi, H., Ramisch, C., Idiart, M. and Villavicencio, A., 2022. Psychological, Cognitive and Linguistic BERTology (Part 1). Tutorial at COLING 2022. https://sites.google.com/view/coling2022tutorial/
[10] Verga, P., Sun, H., Soares, L.B. and Cohen, W., 2021, June. Adaptable and interpretable neural memory over symbolic knowledge. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 3678-3691).
[11] Févry, T., Soares, L.B., Fitzgerald, N., Choi, E. and Kwiatkowski, T., 2020, November. Entities as Experts: Sparse Memory Access with Entity Supervision. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 4937-4951).
