The significant, and by some measures superhuman, performance of deep neural models has led to their wide adoption across many fields, including Natural Language Processing. This adoption has come at the cost of decades of progress in what is now referred to as “traditional” Natural Language Processing. In particular, deep learning largely ignores the significant body of research into symbolic AI, such as explicit knowledge representation and reasoning. Additionally, this dependence on deep learning has led to methods which are fundamentally opaque. Popular methods of explaining the decisions made by deep neural models are often not faithful: that is, the explanation generated by a model is not what the model itself uses to arrive at its decision.
To address these shortcomings, this project focuses on the area which spans the intersection of neural and symbolic AI: a fast-growing and increasingly important field known as neuro-symbolic artificial intelligence.
Neuro-symbolic methods have the potential to combine the advantages of deep neural models (i.e., performance) with those of symbolic methods (i.e., transparency and mutability). This project focuses on developing methods that incorporate declarative knowledge into deep neural models, including the use of knowledge representation logics such as natural logic. For example, one line of work uses a sequence-to-sequence model to generate natural logic-based inferences as proofs, thus providing an inherently interpretable model for fact verification. Similarly, other work proposes infusing knowledge directly into pre-trained language models by enabling them to directly access information pertaining to entities mentioned in text. Further work in this area explores methods of incorporating mutable knowledge into neural models.
In addition to the development of neuro-symbolic models which are inherently explainable and transparent, this project requires the application of these methods to social media data. The recent rise in hate, abuse, and fake news in online discourse [3, 4, 5, 6] has made it imperative that effective methods are developed, in particular ones which are interpretable.
This project is associated with the UKRI Centre for Doctoral Training (CDT) in Accountable, Responsible and Transparent AI (ART-AI). We value people from different life experiences with a passion for research. The CDT's mission is to graduate a diverse cohort of specialists with the perspectives needed to go out into the world and make a difference.
Informal enquiries are strongly encouraged and should be directed to Dr Harish Tayyar Madabushi ([Email Address Removed]) or Dr Iulia Cioroianu ([Email Address Removed]).
Candidates should have a good first degree or a Master’s degree in computer science, maths, a related discipline, or equivalent industrial experience. Good programming skills are essential. A strong mathematical background and previous machine learning experience are highly desirable. Familiarity with Bash, Linux, and using GPUs for high-performance computing is a plus.
Formal applications should be accompanied by a research proposal and made via the University of Bath’s online application form. Enquiries about the application process should be sent to [Email Address Removed].
Start date: 2 October 2023.