
  Generating explainable answers to fact verification questions


   Department of Computer Science

This project is no longer listed on FindAPhD.com and may not be available.

Supervisor: Dr R Batista-Navarro
Applications accepted all year round
Funded PhD Project (Students Worldwide)

About the Project

Online platforms such as social media and news sites have recently become vehicles for widespread misinformation, making it difficult for users to distinguish facts from hearsay. This problem can be addressed by fact verification, the task of automatically assessing the truthfulness of claims. Traditionally, fact verification has been cast as a binary classification task, whereby a classifier labels a given claim as true or false (Thorne and Vlachos, 2018). More recently, it has also been formulated as a question answering task (Zhang et al., 2020; Jobanputra, 2019), allowing end-users to pose their doubts as questions (e.g., "Does Ibuprofen make COVID-19 worse?") and retrieve answers in real time. These answers can come in a short form (e.g., "No") or in a more informative form (e.g., "There is no evidence linking worse COVID-19 symptoms to Ibuprofen") that can draw on evidence from multiple sources and include explanations.
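To make the traditional binary formulation concrete, the toy sketch below casts fact verification as claim classification. It is purely illustrative, not the project's method: the claims, labels, and the TF-IDF + logistic regression model are hypothetical stand-ins for the neural, evidence-conditioned classifiers the literature describes.

```python
# Toy sketch: fact verification as binary classification.
# Labels: 1 = supported, 0 = refuted. Data is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

claims = [
    ("Ibuprofen worsens COVID-19 symptoms", 0),
    ("There is no evidence that ibuprofen worsens COVID-19", 1),
    ("Vaccines contain microchips", 0),
    ("Vaccines reduce the risk of severe illness", 1),
]
texts, labels = zip(*claims)

# Bag-of-ngrams features feeding a linear classifier; real systems would
# instead encode the claim together with retrieved evidence.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

verdict = model.predict(["Vaccines reduce severe illness risk"])[0]
print("supported" if verdict == 1 else "refuted")
```

Note that a verdict produced this way offers no explanation; the QA formulation discussed next is one route to more informative answers.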

In this project, the PhD candidate will develop machine learning-based methods for fact verification, casting it as a question answering task. Along the way, the candidate will investigate neural architectures for machine reading comprehension (Liu et al., 2019; Qiu et al., 2019) as well as methods for information retrieval (for selecting relevant and reliable information sources) and natural language generation (for producing long-form answers to questions). Importantly, the candidate will propose approaches for making any generated answers explainable in the form of human-readable natural language text, enabling end-users to better understand and interpret the answers generated by the fact verification model (Kotonya and Toni, 2020).
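The retrieval-then-answer pipeline described above can be sketched in miniature as follows. This is a hedged illustration only: the evidence corpus and question are hypothetical, and TF-IDF cosine similarity stands in for the neural retriever and reader/generator the project would actually investigate.

```python
# Minimal sketch of a QA-style fact verification pipeline:
# 1) retrieve the most relevant evidence sentence for a question,
# 2) return it as a long-form, human-readable answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical evidence corpus (invented for illustration).
evidence = [
    "There is no evidence linking worse COVID-19 outcomes to ibuprofen.",
    "Ibuprofen is a common non-steroidal anti-inflammatory drug.",
    "COVID-19 vaccines underwent large-scale clinical trials.",
]

def answer(question: str, corpus: list) -> str:
    """Return the evidence sentence most similar to the question,
    serving here as a crude long-form answer."""
    vec = TfidfVectorizer().fit(corpus + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(corpus))[0]
    return corpus[sims.argmax()]

print(answer("Does ibuprofen make COVID-19 worse?", evidence))
```

A real system would generate (rather than extract) the answer and attach an explanation grounded in the retrieved sources, which is precisely the explainability challenge this project targets.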


Funding Notes

Candidates who have been offered a place for PhD study in the Department of Computer Science may be considered for funding by the Department. Further details on funding can be found at: https://www.cs.manchester.ac.uk/study/postgraduate-research/funding/.

References

Thorne, James, and Andreas Vlachos. "Automated Fact Checking: Task Formulations, Methods and Future Directions." Proceedings of the 27th International Conference on Computational Linguistics. 2018 (https://www.aclweb.org/anthology/C18-1283.pdf).
Zhang, Wenxuan, et al. "AnswerFact: Fact Checking in Product Question Answering." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020 (https://www.aclweb.org/anthology/2020.emnlp-main.188.pdf).
Jobanputra, Mayank. "Unsupervised Question Answering for Fact-Checking." Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER). 2019 (https://www.aclweb.org/anthology/D19-6609/).
Liu, Shanshan, et al. "Neural machine reading comprehension: Methods and trends." Applied Sciences 9.18 (2019): 3698 (https://www.mdpi.com/2076-3417/9/18/3698).
Qiu, Boyu, et al. "A survey on neural machine reading comprehension." arXiv preprint arXiv:1906.03824 (2019) (https://arxiv.org/abs/1906.03824).
Kotonya, Neema, and Francesca Toni. "Explainable Automated Fact-Checking for Public Health Claims." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020 (https://www.aclweb.org/anthology/2020.emnlp-main.623.pdf).

How good is research at The University of Manchester in Computer Science and Informatics?


Research output data provided by the Research Excellence Framework (REF)
