The growing adoption of Artificial Intelligence (AI) technologies that could impact humanity raises concerns among the general public, industry and regulatory bodies. Issues of trust, fairness and transparency in data-driven and automated decision support are yet to be consistently measured and validated for effectiveness.
This research project explores solutions to define, characterise, evaluate and test AI algorithms, building on our current expertise in machine learning, explainable AI, data and model governance, and their applications. Quantitative and qualitative measures will encompass Explainability, Efficiency, Efficacy, Ethics, Fairness, Reliability, Robustness and Trustworthiness in data and machine learning resources, among other Responsible AI concepts. The project will also validate these measures through a number of case studies in health and social care, with a focus on post-pandemic (e.g. post-COVID) challenges. These will help in understanding and developing valuable, personalised and reliable AI systems for the benefit of individual users and wider communities.
As a PhD student you will join our dynamic and motivated Artificial Intelligence Research (AIRe) Group, comprising PhD students, taught students and interns, postdoctoral researchers, and academic staff. The multidisciplinary research theme will involve experts in AI and Machine Learning, Big Data Science and Technology, Mathematics, Statistics, Health and Social Sciences, Bioinformatics, and Economics to assess data and model resources.
You will have the opportunity to present your work at conferences and research events, publish contributions in scientific journals, and participate in academic and industry activities. The University of Bradford offers a comprehensive doctoral training environment.
How to apply
Formal applications can be submitted via the University of Bradford website. Applicants should register an account and choose 'Full-time PhD in Computer Science' as the course.