The increasing adoption of Artificial Intelligence (AI) technologies that could impact humanity raises growing concerns among the general public, industry and regulatory bodies. Trust and transparency in automated decision support have yet to be measured and validated, so that technological progress can be shown to be both fair and effective.
The University of Bradford’s Artificial Intelligence Research Group explores solutions to characterise, evaluate and test responsible AI, building on our current expertise in machine learning, explainable AI, federated learning, decentralised data and model governance, and their applications. The Trust, Reputation and Transparency measures will encompass Explainability, Efficiency and Ethics, among other concepts. The project also explores their validation through a number of case studies. These will support the understanding and development of decentralised AI systems for the benefit of users’ wellbeing and the wider community.
Research students joining our dynamic and motivated research team receive training and contribute to multidisciplinary research on computational models and analytics, with applications in Responsible AI.
As a PhD student you will work as part of the AIRe team, which includes PhD and taught students, interns, postdoctoral researchers and academic staff. You will have the opportunity to present your work at conferences and research events, publish contributions in scientific journals, and participate in academic and industry activities. The University of Bradford offers a comprehensive doctoral training programme.