This project addresses the issue of responsibility and accountability across the Artificial Intelligence (AI) lifecycle from a business perspective, as part of a practical solution towards trustworthy AI. The aim of this project is to create a reactive, co-produced framework which will enable businesses to identify, assign and fulfil legal, ethical, economic, social, moral and environmental responsibilities and accountability from ideation to product/service development and deployment in the context of data- and AI-driven systems. In order to do this, it is important to first establish to what extent businesses and citizens understand the responsibility dimensions and legal frameworks involved in designing trustworthy artificial intelligence-based products/services. Once the framework is designed, it will be piloted within businesses and evaluated to establish whether public trust in AI systems is improved.
This project requires not only excellent research skills, but also the ability to communicate well and work alongside all stakeholders, especially those whose opinions and concerns in the development of data and AI products and services are often not heard.
AIMS AND OBJECTIVES
The aim of this project is to create a reactive, co-produced framework which will enable businesses to identify, assign and fulfil legal, ethical, economic, social, moral and environmental responsibilities and accountability from ideation to product/service development and deployment in the context of data- and AI-driven systems.
Objectives:
- Knowledge acquisition on the responsibility/accountability landscape, achieved through an extensive literature review and SME/public engagement and consultation. This will cover: existing and emerging legal frameworks, to align the proposed framework with existing best practice and with initiatives in development nationally and globally; standards, guidelines, tools, and methods, to identify emerging responsibility and accountability frameworks; and social, moral, and environmental responsibility in the context of AI, to investigate social attitudes to these dimensions of responsibility and accountability.
- Evaluation of the knowledge/education skills gap in the field of ethical AI governance. This will involve the design of an evaluation comprising both a business study and a public study.
- Analysis, design, and implementation of a new prototype responsibility/accountability framework, with ongoing refinement based upon feedback and evaluation.
- Design of a framework evaluation methodology. The evaluation methodology will focus on end-users' perception of the usability and effectiveness of the framework, and on how adoption of the framework affects stakeholder trust. It will include a) surveys of and interviews with businesses, and b) round-table discussions with stakeholders.
- Design and development of an online toolkit for framework implementation (e.g. downloadable templates and checklists; a framework handbook and audit guide; training videos; and guidance on reporting and compliance monitoring).
- Evaluation of the dimensions of the framework by all stakeholders.
For more information visit https://www.mmu.ac.uk/research/research-study/scholarships#ai-71449-5