With the advanced capabilities of modern computational resources, researchers have developed complex algorithms that produce unprecedented results across a wide range of tasks. Depending on the problem, some algorithms consume enormous resources for days or months to complete their task or to achieve competitive performance. For example, training a single deep learning model on 8 NVIDIA P100 GPUs has been reported to require 274,120 compute hours and to consume about 656,347 kWh of energy, equivalent to the annual consumption of 226 domestic electricity meters in the UK. This has drawn attention to the tremendous energy required to run these approaches and to their negative environmental impact through greenhouse gas emissions, reflected in international events and assemblies (most recently the 26th UN Climate Change Conference of the Parties (COP26) in Glasgow) and in international agreements such as the Paris Agreement, which aims to bring “energy-related carbon dioxide emissions to net zero by 2050”.
In light of this, the study of the environmental cost of Artificial Intelligence approaches, and of machine learning algorithms in particular, has become an active line of research, attracting the attention of many researchers and practitioners in the community.
The growing complexity of machine learning algorithms raises further concerns related to their opaque nature and lack of transparency. Many AI approaches, for instance those that fall under the deep learning umbrella, can achieve impressive results in terms of performance (e.g. classification accuracy), but this comes at the cost of complex internal interactions that cannot be directly understood. This opacity has become a growing concern and has triggered the need to clearly understand the automated decisions made by these methods, especially when they are intended for use in critical or highly sensitive domains such as health (emergency triage), transportation (automated vehicles), finance (credit scoring), human resources (hiring), justice (criminal justice), and public safety (terrorist detection). The question that arises is: to what degree can we trust (or fully rely on) AI decisions that may be biased or erroneous, and complex or opaque (difficult to comprehend)? Justifying algorithmic outputs, particularly when something goes wrong, is one of many reasons why we need to open the black box of AI. The ability to clearly understand the outputs and decision-making processes of AI can help to improve these methods towards better, more ethical outcomes.
Recent research at the University of Aberdeen focuses on aspects of Explainable, Ethical and Green AI in the domains of: textual analysis and automated topic detection [1,2]; automated data mining methods [3,4]; and autonomous learning in robotics [5,6].
This PhD will seek to further research in this exciting and increasingly relevant area.
Selection will be made on the basis of academic merit. The successful candidate should have, or expect to obtain, a UK Honours Degree at 2.1 or above in Computing Science, Engineering or related disciplines.
Formal applications can be completed online: https://www.abdn.ac.uk/pgap/login.php
• Apply for Degree of Doctor of Philosophy in Engineering
• State name of the lead supervisor as the Name of Proposed Supervisor
• State ‘Self-funded’ as Intended Source of Funding
• State the exact project title on the application form
When applying please ensure all required documents are attached:
• All degree certificates and transcripts (Undergraduate AND Postgraduate MSc), officially translated into English where necessary
• Detailed CV, personal statement/motivation letter, and intended source of funding