
  Explainable, Ethical, Green Artificial Intelligence


   School of Engineering

Supervisors: Dr A Starkey, Dr M N Campbell-Bannerman
Applications accepted all year round
Self-Funded PhD Students Only

About the Project

With the advanced capabilities of modern computational resources, researchers have been able to develop complex algorithms that produce unprecedented results across various tasks. Depending on the problem, some algorithms consume enormous resources for days or months to complete their task or to achieve competitive performance. For example, training one large deep learning model for natural language processing on 8 NVIDIA P100 GPUs has been estimated to require 274,120 compute hours and to consume about 656,347 kWh of energy, equivalent to the annual consumption of about 226 domestic electricity meters in the UK. This has drawn attention to the tremendous energy required to run these approaches and to their negative environmental impact through the greenhouse gas emissions they produce. The issue has featured in international events and assemblies (most recently the 26th UN Climate Change Conference of the Parties (COP26) in Glasgow) and in international agreements such as the Paris Agreement, which aims to bring “energy-related carbon dioxide emissions to net zero by 2050”.
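As a rough back-of-envelope check of that household equivalence, a minimal sketch in Python (the ~2,900 kWh average annual electricity consumption per UK domestic meter is an assumed figure, not stated in the advert):

```python
# Back-of-envelope check of the figures quoted above.
training_energy_kwh = 656_347   # energy quoted for training the model
avg_uk_meter_kwh = 2_900        # assumed average annual consumption per UK domestic meter

meters = training_energy_kwh / avg_uk_meter_kwh
print(f"Equivalent to roughly {meters:.0f} UK domestic meters for one year")
# -> Equivalent to roughly 226 UK domestic meters for one year
```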

In light of this, the study of the environmental cost of Artificial Intelligence approaches, and of machine learning algorithms in particular, has become an active line of research and has attracted the attention of many researchers and practitioners in the community.

The growing complexity of machine learning algorithms raises further concerns related to their opaque nature and lack of transparency. Many AI approaches, for instance those that fall under the deep learning umbrella, can achieve impressive results in terms of performance (e.g. classification accuracy); however, this comes at the cost of complex internal interactions that cannot be directly understood. This opacity has become a growing concern and has triggered the need to clearly understand the automated decisions these methods make, especially when they are intended for use in critical or highly sensitive domains such as health (emergency triage), transportation (automated vehicles), finance (credit scoring), human resources (hiring), justice (criminal justice), and public safety (terrorist detection). The question that arises is: to what degree can we trust (or fully rely on) AI decisions that may be biased or erroneous, and complex or opaque (difficult to comprehend)? Justifying algorithmic outputs, particularly when something goes wrong, is one of many reasons why we need to open the black box of AI. The ability to clearly understand the outputs and the decision-making process of an AI system can help to improve these methods towards better, more ethical outcomes.
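By way of illustration only (a generic sketch, not the project's own method), one simple way to begin opening the black box of a trained model is permutation feature importance, shown here with scikit-learn on a bundled dataset:

```python
# Minimal sketch: probe an opaque classifier by shuffling each input
# feature and measuring how much the test accuracy drops. Features
# whose shuffling hurts the most are the ones the model relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Such post-hoc attributions do not fully explain a deep model, but they illustrate the kind of transparency this research area seeks to build in from the start.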

Recent research at the University of Aberdeen focuses on aspects of Explainable, Ethical and Green AI in the domains of textual analysis and automated topic detection [1,2], automated data mining methods [3,4], and autonomous learning in robotics [5,6].

This PhD will look to further the research in this exciting and increasingly relevant area.

Selection will be made on the basis of academic merit. The successful candidate should have, or expect to obtain, a UK Honours Degree at 2.1 or above in Computing Science, Engineering or related disciplines.

APPLICATION PROCEDURE:

Formal applications can be completed online: https://www.abdn.ac.uk/pgap/login.php

• Apply for Degree of Doctor of Philosophy in Engineering

• State the name of the lead supervisor as the Name of Proposed Supervisor

• State ‘Self-funded’ as Intended Source of Funding

• State the exact project title on the application form

When applying please ensure all required documents are attached:

• All degree certificates and transcripts (Undergraduate AND Postgraduate MSc; officially translated into English where necessary)

• Detailed CV, Personal Statement/Motivation Letter and Intended source of funding


Funding Notes

This PhD project has no funding attached and is therefore available to students (UK/International) who are able to seek their own funding or sponsorship. Supervisors will not be able to respond to requests to source funding. Details of the cost of study can be found by visiting https://www.abdn.ac.uk/study/international/finance.php

References

[1] Al Sulaimani, S. H. & Starkey, A. J. (2021). Short Text Classification using Contextual Analysis. IEEE Access, 9, 149619-149629.
[2] Abdul Aziz, A. & Starkey, A. J. (2020). Predicting Supervised Machine Learning Performances for Sentiment Analysis Using Contextual-Based Approaches. IEEE Access, 8, 17722-17733.
[3] Akpan, U. I. & Starkey, A. J. (2021). Review of Classification Algorithms with Changing Inter-Class Distances. Machine Learning with Applications, 4, 100031.
[4] Starkey, A. J., Ahmad, A. U. & Hamdoun, H. (2017). Automated Feature Identification and Classification Using Automated Feature Weighted Self Organizing Map (FWSOM). IOP Conference Series: Materials Science and Engineering, 261(1), 1-7.
[5] Ezenkwu, C. P. & Starkey, A. J. (2022). An Unsupervised Autonomous Learning Framework for Goal-directed Behaviours in Dynamic Contexts. Advances in Computational Intelligence, 2, 14.
[6] Ezenkwu, C. P. & Starkey, A. J. (2019). Unsupervised Temporospatial Neural Architecture for Sensorimotor Map Learning. IEEE Transactions on Cognitive and Developmental Systems.
