About the Project
With the advanced capabilities of modern computational resources, researchers have developed complex algorithms that produce unprecedented results across a wide range of tasks. Depending on the problem, some algorithms consume enormous resources for days or months to complete their task or to achieve competitive performance. For example, a widely cited study on the energy and policy considerations of deep learning in natural language processing reported that training one large model on 8 NVIDIA P100 GPUs required 274,120 compute hours and consumed about 656,347 kWh of energy, equivalent to the annual consumption of 226 domestic electricity meters in the UK. This has drawn attention to the tremendous energy required to run these approaches and to their negative environmental impact through the resulting greenhouse gas emissions. The issue features prominently in international events and assemblies (most recently the 26th UN Climate Change Conference of the Parties (COP26) in Glasgow) and in international agreements such as the Paris Agreement, which targets bringing "energy-related carbon dioxide emissions to net zero by 2050".
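To illustrate how headline figures like these are derived, the short Python sketch below converts GPU-hours into energy and emissions estimates. All constants here (per-GPU power draw, data-centre PUE, grid carbon intensity) are illustrative assumptions rather than figures from the study quoted above.

# Back-of-the-envelope training energy estimate (all constants are assumptions).
GPU_HOURS = 274_120          # total GPU-hours of the training run
GPU_POWER_KW = 0.25          # assumed average draw per GPU (P100 TDP is 250 W)
PUE = 1.58                   # assumed data-centre power usage effectiveness
GRID_KG_CO2_PER_KWH = 0.233  # assumed grid carbon intensity (kg CO2e per kWh)

energy_kwh = GPU_HOURS * GPU_POWER_KW * PUE
emissions_t = energy_kwh * GRID_KG_CO2_PER_KWH / 1000  # tonnes CO2e

print(f"~{energy_kwh:,.0f} kWh, ~{emissions_t:,.0f} t CO2e")
# Note: this GPU-only estimate comes out lower than the measured figure quoted
# above, which also covers CPU, memory and other whole-system overheads.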
In light of this, the study of the environmental cost of Artificial Intelligence approaches, and of machine learning algorithms in particular, has become an active line of research and has attracted the attention of many researchers and practitioners in the community.
The growing complexity of machine learning algorithms raises further concerns related to their opaque nature and lack of transparency. Many AI approaches, for instance those that fall under the deep learning umbrella, can achieve impressive results in terms of performance (e.g. classification accuracy); however, this comes at the cost of complex internal interactions that cannot be directly understood. This opacity has become a growing concern and has triggered the need to clearly understand the automated decisions made by these methods, especially when they are intended for use in critical or highly sensitive domains such as health (emergency triage), transportation (automated vehicles), finance (credit scoring), human resources (hiring), justice (criminal justice decisions), and public safety (terrorist detection). The question that arises is: to what degree can we trust (or fully rely on) AI decisions that may be biased or erroneous, and complex or opaque (difficult to comprehend)? Justifying algorithmic outputs, particularly when something goes wrong, is one of many reasons why we need to open the black box of AI. The ability to clearly understand the outputs and the decision-making process of an AI system can help improve these methods towards better, more ethical outcomes.
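As a concrete taste of the techniques studied in this area, the sketch below implements permutation feature importance, one simple model-agnostic way to probe a black-box classifier: shuffling one feature at a time and measuring the drop in accuracy reveals how strongly the model's decisions depend on that feature. The model and data are placeholders, and this is only one of many explanation methods; it is not the specific approach proposed for this project.

import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean accuracy drop when each feature column is shuffled in turn."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model.predict(X) == y)   # accuracy on unmodified data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])           # destroy feature j's information
            drops.append(baseline - np.mean(model.predict(X_perm) == y))
        importances[j] = np.mean(drops)         # larger drop = heavier reliance
    return importances

This works with any object exposing a predict(X) method (e.g. a scikit-learn classifier). Probes of this kind only approximate an explanation; producing explanations that are faithful to a model's actual decision process is one of the open problems motivating research in explainable AI.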
Recent research at the University of Aberdeen focuses on aspects of Explainable, Ethical and Green AI in the domains of textual analysis and automated topic detection [1,2]; automated data mining methods [3,4]; and autonomous learning in robotics [5,6].
This PhD will look to further the research in this exciting and increasingly relevant area.
Selection will be made on the basis of academic merit. The successful candidate should have, or expect to obtain, a UK Honours Degree at 2.1 or above in Computing Science, Engineering or related disciplines.
APPLICATION PROCEDURE:
Formal applications can be completed online: https://www.abdn.ac.uk/pgap/login.php
• Apply for Degree of Doctor of Philosophy in Engineering
• State name of the lead supervisor as the Name of Proposed Supervisor
• State ‘Self-funded’ as Intended Source of Funding
• State the exact project title on the application form
When applying please ensure all required documents are attached:
• All degree certificates and transcripts (Undergraduate AND Postgraduate MSc, officially translated into English where necessary)
• Detailed CV, Personal Statement/Motivation Letter and Intended source of funding
References
[2] Predicting Supervised Machine Learning Performances for Sentiment Analysis Using Contextual-Based Approaches. Abdul Aziz, A. & Starkey, A. J., Jan 2020. In: IEEE Access, 8, pp. 17722-17733.
[3] Review of Classification Algorithms with Changing Inter-Class Distances. Akpan, U. I. & Starkey, A. J., Jun 2021. In: Machine Learning with Applications, 4, 100031.
[4] Automated Feature Identification and Classification Using Automated Feature Weighted Self Organizing Map (FWSOM). Starkey, A. J., Ahmad, A. U. & Hamdoun, H., Nov 2017. In: IOP Conference Series: Materials Science and Engineering, 261(1), pp. 1-7.
[5] An Unsupervised Autonomous Learning Framework for Goal-directed Behaviours in Dynamic Contexts. Ezenkwu, C. P. & Starkey, A. J., Jun 2022. In: Advances in Computational Intelligence, 2, 14.
[6] Unsupervised Temporospatial Neural Architecture for Sensorimotor Map Learning. Ezenkwu, C. P. & Starkey, A. J., Aug 2019. In: IEEE Transactions on Cognitive and Developmental Systems.