
Multi-agent Reinforcement Learning Control for Energy Storage and Renewable Energy Integration in Smart Grids with Economic Dispatch. Mathematics PhD

  • Full or part time
  • Application Deadline
    Monday, May 13, 2019
  • Competition Funded PhD Project (European/UK Students Only)

Project Description

The University of Exeter EPSRC DTP (Engineering and Physical Sciences Research Council Doctoral Training Partnership) is offering up to 4 fully funded doctoral studentships for 2019/20 entry. Students will be given sector-leading training and development with outstanding facilities and resources. Studentships will be awarded to outstanding applicants; the distribution will be overseen by the University’s EPSRC Strategy Group in partnership with the Doctoral College.

Supervisors:
Dr Saptarshi Das, Department of Mathematics, College of Engineering, Mathematics and Physical Sciences
Dr Mohammad Abusara, Department of Renewable Energy, College of Engineering, Mathematics and Physical Sciences

Project description:
According to the IEEE 2050 grid vision [1], smart grids and future power networks will involve complex cyber-physical interactions between multiple distributed energy resources (DERs), e.g. marine, solar, wind and bioenergy, alongside conventional synchronous generators, together with efficient management of energy storage elements such as electrochemical batteries, plug-in hybrid electric vehicles (PHEVs), super-capacitors, fuel cells and hydrogen storage. Integration of DERs and intelligent management of storage and loads are crucial for the efficient operation of smart grids, enabling interoperability between power system dynamics and information and communication technologies (ICT). In such complex smart grids, different functions, e.g. wide-area monitoring, state estimation, control and optimization, are expected to become very complicated due to interactions between system dynamics and communication, information and economic aspects. Recently, multi-agent and reinforcement learning (RL) based artificial intelligence and machine learning algorithms have shown huge potential for solving such complex planning and conflicting optimization problems by using various mathematical models and simulations for the agents and the environment.
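
As an illustration only (not part of the project description), the multi-agent setting above can be thought of as several agents, e.g. one per DER or storage unit, acting on a shared grid environment and receiving individual rewards. The Python snippet below is a minimal, hypothetical stand-in for that interaction loop; the GridEnv class and the random placeholder policies are invented purely for illustration.

import random

class GridEnv:
    """Toy stand-in for a smart-grid simulator with several controllable units."""
    def __init__(self, n_agents):
        self.n_agents = n_agents

    def reset(self):
        return [0.0] * self.n_agents          # initial local observations

    def step(self, actions):
        # A real study would run a power-flow / dynamic simulation here;
        # this toy version just penalises large control actions.
        obs = [random.uniform(-1, 1) for _ in range(self.n_agents)]
        rewards = [-abs(a) for a in actions]
        return obs, rewards

env = GridEnv(n_agents=3)
obs = env.reset()
for t in range(10):
    actions = [random.uniform(-1, 1) for _ in obs]   # placeholder per-agent policies
    obs, rewards = env.step(actions)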

This project will explore a combination of multi-agent learning and deep reinforcement learning methods for designing robust control systems for future smart grids. This will involve resolving conflicting actions taken by multiple agents while optimizing local or global rewards, and balancing the exploration vs. exploitation trade-off over the landscapes of many objective functions with constraints. So far, multi-agent and reinforcement learning control has mostly assumed accurate environment and agent models; this project will extend it to consider different forms of uncertainty (in model parameters, dynamical system norms, stochastic dynamics and forcing, noise, unmodelled dynamics, fault tolerance, changes in network topology etc.) and will advance the state of the art in robust reinforcement learning control [2]. Existing robust control methods, primarily developed for linear systems, will be revisited in the context of agent-based learning and control for complex cyber-physical systems. Besides tackling voltage/frequency control via reactive/active power management, the control and optimization tasks become even more complicated when the economic aspect of distributed generation and energy storage is considered, in particular PHEVs acting as both storage and distributed generators. Resolving such complex optimization-based control tasks will require fundamental breakthroughs in data-based learning and control, which is what this project aims to deliver. The project will fuse concepts of grid frequency/voltage control [3]-[7] with advanced RL methods [8], [9], and develop control strategies to tackle situations such as sudden islanding [10], [11] and faults. Several energy trading algorithms have recently been proposed using game theory [12], [13] for maximizing utility as a trade-off between trading revenue and associated cost. Economic dispatch problems with emission, energy storage and network constraints are solved in [14], [15].
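
To make the exploration vs. exploitation trade-off concrete, the following is a minimal, hypothetical sketch of tabular Q-learning with an epsilon-greedy policy on a toy frequency-deviation task. The discretisation, dynamics and hyper-parameters are invented for illustration and are not drawn from the project or the cited references.

import random

n_states, n_actions = 11, 3          # frequency-deviation bins; actions = {lower, hold, raise}
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(state, action):
    """Toy dynamics: the action pushes the deviation down, holds it, or pushes it up."""
    drift = action - 1 + random.choice([-1, 0, 1])
    next_state = min(max(state + drift, 0), n_states - 1)
    reward = -abs(next_state - n_states // 2)        # penalise deviation from nominal frequency
    return next_state, reward

state = n_states // 2
for t in range(10000):
    if random.random() < epsilon:                    # explore
        action = random.randrange(n_actions)
    else:                                            # exploit the current value estimates
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state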

The project will build on these existing concepts to design efficient economic operation of smart grids, with a focus on PHEVs, battery storage and the integration of DERs in the form of multiple renewable energy systems, using a combination of methods from robust control theory, reinforcement learning, game theory and multi-agent systems. The statistical learning and control algorithms will be translated into experimental case studies on existing microgrids with battery storage.
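
For readers unfamiliar with the economic dispatch problem referenced above, the sketch below shows its classical form: choosing generator set-points that minimize total quadratic fuel cost subject to a power-balance constraint and capacity limits. The cost coefficients, demand and limits are invented for illustration and do not come from the project.

import numpy as np
from scipy.optimize import minimize

# cost_i(P) = a_i * P^2 + b_i * P + c_i for three hypothetical generators
a = np.array([0.004, 0.006, 0.009])
b = np.array([5.3, 5.5, 5.8])
c = np.array([500.0, 400.0, 200.0])
demand = 800.0                                    # MW (illustrative)
bounds = [(200, 450), (150, 350), (100, 225)]     # MW limits per generator

def total_cost(P):
    return float(np.sum(a * P**2 + b * P + c))

# equality constraint: total generation must meet demand
constraints = {"type": "eq", "fun": lambda P: np.sum(P) - demand}
P0 = np.array([300.0, 300.0, 200.0])
result = minimize(total_cost, P0, bounds=bounds, constraints=constraints, method="SLSQP")
print(result.x, total_cost(result.x))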

Funding Notes

For successful eligible applicants the studentship comprises:

  • An index-linked stipend for up to 3.5 years full time (currently £14,777 per annum for 2018/19), pro-rata for part-time students
  • Payment of University tuition fees (UK/EU)
  • Research Training Support Grant (RTSG) of £5,000 over 3.5 years, pro-rata for part-time students

References

[1] G. Simard, “IEEE Grid Vision 2050”, 2013, pp. 1-93, IEEE.
[2] R.M. Kretchmar, P.M. Young, C.W. Anderson, D.C. Hittle, M.L. Anderson, and C.C. Delnero, “Robust reinforcement learning control with static and dynamic stability”. International Journal of Robust and Nonlinear Control, vol. 11, no. 15, pp.1469-1500, 2001.
[3] I. Pan and S. Das, “Fractional order AGC for distributed energy resources using robust optimization”, IEEE Transactions on Smart Grid, vol. 7, no. 5, pp. 2175-2186, Sep. 2016.
[4] I. Pan and S. Das, “Kriging based surrogate modeling for fractional order control of microgrids”, IEEE Transactions on Smart Grid, vol. 6, no. 1, pp. 36-44, Jan. 2015.
[5] I. Pan and S. Das, “Fractional order fuzzy control of hybrid power system with renewable generation using chaotic PSO”, ISA Transactions, vol. 62, pp. 19-29, May 2016.
[6] I. Pan and S. Das, “Fractional order load-frequency control of interconnected power systems using chaotic multi-objective optimization”, Applied Soft Computing, vol. 29, pp. 328-344, Apr. 2015.
[7] S. Das and I. Pan, “On the mixed H2/H∞ loop shaping trade-offs in fractional order control of the AVR system”, IEEE Transactions on Industrial Informatics, vol. 10, no. 4, pp. 1982-1991, Nov. 2014.
[8] E. Anderlini, D.I. Forehand, P. Stansell, Q. Xiao, and M. Abusara, “Control of a point absorber using reinforcement learning”, IEEE Transactions on Sustainable Energy, vol. 7, no. 4, pp. 1681-1690, 2016.
[9] E. Anderlini, D.I. Forehand, E. Bannon, and M. Abusara, “Control of a realistic wave energy converter model using least-squares policy iteration”. IEEE Transactions on Sustainable Energy, vol. 8, no. 4, pp. 1618-1628, 2017.
[10] R. AlBadwawi, W. Issa, M. Abusara, and T. Mallick, “Supervisory control for power management of an islanded AC microgrid using frequency signalling-based fuzzy logic controller”, IEEE Transactions on Sustainable Energy, 2018.
[11] W.R. Issa, A.H. El Khateb, M. Abusara, T.K. Mallick, “Control strategy for uninterrupted microgrid mode transfer during unintentional islanding scenarios”, IEEE Transactions on Industrial Electronics, vol. 65, no. 6, pp. 4831-4839, 2018.
[12] Y. Wang, W. Saad, Z. Han, H.V. Poor, and T. Başar, “A game-theoretic approach to energy trading in the smart grid”, IEEE Transactions on Smart Grid, vol. 5, no. 3, pp.1439-1450, 2014.
For references [13]-[15] see: http://www.exeter.ac.uk/studying/funding/award/?id=3386
