Research on Collaborative Control Strategy of Virtual Power Plant Based on Deep Reinforcement Learning Framework
DOI: https://doi.org/10.13052/dgaej2156-3306.4034

Keywords: Deep reinforcement learning, DQN method, multi-objective decision-making framework, virtual power plant, collaborative scheduling optimization

Abstract
To address the limitations of the traditional Deep Q-Network (DQN) in collaborative dispatch of virtual power plants (VPPs), such as a large state space, low computational efficiency, and poor generalization, an improved DQN (IDQN) is proposed. First, the various factors affecting the VPP are considered and its dispatch process is optimized to enhance prediction accuracy. Second, a new reward mechanism is designed that guides agents to balance long-term stability with rational resource allocation, thereby improving energy utilization. Then, the IDQN is employed to optimize VPP cooperative scheduling, improving scheduling efficiency and controlling its cost. Finally, based on the improved DQN, a multi-objective decision-making framework is proposed for collaborative optimization scheduling of the VPP to improve system stability. Compared with the classical Q-learning algorithm, the proposed IDQN achieves a lower rejection rate, lower cost, and higher scheduling efficiency in VPP collaborative scheduling.
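The abstract's reward mechanism, which trades off dispatch cost against long-term stability and energy utilization, can be illustrated with a minimal sketch. The weights, term names, and the simple linear combination below are assumptions for illustration only, not the authors' formulation; the epsilon-greedy selector is the standard exploration rule used in DQN-style agents.

```python
import random

def composite_reward(cost, stability_dev, utilization,
                     w_cost=1.0, w_stab=0.5, w_util=0.3):
    """Hypothetical composite reward: higher is better, i.e. cheaper
    dispatch, smaller stability deviation, higher energy utilization.
    All weights are illustrative assumptions."""
    return -w_cost * cost - w_stab * stability_dev + w_util * utilization

def epsilon_greedy(q_values, epsilon, rng):
    """Standard DQN-style action selection over a discrete dispatch
    action set: explore with probability epsilon, else act greedily."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

if __name__ == "__main__":
    rng = random.Random(0)
    # Example dispatch step: cost 120, 4-unit stability deviation,
    # 80% energy utilization.
    r = composite_reward(cost=120.0, stability_dev=4.0, utilization=80.0)
    print(round(r, 1))  # -> -98.0
    # With epsilon = 0 the agent always picks the highest Q-value action.
    print(epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.0, rng=rng))  # -> 1
```

In an actual IDQN agent this reward would be fed into the temporal-difference update of the Q-network, so that the agent learns to weigh immediate dispatch cost against the longer-term stability and utilization terms.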