Residential College | false |
Status | Published
Mixed Deep Reinforcement Learning Considering Discrete-continuous Hybrid Action Space for Smart Home Energy Management
Huang, Chao (1,2); Zhang, Hongcai (1); Wang, Long (2); Luo, Xiong (2); Song, Yonghua (1)
2022-05
Source Publication | Journal of Modern Power Systems and Clean Energy |
ISSN | 2196-5625 |
Volume | 10
Issue | 3
Pages | 743-754
Abstract | This paper develops deep reinforcement learning (DRL) algorithms for optimizing the operation of a home energy system that consists of photovoltaic (PV) panels, a battery energy storage system, and household appliances. Model-free DRL algorithms can efficiently handle the difficulty of energy system modeling and the uncertainty of PV generation. However, the discrete-continuous hybrid action space of the considered home energy system challenges existing DRL algorithms designed for either discrete actions or continuous actions. Thus, a mixed deep reinforcement learning (MDRL) algorithm is proposed, which integrates the deep Q-learning (DQL) algorithm and the deep deterministic policy gradient (DDPG) algorithm. The DQL algorithm deals with discrete actions, while the DDPG algorithm handles continuous actions. The MDRL algorithm learns the optimal strategy by trial-and-error interactions with the environment. However, unsafe actions, which violate system constraints, can incur high costs. To handle this problem, a safe-MDRL algorithm is further proposed. Simulation studies demonstrate that the proposed MDRL algorithm can efficiently handle the challenge from the discrete-continuous hybrid action space for home energy management. Compared with benchmark algorithms on the test dataset, the proposed MDRL algorithm reduces the operation cost while maintaining human thermal comfort. Moreover, the safe-MDRL algorithm greatly reduces the loss of thermal comfort in the learning stage compared with the proposed MDRL algorithm.
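To make the hybrid-action idea in the abstract concrete, below is a minimal sketch (not the authors' implementation) of how a DQL-style head and a DDPG-style actor can jointly select one discrete-continuous action per time step. The state layout, the action bounds, and the stubbed linear "networks" (`W_q`, `W_a`) are illustrative assumptions.

```python
# Minimal sketch of hybrid discrete-continuous action selection, in the
# spirit of the paper's MDRL idea: a DQL-style head scores discrete actions
# (e.g., appliance on/off combinations) while a DDPG-style actor outputs a
# continuous action (e.g., battery charging power). Trained networks are
# stubbed with random linear maps; all dimensions and bounds are assumed.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 6        # e.g., PV output, price, indoor temperature, battery SoC, ...
N_DISCRETE = 4       # e.g., on/off combinations of two controllable appliances
CONT_LOW, CONT_HIGH = -5.0, 5.0   # assumed battery power bounds (kW)

# Stand-ins for trained networks: one linear map per component.
W_q = rng.normal(size=(N_DISCRETE, STATE_DIM))   # DQL head: Q(s, k)
W_a = rng.normal(size=(1, STATE_DIM))            # DDPG actor: mu(s)

def select_action(state, eps=0.1, noise_std=0.2):
    """Return a hybrid (discrete, continuous) action for one time step."""
    # Discrete part: epsilon-greedy over Q-values (DQL-style exploration).
    q_values = W_q @ state
    k = int(rng.integers(N_DISCRETE)) if rng.random() < eps else int(np.argmax(q_values))
    # Continuous part: deterministic actor output plus Gaussian exploration
    # noise (DDPG-style), clipped to the feasible power range.
    u = (W_a @ state)[0] + rng.normal(0.0, noise_std)
    u = float(np.clip(u, CONT_LOW, CONT_HIGH))
    return k, u

state = rng.normal(size=STATE_DIM)
print(select_action(state))   # e.g., (2, -1.37)
```

The safe-MDRL variant described in the abstract additionally screens out actions that violate system constraints during learning; the clipping above only illustrates bounding the continuous action, not the paper's safety mechanism.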
Keyword | Demand Response; Deep Reinforcement Learning; Discrete-continuous Action Space; Home Energy Management; Safe Reinforcement Learning
DOI | 10.35833/MPCE.2021.000394 |
Indexed By | SCIE |
Language | English
WOS Research Area | Engineering |
WOS Subject | Engineering, Electrical & Electronic |
WOS ID | WOS:000797467700020 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 445 HOES LANE, PISCATAWAY, NJ 08855-4141 |
Scopus ID | 2-s2.0-85127082700 |
Document Type | Journal article |
Collection | THE STATE KEY LABORATORY OF INTERNET OF THINGS FOR SMART CITY (UNIVERSITY OF MACAU) |
Corresponding Author | Zhang, Hongcai |
Affiliation | 1. State Key Laboratory of Internet of Things for Smart City and Department of Electrical and Computer Engineering, University of Macau, Macao 999078; 2. School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
First Author Affiliation | University of Macau
Corresponding Author Affiliation | University of Macau
Recommended Citation GB/T 7714 | Huang, Chao, Zhang, Hongcai, Wang, Long, et al. Mixed Deep Reinforcement Learning Considering Discrete-continuous Hybrid Action Space for Smart Home Energy Management[J]. Journal of Modern Power Systems and Clean Energy, 2022, 10(3): 743-754.
APA | Huang, Chao., Zhang, Hongcai., Wang, Long., Luo, Xiong., & Song, Yonghua. (2022). Mixed Deep Reinforcement Learning Considering Discrete-continuous Hybrid Action Space for Smart Home Energy Management. Journal of Modern Power Systems and Clean Energy, 10(3), 743-754.
MLA | Huang, Chao, et al. "Mixed Deep Reinforcement Learning Considering Discrete-continuous Hybrid Action Space for Smart Home Energy Management." Journal of Modern Power Systems and Clean Energy 10.3 (2022): 743-754.
Files in This Item:
File Name/Size | Publications | Version | Access | License
Mixed_Deep_Reinforce(727KB) | Journal article | Author's accepted manuscript | Open access | CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.