UM

Browse/Search Results: 1-10 of 23

Dynamic gNodeB Sleep Control for Energy-Conserving Radio Access Network Journal article
Shen, Pengfei, Shao, Yulin, Cao, Qi, Lu, Lu. Dynamic gNodeB Sleep Control for Energy-Conserving Radio Access Network[J]. IEEE Transactions on Cognitive Communications and Networking, 2024, 10(4), 1371-1385.
Authors:  Shen, Pengfei;  Shao, Yulin;  Cao, Qi;  Lu, Lu
TC[WOS]:0 TC[Scopus]:1  IF:7.4/6.9 | Submit date:2024/05/16
Keywords: Base Station Sleep Control; NG-RAN; Markov Decision Process; Greedy Policy; Index Policy
Freshness-Aware Resource Allocation for Non-Orthogonal Wireless-Powered IoT Networks Conference paper
Chen, Yunfeng, Liu, Yong, Xiao, Jinhao, Wu, Qunying, Zhang, Han, Hou, Fen. Freshness-Aware Resource Allocation for Non-Orthogonal Wireless-Powered IoT Networks[C]:IEEE, 2024.
Authors:  Chen, Yunfeng;  Liu, Yong;  Xiao, Jinhao;  Wu, Qunying;  Zhang, Han; et al.
TC[WOS]:0 TC[Scopus]:0 | Submit date:2024/08/05
Keywords: Age of Information; Markov Decision Process; Non-Orthogonal Multiple Access; Wireless-Powered IoT Network
Utilizing Deep Reinforcement Learning for High-Voltage Distribution Network Expansion Planning Conference paper
Ou, Zhongxi, Zhang, Liang, Zhao, Xiaoyan, Lan, Wei, Liu, Dundun, Liu, Weifeng. Utilizing Deep Reinforcement Learning for High-Voltage Distribution Network Expansion Planning[C]:Institute of Electrical and Electronics Engineers Inc., 2024, 725-730.
Authors:  Ou, Zhongxi;  Zhang, Liang;  Zhao, Xiaoyan;  Lan, Wei;  Liu, Dundun; et al.
TC[Scopus]:0 | Submit date:2024/09/03
Keywords: Advantage Actor-Critic; Deep Reinforcement Learning; Distribution Network Expansion; Markov Decision Process
BLER Analysis and Optimal Power Allocation of HARQ-IR for Mission-Critical IoT Communications Journal article
He, Fuchao, Shi, Zheng, Zhou, Binggui, Yang, Guanghua, Li, Xiaofan, Ye, Xinrong, Ma, Shaodan. BLER Analysis and Optimal Power Allocation of HARQ-IR for Mission-Critical IoT Communications[J]. IEEE Internet of Things Journal, 2024.
Authors:  He, Fuchao;  Shi, Zheng;  Zhou, Binggui;  Yang, Guanghua;  Li, Xiaofan; et al.
TC[WOS]:0 TC[Scopus]:1  IF:8.2/9.0 | Submit date:2024/09/03
Keywords: Block Error Rate; Decoding; Deep Reinforcement Learning; Fading Channels; HARQ-IR; Internet of Things; Markov Decision Process; Reliability Theory; Resource Management; Short Packet Communications; Signal-to-Noise Ratio; Throughput
A probability approximation framework: Markov process approach Journal article
Chen, Peng, Shao, Qi Man, Xu, Lihu. A probability approximation framework: Markov process approach[J]. Annals of Applied Probability, 2023, 33(2), 1619-1659.
Authors:  Chen, Peng;  Shao, Qi Man;  Xu, Lihu
TC[WOS]:1 TC[Scopus]:0  IF:1.4/1.9 | Submit date:2023/05/02
Keywords: Euler–Maruyama (EM) Discretization; Itô’s Formula; Markov Process; Normal Approximation; Online Stochastic Gradient Descent; Probability Approximation; Stable Process; Stochastic Differential Equation; Wasserstein-1 Distance
A Reinforcement Learning Based Coordinated but Differentiated Load Frequency Control Method With Heterogeneous Frequency Regulation Resources Journal article
Yuxin Ma, Zechun Hu, Yonghua Song. A Reinforcement Learning Based Coordinated but Differentiated Load Frequency Control Method With Heterogeneous Frequency Regulation Resources[J]. IEEE Transactions on Power Systems, 2023, 39(1), 2239-2250.
Authors:  Yuxin Ma;  Zechun Hu;  Yonghua Song
TC[WOS]:3 TC[Scopus]:4  IF:6.5/7.4 | Submit date:2023/08/03
Keywords: Delays; Energy Storage Systems; Frequency Control; Generators; Load Frequency Control; Mathematical Models; Partially Observable Markov Decision Process; Power System Stability; Proximal Policy Optimization; Regulation; Renewable Energy Resources; Renewable Energy Sources
A Deep Reinforcement Learning Recommender System With Multiple Policies for Recommendations Journal article
Mingsheng Fu, Liwei Huang, Ananya Rao, Athirai A. Irissappane, Jie Zhang, Hong Qu. A Deep Reinforcement Learning Recommender System With Multiple Policies for Recommendations[J]. IEEE Transactions on Industrial Informatics, 2022, 19(2), 2049-2061.
Authors:  Mingsheng Fu;  Liwei Huang;  Ananya Rao;  Athirai A. Irissappane;  Jie Zhang; et al.
TC[WOS]:6 TC[Scopus]:7  IF:11.7/11.4 | Submit date:2023/02/22
Keywords: Deep Reinforcement Learning (DRL); Multitask Markov Decision Process (MDP); Recommender System
Inventory Control Policy for Perishable Products under a Buyback Contract and Brownian Demands Journal article
Gong, Min, Lian, Zhaotong, Xiao, Hua. Inventory Control Policy for Perishable Products under a Buyback Contract and Brownian Demands[J]. International Journal of Production Economics, 2022, 251, 108522.
Authors:  Gong, Min;  Lian, Zhaotong;  Xiao, Hua
TC[WOS]:6 TC[Scopus]:7  IF:9.8/10.3 | Submit date:2022/05/31
Keywords: (s, S) Policy; Brownian Demand Model; Buyback Contract; Markov Renewal Process; Perishable Inventory
Reinforcement Learning Enabled Dynamic Resource Allocation in the Internet of Vehicles Journal article
Liang, Hongbin, Zhang, Xiaohui, Hong, Xintao, Zhang, Zongyuan, Li, Mushu, Hu, Guangdi, Hou, Fen. Reinforcement Learning Enabled Dynamic Resource Allocation in the Internet of Vehicles[J]. IEEE Transactions on Industrial Informatics, 2021, 17(7), 4957-4967.
Authors:  Liang, Hongbin;  Zhang, Xiaohui;  Hong, Xintao;  Zhang, Zongyuan;  Li, Mushu; et al.
TC[WOS]:29 TC[Scopus]:37  IF:11.7/11.4 | Submit date:2021/12/08
Keywords: Hierarchical Architecture; Internet of Vehicles (IoV); Reinforcement Learning; Resource Allocation; Semi-Markov Decision Process (SMDP)