
Browse/Search Results: 1-4 of 4

Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism Journal article
Xia, Yaqi, Zhang, Zheng, Yang, Donglin, Hu, Chuang, Zhou, Xiaobo, Chen, Hongyang, Sang, Qianlong, Cheng, Dazhao. Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(11), 1904-1919.
Authors:  Xia, Yaqi;  Zhang, Zheng;  Yang, Donglin;  Hu, Chuang;  Zhou, Xiaobo; et al.
TC[WOS]:0 TC[Scopus]:0 | IF:5.6/4.5 | Submit date:2024/08/05
Communication Balance  Distributed Training  Dynamic GNN  Pipeline Parallelism  Redundancy-free
MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism Journal article
Zhang, Zheng, Xia, Yaqi, Wang, Hulin, Yang, Donglin, Hu, Chuang, Zhou, Xiaobo, Cheng, Dazhao. MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(6), 843-856.
Authors:  Zhang, Zheng;  Xia, Yaqi;  Wang, Hulin;  Yang, Donglin;  Hu, Chuang; et al.
TC[WOS]:0 TC[Scopus]:1 | IF:5.6/4.5 | Submit date:2024/05/16
Distributed Training  Memory Redundancy  Mixture Of Experts  Performance Model  Pipeline Parallelism  
Locality-aware and Fault-tolerant Batching for Machine Learning on Distributed Datasets Journal article
Liu, Liu, Ding, Zhijun, Cheng, Dazhao, Zhou, Xiaobo. Locality-aware and Fault-tolerant Batching for Machine Learning on Distributed Datasets[J]. IEEE Transactions on Cloud Computing, 2024, 12(2), 370-387.
Authors:  Liu, Liu;  Ding, Zhijun;  Cheng, Dazhao;  Zhou, Xiaobo
TC[WOS]:0 TC[Scopus]:0 | IF:5.3/4.6 | Submit date:2024/05/16
Adaptation Models  Byzantine Gradient  Computational Modeling  Data Models  Distributed Databases  Distributed Dataset  Graphics Processing Units  Load Management  Machine Learning Training  Straggler  Training  
MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism Conference paper
Zhang, Zheng, Yang, Donglin, Xia, Yaqi, Ding, Liang, Tao, Dacheng, Zhou, Xiaobo, Cheng, Dazhao. MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism[C], USA: Institute of Electrical and Electronics Engineers Inc., 2023, 167-177.
Authors:  Zhang, Zheng;  Yang, Donglin;  Xia, Yaqi;  Ding, Liang;  Tao, Dacheng; et al.
TC[WOS]:1 TC[Scopus]:1 | Submit date:2023/08/08
Mixture Of Experts  Pipeline Parallelism  Distributed Training  Memory Efficiency