Browse/Search Results: 1-3 of 3
Raptor-T: A Fused and Memory-Efficient Sparse Transformer for Long and Variable-Length Sequences (Journal article)
Wang, Hulin; Yang, Donglin; Xia, Yaqi; Zhang, Zheng; Wang, Qigang; Fan, Jianping; Zhou, Xiaobo; Cheng, Dazhao. Raptor-T: A Fused and Memory-Efficient Sparse Transformer for Long and Variable-Length Sequences[J]. IEEE Transactions on Computers, 2024, 73(7): 1852-1865.
TC[WOS]: 1 | TC[Scopus]: 1 | IF: 3.6/3.2 | Submit date: 2024/05/16
Keywords: Sparse Transformer; Inference Acceleration; GPU; Deep Learning; Memory Optimization; Resource Management
MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism (Journal article)
Zhang, Zheng; Xia, Yaqi; Wang, Hulin; Yang, Donglin; Hu, Chuang; Zhou, Xiaobo; Cheng, Dazhao. MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(6): 843-856.
TC[WOS]: 0 | TC[Scopus]: 1 | IF: 5.6/4.5 | Submit date: 2024/05/16
Keywords: Distributed Training; Memory Redundancy; Mixture of Experts; Performance Model; Pipeline Parallelism
Redundancy-Free High-Performance Dynamic GNN Training with Hierarchical Pipeline Parallelism (Conference paper)
Xia, Yaqi; Zhang, Zheng; Wang, Hulin; Yang, Donglin; Zhou, Xiaobo; Cheng, Dazhao. Redundancy-Free High-Performance Dynamic GNN Training with Hierarchical Pipeline Parallelism[C], 2023, 17-13.
TC[WOS]: 4 | TC[Scopus]: 5 | Submit date: 2023/08/08