Browse/Search Results: 1-3 of 3

Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism
Journal article
Xia, Yaqi, Zhang, Zheng, Yang, Donglin, Hu, Chuang, Zhou, Xiaobo, Chen, Hongyang, Sang, Qianlong, Cheng, Dazhao. Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(11), 1904-1919.
Authors: Xia, Yaqi; Zhang, Zheng; Yang, Donglin; Hu, Chuang; Zhou, Xiaobo; et al.
TC[WOS]: 0 | TC[Scopus]: 0 | IF: 5.6 / 4.5
Submit date: 2024/08/05
Keywords: Communication Balance; Distributed Training; Dynamic GNN; Pipeline Parallelism; Redundancy-free

MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism
Journal article
Zhang, Zheng, Xia, Yaqi, Wang, Hulin, Yang, Donglin, Hu, Chuang, Zhou, Xiaobo, Cheng, Dazhao. MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(6), 843-856.
Authors: Zhang, Zheng; Xia, Yaqi; Wang, Hulin; Yang, Donglin; Hu, Chuang; et al.
TC[WOS]: 0 | TC[Scopus]: 1 | IF: 5.6 / 4.5
Submit date: 2024/05/16
Keywords: Distributed Training; Memory Redundancy; Mixture of Experts; Performance Model; Pipeline Parallelism

MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism
Conference paper
Zhang, Zheng, Yang, Donglin, Xia, Yaqi, Ding, Liang, Tao, Dacheng, Zhou, Xiaobo, Cheng, Dazhao. MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism[C], USA: Institute of Electrical and Electronics Engineers Inc., 2023, 167-177.
Authors: Zhang, Zheng; Yang, Donglin; Xia, Yaqi; Ding, Liang; Tao, Dacheng; et al.
TC[WOS]: 1 | TC[Scopus]: 1
Submit date: 2023/08/08
Keywords: Mixture of Experts; Pipeline Parallelism; Distributed Training; Memory Efficiency