Faculties & Institutes: THE STATE KEY LA... [3]; Faculty of Scien... [2]
Authors: XIAOBO ZHOU [3]
Document Type: Journal article [2]; Conference paper [1]
Date Issued: 2024 [2]; 2023 [1]
Language: English [3]
Source Publication: IEEE TRANSACTION... [1]; IEEE Transaction... [1]
Indexed By: SCIE [2]
Browse/Search Results: 1-3 of 3
Raptor-T: A Fused and Memory-Efficient Sparse Transformer for Long and Variable-Length Sequences
Journal article
Wang, Hulin; Yang, Donglin; Xia, Yaqi; Zhang, Zheng; Wang, Qigang; Fan, Jianping; Zhou, Xiaobo; Cheng, Dazhao. Raptor-T: A Fused and Memory-Efficient Sparse Transformer for Long and Variable-Length Sequences[J]. IEEE TRANSACTIONS ON COMPUTERS, 2024, 73(7), 1852-1865.
TC[WOS]: 1 | TC[Scopus]: 1 | IF: 3.6 / 3.2 | Submit date: 2024/05/16
Keywords: Sparse Transformer; Inference Acceleration; GPU; Deep Learning; Memory Optimization; Resource Management
MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism
Journal article
Zhang, Zheng; Xia, Yaqi; Wang, Hulin; Yang, Donglin; Hu, Chuang; Zhou, Xiaobo; Cheng, Dazhao. MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(6), 843-856.
TC[WOS]: 0 | TC[Scopus]: 1 | IF: 5.6 / 4.5 | Submit date: 2024/05/16
Keywords: Distributed Training; Memory Redundancy; Mixture Of Experts; Performance Model; Pipeline Parallelism
Redundancy-Free High-Performance Dynamic GNN Training with Hierarchical Pipeline Parallelism
Conference paper
Xia, Yaqi; Zhang, Zheng; Wang, Hulin; Yang, Donglin; Zhou, Xiaobo; Cheng, Dazhao. Redundancy-Free High-Performance Dynamic GNN Training with Hierarchical Pipeline Parallelism[C], 2023, 17-13.
TC[WOS]: 4 | TC[Scopus]: 5 | Submit date: 2023/08/08