Residential College | false |
Status | Published |
Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism
Xia, Yaqi1; Zhang, Zheng1; Yang, Donglin2; Hu, Chuang1; Zhou, Xiaobo3
2024-11
Source Publication | IEEE Transactions on Parallel and Distributed Systems |
ISSN | 1045-9219 |
Volume | 35 |
Issue | 11 |
Pages | 1904-1919 |
Abstract | Recently, Temporal Graph Neural Networks (TGNNs), as an extension of Graph Neural Networks, have demonstrated remarkable effectiveness in handling dynamic graph data. Distributed TGNN training requires efficiently tackling temporal dependency, which often leads to excessive cross-device communication that generates significant redundant data. However, existing systems are unable to remove the redundancy in data reuse and transfer, and suffer from severe communication overhead in a distributed setting. This work introduces Sven, a co-designed algorithm-system library aimed at accelerating TGNN training on a multi-GPU platform. Exploiting dependency patterns of TGNN models, we develop a redundancy-free graph organization to mitigate redundant data transfer. Additionally, we investigate communication imbalance issues among devices and formulate the graph partitioning problem as minimizing the maximum communication balance cost, which is proved to be an NP-hard problem. We propose an approximation algorithm called Re-FlexBiCut to tackle this problem. Furthermore, we incorporate prefetching, adaptive micro-batch pipelining, and asynchronous pipelining to present a hierarchical pipelining mechanism that mitigates the communication overhead. Sven represents the first comprehensive optimization solution for scaling memory-based TGNN training. Through extensive experiments conducted on a 64-GPU cluster, Sven demonstrates impressive speedup, ranging from 1.9x to 3.5x, compared to state-of-the-art approaches. Additionally, Sven achieves up to 5.26x higher communication efficiency and reduces communication imbalance by up to 59.2%. |
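The partitioning objective described in the abstract (minimizing the maximum communication balance cost across devices) can be sketched as a min-max problem. The notation below is illustrative only: the assignment $\pi$, per-device cost $C_k$, and edge weight $w(u,v)$ are assumed placeholders, not the paper's exact cost model for Re-FlexBiCut.

```latex
% Schematic min-max partitioning objective (illustrative notation only;
% the paper's exact formulation is not reproduced here).
% \pi : V -> {1,...,K} assigns each vertex of the dynamic graph to one of K devices;
% C_k(\pi) is the communication (balance) cost incurred by device k under \pi;
% w(u,v) stands for the traffic generated by edge (u,v) when its endpoints
% land on different devices.
\begin{equation}
  \min_{\pi}\ \max_{k \in \{1,\dots,K\}} C_k(\pi),
  \qquad
  C_k(\pi) = \sum_{\substack{(u,v) \in E \\ \pi(u) = k,\ \pi(v) \neq k}} w(u,v)
\end{equation}
```

Per the abstract, this min-max problem is proved NP-hard, which is why Re-FlexBiCut is proposed as an approximation algorithm.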
Keyword | Communication Balance; Distributed Training; Dynamic GNN; Pipeline Parallelism; Redundancy-free |
DOI | 10.1109/TPDS.2024.3432855 |
Indexed By | SCIE |
Language | English |
WOS Research Area | Computer Science ; Engineering |
WOS Subject | Computer Science, Theory & Methods ; Engineering, Electrical & Electronic |
WOS ID | WOS:001311204500002 |
Publisher | IEEE Computer Society, 10662 Los Vaqueros Circle, PO Box 3014, Los Alamitos, CA 90720-1314 |
Scopus ID | 2-s2.0-85199570814 |
Document Type | Journal article |
Collection | Faculty of Science and Technology; The State Key Laboratory of Internet of Things for Smart City (University of Macau); Department of Computer and Information Science |
Corresponding Author | Sang, Qianlong; Cheng, Dazhao |
Affiliation | 1. School of Computer Science, Wuhan University, Hubei 430072, China; 2. Nvidia Corp, Santa Clara, CA 95051, USA; 3. IOTSC & Department of Computer and Information Sciences, University of Macau, Macau 999078, China; 4. Research Center for Graph Computing, Zhejiang Lab, Hangzhou 311100, China |
Recommended Citation GB/T 7714 | Xia, Yaqi, Zhang, Zheng, Yang, Donglin, et al. Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(11), 1904-1919. |
APA | Xia, Yaqi., Zhang, Zheng., Yang, Donglin., Hu, Chuang., Zhou, Xiaobo., Chen, Hongyang., Sang, Qianlong., & Cheng, Dazhao (2024). Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism. IEEE Transactions on Parallel and Distributed Systems, 35(11), 1904-1919. |
MLA | Xia, Yaqi, et al. "Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism." IEEE Transactions on Parallel and Distributed Systems 35.11 (2024): 1904-1919. |
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.