Residential College | false
Status | Published
Title | Locality-aware and Fault-tolerant Batching for Machine Learning on Distributed Datasets
Author | Liu, Liu1; Ding, Zhijun1; Cheng, Dazhao2; Zhou, Xiaobo3
Year | 2024
Source Publication | IEEE Transactions on Cloud Computing |
ISSN | 2168-7161 |
Volume | 12
Issue | 2
Pages | 370-387
Abstract | The performance of distributed ML training is largely determined by the workers that generate gradients at the slowest pace, i.e., stragglers. State-of-the-art load balancing approaches assume that each worker stores a complete dataset locally and that the data fetching time can be ignored; they consider only the computation capacity of workers when equalizing the gradient computation time. However, we find that in ML on distributed datasets, whether in edge computing or in distributed data cache systems, the data fetching time is non-negligible and often becomes the primary cause of stragglers. In this paper, we present LOFT, an adaptive load balancing approach for ML on distributed datasets at the edge. It aims to balance the time to generate gradients at each worker while ensuring model accuracy. Specifically, LOFT features locality-aware batching: it builds performance and optimization models of data fetching and gradient computation time and, leveraging these models, develops an adaptive scheme based on grid search. Furthermore, it offers Byzantine gradient aggregation upon Ring All-Reduce, making it fault-tolerant under the Byzantine gradients caused by a small batch size. Experiments with twelve public DNN models and four open datasets show that LOFT reduces the training time by up to 46% and the training loss by up to 67% compared to LB-BSP.
Keyword | Adaptation Models; Byzantine Gradient; Computational Modeling; Data Models; Distributed Databases; Distributed Dataset; Graphics Processing Units; Load Management; Machine Learning Training; Straggler; Training
DOI | 10.1109/TCC.2024.3351716 |
Indexed By | SCIE |
Language | English
WOS Research Area | Computer Science |
WOS Subject | Computer Science, Information Systems ; Computer Science, Software Engineering ; Computer Science, Theory & Methods |
WOS ID | WOS:001241591300012 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 445 HOES LANE, PISCATAWAY, NJ 08855-4141 |
Scopus ID | 2-s2.0-85182357532 |
Document Type | Journal article |
Collection | THE STATE KEY LABORATORY OF INTERNET OF THINGS FOR SMART CITY (UNIVERSITY OF MACAU) |
Corresponding Author | Zhou, Xiaobo |
Affiliation | 1. Department of Computer Science and Technology, Tongji University, Shanghai, China; 2. School of Computer Science, Wuhan University, Hubei, China; 3. IOTSC Lab & Department of Computer and Information Science, University of Macau, Macau, China
Corresponding Author Affiliation | University of Macau
Recommended Citation GB/T 7714 | Liu, Liu, Ding, Zhijun, Cheng, Dazhao, et al. Locality-aware and Fault-tolerant Batching for Machine Learning on Distributed Datasets[J]. IEEE Transactions on Cloud Computing, 2024, 12(2): 370-387.
APA | Liu, Liu, Ding, Zhijun, Cheng, Dazhao, & Zhou, Xiaobo. (2024). Locality-aware and Fault-tolerant Batching for Machine Learning on Distributed Datasets. IEEE Transactions on Cloud Computing, 12(2), 370-387.
MLA | Liu, Liu, et al. "Locality-aware and Fault-tolerant Batching for Machine Learning on Distributed Datasets." IEEE Transactions on Cloud Computing 12.2 (2024): 370-387.
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.