Residential College | false |
Status | Published |
Title | Heterogeneity-Aware Memory Efficient Federated Learning via Progressive Layer Freezing |
Author | Wu, Yebo1; Li, Li1; Tian, Chunlin1; Chang, Tao2; Lin, Chi3; Wang, Cong4; Xu, Cheng Zhong1 |
Date Issued | 2024-09 |
Conference Name | 2024 32nd IEEE/ACM International Symposium on Quality of Service, IWQoS |
Source Publication | IEEE International Workshop on Quality of Service, IWQoS |
Conference Date | 19-21 June 2024 |
Conference Place | Guangzhou, China |
Country | China |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Abstract | Federated Learning (FL) emerges as a new learning paradigm that enables multiple devices to collaboratively train a shared model while preserving data privacy. However, the intensive memory footprint during the training process severely bottlenecks the deployment of FL on resource-limited mobile devices in real-world cases. Thus, a framework that can effectively reduce the memory footprint while guaranteeing training efficiency and model accuracy is crucial for FL. In this paper, we propose SmartFreeze, a framework that effectively reduces the memory footprint by conducting the training in a progressive manner. Instead of updating the full model in each training round, SmartFreeze divides the shared model into blocks consisting of a specified number of layers. It first trains the front block with a well-designed output module, safely freezes it after convergence, and then triggers the training of the next one. This process iterates until the whole model has been successfully trained. In this way, the backward computation of the frozen blocks and the corresponding memory space for storing the intermediate outputs and gradients are effectively saved. Beyond the progressive training framework, SmartFreeze consists of the following two core components: a pace controller and a participant selector. The pace controller effectively monitors the training progress of each block at runtime and safely freezes it after convergence, while the participant selector chooses suitable devices to participate in the training of each block by jointly considering memory capacity as well as statistical and system heterogeneity. Extensive experiments are conducted to evaluate the effectiveness of SmartFreeze on both simulation and hardware testbeds. The results demonstrate that SmartFreeze effectively reduces average memory usage by up to 82%. Moreover, it simultaneously improves the model accuracy by up to 83.1% and accelerates the training process by up to 2.02×. |
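Note | The progressive, block-wise training described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch example, not the authors' code: the block boundaries, the auxiliary output head, and the loss-delta convergence test are assumptions standing in for SmartFreeze's output modules and pace controller. It only shows the core idea that frozen front blocks run without autograd, so their backward pass and intermediate activations are never stored.

# Minimal sketch of progressive layer freezing (assumed, illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model split into sequential "blocks"; each block is a candidate freezing unit.
blocks = nn.ModuleList([
    nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
])
num_classes = 10

# Synthetic data standing in for a client's local dataset.
x, y = torch.randn(256, 32), torch.randint(0, num_classes, (256,))
loss_fn = nn.CrossEntropyLoss()

def train_block(active_idx, steps=50, lr=0.01, tol=1e-3):
    """Train only block `active_idx`; all earlier blocks stay frozen."""
    # Hypothetical auxiliary output head attached behind the active block.
    head = nn.Linear(64, num_classes)
    opt = torch.optim.SGD(list(blocks[active_idx].parameters()) + list(head.parameters()), lr=lr)
    prev_loss = float("inf")
    for _ in range(steps):
        with torch.no_grad():                      # frozen blocks: no graph, no stored activations
            h = x
            for b in blocks[:active_idx]:
                h = b(h)
        logits = head(blocks[active_idx](h))       # only this block runs with autograd
        loss = loss_fn(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if abs(prev_loss - loss.item()) < tol:     # naive stand-in for the pace controller
            break
        prev_loss = loss.item()
    for p in blocks[active_idx].parameters():      # freeze the converged block
        p.requires_grad_(False)
    return loss.item()

# Progressive schedule: train block 0, freeze it, then block 1, and so on.
for i in range(len(blocks)):
    print(f"block {i} final loss: {train_block(i):.4f}")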
Keyword | Federated Learning; Heterogeneous Memory; On-device Training; Training; Accuracy; Runtime; Perturbation Methods; Memory Management; Quality Of Service
DOI | 10.1109/IWQoS61813.2024.10682916 |
Indexed By | CPCI-S |
Language | English
WOS Research Area | Computer Science ; Engineering ; Telecommunications |
WOS Subject | Computer Science, Information Systems ; Computer Science, Theory & Methods ; Engineering, Electrical & Electronic ; Telecommunications |
WOS ID | WOS:001327123500086 |
Scopus ID | 2-s2.0-85206376224 |
Document Type | Conference paper |
Collection | THE STATE KEY LABORATORY OF INTERNET OF THINGS FOR SMART CITY (UNIVERSITY OF MACAU) DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE |
Corresponding Author | Li, Li |
Affiliation | 1. University of Macau, State Key Lab of IoTSC, Macao; 2. National University of Defense Technology, China; 3. Dalian University of Technology, China; 4. Zhejiang University, China
First Author Affiliation | University of Macau
Corresponding Author Affiliation | University of Macau
Recommended Citation GB/T 7714 | Wu, Yebo, Li, Li, Tian, Chunlin, et al. Heterogeneity-Aware Memory Efficient Federated Learning via Progressive Layer Freezing[C]. Institute of Electrical and Electronics Engineers Inc., 2024.
APA | Wu, Yebo., Li, Li., Tian, Chunlin., Chang, Tao., Lin, Chi., Wang, Cong., & Xu, Cheng Zhong (2024). Heterogeneity-Aware Memory Efficient Federated Learning via Progressive Layer Freezing. IEEE International Workshop on Quality of Service, IWQoS. |