Status: Published
Heterogeneity-Aware Memory Efficient Federated Learning via Progressive Layer Freezing
Wu, Yebo [1]; Li, Li [1]; Tian, Chunlin [1]; Chang, Tao [2]; Lin, Chi [3]; Wang, Cong [4]; Xu, Cheng Zhong [1]
Issue Date: 2024-09
Conference Name: 2024 32nd IEEE/ACM International Symposium on Quality of Service, IWQoS
Source Publication: IEEE International Workshop on Quality of Service, IWQoS
Conference Date: 19-21 June 2024
Conference Place: Guangzhou, China
Country: China
Publisher: Institute of Electrical and Electronics Engineers Inc.
Abstract

Federated Learning (FL) emerges as a new learning paradigm that enables multiple devices to collaboratively train a shared model while preserving data privacy. However, the intensive memory footprint of the training process severely bottlenecks the deployment of FL on resource-limited mobile devices in real-world cases. Thus, a framework that can effectively reduce the memory footprint while guaranteeing training efficiency and model accuracy is crucial for FL.

In this paper, we propose SmartFreeze, a framework that effectively reduces the memory footprint by conducting the training in a progressive manner. Instead of updating the full model in each training round, SmartFreeze divides the shared model into blocks, each consisting of a specified number of layers. It first trains the front block with a well-designed output module, safely freezes it after convergence, and then triggers the training of the next one. This process iterates until the whole model has been trained. In this way, the backward computation of the frozen blocks, along with the memory for storing their intermediate outputs and gradients, is saved. Beyond the progressive training framework, SmartFreeze comprises two core components: a pace controller and a participant selector. The pace controller monitors the training progress of each block at runtime and safely freezes it after convergence, while the participant selector selects the right devices to participate in the training of each block by jointly considering memory capacity as well as statistical and system heterogeneity. Extensive experiments evaluate the effectiveness of SmartFreeze on both simulation and hardware testbeds. The results demonstrate that SmartFreeze reduces average memory usage by up to 82%, while simultaneously improving model accuracy by up to 83.1% and accelerating the training process by up to 2.02×.
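As a rough illustration of the progressive training scheme the abstract describes, the sketch below emulates block-wise freezing on a toy model in PyTorch. It is a minimal, single-process approximation under stated assumptions: the block split, the fixed per-block round budget, and the temporary linear output head are hypothetical stand-ins, and SmartFreeze's actual pace controller and participant selector are not reproduced here.

```python
# Minimal sketch of progressive block freezing on a toy MLP with synthetic
# data. Assumptions: a fixed 3-block split, a fixed round budget per block
# (standing in for the paper's convergence-based pace controller), and a
# temporary linear head for each partial model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Shared model split into blocks of layers; dims[k] is block k's output width.
dims = [64, 64, 10]
blocks = nn.ModuleList([
    nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 10)),
])

x = torch.randn(128, 32)           # synthetic inputs
y = torch.randint(0, 10, (128,))   # synthetic labels
loss_fn = nn.CrossEntropyLoss()

for k, block in enumerate(blocks):
    # Temporary output module so the partial model (blocks 0..k) can be
    # trained before the remaining blocks are attached.
    head = nn.Identity() if k == len(blocks) - 1 else nn.Linear(dims[k], 10)
    opt = torch.optim.SGD(
        list(block.parameters()) + list(head.parameters()), lr=0.1
    )

    for _ in range(200):  # stand-in for "train until convergence, then freeze"
        with torch.no_grad():
            h = x
            for frozen in blocks[:k]:  # frozen blocks: forward pass only, so
                h = frozen(h)          # no activations/gradients are retained
        logits = head(block(h))
        loss = loss_fn(logits, y)
        opt.zero_grad()
        loss.backward()                # backward touches only block k + head
        opt.step()

    for p in block.parameters():       # safely freeze the trained block
        p.requires_grad_(False)
    print(f"block {k} frozen, loss={loss.item():.3f}")
```

The memory saving in this toy version comes from running already-frozen blocks under torch.no_grad(), so their intermediate activations are never stored for the backward pass; only the currently trained block and its temporary head participate in backpropagation.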

Keywords: Federated Learning; Heterogeneous Memory; On-device Training; Training; Accuracy; Runtime; Perturbation Methods; Memory Management; Quality Of Service
DOI: 10.1109/IWQoS61813.2024.10682916
Indexed By: CPCI-S
Language: English
WOS Research Area: Computer Science; Engineering; Telecommunications
WOS Subject: Computer Science, Information Systems; Computer Science, Theory & Methods; Engineering, Electrical & Electronic; Telecommunications
WOS ID: WOS:001327123500086
Scopus ID: 2-s2.0-85206376224
Document Type: Conference paper
Collection: THE STATE KEY LABORATORY OF INTERNET OF THINGS FOR SMART CITY (UNIVERSITY OF MACAU); DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE
Corresponding Author: Li, Li
Affiliation:
1. University of Macau, State Key Lab of IoTSC, Macao
2. National University of Defense Technology, China
3. Dalian University of Technology, China
4. Zhejiang University, China
First Author Affiliation: University of Macau
Corresponding Author Affiliation: University of Macau
Recommended Citation
GB/T 7714: Wu, Yebo, Li, Li, Tian, Chunlin, et al. Heterogeneity-Aware Memory Efficient Federated Learning via Progressive Layer Freezing[C]. Institute of Electrical and Electronics Engineers Inc., 2024.
APA: Wu, Y., Li, L., Tian, C., Chang, T., Lin, C., Wang, C., & Xu, C. Z. (2024). Heterogeneity-Aware Memory Efficient Federated Learning via Progressive Layer Freezing. IEEE International Workshop on Quality of Service, IWQoS.
Files in This Item:
There are no files associated with this item.
 
