Residential College: false
Status: Published
DeepTM: Efficient Tensor Management in Heterogeneous Memory for DNN Training
Zhou, Haoran [1,4]; Rang, Wei [2]; Chen, Hongyang [3]; Zhou, Xiaobo [4,5]; Cheng, Dazhao [1]
2024-11
Source Publication: IEEE Transactions on Parallel and Distributed Systems
ISSN: 1045-9219
Volume: 35; Issue: 11; Pages: 1920-1935
Abstract

Deep Neural Networks (DNNs) have gained widespread adoption in diverse fields, including image classification, object detection, and natural language processing. However, training large-scale DNN models often encounters significant memory bottlenecks, which call for efficient management of large numbers of tensors. Heterogeneous memory systems, which combine persistent memory (PM) modules with traditional DRAM, offer an economically viable way to address tensor management challenges during DNN training. Yet existing memory management methods on heterogeneous memory systems often suffer from low PM access efficiency, low bandwidth utilization, and incomplete analysis of model characteristics. To overcome these hurdles, we introduce DeepTM, an efficient tensor management approach tailored for heterogeneous memory that alleviates memory bottlenecks during DNN training. DeepTM employs page-level tensor aggregation to enhance PM read and write performance and performs contiguous page migration to increase memory bandwidth. Through an analysis of tensor access patterns and model characteristics, we quantify overall performance and formulate the performance optimization problem as an Integer Linear Program. Additionally, we achieve tensor heat recognition by dynamically adjusting the weights of four key tensor characteristics, and we develop a global optimization strategy using Deep Reinforcement Learning. To validate the efficacy of our approach, we implement and evaluate DeepTM on the TensorFlow framework running on a PM-based heterogeneous memory system. The experimental results demonstrate that DeepTM achieves performance improvements of up to 36% and 49% over the state-of-the-art memory management strategies AutoTM and Sentinel, respectively. Furthermore, our solution reduces the overhead by a factor of 18 and achieves up to 29% cost reduction compared to AutoTM.
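
Editor's note: the tensor heat recognition described above can be illustrated with a minimal, hypothetical sketch. The four characteristics used below (access frequency, recency, tensor size, reuse distance), the fixed weights, and the greedy placement rule are illustrative assumptions only; the abstract does not name the four characteristics, and DeepTM optimizes placement with Integer Linear Programming and Deep Reinforcement Learning rather than a greedy heuristic.

# Minimal Python sketch: a weighted "heat" score per tensor plus a greedy
# DRAM/PM placement. All characteristic names, weights, and the greedy rule
# are hypothetical illustrations, not DeepTM's actual algorithm.
from dataclasses import dataclass

@dataclass
class TensorStats:
    access_freq: float     # accesses per training iteration (assumed characteristic)
    recency: float         # 1 / steps since last access (assumed characteristic)
    size_mb: float         # tensor size in MB (assumed characteristic)
    reuse_distance: float  # normalized distance to next use (assumed characteristic)

def heat_score(t: TensorStats, w=(0.4, 0.3, 0.2, 0.1)) -> float:
    # Higher score = "hotter" tensor, preferred for DRAM residency.
    return (w[0] * t.access_freq + w[1] * t.recency
            - w[2] * t.size_mb - w[3] * t.reuse_distance)

def place_tensors(tensors: dict, dram_budget_mb: float) -> dict:
    # Greedy simplification: hottest tensors fill DRAM first, the rest go to PM.
    placement, used = {}, 0.0
    for name, stats in sorted(tensors.items(),
                              key=lambda kv: heat_score(kv[1]), reverse=True):
        if used + stats.size_mb <= dram_budget_mb:
            placement[name], used = "DRAM", used + stats.size_mb
        else:
            placement[name] = "PM"
    return placement

if __name__ == "__main__":
    demo = {"conv1_act": TensorStats(8.0, 1.0, 2.0, 0.1),
            "fc_weights": TensorStats(0.5, 0.05, 64.0, 0.9)}
    print(place_tensors(demo, dram_budget_mb=16.0))  # conv1_act -> DRAM, fc_weights -> PM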

Keywords: Deep Neural Network Training; Heterogeneous Memory; Memory Management; Performance Optimization
DOI: 10.1109/TPDS.2024.3431910
Indexed By: SCIE
Language: English
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS ID: WOS:001311204500003
Publisher: IEEE Computer Society, 10662 Los Vaqueros Circle, PO Box 3014, Los Alamitos, CA 90720-1314
Scopus ID: 2-s2.0-85199514030
Document Type: Journal article
Collection: Faculty of Science and Technology; The State Key Laboratory of Internet of Things for Smart City (University of Macau); Department of Computer and Information Science
Corresponding Author: Cheng, Dazhao
Affiliation:
1. School of Computer Science, Wuhan University, Wuhan, Hubei, China
2. School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, China
3. Zhejiang Lab, Hangzhou, Zhejiang, China
4. Laboratory of Internet of Things for Smart City, University of Macau, Macau 999078, China
5. Department of Computer and Information Science, University of Macau, Macau 999078, China
First Author Affiliation: University of Macau
Recommended Citation
GB/T 7714
Zhou, Haoran, Rang, Wei, Chen, Hongyang, et al. DeepTM: Efficient Tensor Management in Heterogeneous Memory for DNN Training[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(11): 1920-1935.
APA Zhou, Haoran, Rang, Wei, Chen, Hongyang, Zhou, Xiaobo, & Cheng, Dazhao (2024). DeepTM: Efficient Tensor Management in Heterogeneous Memory for DNN Training. IEEE Transactions on Parallel and Distributed Systems, 35(11), 1920-1935.
MLA Zhou, Haoran, et al. "DeepTM: Efficient Tensor Management in Heterogeneous Memory for DNN Training." IEEE Transactions on Parallel and Distributed Systems 35.11 (2024): 1920-1935.
Files in This Item:
There are no files associated with this item.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.