Residential College: false
Status: Published
Towards accurate knowledge transfer via target-awareness representation disentanglement
Li, Xingjian1; Hu, Di2; Li, Xuhong3; Xiong, Haoyi3; Xu, Chengzhong4; Dou, Dejing5
2023-12-12
Source Publication: Machine Learning
ISSN: 0885-6125
Volume: 113  Issue: 2  Pages: 699-723
Abstract

Fine-tuning deep neural networks pre-trained on large-scale datasets is one of the most practical transfer learning paradigms when only a limited quantity of training samples is available. To improve generalization, using the starting point as a reference (SPAR), either through weights or features, has been successfully applied to transfer learning as a regularizer. However, because of the domain discrepancy between the source and target tasks, straightforwardly preserving all source knowledge carries an obvious risk of negative transfer. In this paper, we propose a novel transfer learning algorithm based on the idea of Target-awareness REpresentation Disentanglement (TRED), in which the knowledge relevant to the target task is disentangled from the original source model and used as a regularizer while fine-tuning the target model. Two alternative approaches, maximizing the Maximum Mean Discrepancy (Max-MMD) and minimizing mutual information (Min-MI), are introduced to achieve the desired disentanglement. Experiments on various real-world datasets show that our method consistently improves standard fine-tuning by more than 2% on average. TRED also outperforms related state-of-the-art transfer learning regularizers such as L2-SP, AT, DELTA, and BSS. Moreover, our solution is compatible with different choices of disentangling strategies: while the combination of Max-MMD and Min-MI typically achieves higher accuracy, using only Max-MMD can be a preferred choice in applications with low resource budgets.
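To make the idea in the abstract concrete, the sketch below is a minimal, hypothetical PyTorch-style illustration (not the authors' released implementation): it shows a Gaussian-kernel MMD estimate and how a Max-MMD disentanglement term plus a SPAR-style feature-preserving term could be added to the task loss during fine-tuning. The function names, the way the source features are assumed to be already split into a target-relevant part and a residual part, and the weights alpha, beta and bandwidth sigma are all assumptions for illustration.

import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel values between rows of x and rows of y.
    dist2 = torch.cdist(x, y) ** 2
    return torch.exp(-dist2 / (2 * sigma ** 2))

def mmd(x, y, sigma=1.0):
    # Biased empirical estimate of squared MMD between the two samples.
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean())

def tred_like_loss(task_loss, target_feat, relevant_feat, residual_feat,
                   alpha=0.1, beta=0.1, sigma=1.0):
    # Max-MMD: push the target-relevant and residual parts of the source
    # representation apart (maximizing MMD = minimizing its negative).
    disentangle = -mmd(relevant_feat, residual_feat, sigma)
    # SPAR-style regularizer: keep the target model's features close to
    # the disentangled target-relevant source features.
    preserve = ((target_feat - relevant_feat) ** 2).mean()
    return task_loss + alpha * preserve + beta * disentangle

In use, one would obtain relevant_feat and residual_feat from some disentangler applied to the frozen source model's features for the current batch, and backpropagate the returned scalar as the fine-tuning objective; the Min-MI variant mentioned in the abstract would replace the negative-MMD term with a mutual-information estimate to be minimized.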

Keywords: Fine-tuning; Representation Disentanglement; Transfer Learning
DOI: 10.1007/s10994-023-06381-2
Indexed By: SCIE
Language: English
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:001121632800003
Publisher: SPRINGER, VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
Scopus ID: 2-s2.0-85179320706
Document Type: Journal article
Collection: THE STATE KEY LABORATORY OF INTERNET OF THINGS FOR SMART CITY (UNIVERSITY OF MACAU)
Faculty of Science and Technology
DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE
Corresponding Author: Li, Xingjian; Xu, Chengzhong
Affiliation: 1. Computational Biology Department, Carnegie Mellon University, Pittsburgh, United States
2. Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
3. Big Data Lab, Baidu Research, Beijing, China
4. State Key Lab of IOTSC, University of Macau, Macao
5. BCG X, Boston Consulting Group, Beijing, China
Corresponding Author Affiliation: University of Macau
Recommended Citation
GB/T 7714
Li, Xingjian, Hu, Di, Li, Xuhong, et al. Towards accurate knowledge transfer via target-awareness representation disentanglement[J]. Machine Learning, 2023, 113(2): 699-723.
APA Li, Xingjian, Hu, Di, Li, Xuhong, Xiong, Haoyi, Xu, Chengzhong, & Dou, Dejing. (2023). Towards accurate knowledge transfer via target-awareness representation disentanglement. Machine Learning, 113(2), 699-723.
MLA Li, Xingjian, et al. "Towards accurate knowledge transfer via target-awareness representation disentanglement". Machine Learning 113.2 (2023): 699-723.
Files in This Item:
There are no files associated with this item.
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.