Residential College: false
Status: Published
A Simple yet Effective Layered Loss for Pre-training of Network Embedding
Chen, Junyang1; Li, Xueliang2; Li, Yuanman1; Li, Paul3; Wang, Mengzhu4; Zhang, Xiang5; Gong, Zhiguo6; Wu, Kaishun1; Leung, Victor C.M.7
2022-06
Source Publication: IEEE Transactions on Network Science and Engineering
ISSN: 2327-4697
Volume: 9  Issue: 3  Pages: 1827-1837
Abstract

Pre-training of network embedding aims to encode unlabeled node proximity into a low-dimensional space, where nodes lie close to their neighbors and far from negative samples. In recent years, Graph Neural Networks have shown groundbreaking performance in semi-supervised learning on node classification and link prediction tasks. However, because of their inherent information aggregation pattern, almost all of these methods obtain inferior embedding results when pre-training on unlabeled nodes: during message aggregation, the margins between a target node and its multi-hop neighbors become hard to distinguish. To address this problem, we propose a simple yet effective layered loss combined with a graph attention network, dubbed LlossNet, for pre-training. We regard the proximity of a target node and its two-hop neighbors as a unit (called a unit graph), in which the target node should be closer to its direct neighbors than to its two-hop neighbors. As such, LlossNet is able to preserve the margins between nodes in the learned embedding space. Experimental results on various downstream tasks, including classification and clustering, demonstrate the effectiveness of our method in learning discriminative node representations.
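The abstract describes the layered loss only at a high level. As a rough sketch of the idea (not the authors' published formulation), the ordering constraint on a unit graph can be written as two margin-based hinge terms: the target node should sit closer to its one-hop neighbors than to its two-hop neighbors, and closer to its two-hop neighbors than to negative samples. The function name layered_loss, the use of Euclidean distance, and the margin values below are illustrative assumptions.

import torch
import torch.nn.functional as F

def layered_loss(z_target, z_one_hop, z_two_hop, z_negative,
                 margin_layer=0.5, margin_neg=1.0):
    """Margin-based layered loss over one unit graph (illustrative sketch).

    Pushes the target node to be closer to its one-hop neighbors than to
    its two-hop neighbors, and closer to its two-hop neighbors than to
    negative samples. Inputs are (n_i, d) tensors of node embeddings;
    distances are Euclidean. Margins and the distance choice are
    assumptions, not the published formulation.
    """
    d_one = torch.cdist(z_target, z_one_hop).mean()   # target <-> 1-hop neighbors
    d_two = torch.cdist(z_target, z_two_hop).mean()   # target <-> 2-hop neighbors
    d_neg = torch.cdist(z_target, z_negative).mean()  # target <-> negative samples

    # Hinge terms enforcing the layered ordering:
    #   d(target, 1-hop) + margin_layer <= d(target, 2-hop)
    #   d(target, 2-hop) + margin_neg   <= d(target, negatives)
    loss_layer = F.relu(d_one - d_two + margin_layer)
    loss_neg = F.relu(d_two - d_neg + margin_neg)
    return loss_layer + loss_neg

# Usage: one 128-d target embedding, 3 one-hop neighbors, 5 two-hop neighbors,
# and 4 negatives (random tensors stand in for the output of an attention encoder).
z_t = torch.randn(1, 128, requires_grad=True)
loss = layered_loss(z_t, torch.randn(3, 128), torch.randn(5, 128), torch.randn(4, 128))
loss.backward()  # gradients flow back to z_t (and, in practice, the encoder)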

Keywords: Graph Neural Networks; Layered Loss; Network Embedding; Pre-training of Unlabeled Nodes
DOI: 10.1109/TNSE.2022.3153643
Indexed By: SCIE
Language: English
WOS Research Area: Engineering; Mathematics
WOS ID: WOS:000800200900072
Publisher: IEEE COMPUTER SOC, 10662 LOS VAQUEROS CIRCLE, PO BOX 3014, LOS ALAMITOS, CA 90720-1314
Scopus ID: 2-s2.0-85125312443
Document Type: Journal article
Collection: Department of Computer and Information Science
Corresponding Author: Li, Xueliang
Affiliation:
1. Shenzhen University, 47890 Shenzhen, Guangdong, China, 518060
2. Shenzhen University, 47890 Shenzhen, Guangdong, China
3. Baidu Research, 538732 Beijing, Beijing, China
4. National University of Defense Technology, 58294 Changsha, Hunan, China
5. College of Computer, National University of Defense Technology, 58294 Changsha, Hunan, China
6. Department of Computer and Information Science, University of Macau, Macao, China
7. College of Computer Science and Software Engineering, Shenzhen University, 47890 Shenzhen, Guangdong, China, 518060
Recommended Citation
GB/T 7714: Chen, Junyang, Li, Xueliang, Li, Yuanman, et al. A Simple yet Effective Layered Loss for Pre-training of Network Embedding[J]. IEEE Transactions on Network Science and Engineering, 2022, 9(3): 1827-1837.
APA: Chen, Junyang, Li, Xueliang, Li, Yuanman, Li, Paul, Wang, Mengzhu, Zhang, Xiang, Gong, Zhiguo, Wu, Kaishun, & Leung, Victor C. M. (2022). A Simple yet Effective Layered Loss for Pre-training of Network Embedding. IEEE Transactions on Network Science and Engineering, 9(3), 1827-1837.
MLA: Chen, Junyang, et al. "A Simple yet Effective Layered Loss for Pre-training of Network Embedding." IEEE Transactions on Network Science and Engineering 9.3 (2022): 1827-1837.
Files in This Item:
There are no files associated with this item.
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.