Status | Published |
Title | GNNVis: Visualize Large-Scale Data by Learning a Graph Neural Network Representation |
Author | Yajun Huang1; Jingbin Zhang2; Yiyang Yang3; Zhiguo Gong2; Zhifeng Hao4 |
Issue Date | 2020-10-19 |
Conference Name | 29th ACM International Conference on Information and Knowledge Management, CIKM 2020 |
Source Publication | CIKM '20: Proceedings of the 29th ACM International Conference on Information & Knowledge Management |
Pages | 545-554 |
Conference Date | 2020/10/19-2020/10/23 |
Conference Place | Virtual Event, Ireland |
Abstract | Many achievements have been made by studying how to visualize large-scale and high-dimensional data in typically 2D or 3D space. Normally, such a process is performed through a non-parametric (unsupervised) approach, which is limited in handling unseen data. In this work, we study the parametric (supervised) model, which is capable of learning a mapping between the high-dimensional data space Rd and the low-dimensional latent space Rs, with the similarity structure in Rd preserved, where s ≪ d. We propose GNNVis, a framework that applies the idea of Graph Neural Networks (GNNs) to the parametric learning process; the learned mapping serves as a Visualizer (Vis) to compute the low-dimensional embeddings of unseen data online. In our framework, the features of data nodes, as well as the (hidden) information of their neighbors, are fused to conduct Dimension Reduction. To the best of our knowledge, none of the existing visualization works has studied how to combine such information into the learned representation. Moreover, the learning process of GNNVis is designed in an end-to-end manner and can easily be extended to arbitrary Dimension Reduction methods if the corresponding objective function is given. Based on GNNVis, several typical dimension reduction methods (t-SNE, LargeVis, and UMAP) are investigated. As a parametric framework, GNNVis is an inherently efficient Visualizer capable of computing the embeddings of large-scale unseen data. To guarantee its scalability in the Training Stage, a novel training strategy with Subgraph Negative Sampling (SNS) is conducted to reduce the corresponding cost. Experimental results on real datasets demonstrate the advantages of GNNVis. The visualization quality of GNNVis outperforms the state-of-the-art parametric models and is comparable to that of the non-parametric models. |
Keyword | Big Data; Graph Neural Networks; High-dimensional Data; Neural Networks; Semi-supervised Learning; Visualization |
DOI | 10.1145/3340531.3411987 |
URL | View the original |
Indexed By | CPCI-S |
Language | English |
WOS Research Area | Computer Science |
WOS Subject | Computer Science, Information Systems ; Computer Science, Theory & Methods |
WOS ID | WOS:000749561300057 |
Scopus ID | 2-s2.0-85095864770 |
Document Type | Conference paper |
Collection | DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE |
Corresponding Author | Yiyang Yang; Zhiguo Gong |
Affiliation | 1. Dingxiang Technologies, Hangzhou, China; 2. University of Macau, Macao; 3. Guangdong University of Technology, Guangzhou, China; 4. Foshan University, Foshan, China |
Corresponding Author Affiliation | University of Macau |
Recommended Citation GB/T 7714 | Yajun Huang,Jingbin Zhang,Yiyang Yang,et al. GNNVis: Visualize Large-Scale Data by Learning a Graph Neural Network Representation[C], 2020, 545-554. |
APA | Huang, Y., Zhang, J., Yang, Y., Gong, Z., & Hao, Z. (2020). GNNVis: Visualize Large-Scale Data by Learning a Graph Neural Network Representation. CIKM '20: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 545-554. |
Files in This Item: | There are no files associated with this item. |