Residential College: false
Status: Published
Self-paced Enhanced Low-rank Tensor Kernelized Multi-view Subspace Clustering
Chen, Yongyong [1]; Wang, Shuqin [2]; Xiao, Xiaolin [3]; Liu, Youfa [4]; Hua, Zhongyun [1]; Zhou, Yicong [5]
Date Issued: 2022-01
Source Publication: IEEE Transactions on Multimedia
ISSN: 1520-9210
Volume: 24
Pages: 4054-4065
Abstract

This paper addresses the multi-view subspace clustering problem and proposes the self-paced enhanced low-rank tensor kernelized multi-view subspace clustering (SETKMC) method, which is based on two motivations: (1) singular values of the representations, and individual instances, should be treated differently, because larger singular values usually quantify the major information and should be penalized less, while samples with different degrees of noise may have varying reliability for clustering; (2) many existing methods may suffer degraded performance when multi-view features reside in different nonlinear subspaces, because they usually assume that multiple features lie within a union of several linear subspaces. SETKMC integrates a nonconvex tensor norm, self-paced learning, and the kernel trick into a unified model for multi-view subspace clustering. The nonconvex tensor norm imposes different weights on different singular values; self-paced learning gradually involves instances from more reliable to less reliable ones; and the kernel trick handles multi-view data in nonlinear subspaces. An iterative algorithm based on the alternating direction method of multipliers is proposed. Extensive experiments on seven real-world datasets show the effectiveness of the proposed SETKMC compared to fifteen state-of-the-art multi-view clustering methods.
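As a rough illustration of the two mechanisms the abstract describes (this is a minimal sketch, not the authors' released code), the Python/NumPy snippet below shows (a) a weighted singular value thresholding step of the kind used to realize nonconvex low-rank norms that penalize large singular values less, and (b) a hard self-paced weighting that admits samples from more to less reliable as an age parameter grows. The function names, the 1/(s + eps) reweighting, and the hard-threshold rule are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def weighted_svt(X, tau, eps=1e-6):
        # Singular value decomposition of a representation matrix.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        # Reweighted shrinkage: larger singular values (which carry the
        # major information) receive smaller penalties. The 1/(s + eps)
        # weighting is one common choice, assumed here for illustration.
        shrink = tau / (s + eps)
        s_new = np.maximum(s - shrink, 0.0)
        return (U * s_new) @ Vt

    def self_paced_weights(losses, lam):
        # Hard self-paced regime: keep only samples whose current loss is
        # below the age parameter lam; raising lam across iterations
        # gradually admits less reliable instances.
        return (np.asarray(losses) <= lam).astype(float)

In a full ADMM-style solver, a step like weighted_svt would update the low-rank block at each iteration, while self_paced_weights would rescale each sample's loss between iterations as the age parameter increases.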

Keywords: Multi-view Clustering; Low-rank Tensor Representation; Kernel; Enhanced Low-rank Representation; Self-paced Learning
DOI: 10.1109/TMM.2021.3112230
Indexed By: SCIE
Language: English
WOS Research Area: Computer Science; Telecommunications
WOS Subject: Computer Science, Information Systems; Computer Science, Software Engineering; Telecommunications
WOS ID: WOS:000838704400027
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 445 HOES LANE, PISCATAWAY, NJ 08855-4141
Scopus ID: 2-s2.0-85115670306
Document Type: Journal article
Collection: Faculty of Science and Technology
Corresponding Authors: Liu, Youfa; Hua, Zhongyun
Affiliations:
1. School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen
2. Institute of Information Science, Beijing Jiaotong University
3. School of Computer Science and Engineering, South China University of Technology
4. College of Informatics, Huazhong Agricultural University
5. Department of Computer and Information Science, University of Macau, Macau
Recommended Citation
GB/T 7714: Chen, Yongyong, Wang, Shuqin, Xiao, Xiaolin, et al. Self-paced Enhanced Low-rank Tensor Kernelized Multi-view Subspace Clustering[J]. IEEE Transactions on Multimedia, 2022, 24: 4054-4065.
APA: Chen, Yongyong, Wang, Shuqin, Xiao, Xiaolin, Liu, Youfa, Hua, Zhongyun, & Zhou, Yicong (2022). Self-paced Enhanced Low-rank Tensor Kernelized Multi-view Subspace Clustering. IEEE Transactions on Multimedia, 24, 4054-4065.
MLA: Chen, Yongyong, et al. "Self-paced Enhanced Low-rank Tensor Kernelized Multi-view Subspace Clustering." IEEE Transactions on Multimedia 24 (2022): 4054-4065.
Files in This Item:
There are no files associated with this item.
Related Services
Google Scholar
Similar articles in Google Scholar
[Chen, Yongyong]'s Articles
[Wang, Shuqin]'s Articles
[Xiao, Xiaolin]'s Articles
Baidu Academic
Similar articles in Baidu Academic
[Chen, Yongyong]'s Articles
[Wang, Shuqin]'s Articles
[Xiao, Xiaolin]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Chen, Yongyong]'s Articles
[Wang, Shuqin]'s Articles
[Xiao, Xiaolin]'s Articles
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.