Status: Published
Scalable and memory-efficient sparse learning for classification with approximate Bayesian regularization priors
Luo, Jiahua (1); Gan, Yanfen (2); Vong, Chi Man (1); Wong, Chi Man (1); Chen, Chuangquan (3)
2021-10-07
Source Publication: Neurocomputing
ISSN: 0925-2312
Volume: 457, Pages: 106-116
Abstract

Sparse Bayesian learning (SBL) provides state-of-the-art performance in accuracy, sparsity, and probabilistic prediction for classification. In SBL, the regularization priors are determined automatically, which avoids exhaustive hyperparameter selection by cross-validation. However, SBL scales poorly to large problems because updating the regularization priors requires inverting a potentially enormous covariance matrix in every iteration. This paper develops an approximate SBL algorithm, ARP-SBL, in which the regularization priors are approximated without inverting the covariance matrix, so the approach scales easily to problems with large data size or high feature dimension and alleviates the long training time and high memory complexity of SBL. Based on ARP-SBL, two scalable nonlinear SBL models are developed: a scalable relevance vector machine (ARP-RVM) for problems of large data size and a scalable sparse Bayesian extreme learning machine (ARP-SBELM) for problems of large feature size. Experiments on a variety of benchmarks show that the proposed models achieve competitive accuracy compared to existing methods while i) converging faster; ii) requiring thousands of times less memory; iii) avoiding exhaustive regularized hyperparameter selection; and iv) scaling easily to large data sizes and high-dimensional features.
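The bottleneck the abstract describes can be illustrated with a minimal sketch. The block below is NOT the paper's ARP-SBL update; it shows the classic MacKay-style SBL re-estimation of the prior precisions, which needs the full posterior covariance inverse, next to a hypothetical inversion-free surrogate that approximates each diagonal entry of the covariance from the Gram-matrix diagonal only (an assumption for illustration, with the noise-precision matrix taken as identity).

```python
import numpy as np

# Illustrative sketch (not the paper's exact ARP-SBL formula): classic SBL
# re-estimates each prior precision alpha_i via gamma_i = 1 - alpha_i * Sigma_ii,
# where Sigma = (diag(alpha) + Phi^T Phi)^-1 is the posterior covariance.
# The O(M^3) inversion per iteration is the scalability bottleneck.

rng = np.random.default_rng(0)
N, M = 200, 50
Phi = rng.standard_normal((N, M))       # design matrix (basis/hidden-layer outputs)
alpha = np.ones(M)                      # per-weight prior precisions
mu = rng.standard_normal(M) * 0.1       # posterior mean of the weights (placeholder)

# Exact update: requires the full covariance inverse.
Sigma = np.linalg.inv(np.diag(alpha) + Phi.T @ Phi)
gamma = 1.0 - alpha * np.diag(Sigma)    # "well-determinedness" of each weight
alpha_exact = gamma / (mu ** 2 + 1e-12)

# Hypothetical inversion-free surrogate: approximate Sigma_ii by ignoring
# off-diagonal coupling, using only the column norms of Phi (cost O(N*M)).
sigma_diag_approx = 1.0 / (alpha + np.einsum('nm,nm->m', Phi, Phi))
gamma_approx = 1.0 - alpha * sigma_diag_approx
alpha_approx = gamma_approx / (mu ** 2 + 1e-12)

print(np.max(np.abs(alpha_exact - alpha_approx)))  # drift of the surrogate
```

The exact update costs O(M^3) time and O(M^2) memory per iteration, while the diagonal surrogate needs only O(N*M) time and O(M) memory, which is the kind of saving the abstract's "thousands of times less memory" claim refers to.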

Keywords: Approximate Bayesian Regularization Priors; Relevance Vector Machine; Scalable Sparse Bayesian Learning; Sparse Bayesian Extreme Learning Machine
DOI: 10.1016/j.neucom.2021.06.025
Indexed By: SCIE
Language: English
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:000689714800008
Publisher: Elsevier, Radarweg 29, 1043 NX Amsterdam, Netherlands
Scopus ID: 2-s2.0-85108915424
Document Type: Journal article
Collection: Department of Computer and Information Science
Corresponding Author: Chen, Chuangquan
Affiliations:
1. Department of Computer and Information Science, University of Macau, Macau, China
2. School of Information Science and Technology, South China Business College, Guangdong University of Foreign Studies, Guangzhou, 510545, China
3. Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China
First Author Affiliation: University of Macau
Recommended Citation
GB/T 7714
Luo, Jiahua, Gan, Yanfen, Vong, Chi Man, et al. Scalable and memory-efficient sparse learning for classification with approximate Bayesian regularization priors[J]. Neurocomputing, 2021, 457: 106-116.
APA: Luo, Jiahua, Gan, Yanfen, Vong, Chi Man, Wong, Chi Man, & Chen, Chuangquan (2021). Scalable and memory-efficient sparse learning for classification with approximate Bayesian regularization priors. Neurocomputing, 457, 106-116.
MLA: Luo, Jiahua, et al. "Scalable and memory-efficient sparse learning for classification with approximate Bayesian regularization priors." Neurocomputing 457 (2021): 106-116.
Files in This Item:
There are no files associated with this item.
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.