Residential College: false
Status: Published
Coarse-to-Fine CNN for Image Super-Resolution
Tian, Chunwei1,2; Xu, Yong1; Zuo, Wangmeng3; Zhang, Bob4; Fei, Lunke5; Lin, Chia Wen1
2021-06-01
Source Publication: IEEE Transactions on Multimedia
ISSN: 1520-9210
Volume: 23
Pages: 1489-1502
Abstract

Deep convolutional neural networks (CNNs) have been widely adopted for image super-resolution (SR). However, deep CNNs for SR often suffer from training instability, resulting in poor SR performance. Gathering complementary contextual information can effectively overcome this problem. Along this line, we propose a coarse-to-fine SR CNN (CFSRCNN) to recover a high-resolution (HR) image from its low-resolution (LR) version. The proposed CFSRCNN consists of a stack of feature extraction blocks (FEBs), an enhancement block (EB), a construction block (CB), and a feature refinement block (FRB) to learn a robust SR model. Specifically, the stack of FEBs learns long- and short-path features and then fuses them by extending the effect of the shallower layers to the deeper layers, improving the representational power of the learned features. A compression unit is then used in each FEB to distill the important information in the features and thereby reduce the number of parameters. Subsequently, the EB uses residual learning to integrate the extracted features, preventing the loss of edge information caused by repeated distillation operations. After that, the CB applies the global and local LR features to obtain coarse features, which the FRB then refines to reconstruct a high-resolution image. Extensive experiments on benchmark datasets demonstrate the high efficiency and good performance of our CFSRCNN model compared with state-of-the-art SR models. The code of CFSRCNN is available at https://github.com/hellloxiaotian/CFSRCNN.
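
Illustrative sketch (not part of the published record): the abstract above outlines a coarse-to-fine pipeline of stacked FEBs followed by an EB, CB, and FRB. The minimal PyTorch sketch below shows one way such a pipeline could be wired up; the channel widths, kernel sizes, number of blocks, and class names (FEB, CFSRCNNSketch) are illustrative assumptions rather than the authors' settings, and the official implementation is at https://github.com/hellloxiaotian/CFSRCNN.

    # Simplified sketch of the coarse-to-fine pipeline described in the abstract.
    # All layer sizes and block counts are assumptions, not the paper's settings.
    import torch
    import torch.nn as nn

    class FEB(nn.Module):
        """Feature extraction block: conv layers plus a 1x1 compression unit."""
        def __init__(self, channels=64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            )
            # 1x1 compression distills the fused features and limits parameter growth.
            self.compress = nn.Conv2d(2 * channels, channels, 1)

        def forward(self, x, shallow):
            out = self.body(x)
            # Fuse long-path (shallow) and short-path (current) features.
            return self.compress(torch.cat([out, shallow], dim=1))

    class CFSRCNNSketch(nn.Module):
        def __init__(self, channels=64, num_febs=4, scale=2):
            super().__init__()
            self.head = nn.Conv2d(3, channels, 3, padding=1)
            self.febs = nn.ModuleList(FEB(channels) for _ in range(num_febs))
            # Enhancement block: residual learning over the distilled features.
            self.eb = nn.Conv2d(channels, channels, 3, padding=1)
            # Construction block: upsample coarse LR features to HR resolution.
            self.cb = nn.Sequential(
                nn.Conv2d(channels, channels * scale * scale, 3, padding=1),
                nn.PixelShuffle(scale),
            )
            # Feature refinement block: refine coarse HR features, then reconstruct.
            self.frb = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, 3, 3, padding=1),
            )

        def forward(self, lr):
            shallow = self.head(lr)
            x = shallow
            for feb in self.febs:
                x = feb(x, shallow)      # propagate shallow features to deeper layers
            x = self.eb(x) + shallow     # residual connection preserves edge detail
            coarse = self.cb(x)          # coarse HR features
            return self.frb(coarse)      # refined HR reconstruction

    if __name__ == "__main__":
        sr = CFSRCNNSketch(scale=2)(torch.randn(1, 3, 32, 32))
        print(sr.shape)  # torch.Size([1, 3, 64, 64])
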

Keywords: Cascaded Structure; Convolutional Neural Network; Feature Fusion; Feature Refinement; Image Super-resolution
DOI: 10.1109/TMM.2020.2999182
Indexed By: SCIE
Language: English
WOS Research Area: Computer Science; Telecommunications
WOS Subject: Computer Science, Information Systems; Computer Science, Software Engineering; Telecommunications
WOS ID: WOS:000655830300003
Scopus ID: 2-s2.0-85107131782
Document Type: Journal article
Collection: Department of Computer and Information Science
Corresponding Author: Xu, Yong
Affiliation:
1. Bio-Computing Research Center, Harbin Institute of Technology, Shenzhen, Shenzhen 518055, China
2. Shenzhen Key Laboratory of Visual Object Detection and Recognition, Shenzhen 518055, China
3. School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
4. Department of Computer and Information Science, University of Macau, Macao 999078, China
5. Department of Electrical Engineering and Institute of Communications Engineering, National Tsing Hua University, Hsinchu, Taiwan
Recommended Citation
GB/T 7714
Tian, Chunwei, Xu, Yong, Zuo, Wangmeng, et al. Coarse-to-Fine CNN for Image Super-Resolution[J]. IEEE Transactions on Multimedia, 2021, 23: 1489-1502.
APA: Tian, Chunwei, Xu, Yong, Zuo, Wangmeng, Zhang, Bob, Fei, Lunke, & Lin, Chia Wen (2021). Coarse-to-Fine CNN for Image Super-Resolution. IEEE Transactions on Multimedia, 23, 1489-1502.
MLA: Tian, Chunwei, et al. "Coarse-to-Fine CNN for Image Super-Resolution." IEEE Transactions on Multimedia 23 (2021): 1489-1502.
Files in This Item:
There are no files associated with this item.
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.