Residential College: false
Status: Published
Multilevel Feature Fusion for End-to-End Blind Image Quality Assessment
Lan, Xuting (1); Zhou, Mingliang (1); Xu, Xueyong (2); Wei, Xuekai (3); Liao, Xingran (4); Pu, Huayan (5); Luo, Jun (5); Xiang, Tao (1); Fang, Bin (1); Shang, Zhaowei (1)
2023-09-01
Source Publication: IEEE Transactions on Broadcasting
ISSN: 0018-9316
Volume: 69  Issue: 3  Pages: 801-811
Abstract

In this paper, a framework based on two feature extraction networks and a multilevel feature fusion (MFF) network is proposed. The method obtains multilevel degradation features and, in line with the human visual perception system, captures the local and global information they contain, which benefits quality prediction for distorted images. First, a restored image approximating the reference image is generated by a restorative generative adversarial network (GAN), and the multilevel degradation features of the distorted image and the features of the restored image are extracted by EfficientNet. Second, the extracted features are fed into the MFF network, where they are fully expressed through top-down, bottom-up and edge-joining paths, providing both low-level details and high-level semantics for quality score prediction. Finally, after the MFF stage, the framework computes a score for each branch feature and averages them to obtain the overall quality score. Experimental results on five standard databases show that the proposed method achieves substantially improved prediction accuracy and performance.
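To make the pipeline above concrete, the following is a minimal PyTorch-style sketch assembled from the abstract alone: a restorative generator (here an nn.Identity stand-in for the pretrained GAN), two EfficientNet backbones extracting multilevel features from the distorted and restored images, a simplified per-level fusion, and per-branch score heads whose outputs are averaged. The class name MFFQualitySketch, the choice of EfficientNet-B0 stages, and the 1x1-convolution fusion are illustrative assumptions, not the authors' implementation; the paper's MFF network with its top-down, bottom-up and edge-joining paths is not reproduced here.

```python
# Minimal sketch of the framework described in the abstract (assumptions noted above).
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0


class MFFQualitySketch(nn.Module):
    # Indices of EfficientNet-B0 feature blocks used as multilevel features
    # (an assumption; the paper's exact levels may differ).
    LEVELS = (2, 3, 5, 8)

    def __init__(self, restorer: nn.Module):
        super().__init__()
        self.restorer = restorer  # stand-in for the pretrained restorative GAN generator
        self.backbone_dist = efficientnet_b0(weights=None).features  # distorted image
        self.backbone_rest = efficientnet_b0(weights=None).features  # restored image
        # Simplified fusion: concatenate distorted/restored features per level and
        # project with a 1x1 conv (placeholder for the paper's MFF network).
        self.fuse = nn.ModuleList(nn.LazyConv2d(64, 1) for _ in self.LEVELS)
        # One regression head per branch; the final score is the branch average.
        self.heads = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.LazyLinear(1))
            for _ in self.LEVELS
        )

    @staticmethod
    def _multilevel(backbone: nn.Sequential, x: torch.Tensor, levels):
        # Collect intermediate feature maps from the selected backbone stages.
        feats = []
        for i, block in enumerate(backbone):
            x = block(x)
            if i in levels:
                feats.append(x)
        return feats

    def forward(self, distorted: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():  # the restorer is assumed frozen at this stage
            restored = self.restorer(distorted)  # pseudo-reference image
        f_d = self._multilevel(self.backbone_dist, distorted, self.LEVELS)
        f_r = self._multilevel(self.backbone_rest, restored, self.LEVELS)
        scores = []
        for fd, fr, fuse, head in zip(f_d, f_r, self.fuse, self.heads):
            fused = fuse(torch.cat([fd, fr], dim=1))  # per-level fusion
            scores.append(head(fused))                # per-branch quality score
        # Average the per-branch scores, as described in the abstract.
        return torch.stack(scores, dim=0).mean(dim=0)


if __name__ == "__main__":
    # nn.Identity() stands in for the pretrained restorative GAN generator.
    model = MFFQualitySketch(restorer=nn.Identity())
    score = model(torch.randn(2, 3, 224, 224))
    print(score.shape)  # torch.Size([2, 1])
```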

Keywords: Blind Image Quality Assessment; Deep Learning; Distortion; Feature Extraction; Generative Adversarial Networks; Image Quality; Multilevel Feature Fusion; Predictive Models; Semantics
DOI: 10.1109/TBC.2023.3262163
Indexed By: SCIE
Language: English
WOS Research Area: Engineering; Telecommunications
WOS Subject: Engineering, Electrical & Electronic; Telecommunications
WOS ID: WOS:000967437200001
Publisher: Institute of Electrical and Electronics Engineers Inc.
Scopus ID: 2-s2.0-85153369477
Document Type: Journal article
Collection: The State Key Laboratory of Internet of Things for Smart City (University of Macau)
Corresponding Author: Zhou, Mingliang
Affiliation:
1. School of Computer Science, Chongqing University, Chongqing, China
2. Beidou Application Research and Development Department, North Information Control Research Academy Group Company Ltd, Nanjing, China
3. State Key Laboratory of Internet of Things for Smart City and the Department of Electrical and Computer Engineering, University of Macau, Macau, China
4. Computer Science Department, The City University of Hong Kong, Hong Kong, China
5. State Key Laboratory of Mechanical Transmissions, Chongqing University, Chongqing, China
Recommended Citation
GB/T 7714: Lan, Xuting, Zhou, Mingliang, Xu, Xueyong, et al. Multilevel Feature Fusion for End-to-End Blind Image Quality Assessment[J]. IEEE Transactions on Broadcasting, 2023, 69(3): 801-811.
APA: Lan, X., Zhou, M., Xu, X., Wei, X., Liao, X., Pu, H., Luo, J., Xiang, T., Fang, B., & Shang, Z. (2023). Multilevel Feature Fusion for End-to-End Blind Image Quality Assessment. IEEE Transactions on Broadcasting, 69(3), 801-811.
MLA: Lan, Xuting, et al. "Multilevel Feature Fusion for End-to-End Blind Image Quality Assessment." IEEE Transactions on Broadcasting 69.3 (2023): 801-811.
Files in This Item:
There are no files associated with this item.
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.