Residential College: false
Status: Published
Learning self-target knowledge for few-shot segmentation
Chen, Yadang (1,2); Chen, Sihan (1,2); Yang, Zhi Xin (3); Wu, Enhua (4)
2024-05-01
Source Publication: Pattern Recognition
ISSN: 0031-3203
Volume: 149; Pages: 110266
Abstract

Few-shot semantic segmentation uses a few annotated samples of a specific class in the support set to segment targets of the same class in the query set. Most existing approaches perform poorly under significant intra-class variance. This paper alleviates the problem by concentrating on mining the query image itself and using the support set as supplementary information. First, we propose a Query Prototype Generation Module that generates a query foreground prototype from the query features: prototype-level and pixel-level similarity matching produce two complementary initial prototypes, which are then integrated into a discriminative query foreground prototype. Second, we propose a Support Auxiliary Refinement Module that guides the final precise prediction of the query image by mining the target-category information of the support set step by step: a query-support mixture prototype is generated from a support prototype representation obtained with an attention mechanism, and a support supplement prototype is then generated to complement the missing information by encoding the foreground regions that the query-support mixture prototype fails to segment. Extensive experiments on PASCAL-5^i and COCO-20^i demonstrate that our model outperforms prior few-shot segmentation methods.
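
Below is a minimal PyTorch sketch of the Query Prototype Generation idea summarized above: masked average pooling turns the support features into a class prototype, prototype-level and pixel-level cosine matching produce two complementary foreground cues on the query, and the two resulting query prototypes are fused. The function names, tensor shapes, temperature tau, and the averaging fusion are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def masked_average_pool(feat, mask):
    # Average C-dim features over a (soft) mask. feat: (B, C, H, W);
    # mask: (B, 1, h, w) in [0, 1]. Returns a (B, C) prototype vector.
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="bilinear", align_corners=False)
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-5)

def query_foreground_prototype(query_feat, support_feat, support_mask, tau=20.0):
    # Hypothetical Query Prototype Generation step: mine a foreground
    # prototype from the query features themselves, guided by the support.
    b, c, h, w = query_feat.shape
    support_proto = masked_average_pool(support_feat, support_mask)  # (B, C)

    # Prototype-level matching: cosine similarity between the support
    # prototype and every query pixel gives a coarse foreground map.
    sim_proto = F.cosine_similarity(query_feat, support_proto[:, :, None, None], dim=1)
    prob_proto = torch.sigmoid(tau * sim_proto).unsqueeze(1)  # (B, 1, H, W)

    # Pixel-level matching: each query pixel keeps its best cosine match
    # among the support *foreground* pixels only.
    q = F.normalize(query_feat.flatten(2), dim=1)    # (B, C, HWq)
    s = F.normalize(support_feat.flatten(2), dim=1)  # (B, C, HWs)
    corr = torch.bmm(q.transpose(1, 2), s)           # (B, HWq, HWs)
    fg = F.interpolate(support_mask, size=support_feat.shape[-2:]).flatten(2) > 0.5
    corr = corr.masked_fill(~fg, -1.0)               # drop background support pixels
    prob_pixel = torch.sigmoid(tau * corr.max(dim=2).values).view(b, 1, h, w)

    # Two complementary initial prototypes from the query features, fused by
    # simple averaging (the paper integrates them into one discriminative
    # query foreground prototype; the fusion here is a stand-in).
    proto_a = masked_average_pool(query_feat, prob_proto)
    proto_b = masked_average_pool(query_feat, prob_pixel)
    return 0.5 * (proto_a + proto_b)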

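A companion sketch of the Support Auxiliary Refinement idea: query pixels attend to support foreground pixels to obtain a support prototype representation, which is mixed with the query prototype; the foreground the mixture prototype still misses (relative to the first-pass prediction) is re-encoded into a supplement prototype. The 0.5 fusion weights, the threshold, and the use of the first-pass probability map as the reference foreground are assumptions for illustration.

import torch
import torch.nn.functional as F

def refine_with_support(query_feat, support_feat, support_mask,
                        query_proto, first_pass_prob, tau=20.0):
    # Hypothetical Support Auxiliary Refinement step. Assumes the support
    # mask contains at least one foreground pixel; first_pass_prob is the
    # (B, 1, H, W) initial query prediction used as reference foreground.
    b, c, h, w = query_feat.shape

    # Cross-attention: query pixels attend to support foreground pixels,
    # yielding a query-conditioned support prototype representation.
    q = query_feat.flatten(2).transpose(1, 2)    # (B, HWq, C)
    k = support_feat.flatten(2).transpose(1, 2)  # (B, HWs, C)
    fg = F.interpolate(support_mask, size=support_feat.shape[-2:]).flatten(2) > 0.5
    logits = q @ k.transpose(1, 2) / c ** 0.5    # (B, HWq, HWs)
    attn = torch.softmax(logits.masked_fill(~fg, float("-inf")), dim=-1)
    support_repr = (attn @ k).mean(dim=1)        # (B, C)

    # Query-support mixture prototype (plain averaging stands in for the
    # paper's learned integration).
    mixture_proto = 0.5 * (query_proto + support_repr)

    # Foreground regions the mixture prototype fails to segment out...
    sim = F.cosine_similarity(query_feat, mixture_proto[:, :, None, None], dim=1)
    covered = (torch.sigmoid(tau * sim) > 0.5).float().unsqueeze(1)
    missed = (first_pass_prob > 0.5).float() * (1.0 - covered)  # (B, 1, H, W)

    # ...are encoded into a support supplement prototype that fills in the
    # missing information before the final prediction.
    supplement = (query_feat * missed).sum(dim=(2, 3)) / (missed.sum(dim=(2, 3)) + 1e-5)
    return mixture_proto + supplement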
Keywords: Attention Mechanism; Few-shot Segmentation; Step-by-step Mining; Two-level Similarity Matching
DOI: 10.1016/j.patcog.2024.110266
Indexed By: SCIE
Language: English
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS ID: WOS:001164118700001
Publisher: ELSEVIER SCI LTD, THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, OXON, ENGLAND
Scopus ID: 2-s2.0-85182517404
Document Type: Journal article
Collection: Faculty of Science and Technology; The State Key Laboratory of Internet of Things for Smart City (University of Macau); Department of Electromechanical Engineering
Corresponding Author: Yang, Zhi Xin
Affiliations:
1. Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing 210044, China
2. School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
3. State Key Laboratory of Internet of Things for Smart City, Department of Electromechanical Engineering, University of Macau, 999078, China
4. State Key Laboratory of Computer Science, Institute of Software, University of Chinese Academy of Sciences, Beijing 100190, China
Corresponding Author Affiliation: University of Macau
Recommended Citation:
GB/T 7714: Chen, Yadang, Chen, Sihan, Yang, Zhi Xin, et al. Learning self-target knowledge for few-shot segmentation[J]. Pattern Recognition, 2024, 149: 110266.
APA: Chen, Yadang, Chen, Sihan, Yang, Zhi Xin, & Wu, Enhua (2024). Learning self-target knowledge for few-shot segmentation. Pattern Recognition, 149, 110266.
MLA: Chen, Yadang, et al. "Learning self-target knowledge for few-shot segmentation." Pattern Recognition 149 (2024): 110266.
Files in This Item:
There are no files associated with this item.
Related Services
Google Scholar
Similar articles in Google Scholar
[Chen, Yadang]'s Articles
[Chen, Sihan]'s Articles
[Yang, Zhi Xin]'s Articles
Baidu academic
Similar articles in Baidu academic
[Chen, Yadang]'s Articles
[Chen, Sihan]'s Articles
[Yang, Zhi Xin]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Chen, Yadang]'s Articles
[Chen, Sihan]'s Articles
[Yang, Zhi Xin]'s Articles
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.