Residential College | false |
Status | Published
Title | PSANet: prototype-guided salient attention for few-shot segmentation
Authors | Li, Hao1; Huang, Guoheng1; Yuan, Xiaochen2; Zheng, Zewen1; Chen, Xuhang3; Zhong, Guo4; Pun, Chi Man5
Year | 2024
Source Publication | Visual Computer |
ISSN | 0178-2789 |
Abstract | Few-shot semantic segmentation aims to learn a generalized model for unseen-class segmentation from just a few densely annotated samples. Most current metric-based prototype learning models build prototypes directly from support samples through Masked Average Pooling and use them to guide query segmentation. However, these methods frequently overlook the semantic ambiguity of prototypes, their limited robustness to extreme variations in object appearance, and the semantic similarities between different classes. In this paper, we introduce a novel network architecture named Prototype-guided Salient Attention Network (PSANet). Specifically, we employ prototype-guided attention to learn salient regions, allocating different attention weights to features at different spatial locations of the target to enhance the significance of salient regions within the prototype. To mitigate the influence of distractor categories on the prototype, our proposed contrastive loss learns a more discriminative prototype by promoting inter-class feature separation and intra-class feature compactness. Moreover, we introduce a refinement operation for the multi-scale module to better capture complete contextual information across feature scales. Extensive experiments on the PASCAL-5i and COCO-20i datasets demonstrate the effectiveness of our approach, despite its inherent simplicity. Our code is available at https://github.com/woaixuexixuexi/PSANet.
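The Masked Average Pooling step and prototype-based query matching mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the authors' implementation; the function names, shapes, and the cosine-similarity scoring step are assumptions.

```python
import numpy as np

def masked_average_pooling(features, mask, eps=1e-8):
    """Build a class prototype by averaging backbone features over the
    support mask's foreground region.

    features: (C, H, W) feature map extracted from a support image
    mask:     (H, W) binary foreground mask for the support class
    returns:  (C,) prototype vector
    """
    m = mask.astype(features.dtype)
    # Sum features over masked spatial locations, normalize by mask area.
    num = (features * m[None, :, :]).sum(axis=(1, 2))
    return num / (m.sum() + eps)  # eps guards against an empty mask

def cosine_score_map(query_features, prototype, eps=1e-8):
    """Score each query location by cosine similarity to the prototype.

    query_features: (C, H, W) feature map of the query image
    returns:        (H, W) similarity map, usable as a coarse segmentation
    """
    qn = np.linalg.norm(query_features, axis=0) + eps  # (H, W)
    pn = np.linalg.norm(prototype) + eps               # scalar
    dots = np.tensordot(prototype, query_features, axes=(0, 0))  # (H, W)
    return dots / (qn * pn)
```

Thresholding the similarity map then yields a foreground prediction; PSANet's contribution is to reweight this prototype with salient attention and a contrastive loss rather than using the plain average as above.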
Keyword | Attention Mechanism; Contrastive Learning; Few-shot Segmentation; Semantic Segmentation
DOI | 10.1007/s00371-024-03582-1 |
URL | View the original |
Indexed By | SCIE |
Language | English
WOS Research Area | Computer Science |
WOS Subject | Computer Science, Software Engineering |
WOS ID | WOS:001282038800001 |
Publisher | SPRINGER, ONE NEW YORK PLAZA, SUITE 4600, NEW YORK, NY 10004, UNITED STATES
Scopus ID | 2-s2.0-85200143637 |
Document Type | Journal article |
Collection | DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE |
Corresponding Author | Huang, Guoheng; Zhong, Guo |
Affiliation | 1. Guangdong University of Technology, Guangzhou, China; 2. Macao Polytechnic University, Macao; 3. Huizhou University, Huizhou, China; 4. Guangdong University of Foreign Studies, Guangzhou, China; 5. University of Macau, Macao
Recommended Citation GB/T 7714 | Li, Hao, Huang, Guoheng, Yuan, Xiaochen, et al. PSANet: prototype-guided salient attention for few-shot segmentation[J]. Visual Computer, 2024.
APA | Li, H., Huang, G., Yuan, X., Zheng, Z., Chen, X., Zhong, G., & Pun, C. M. (2024). PSANet: prototype-guided salient attention for few-shot segmentation. Visual Computer.
MLA | Li, Hao, et al. "PSANet: Prototype-guided Salient Attention for Few-shot Segmentation." Visual Computer (2024).
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.