Status: Published
The Neglected Tails of Vision-Language Models
Parashar, Shubham (2); Lin, Zhiqiu (3); Liu, Tian (2); Dong, Xiangjue (2); Li, Yanan (4); Ramanan, Deva (3); Caverlee, James (2); KONG, SHU (1)
2024-06
Conference Name: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Conference Date: June 21, 2024
Conference Place: Seattle
Country: USA
Abstract

Vision-language models (VLMs) excel in zero-shot recognition, but their performance varies greatly across different visual concepts. For example, although CLIP achieves impressive accuracy on ImageNet (60-80%), its performance drops below 10% for more than ten concepts like night snake, presumably due to their limited presence in the pretraining data. However, measuring the frequency of concepts in VLMs’ large-scale datasets is challenging. We address this by using large language models (LLMs) to count the number of pretraining texts that contain synonyms of these concepts. Our analysis confirms that popular datasets, such as LAION, exhibit a long-tailed concept distribution, yielding biased performance in VLMs. We also find that downstream applications of VLMs, including visual chatbots (e.g., GPT-4V) and text-to-image models (e.g., Stable Diffusion), often fail to recognize or generate images of rare concepts identified by our method. To mitigate the imbalanced performance of zero-shot VLMs, we propose REtrieval-Augmented Learning (REAL). First, instead of prompting VLMs using the original class names, REAL uses their most frequent synonyms found in pretraining texts. This simple change already outperforms costly human-engineered and LLM-enriched prompts over nine benchmark datasets. Second, REAL trains a linear classifier on a small yet balanced set of pretraining data retrieved using concept synonyms. REAL surpasses the previous zero-shot SOTA, using 400× less storage and 10,000× less training time!

Document Type: Conference paper
Collection: Department of Computer and Information Science
Corresponding Author: KONG, SHU
Affiliation:
1. University of Macau
2. Texas A&M University
3. Carnegie Mellon University
4. Zhejiang Lab
Corresponding Author Affiliation: University of Macau
Recommended Citation
GB/T 7714:
Parashar, Shubham, Lin, Zhiqiu, Liu, Tian, et al. The Neglected Tails of Vision-Language Models[C], 2024.
APA:
Parashar, S., Lin, Z., Liu, T., Dong, X., Li, Y., Ramanan, D., Caverlee, J., & Kong, S. (2024). The Neglected Tails of Vision-Language Models.
Files in This Item:
File Name/Size: TailVLM_CVPR_24.pdf (44,723 KB)
Format: Adobe PDF
Version: Conference paper
Access: Open Access
License: CC BY-NC-SA
Related Services
Google Scholar
Similar articles in Google Scholar
[Parashar, Shubham]'s Articles
[Lin, Zhiqiu]'s Articles
[Liu, Tian]'s Articles
Baidu academic
Similar articles in Baidu academic
[Parashar, Shubham]'s Articles
[Lin, Zhiqiu]'s Articles
[Liu, Tian]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Parashar, Shubham]'s Articles
[Lin, Zhiqiu]'s Articles
[Liu, Tian]'s Articles
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.