Residential College | false
Status | Published
Title | The Neglected Tails of Vision-Language Models
Authors | Parashar, Shubham2; Lin, Zhiqiu3; Liu, Tian2; Dong, Xiangjue2; Li, Yanan4; Ramanan, Deva3; Caverlee, James2; KONG, SHU1
Date Issued | 2024-06
Conference Name | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Conference Date | June 21, 2024 |
Conference Place | Seattle |
Country | USA |
Abstract | Vision-language models (VLMs) excel in zero-shot recognition but their performance varies greatly across different visual concepts. For example, although CLIP achieves impressive accuracy on ImageNet (60-80%), its performance drops below 10% for more than ten concepts like night snake, presumably due to their limited presence in the pretraining data. However, measuring the frequency of concepts in VLMs’ large-scale datasets is challenging. We address this by using large language models (LLMs) to count the number of pretraining texts that contain synonyms of these concepts. Our analysis confirms that popular datasets, such as LAION, exhibit a long-tailed concept distribution, yielding biased performance in VLMs. We also find that downstream applications of VLMs, including visual chatbots (e.g., GPT-4V) and text-to-image models (e.g., Stable Diffusion), often fail to recognize or generate images of rare concepts identified by our method. To mitigate the imbalanced performance of zero-shot VLMs, we propose REtrieval-Augmented Learning (REAL). First, instead of prompting VLMs using the original class names, REAL uses their most frequent synonyms found in pretraining texts. This simple change already outperforms costly human-engineered and LLM-enriched prompts over nine benchmark datasets. Second, REAL trains a linear classifier on a small yet balanced set of pretraining data retrieved using concept synonyms. REAL surpasses the previous zero-shot SOTA, using 400× less storage and 10,000× less training time! |
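The first REAL component described in the abstract (often called REAL-Prompt) swaps each original class name for its most frequent synonym in the pretraining captions before building zero-shot prompts. Below is a minimal sketch of that idea using the standard openai/CLIP package; the `frequent_synonyms` mapping, the class names, and `example.jpg` are placeholder assumptions for illustration, not the paper's released artifacts or data.

```python
# Minimal sketch of synonym-based zero-shot prompting (REAL-Prompt idea):
# prompt CLIP with each class's most frequent pretraining synonym instead of
# the original class name, then classify an image by cosine similarity.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical mapping: original (possibly rare) class name -> most frequent
# synonym found in the pretraining captions (values assumed for illustration).
frequent_synonyms = {
    "night snake": "nocturnal snake",
    "tiger cat": "tabby cat",
}
class_names = list(frequent_synonyms.keys())

# Build prompts from the frequent synonyms rather than the original names.
prompts = [f"a photo of a {frequent_synonyms[c]}" for c in class_names]
text_tokens = clip.tokenize(prompts).to(device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    text_features = model.encode_text(text_tokens)
    image_features = model.encode_image(image)
    # Cosine similarity between the image and each synonym-based prompt.
    text_features /= text_features.norm(dim=-1, keepdim=True)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    logits = image_features @ text_features.T

predicted = class_names[logits.argmax(dim=-1).item()]
print(predicted)
```

The abstract's second component, a linear classifier trained on a small, balanced set of pretraining images retrieved with these synonyms, would sit on top of the same image features; it is not shown here.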
Document Type | Conference paper |
Collection | DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE |
Corresponding Author | KONG, SHU |
Affiliation | 1.University of Macau 2.Texas A&M University 3.Carnegie Mellon University 4.Zhejiang Lab |
Corresponding Author Affiliation | University of Macau
Recommended Citation GB/T 7714 | Parashar, Shubham, Lin, Zhiqiu, Liu, Tian, et al. The Neglected Tails of Vision-Language Models[C], 2024.
APA | Parashar, S., Lin, Z., Liu, T., Dong, X., Li, Y., Ramanan, D., Caverlee, J., & Kong, S. (2024). The Neglected Tails of Vision-Language Models.
Files in This Item:
File Name/Size | Publications | Version | Access | License
TailVLM_CVPR_24.pdf (44723KB) | Conference paper | | Open Access | CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.