Status: Published
Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering
Cao, Jianjian1; Qin, Xiameng2; Zhao, Sanyuan1; Shen, Jianbing3
2022-02-07
Source Publication: IEEE Transactions on Neural Networks and Learning Systems
ISSN: 2162-237X
Abstract

Answering semantically complicated questions about an image is challenging in visual question answering (VQA). Although the image can be well represented by deep learning, the question is often simply embedded and its meaning is not well captured. Moreover, because visual and textual features belong to different modalities, there is a gap between them, which makes it difficult to align and exploit cross-modality information. In this article, we focus on these two problems and propose a graph matching attention (GMA) network. First, it builds a graph not only for the image but also for the question, using both syntactic and embedding information. Next, we explore the intramodality relationships with a dual-stage graph encoder and then present a bilateral cross-modality GMA to infer the relationships between the image and the question. The updated cross-modality features are then fed into the answer prediction module to produce the final answer. Experiments demonstrate that our network achieves state-of-the-art performance on the GQA and VQA 2.0 datasets. Ablation studies verify the effectiveness of each module in our GMA network.
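
As a rough illustration of the bilateral cross-modality graph matching attention described in the abstract, the following PyTorch sketch matches visual graph nodes against question graph nodes in both directions and fuses the attended context back into each modality. The class name, the dot-product similarity, and the concatenation-based fusion are illustrative assumptions made for this sketch, not the authors' released implementation.

# Minimal sketch of bilateral cross-modality graph matching attention (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilateralGraphMatchingAttention(nn.Module):
    """Matches visual graph nodes against question graph nodes in both
    directions and fuses the attended cross-modal context into each node."""

    def __init__(self, dim):
        super().__init__()
        self.proj_v = nn.Linear(dim, dim)      # projects visual node features
        self.proj_q = nn.Linear(dim, dim)      # projects question node features
        self.fuse_v = nn.Linear(2 * dim, dim)  # fuses visual node + question context
        self.fuse_q = nn.Linear(2 * dim, dim)  # fuses question node + visual context

    def forward(self, v, q):
        # v: (B, Nv, D) visual graph node features (e.g., region features)
        # q: (B, Nq, D) question graph node features (e.g., word embeddings)
        sim = torch.bmm(self.proj_v(v), self.proj_q(q).transpose(1, 2))  # (B, Nv, Nq)

        # Visual -> question direction: each region attends over question nodes.
        attn_v2q = F.softmax(sim, dim=2)                  # (B, Nv, Nq)
        q_ctx = torch.bmm(attn_v2q, q)                    # (B, Nv, D)
        v_out = self.fuse_v(torch.cat([v, q_ctx], dim=-1))

        # Question -> visual direction: each word attends over visual nodes.
        attn_q2v = F.softmax(sim.transpose(1, 2), dim=2)  # (B, Nq, Nv)
        v_ctx = torch.bmm(attn_q2v, v)                    # (B, Nq, D)
        q_out = self.fuse_q(torch.cat([q, v_ctx], dim=-1))
        return v_out, q_out

# Toy usage: 36 region nodes, 14 word nodes, 512-d features.
gma = BilateralGraphMatchingAttention(dim=512)
v = torch.randn(2, 36, 512)
q = torch.randn(2, 14, 512)
v_upd, q_upd = gma(v, q)  # updated cross-modality features for answer prediction

In the paper's pipeline, such updated features would follow the intramodality dual-stage graph encoding and precede the answer prediction module; the exact similarity and fusion functions used there may differ from this sketch.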

Keywords: Graph Matching Attention (GMA); Relational Reasoning; Visual Question Answering (VQA)
DOI: 10.1109/TNNLS.2021.3135655
Indexed By: SCIE
Language: English
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS ID: WOS:000754286600001
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 445 HOES LANE, PISCATAWAY, NJ 08855-4141
Scopus ID: 2-s2.0-85124748370
Document Type: Journal article
Collection: Department of Computer and Information Science, Faculty of Science and Technology; The State Key Laboratory of Internet of Things for Smart City (University of Macau)
Corresponding Author: Zhao, Sanyuan
Affiliation:
1. Department of Computer Science, Beijing Institute of Technology, Beijing 100081, China
2. Baidu Inc., Beijing 100193, China
3. State Key Laboratory of Internet of Things for Smart City, Department of Computer and Information Science, University of Macau, Macau, China
Recommended Citation
GB/T 7714
Cao, Jianjian, Qin, Xiameng, Zhao, Sanyuan, et al. Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022.
APA: Cao, Jianjian, Qin, Xiameng, Zhao, Sanyuan, & Shen, Jianbing (2022). Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering. IEEE Transactions on Neural Networks and Learning Systems.
MLA: Cao, Jianjian, et al. "Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering." IEEE Transactions on Neural Networks and Learning Systems (2022).
Files in This Item:
There are no files associated with this item.
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.