UM > Faculty of Science and Technology
Residential College: false
Status: Published
Event Voxel Set Transformer for Spatiotemporal Representation Learning on Event Streams
Xie, Bochen1; Deng, Yongjian2; Shao, Zhanpeng3; Xu, Qingsong4; Li, Youfu1
2024-08
Source Publication: IEEE Transactions on Circuits and Systems for Video Technology
ISSN: 1051-8215
Abstract

Event cameras are neuromorphic vision sensors that record a scene as sparse and asynchronous event streams. Most event-based methods project events into dense frames and process them with conventional vision models, resulting in high computational complexity. A recent trend is to develop point-based networks that achieve efficient event processing by learning sparse representations. However, existing works may lack robust local information aggregators and effective feature interaction operations, thus limiting their modeling capabilities. To this end, we propose an attention-aware model named Event Voxel Set Transformer (EVSTr) for efficient spatiotemporal representation learning on event streams. It first converts the event stream into voxel sets and then hierarchically aggregates voxel features to obtain robust representations. The core of EVSTr is an event voxel transformer encoder consisting of two well-designed components: the Multi-Scale Neighbor Embedding Layer (MNEL) for local information aggregation and the Voxel Self-Attention Layer (VSAL) for global feature interaction. To enable the network to incorporate long-range temporal structure, we introduce a segment modeling strategy (S2TM) that learns motion patterns from a sequence of segmented voxel sets. The proposed model is evaluated on two recognition tasks: object classification and action recognition. To provide a convincing model evaluation, we present a new event-based action recognition dataset (NeuroHAR) recorded in challenging scenarios. Comprehensive experiments show that EVSTr achieves state-of-the-art performance while maintaining low model complexity.
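The first stage described in the abstract, converting a raw event stream into a sparse set of spatiotemporal voxels, can be sketched as follows. This is an illustrative assumption of how such a voxelization might look: the event format `(x, y, t, p)`, the grid resolution, and the function name `events_to_voxel_set` are hypothetical and not taken from the paper's actual implementation.

```python
import numpy as np

def events_to_voxel_set(events, sensor_size=(128, 128), grid=(8, 8, 4)):
    """Group raw events (x, y, t, p) into a sparse set of spatiotemporal voxels.

    Returns a dict mapping a voxel index (ix, iy, it) to the array of events
    that fall inside that voxel. Empty voxels are simply absent, which is what
    makes the representation sparse. Illustrative sketch only; the paper's
    voxelization details may differ.
    """
    events = np.asarray(events, dtype=np.float64)
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    # Normalize timestamps to [0, 1] over the stream's duration.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    # Quantize each coordinate onto the voxel grid.
    ix = np.clip((x / sensor_size[0] * grid[0]).astype(int), 0, grid[0] - 1)
    iy = np.clip((y / sensor_size[1] * grid[1]).astype(int), 0, grid[1] - 1)
    it = np.clip((t_norm * grid[2]).astype(int), 0, grid[2] - 1)
    voxels = {}
    for key, ev in zip(zip(ix, iy, it), events):
        voxels.setdefault(key, []).append(ev)
    return {k: np.stack(v) for k, v in voxels.items()}
```

Only occupied voxels are stored, so downstream layers (such as the local aggregation and self-attention described above) operate on a set whose size scales with event density rather than with the full sensor volume.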

Keywords: Event Camera; Neuromorphic Vision; Attention Mechanism; Object Classification; Action Recognition
DOI: 10.1109/TCSVT.2024.3448615
URL: View the original
Indexed By: SCIE
Language: English
Publisher: Institute of Electrical and Electronics Engineers Inc.
Scopus ID: 2-s2.0-85201786533
Document Type: Journal article
Collection: Faculty of Science and Technology; Department of Electromechanical Engineering
Corresponding Author: Li, Youfu
Affiliations:
1. Department of Mechanical Engineering, City University of Hong Kong, Hong Kong SAR, China
2. College of Computer Science, Beijing University of Technology, Beijing, China
3. College of Information Science and Engineering, Hunan Normal University, Changsha, China
4. Department of Electromechanical Engineering, University of Macau, Macao SAR, China
Recommended Citation
GB/T 7714: Xie, Bochen, Deng, Yongjian, Shao, Zhanpeng, et al. Event Voxel Set Transformer for Spatiotemporal Representation Learning on Event Streams[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024.
APA: Xie, Bochen, Deng, Yongjian, Shao, Zhanpeng, Xu, Qingsong, & Li, Youfu. (2024). Event Voxel Set Transformer for Spatiotemporal Representation Learning on Event Streams. IEEE Transactions on Circuits and Systems for Video Technology.
MLA: Xie, Bochen, et al. "Event Voxel Set Transformer for Spatiotemporal Representation Learning on Event Streams." IEEE Transactions on Circuits and Systems for Video Technology (2024).
Files in This Item:
There are no files associated with this item.
Related Services
Google Scholar
Similar articles in Google Scholar
[Xie, Bochen]'s Articles
[Deng, Yongjian]'s Articles
[Shao, Zhanpeng]'s Articles
Baidu academic
Similar articles in Baidu academic
[Xie, Bochen]'s Articles
[Deng, Yongjian]'s Articles
[Shao, Zhanpeng]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Xie, Bochen]'s Articles
[Deng, Yongjian]'s Articles
[Shao, Zhanpeng]'s Articles
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.