Status: Published
BRNet: Exploring Comprehensive Features for Monocular Depth Estimation
Han, Wencheng1; Yin, Junbo2; Jin, Xiaogang3; Dai, Xiangdong4; Shen, Jianbing1
2022-10-23
Conference Name: 17th European Conference on Computer Vision (ECCV)
Source Publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13698
Pages: 586-602
Conference Date: October 23-27, 2022
Conference Place: Tel Aviv, Israel
Abstract

Self-supervised monocular depth estimation has recently achieved encouraging performance. A common assumption is that high-resolution inputs yield better results. However, we find that the performance gap between high and low resolutions in this task stems mainly from the inadequate feature representation of the widely used U-Net backbone, rather than from the difference in information content. In this paper, we address the comprehensive feature representation problem for self-supervised depth estimation by attending to both local and global features. Specifically, we first provide an in-depth analysis of the influence of different input resolutions and find that receptive fields play a more crucial role than the information disparity between inputs. Based on this finding, we propose a bilateral depth encoder that fully exploits both detailed and global information; its broader receptive fields bring substantial improvements. Furthermore, we propose a residual decoder that facilitates depth regression and reduces computation by focusing on the information difference between layers. We name our model the Bilateral Residual Depth Network (BRNet). Experimental results show that BRNet achieves new state-of-the-art performance on the KITTI benchmark under three types of self-supervision. Code is available at: https://github.com/wencheng256/BRNet.
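
For readers who want a concrete picture of the two components named in the abstract, the following is a minimal, illustrative PyTorch sketch: a two-branch (bilateral) encoder that trades spatial detail against receptive field, and a residual decoder that refines a coarse prediction instead of re-predicting from scratch. It is not the authors' implementation (the actual code is in the linked GitHub repository); all module names, channel widths, strides, and layer counts here are assumptions made purely for illustration.

# Hedged sketch, NOT the authors' BRNet code (see the GitHub repository above).
# It only illustrates the two ideas the abstract names: a bilateral encoder
# whose branches trade detail against receptive field, and a residual decoder
# that regresses only the correction to a coarse estimate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilateralEncoder(nn.Module):
    """Detail branch keeps resolution high with small strides; global branch
    downsamples aggressively so its receptive field covers most of the image."""
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.detail = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=1, padding=1), nn.ReLU(inplace=True),
        )
        self.global_branch = nn.Sequential(
            nn.Conv2d(in_ch, ch, 7, stride=4, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=2, dilation=2), nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(2 * ch, ch, 1)  # 1x1 conv merges the two views

    def forward(self, x):
        d = self.detail(x)                       # H/2 x W/2, fine detail
        g = self.global_branch(x)                # H/8 x W/8, wide context
        g_up = F.interpolate(g, size=d.shape[-2:], mode="bilinear",
                             align_corners=False)
        return self.fuse(torch.cat([d, g_up], dim=1))

class ResidualDecoder(nn.Module):
    """Predicts a coarse disparity, then regresses only the residual at the
    finer scale, which is cheaper than re-predicting the full map."""
    def __init__(self, ch=64):
        super().__init__()
        self.coarse = nn.Conv2d(ch, 1, 3, padding=1)
        self.residual = nn.Sequential(
            nn.Conv2d(ch + 1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, feat):
        coarse = self.coarse(F.avg_pool2d(feat, 2))   # low-res estimate
        up = F.interpolate(coarse, size=feat.shape[-2:], mode="bilinear",
                           align_corners=False)
        return torch.sigmoid(up + self.residual(torch.cat([feat, up], dim=1)))

if __name__ == "__main__":
    net = nn.Sequential(BilateralEncoder(), ResidualDecoder())
    disp = net(torch.randn(1, 3, 192, 640))           # KITTI-like input size
    print(disp.shape)                                 # torch.Size([1, 1, 96, 320])

On a KITTI-sized input the sketch produces a half-resolution disparity map; the residual head only models the correction to the upsampled coarse estimate, which is the computation-saving pattern the abstract describes.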

DOI: 10.1007/978-3-031-19839-7_34
Indexed By: CPCI-S
Language: English
WOS Research Area: Computer Science; Imaging Science & Photographic Technology
WOS Subject: Computer Science, Artificial Intelligence; Imaging Science & Photographic Technology
WOS ID: WOS:000903760400034
Scopus ID: 2-s2.0-85142678479
Document Type: Conference paper
Collection: Faculty of Science and Technology
The State Key Laboratory of Internet of Things for Smart City (University of Macau)
Department of Computer and Information Science
Corresponding Author: Shen, Jianbing
Affiliation:
1. SKL-IOTSC, Computer and Information Science, University of Macau, Zhuhai, China
2. School of Computer Science, Beijing Institute of Technology, Beijing, China
3. State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310058, China
4. Guangdong OPPO Mobile Telecommunications Corp., Ltd., Dongguan, China
First Author Affiliation: University of Macau
Corresponding Author Affiliation: University of Macau
Recommended Citation
GB/T 7714:
Han, Wencheng, Yin, Junbo, Jin, Xiaogang, et al. BRNet: Exploring Comprehensive Features for Monocular Depth Estimation[C], 2022: 586-602.
APA:
Han, Wencheng, Yin, Junbo, Jin, Xiaogang, Dai, Xiangdong, & Shen, Jianbing (2022). BRNet: Exploring Comprehensive Features for Monocular Depth Estimation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 13698, 586-602.
Files in This Item:
There are no files associated with this item.
Related Services
Google Scholar
Similar articles in Google Scholar
[Han, Wencheng]'s Articles
[Yin, Junbo]'s Articles
[Jin, Xiaogang]'s Articles
Baidu academic
Similar articles in Baidu academic
[Han, Wencheng]'s Articles
[Yin, Junbo]'s Articles
[Jin, Xiaogang]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Han, Wencheng]'s Articles
[Yin, Junbo]'s Articles
[Jin, Xiaogang]'s Articles
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.