Status | Published |
Unsupervised Cross-Spectrum Depth Estimation by Visible-Light and Thermal Cameras | |
Guo, Yubin (1); Qi, Xinlei (1); Xie, Jin (1); Xu, Cheng Zhong (2); Kong, Hui (3) | |
2023-06-08 | |
Source Publication | IEEE Transactions on Intelligent Transportation Systems |
ISSN | 1524-9050 |
Volume | 24, Issue: 10, Pages: 10937-10947 |
Abstract | Cross-spectrum depth estimation aims to provide a reliable depth map under variant-illumination conditions with a pair of dual-spectrum images. It is valuable for autonomous driving applications when vehicles are equipped with two cameras of different modalities. However, images captured by different-modality cameras can be photometrically quite different, which makes cross-spectrum depth estimation a very challenging problem. Moreover, the shortage of large-scale open-source datasets also retards further research in this field. In this paper, we propose an unsupervised visible-light (VIS)-image-guided cross-spectrum (i.e., thermal and visible-light, TIR-VIS in short) depth-estimation framework. The input of the framework consists of a cross-spectrum stereo pair (one VIS image and one thermal image). First, we train a depth-estimation base network using VIS-image stereo pairs. To adapt the trained depth-estimation network to the cross-spectrum images, we propose a multi-scale feature-transfer network to transfer features from the TIR domain to the VIS domain at the feature level. Furthermore, we introduce a mechanism of cross-spectrum depth cycle-consistency to improve the depth-estimation result of dual-spectrum image pairs. Meanwhile, we publicly release a large cross-spectrum dataset with visible-light and thermal stereo images captured in different scenes. Experimental results show that our method achieves better depth estimation than the existing methods compared against. Our code and dataset are available at https://github.com/whitecrow1027/CrossSP_Depth. |
Keyword | Unsupervised Learning Transfer Learning Multispectral Imaging Computer Vision |
DOI | 10.1109/TITS.2023.3279559 |
URL | View the original |
Indexed By | SCIE |
Language | English |
WOS Research Area | Engineering ; Transportation |
WOS Subject | Engineering, Civil ; Engineering, Electrical & Electronic ; Transportation Science & Technology |
WOS ID | WOS:001012534900001 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 445 HOES LANE, PISCATAWAY, NJ 08855-4141 |
Scopus ID | 2-s2.0-85162618599 |
Fulltext Access | |
Citation statistics | |
Document Type | Journal article |
Collection | DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE Faculty of Science and Technology THE STATE KEY LABORATORY OF INTERNET OF THINGS FOR SMART CITY (UNIVERSITY OF MACAU) DEPARTMENT OF ELECTROMECHANICAL ENGINEERING |
Corresponding Author | Kong,Hui |
Affiliation | 1. School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China; 2. Department of Computer Science, State Key Laboratory of Internet of Things for Smart City (SKL-IOTSC), University of Macau, Macau, China; 3. Department of Electromechanical Engineering (EME), State Key Laboratory of Internet of Things for Smart City (SKL-IOTSC), University of Macau, Macau, China |
Corresponding Author Affiliation | University of Macau |
Recommended Citation GB/T 7714 | Guo, Yubin, Qi, Xinlei, Xie, Jin, et al. Unsupervised Cross-Spectrum Depth Estimation by Visible-Light and Thermal Cameras[J]. IEEE Transactions on Intelligent Transportation Systems, 2023, 24(10): 10937-10947. |
APA | Guo, Yubin, Qi, Xinlei, Xie, Jin, Xu, Cheng Zhong, & Kong, Hui (2023). Unsupervised Cross-Spectrum Depth Estimation by Visible-Light and Thermal Cameras. IEEE Transactions on Intelligent Transportation Systems, 24(10), 10937-10947. |
MLA | Guo, Yubin, et al. "Unsupervised Cross-Spectrum Depth Estimation by Visible-Light and Thermal Cameras." IEEE Transactions on Intelligent Transportation Systems 24.10 (2023): 10937-10947. |
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.