SAFENet: self-supervised monocular depth estimation with semantic-aware feature extraction

Abstract
Self-supervised monocular depth estimation has emerged as a promising method because it does not require ground-truth depth maps during training. As an alternative to the ground-truth depth map, the photometric loss provides self-supervision for depth prediction by matching the input image frames. However, the photometric loss causes various problems, resulting in less accurate depth values than supervised approaches. In this paper, we propose SAFENet, which is designed to leverage semantic information to overcome the limitations of the photometric loss. Our key idea is to exploit semantic-aware depth features that integrate semantic and geometric knowledge. Therefore, we introduce multi-task learning schemes to incorporate semantic awareness into the representation of depth features. Experiments on the KITTI dataset demonstrate that our methods compete with or even outperform state-of-the-art methods. Furthermore, extensive experiments on different datasets show better generalization ability and robustness to various conditions, such as low light or adverse weather.
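For context, the photometric loss referred to in the abstract is commonly implemented as a weighted combination of SSIM and L1 differences between the target frame and a view synthesized by warping a neighboring frame with the predicted depth and relative pose. The sketch below is a minimal illustration of that standard formulation, not the thesis's exact implementation; the function names and the weight alpha=0.85 are assumptions drawn from common practice in the self-supervised depth literature.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified per-pixel SSIM dissimilarity between two image batches (N, C, H, W)."""
    mu_x = F.avg_pool2d(x, 3, 1, padding=1)
    mu_y = F.avg_pool2d(y, 3, 1, padding=1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, padding=1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, padding=1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, padding=1) - mu_x * mu_y
    ssim_n = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    ssim_d = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    # Return the dissimilarity (1 - SSIM) / 2, clamped to [0, 1].
    return torch.clamp((1 - ssim_n / ssim_d) / 2, 0, 1)

def photometric_loss(target, reconstructed, alpha=0.85):
    """Weighted SSIM + L1 photometric error between the target frame and a view
    synthesized from a neighboring frame using predicted depth and camera pose.
    alpha=0.85 is an assumed weighting, not a value taken from the thesis."""
    l1 = torch.abs(target - reconstructed).mean(1, keepdim=True)
    dssim = ssim(target, reconstructed).mean(1, keepdim=True)
    return alpha * dssim + (1 - alpha) * l1
```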
Advisors
Kim, Changick (김창익)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description
Master's thesis - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2021.2, [iv, 40 p.]

Keywords
self-supervised learning; monocular depth estimation; semantic segmentation; multi-task learning
URI
http://hdl.handle.net/10203/295960
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=948974&flag=dissertation
Appears in Collection
EE-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
