Low-level radar-camera dataset construction and sensor fusion network for robust vehicle detection

In autonomous driving, perception technology for recognizing the surrounding environment has developed rapidly with the rise of deep learning. Because each sensor used for perception has clear strengths and weaknesses, fusion approaches that combine two or more sensors have been studied to compensate for individual weaknesses and to improve performance. Radar is cheaper than LiDAR and robust to changes in the external environment, but its lateral accuracy is poor, so research toward commercialization has combined it with a camera to compensate for this weakness. However, conventional high-level radar data has the disadvantage that features present in the low-level data are lost during signal processing. Research using low-level radar has been attempted to exploit the characteristics of low-level data, but it is difficult because no public training dataset includes low-level radar data. In this thesis, a deep learning algorithm is proposed that detects surrounding vehicles by fusing a range-azimuth low-level radar signal, which minimizes the radar signal processing chain, with a camera image. Training data containing both low-level radar signals and camera images are collected to construct a new dataset for network training. The proposed algorithm detects vehicles as follows. To merge radar and camera data that exist in different coordinate systems, boxes are generated in 3D space and projected onto each sensor's feature map to extract and merge the feature values at the corresponding positions in each coordinate system. From the merged features, the network outputs vehicle position, size, and probability; the boxes with higher probability are selected and projected again onto each feature map to produce the final vehicle position, size in meters, and probability. By collecting and using low-level radar data that has not yet been included in public datasets, this thesis exploits features lost in conventional radar signal processing and fuses them with image data to demonstrate robust vehicle detection performance.
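The sketch below illustrates the kind of projection-and-fusion step the abstract describes: feature values are sampled from a camera feature map and a range-azimuth radar feature map at the locations where 3D boxes project, then concatenated and passed through a small head that predicts box offsets and an objectness probability. This is a minimal sketch under assumed PyTorch tooling; the module name, feature shapes, and normalized projected coordinates (cam_xy, radar_xy) are illustrative assumptions, not the thesis's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BoxFusionHead(nn.Module):
    """Samples camera and radar (range-azimuth) features at projected 3D box
    centers, concatenates them, and predicts box offsets and a probability."""
    def __init__(self, cam_channels=64, radar_channels=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(cam_channels + radar_channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 4 + 1),  # (x, y, w, l) offsets + objectness score
        )

    @staticmethod
    def sample(feature_map, norm_xy):
        # feature_map: (1, C, H, W); norm_xy: (K, 2) projected box centers in [-1, 1]
        grid = norm_xy.view(1, -1, 1, 2)                   # (1, K, 1, 2)
        feats = F.grid_sample(feature_map, grid, align_corners=False)
        return feats.squeeze(-1).squeeze(0).t()            # (K, C)

    def forward(self, cam_feat, radar_feat, cam_xy, radar_xy):
        fused = torch.cat([self.sample(cam_feat, cam_xy),
                           self.sample(radar_feat, radar_xy)], dim=1)
        out = self.mlp(fused)
        return out[:, :4], out[:, 4].sigmoid()             # box offsets, probability

# Usage with random tensors standing in for backbone outputs:
head = BoxFusionHead()
cam_feat   = torch.randn(1, 64, 60, 80)     # camera feature map (hypothetical shape)
radar_feat = torch.randn(1, 32, 64, 64)     # range-azimuth feature map (hypothetical shape)
cam_xy     = torch.rand(100, 2) * 2 - 1     # 3D box centers projected to image coords
radar_xy   = torch.rand(100, 2) * 2 - 1     # 3D box centers projected to range-azimuth coords
boxes, scores = head(cam_feat, radar_feat, cam_xy, radar_xy)
keep = scores > 0.5                          # higher-probability boxes kept for the second pass

In the two-stage scheme the abstract describes, the boxes retained by this first pass would be projected onto each feature map once more and re-scored to produce the final position, size, and probability; the sketch shows only the shared sampling-and-fusion step.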
Advisors
Kum, Dongsuk (금동석)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2020
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology, Cho Chun Shik Graduate School of Green Transportation, 2020.2, [iv, 48 p.]

Keywords

autonomous vehicle; low-level radar; vehicle detection; deep learning network; sensor fusion

URI
http://hdl.handle.net/10203/283905
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=910022&flag=dissertation
Appears in Collection
GT-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
