Research about AI-based sensor advancement for autonomous driving in various weather conditions and urban environments

DC Field / Value
dc.contributor.advisor: Kong, Seung-Hyun
dc.contributor.advisor: 공승현
dc.contributor.author: Paek, Dong-Hee
dc.date.accessioned: 2022-04-15T07:58:09Z
dc.date.available: 2022-04-15T07:58:09Z
dc.date.issued: 2021
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=963558&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/295090
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): Robotics Program, 2021.8, [v, 67 p.]
dc.description.abstract: LiDAR (Light Detection and Ranging) and radar (Radio Detection and Ranging) sensors are key sensors for autonomous vehicles. Their uses range from perception (e.g., object recognition, semantic segmentation, and lane detection) to localization (e.g., LiDAR and radar odometry). This thesis presents how lanes are detected in various urban environments from LiDAR point clouds, and how objects such as compact and large cars are detected under various weather conditions from radar range-azimuth tensors. First, lane detection with LiDAR has the advantage that lane positions are obtained directly in bird's-eye view, and its performance is not affected by changes in illumination. The LiDAR point cloud is converted into a bird's-eye-view pseudo-image by pillar-based encoding, and this pseudo-image is fed to a convolutional neural network that directly outputs the lane positions. Second, object detection with radar is less affected by adverse weather, owing to the long wavelength of its microwave signals, unlike cameras and LiDARs whose performance degrades rapidly in such conditions. The radar range-azimuth tensor is converted into a bird's-eye-view image in Cartesian coordinates and fed to a convolutional neural network that outputs the position and azimuth of objects such as compact and large cars as bounding boxes. (Illustrative sketches of both processing steps follow this field table.)
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: LiDAR; Radar; Autonomous Vehicle; Lane Detection; Object Detection; Artificial Intelligence
dc.subject: 라이다; 레이다; 자율주행; 차선인식; 객체인식; 인공지능
dc.title: Research about AI-based sensor advancement for autonomous driving in various weather conditions and urban environments
dc.title.alternative: 다양한 날씨 조건 및 도심 환경에서 자율 주행을 위한 인공지능 기반 센서 고도화 연구
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST): Robotics Program
dc.contributor.alternativeauthor: 백동희
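
Sketch 1. The abstract describes converting the LiDAR point cloud into a bird's-eye-view pseudo-image by pillar-based encoding. The thesis text is not attached to this record, so the following is only a minimal NumPy sketch of the general idea: points are binned into ground-plane cells (pillars) and summarized with hand-crafted per-pillar statistics, rather than the learned pillar features a full implementation would use. The helper name, grid extents, cell size, and channel choices below are hypothetical, not values taken from the thesis.

# Hypothetical sketch (not from the thesis): bin a LiDAR point cloud (N x 4 array of
# x, y, z, intensity) into a bird's-eye-view pseudo-image of simple per-pillar statistics.
import numpy as np

def pillar_bev_encode(points, x_range=(0.0, 48.0), y_range=(-12.0, 12.0), cell=0.1):
    """Return a (4, H, W) BEV pseudo-image: occupancy, point count, max height, mean intensity."""
    w = int((x_range[1] - x_range[0]) / cell)            # columns along x (forward)
    h = int((y_range[1] - y_range[0]) / cell)            # rows along y (lateral)
    xi = ((points[:, 0] - x_range[0]) / cell).astype(np.int32)
    yi = ((points[:, 1] - y_range[0]) / cell).astype(np.int32)
    keep = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)   # drop points outside the grid
    xi, yi, pts = xi[keep], yi[keep], points[keep]

    count = np.zeros((h, w), np.float32)                 # points per pillar
    max_z = np.full((h, w), -np.inf, np.float32)         # highest point per pillar
    sum_i = np.zeros((h, w), np.float32)                 # summed intensity per pillar
    np.add.at(count, (yi, xi), 1.0)
    np.maximum.at(max_z, (yi, xi), pts[:, 2])
    np.add.at(sum_i, (yi, xi), pts[:, 3])

    occ = (count > 0).astype(np.float32)
    max_z = np.where(occ > 0, max_z, 0.0)                # empty pillars -> 0
    mean_i = np.where(occ > 0, sum_i / np.maximum(count, 1.0), 0.0)
    return np.stack([occ, count, max_z, mean_i])         # (4, H, W), input to the CNN lane detector

Stacking per-pillar statistics into channels yields an image-like tensor, so an ordinary 2D convolutional network can regress lane positions directly in bird's-eye view, as the abstract describes.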
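
Sketch 2. The abstract also describes converting the radar range-azimuth tensor into a Cartesian bird's-eye-view image before the CNN detector. Below is a minimal sketch of such a polar-to-Cartesian resampling using nearest-neighbour lookup; the helper name, range resolution, azimuth field of view, and grid parameters are assumed for illustration and are not the radar parameters used in the thesis.

# Hypothetical sketch (not from the thesis): resample a radar range-azimuth tensor
# (shape R x A, polar coordinates) onto a Cartesian bird's-eye-view grid by
# nearest-neighbour lookup. Resolutions and field of view are illustrative assumptions.
import numpy as np

def polar_to_cartesian_bev(ra_tensor, range_res=0.2, az_min=-np.pi / 4, az_max=np.pi / 4,
                           x_range=(0.0, 50.0), y_range=(-25.0, 25.0), cell=0.2):
    """ra_tensor: (R, A) power map indexed by [range_bin, azimuth_bin]; returns an (H, W) BEV image."""
    n_r, n_a = ra_tensor.shape
    xs = np.arange(x_range[0] + cell / 2, x_range[1], cell)   # cell centres, forward axis
    ys = np.arange(y_range[0] + cell / 2, y_range[1], cell)   # cell centres, lateral axis
    gx, gy = np.meshgrid(xs, ys)                              # (H, W) coordinate grids
    rng = np.hypot(gx, gy)                                    # range of each Cartesian cell
    az = np.arctan2(gy, gx)                                   # azimuth of each Cartesian cell

    r_idx = np.round(rng / range_res).astype(np.int32)
    a_idx = np.round((az - az_min) / (az_max - az_min) * (n_a - 1)).astype(np.int32)
    valid = (r_idx < n_r) & (a_idx >= 0) & (a_idx < n_a)      # cells covered by the radar

    bev = np.zeros_like(rng, dtype=np.float32)
    bev[valid] = ra_tensor[r_idx[valid], a_idx[valid]]
    return bev                                                # Cartesian BEV image, input to the CNN object detector

A real implementation might interpolate between polar bins instead of taking the nearest one; the point here is only the polar-to-Cartesian change of representation described in the abstract.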
Appears in Collection
RE-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
