Visual localization utilizing spatially uniform feature points selection and a GRU network

Abstract

Feature points play a crucial role in visual localization. However, extracted feature points tend to concentrate in a specific image area, and a pose estimated from such one-sided feature points becomes inaccurate. In this thesis, a method for distributing feature points more uniformly over the whole image area is proposed. The proposed selection method limits the number of feature points in each equally divided image region using an adaptive threshold that varies with the number of feature points extracted from the image. Performance is enhanced by shifting the focus of pose estimation to the whole image area. The KITTI dataset is used for simulation. On average over the KITTI sequences, performance improves by as much as 1.519% in translation error and 0.930 deg/100m in rotation error.

Lately, pose estimation with learning-based Visual Odometry (VO) methods, where raw image data are fed to a neural network to obtain 6-Degrees-of-Freedom information, has been investigated. Despite recent advances, learning-based VO methods still perform worse than classical VO, which comprises feature-based and direct methods. In this thesis, a new pose estimation method based on a Gated Recurrent Unit (GRU) network is also proposed: historical yaw-angle trajectory data are provided to the network, trained on accurate sensor-acquired pose data, to obtain the yaw angle at the current timestep. The proposed method can easily be combined with other VO methods to enhance overall performance via an ensemble of predicted results, and it is especially advantageous in cornering sections, which are commonly prone to error. Performance is improved by reconstructing the rotation matrix from a yaw angle fused from the yaw angles estimated by the proposed GRU network and by other VO methods. The KITTI dataset is used both for training the network and for simulation. On average over the KITTI sequences, performance improves by as much as 1.426% in translation error and 0.805 deg/100m in rotation error.
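To make the grid-based selection concrete, below is a minimal Python sketch of spatially uniform feature point selection, assuming OpenCV's ORB detector; the grid dimensions, target count, and per-cell cap formula are illustrative assumptions, not the exact adaptive threshold used in the thesis.

import cv2

def select_uniform_keypoints(image, grid_rows=4, grid_cols=8, target_total=1000):
    # Detect more keypoints than needed so every grid cell has candidates.
    orb = cv2.ORB_create(nfeatures=target_total * 4)
    keypoints = orb.detect(image, None)

    h, w = image.shape[:2]
    cell_h, cell_w = h / grid_rows, w / grid_cols

    # Bucket keypoints into equally divided image regions.
    cells = {}
    for kp in keypoints:
        r = min(int(kp.pt[1] / cell_h), grid_rows - 1)  # kp.pt is (x, y)
        c = min(int(kp.pt[0] / cell_w), grid_cols - 1)
        cells.setdefault((r, c), []).append(kp)

    # Adaptive per-cell cap: scales with how many keypoints were actually
    # extracted, so sparsely textured images are not over-pruned.
    # (Illustrative formula, not the thesis's exact threshold.)
    cap = max(1, min(target_total, len(keypoints)) // (grid_rows * grid_cols))

    # Keep only the strongest keypoints in each cell.
    selected = []
    for cell_kps in cells.values():
        cell_kps.sort(key=lambda kp: kp.response, reverse=True)
        selected.extend(cell_kps[:cap])
    return selected

Capping each cell rather than thresholding globally is what shifts the support of the pose estimate to the whole image area instead of one crowded region.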
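Similarly, a minimal PyTorch sketch of the yaw-angle GRU and the fusion step is given below; the window length, hidden size, fusion weight w, and the rotation about the camera's vertical (y) axis in the KITTI camera convention are all illustrative assumptions, not the trained network reported in the thesis.

import numpy as np
import torch
import torch.nn as nn

class YawGRU(nn.Module):
    # Predicts the yaw angle at the current timestep from a window of
    # past yaw angles (historical trajectory data).
    def __init__(self, hidden_size=64):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, yaw_history):              # (batch, seq_len, 1)
        out, _ = self.gru(yaw_history)
        return self.head(out[:, -1])             # (batch, 1)

def fuse_yaw(yaw_gru, yaw_vo, w=0.5):
    # Ensemble of the GRU prediction and another VO method's yaw estimate;
    # the weight w = 0.5 is an illustrative assumption.
    return w * yaw_gru + (1.0 - w) * yaw_vo

def rotation_from_yaw(yaw):
    # Rebuild the rotation matrix from the fused yaw, assuming rotation
    # about the camera y-axis (KITTI camera convention: y points down).
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

# Example: fuse a GRU prediction with a hypothetical VO yaw estimate.
model = YawGRU()
history = torch.zeros(1, 10, 1)                  # 10-step yaw history, batch of 1
yaw_pred = model(history).item()
R = rotation_from_yaw(fuse_yaw(yaw_pred, yaw_vo=0.02))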
Advisors
Har, Dongsoo (하동수)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: Cho Chun Shik Graduate School of Green Transportation, 2021.2, [v, 54 p.]

Keywords

visual localization; Visual Odometry (VO); pose estimation; feature points selection; GRU (Gated Recurrent Unit); autonomous robot; autonomous vehicle

URI
http://hdl.handle.net/10203/296215
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=948358&flag=dissertation
Appears in Collection
GT-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
