Deep learning based relative pose estimation of non-cooperative spacecraft using vision sensors

The objective of the proposed work is to perform monocular vision based relative 6DOF pose estimation of a non-cooperative spacecraft autonomously during proximity operations in orbit. This area of research has gained importance in recent years due to growing interest in Active Debris Removal (ADR) and On-Orbit Servicing (OOS). The aim is to provide an integrated, robust solution for estimating the pose of the non-cooperative target spacecraft with respect to the chaser satellite in rendezvous operations using only a monocular camera. Relative pose estimation performed with Convolutional Neural Networks (CNNs) has outperformed conventional image processing methods, achieving better accuracy and higher availability of pose solutions in the harsh space environment. In this work, the CNNs are replaced by a High-Resolution Transformer network to further improve pose estimation accuracy while using fewer computational resources and lower-resolution images. Moreover, Transformers have an inherent advantage in overcoming the CNN shortcomings of translation equivariance and limited 2D neighborhood awareness. Transformers also capture long-range dependencies better and do not generalize to just the local features of the target objects.

First, the 3D model of the target satellite is reconstructed from corresponding 2D keypoints across different images of the dataset using the Inverse Direct Linear Transform (IDLT) method. Then, the pose estimation pipeline is developed with a deep learning-based image processing subsystem and geometric optimization in the pose solver. The image processing subsystem performs target localization and draws a bounding box around the satellite body using CNN-based architectures. A keypoint detection network then performs regression to predict the 2D keypoints using a Transformer-based network architecture. Afterwards, the predicted keypoints, selected by their confidence scores, are matched to the corresponding points of the known reconstructed 3D model, and the pose is computed by minimizing the reprojection error between the 2D-3D correspondences with the Perspective-n-Point (PnP) method. The pose is further refined between the 3D model points and the predicted keypoints using the Gauss-Newton method.

The proposed architecture is trained and tested on the Spacecraft Pose Estimation Dataset (SPEED) and shows superior accuracy in both translation and rotation. The architecture is robust against drastically changing cluttered backgrounds and lighting conditions in the images. Moreover, the method uses fewer floating-point operations and trainable parameters at low image resolution, bringing it one step closer to implementation on space hardware. The resource-efficient architecture is feasible for small satellites with low mass and power budgets.
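The pose-solver stage described in the abstract (an initial PnP solution followed by Gauss-Newton refinement of the reprojection error) can be illustrated with a short sketch. This is a minimal illustration under assumptions, not code from the thesis: the 3D model points, camera intrinsics, and keypoints below are hypothetical placeholders, the initial solution uses OpenCV's solvePnPRansac, and the Gauss-Newton step uses a numerically estimated Jacobian.

import numpy as np
import cv2

def project(points_3d, rvec, tvec, K):
    # Project 3D model points into the image with the current pose estimate.
    pts, _ = cv2.projectPoints(points_3d, rvec, tvec, K, None)
    return pts.reshape(-1, 2)

def refine_pose_gauss_newton(points_3d, points_2d, rvec, tvec, K, iters=10, eps=1e-6):
    # Gauss-Newton refinement of the 6 pose parameters (3 rotation, 3 translation)
    # by minimizing the 2D reprojection error between model points and keypoints.
    x = np.hstack([rvec.ravel(), tvec.ravel()])
    for _ in range(iters):
        r = (points_2d - project(points_3d, x[:3], x[3:], K)).ravel()
        J = np.zeros((r.size, 6))
        for j in range(6):  # numerical Jacobian, one pose parameter at a time
            dx = np.zeros(6)
            dx[j] = eps
            r_j = (points_2d - project(points_3d, (x + dx)[:3], (x + dx)[3:], K)).ravel()
            J[:, j] = (r - r_j) / eps
        # J holds the negative residual Jacobian, so the least-squares solution
        # of J * step = r is exactly the Gauss-Newton update.
        step = np.linalg.lstsq(J, r, rcond=None)[0]
        x += step
        if np.linalg.norm(step) < 1e-10:
            break
    return x[:3], x[3:]

# Hypothetical data: 11 model keypoints, a known camera matrix, and 2D
# detections generated by projecting the model with a reference pose.
K = np.array([[800.0, 0.0, 256.0], [0.0, 800.0, 256.0], [0.0, 0.0, 1.0]])
model_pts = np.random.default_rng(0).uniform(-0.5, 0.5, (11, 3))
img_pts = project(model_pts, np.array([0.1, -0.2, 0.05]), np.array([0.0, 0.0, 5.0]), K)

# Initial pose from PnP (RANSAC rejects keypoints with gross errors),
# then Gauss-Newton refinement of the reprojection error.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(model_pts, img_pts, K, None)
rvec, tvec = refine_pose_gauss_newton(model_pts, img_pts, rvec, tvec, K)

In the thesis pipeline, the 2D keypoints would come from the Transformer-based detection network and the 3D points from the IDLT-reconstructed model; solvePnPRansac and the numerical-Jacobian refinement above are generic stand-ins for the PnP and Gauss-Newton steps named in the abstract.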
Advisors
방효충 (Hyochoong Bang)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2024
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST): Department of Aerospace Engineering, 2024.2, [vi, 67 p.]

Keywords

Pose estimation; Non-cooperative spacecraft; Deep learning; Key points; Speed

URI
http://hdl.handle.net/10203/322225
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1100139&flag=dissertation
Appears in Collection
AE-Theses_Ph.D. (Doctoral Theses)
