DSpace Community: http://hdl.handle.net/10203/25448 (updated 2024-03-19T09:02:59Z)

Title: Ambiguity-aware multi-object pose optimization toward visually-assisted robot manipulation
Handle: http://hdl.handle.net/10203/307956 (record updated 2023-06-21T19:33:53Z; issued 2023-01-01)
Authors: Jeon, Myung-Hwan
Abstract: For a robot manipulator to effectively manipulate a target object, it must first recognize the target objects placed in its workspace. In this thesis, we construct a robot system consisting of a robot manipulator and an RGB-D sensor attached to its end effector. Before manipulating a target object, the system scans the objects placed in the workspace, obtaining robot joint angles and RGB-D data. Using these sensor data, we propose a method that lays the groundwork for multi-object robot manipulation by detecting the objects in the workspace and estimating their 6D poses. The technical summary of this thesis is as follows.
First, we propose a method for 6D object pose estimation. To effectively manipulate an object in 3D space, estimating the orientation of the target object is more crucial than estimating its translation. Therefore, we introduce a new concept for describing the orientation of an object: the rotation primitive. The rotation primitive concentrates and emphasizes orientation information. Using this rotation primitive, we propose a novel 6D object pose estimation method called PrimA6D. In experiments, we verify that the proposed method outperforms existing methods on the benchmark dataset.
Second, we propose an ambiguity-aware 6D object pose estimation method, PrimA6D++, as a generic uncertainty prediction method. The object pose estimation field usually considers two types of objects: asymmetric and symmetric. For most asymmetric objects, all three rotation axes can be uniquely defined, so there is no pose ambiguity. For symmetric objects, the rotation axes cannot be defined uniquely; only a dominant axis can be defined. To handle this, most existing methods require prior information about the object shape, which is arduous to obtain in practice. Furthermore, an asymmetric object can appear symmetric under certain camera viewpoints. Therefore, a generic method for ambiguity-aware 6D object pose estimation is more viable in robotics when such prior information is unavailable. In this thesis, we propose a new network that predicts three rotation-axis primitive images, each corresponding to one orientation axis of the object. In addition, the uncertainty of each rotation-axis primitive image is estimated via unsupervised learning. Based on these uncertainties, we discern object ambiguity caused by shape symmetry and occlusion by rejecting unreliable rotation-axis primitive images. For the evaluation, we present examples of recognizing object ambiguity, and we verify that the proposed method outperforms existing methods on the benchmark dataset.
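The rejection step described above can be sketched as follows. The interface, the uncertainty scale, and the threshold are illustrative assumptions, not details from the thesis; only the idea of discarding rotation axes whose estimated uncertainty is too high comes from the abstract:

```python
def select_reliable_axes(axis_uncertainties, threshold=0.3):
    """Keep only rotation-axis predictions whose estimated uncertainty
    is below a threshold; the rest are treated as ambiguous, e.g. due
    to shape symmetry or occlusion. (Hypothetical interface.)"""
    reliable = [i for i, u in enumerate(axis_uncertainties) if u < threshold]
    ambiguous = len(reliable) < 3  # not all three axes are trustworthy
    return reliable, ambiguous

# An asymmetric, unoccluded object: all three axes are confident.
print(select_reliable_axes([0.05, 0.08, 0.10]))  # ([0, 1, 2], False)

# A symmetric object: only the dominant axis is reliable.
print(select_reliable_axes([0.07, 0.55, 0.61]))  # ([0], True)
```

Downstream pose refinement would then use only the surviving axes, which is what makes the method robust when symmetry or occlusion corrupts part of the orientation estimate.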
Third, we formulate the problem as Object-SLAM, introducing a camera pose factor and an object pose factor to refine multi-object poses jointly with the camera poses. In the evaluation, we verify that the proposed method outperforms existing methods on the benchmark dataset, and we demonstrate real-time scene recognition capability for visually-assisted robot manipulation.
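The joint refinement of camera and object variables can be illustrated with a deliberately tiny factor-graph toy. The thesis works with full 6D poses; the 1-D linear version below (made-up measurements, translation only) shows only the structure: camera-to-camera odometry factors and camera-to-object observation factors stacked into one least-squares problem:

```python
import numpy as np

# Toy 1-D "Object-SLAM": jointly refine camera positions c0..c2 and an
# object position o from noisy odometry and object-observation factors.
# State vector x = [c0, c1, c2, o].
rows, rhs = [], []

def add_factor(coeffs, measurement):
    rows.append(coeffs)
    rhs.append(measurement)

add_factor([1, 0, 0, 0], 0.0)    # prior: c0 = 0 (fixes gauge freedom)
add_factor([-1, 1, 0, 0], 1.1)   # odometry factor: c1 - c0 ≈ 1.1
add_factor([0, -1, 1, 0], 0.9)   # odometry factor: c2 - c1 ≈ 0.9
add_factor([-1, 0, 0, 1], 5.0)   # object seen from c0: o - c0 ≈ 5.0
add_factor([0, -1, 0, 1], 3.8)   # object seen from c1: o - c1 ≈ 3.8
add_factor([0, 0, -1, 1], 3.1)   # object seen from c2: o - c2 ≈ 3.1

A, b = np.array(rows, float), np.array(rhs)
x, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimize the factor residuals
c0, c1, c2, o = x
print(f"cameras: {c0:.2f} {c1:.2f} {c2:.2f}, object: {o:.2f}")
```

Because every observation of the object constrains both a camera variable and the object variable, the solver spreads the measurement error across all poses instead of trusting any single view, which is the benefit of the Object-SLAM formulation.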
Fourth, using a robot system composed of a robot manipulator and an RGB-D sensor, we execute robotic pick-and-place in the real world. We verify that the method presented in this study can readily be applied to robotic pick-and-place.
Description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST), Interdisciplinary Program in Robotics, 2023.2, [viii, 70 p.]

Title: Spatiotemporal defect detection using convolutional recurrent network and detection transformer
Handle: http://hdl.handle.net/10203/307951 (record updated 2023-06-21T19:33:51Z; issued 2023-01-01)
Authors: Kim, Young-Min
Abstract: Single-image defect detection is widely applicable to production lines. Accurate detection of defects is essential because missed defects can lead to loss of life and property damage. However, existing algorithms detect defects from a single image, making them vulnerable to image noise arising from harsh environments, such as vibration while capturing images in the field. In addition, training a deep learning network generally requires pixel-level annotation, which takes considerable cost and time to obtain. We therefore propose a welding defect detection method that processes spatio-temporal data using a Convolutional Recurrent Reconstructive Network (CRRN). In addition, to make defect annotation more efficient, we propose a weakly supervised defect detection method. First, for environments where pixel-level labels can be acquired, we design a bi-directional convolutional recurrent reconstructive network (bi-CRRN) that detects defects from spatio-temporal data by supervised learning. We also propose a spatio-temporal deformable detection transformer (STD-DETR) that performs welding defect detection from spatio-temporal data while requiring only frame-level labels for training. To verify the proposed methods, we generate a dataset by capturing welding beads on an actual ship with a vision camera, and we demonstrate superior defect detection performance on this dataset.
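The advantage of spatio-temporal over single-image detection can be shown with a minimal mock-up. The learned CRRN reconstruction is replaced here by a trivial stand-in (a zero image, matching the zero-mean mock "normal" data), and all sizes, thresholds, and noise levels are invented for illustration; only the principle is from the abstract, namely that averaging reconstruction error over time suppresses transient vibration noise while persistent defects keep a high score:

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 16, 16
clip = rng.normal(0.0, 0.05, (T, H, W))   # weld-bead clip, sensor noise only
clip[:, 5:8, 5:8] += 1.0                  # persistent anomaly: a real defect
clip[3, 10, 10] += 1.0                    # one-frame spike: vibration noise

# Stand-in for the network's output: a reconstruction of the defect-free
# bead (zero here, since the mock "normal" appearance is zero-mean noise).
recon = np.zeros((H, W))

# Temporal averaging of the residual: a single-frame spike contributes
# only 1/T of its magnitude, while a persistent defect keeps full weight.
score = np.abs(clip - recon).mean(axis=0)
defect_mask = score > 0.5

print(defect_mask[6, 6], defect_mask[10, 10], defect_mask[0, 0])
```

A single-image detector looking only at frame 3 would flag the vibration spike as a defect; the temporal average rejects it.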
Description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST), Interdisciplinary Program in Robotics, 2023.2, [vi, 55 p.]

Title: Purpose-built sensor fusion for autonomous vehicles
Handle: http://hdl.handle.net/10203/307948 (record updated 2023-06-21T19:33:50Z; issued 2023-01-01)
Authors: Jeong, Yongseop
Abstract: This dissertation proposes purpose-built sensor fusion: constructing a sensor system for autonomous vehicles and selectively fusing its sensors according to purpose. The components and implementation details of an autonomous vehicle are introduced to show the process of transforming a mass-produced car into a self-driving car. RGB cameras, near-infrared (NIR) cameras, an inertial measurement unit, a GNSS receiver, LiDARs, and a vehicle controller area network (CAN) grabber are selectively fused to collect two driving datasets: a multi-modal depth dataset for changing environments and a large-scale driving dataset. An adaptive cost-volume fusion network for depth estimation is verified on the proposed multi-modal depth dataset, and a lightweight depth completion network with local similarity-preserving knowledge distillation is proposed and verified. Applications of the proposed system are then introduced. Representation learning from driving scenes is performed on images together with vehicle motion information, and the proposed system is used to verify sensor-fusion-based methods for estimating vehicle dynamics on actual roads. Finally, dead-reckoning in GNSS-denied environments is performed by fusing lane information from the front camera with the response of the inertial measurement unit, and its performance is verified by comparison with previous works.
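One way to realize the camera-plus-IMU fusion mentioned in the final sentence is a complementary filter. The abstract does not specify the estimator used, so the sketch below, with invented rates, bias, and gain, shows only the general idea: integrate the smooth-but-drifting gyro yaw rate and pull the estimate toward the noisy-but-drift-free lane heading whenever the camera provides one:

```python
def fuse_heading(gyro_rates, lane_headings, dt=0.1, alpha=0.8):
    """Complementary filter: integrate the IMU yaw rate (smooth but
    drifting) and correct with the lane-based heading from the front
    camera (drift-free) whenever a measurement is available."""
    heading = 0.0
    out = []
    for rate, lane in zip(gyro_rates, lane_headings):
        heading += rate * dt                      # IMU integration step
        if lane is not None:                      # camera correction step
            heading = alpha * heading + (1 - alpha) * lane
        out.append(heading)
    return out

# Gyro with a constant 0.05 rad/s bias while driving straight (true
# heading 0); lane measurements arrive only every 5th step.
rates = [0.05] * 200
lanes = [0.0 if i % 5 == 0 else None for i in range(200)]
fused = fuse_heading(rates, lanes)
print(f"drift without correction: {0.05 * 0.1 * 200:.2f} rad, "
      f"fused final heading: {fused[-1]:.3f} rad")
```

Pure integration of this biased gyro would drift by 1.0 rad over the run; the intermittent lane corrections keep the fused heading bounded near zero, which is the essence of dead-reckoning aided by a drift-free reference.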
Description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST), Interdisciplinary Program in Robotics, 2023.2, [v, 85 p.]

Title: Lifelong reinforcement learning framework for energy-efficient drone delivery
Handle: http://hdl.handle.net/10203/307952 (record updated 2023-06-21T19:33:51Z; issued 2023-01-01)
Authors: Hong, Dooyoung
Abstract: Drones are attracting considerable attention as a new means of logistics delivery, to the extent that many companies are already incorporating them into their systems. However, the limited flight time that is a chronic problem of drones restricts their application in the field, and this problem is difficult to solve through hardware alone. This work presents a two-step drone delivery framework that makes drone delivery operations more energy-efficient. The proposed method combines an offline stage, which allocates missions to drones via centralized calculation, with reinforcement learning that avoids collision risks and performs path planning in real time. We model the drone based on actual flight data and implement a simulation that accounts for environmental variation. This work also proposes a reinforcement learning algorithm with a continual learning technique that responds to the changing environments of drone delivery scenarios. The proposed method, including task assignment and path planning with reinforcement learning, achieves near-optimal energy consumption compared with the optimal solution from centralized calculation.
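The first step of the framework, centralized mission allocation, can be sketched with a greedy energy-budget heuristic. The energy model (a fixed Wh-per-meter round trip from a depot) and the greedy rule are illustrative assumptions, not the thesis's allocation algorithm, and the second step (RL-based path planning) is not shown:

```python
import math

def assign_missions(depot, drones_battery, deliveries, wh_per_m=0.15):
    """Greedy offline assignment: each delivery goes to a drone that can
    afford the round-trip energy, preferring the drone with the most
    energy left. A stand-in for the centralized allocation step."""
    plan = {i: [] for i in range(len(drones_battery))}
    battery = list(drones_battery)
    for dx, dy in deliveries:
        cost = 2 * math.hypot(dx - depot[0], dy - depot[1]) * wh_per_m
        candidates = [i for i in range(len(battery)) if battery[i] >= cost]
        if not candidates:
            continue  # delivery deferred: no drone has enough energy
        best = max(candidates, key=lambda i: battery[i])
        battery[best] -= cost
        plan[best].append((dx, dy))
    return plan, battery

plan, remaining = assign_missions((0, 0), [100.0, 100.0],
                                  [(100, 0), (0, 200), (50, 50)])
print(plan)
```

In the full framework, the energy cost of each mission would come from the flight-data-based drone model rather than a constant per-meter rate, and the RL policy would then fly each assigned route while avoiding collisions.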
Description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST), Interdisciplinary Program in Robotics, 2023.2, [vii, 76 p.]