Unsupervised sim-real adaptation of the proprioceptive soft robot

The skin of natural organisms lets them sense their own state and external stimuli without sight, even in unknown environments, helping to trigger behavioral responses. Inspired by this phenomenon, proprioceptive soft robots constructed from highly compliant materials have recently attracted attention as safer, more adaptable, bio-inspired alternatives to traditional rigid robots. However, despite their capacity for shape and behavioral change and their robustness to physical contact, their inherent hysteresis, nonlinearity, and effectively infinite degrees of freedom complicate high-level modeling and control. With the rise of learning-based control and perception in soft robotics to address these limitations, sim-to-real transfer for soft robots has begun to be explored, enabling high-level control with efficient data collection. However, manufacturing variance, material fatigue, and strong nonlinearities in soft sensors create a gap between modeled and physical sensors, making it difficult to reproduce the sensors faithfully in simulation. Taking inspiration from domain adaptation, we propose a sim-real synchronization method for soft robotic sensations that leverages unsupervised domain-invariant representation learning. A dual cross-modal autoencoder structure is employed to project the features common to both domains into a shared latent space. In place of direct signal-to-signal mapping, the proposed approach enables data-efficient training by circumventing labeling procedures and yields a generalized joint representation transferable to multiple tasks. The framework is deployed on a popular soft robot design: a pneumatic multi-gait soft robot with embedded eGaIn soft sensors. Reflecting real-world deployment, the method is evaluated through multi-task learning of kinematics estimation and collision detection under external contact. This allows the robot to interact with its surroundings, whether in simulation or the real world.
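The dual cross-modal autoencoder described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the sensor and latent dimensions, the single-layer tanh encoders/decoders, and the three loss terms (self-reconstruction, cross-reconstruction, latent alignment) are assumptions chosen to show the structure, assuming time-synchronized but unlabeled sim/real sensor batches.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # One linear layer (weights, bias) with Xavier-style initialization.
    w = rng.normal(0.0, np.sqrt(2.0 / (in_dim + out_dim)), (in_dim, out_dim))
    return w, np.zeros(out_dim)

def forward(x, layer):
    w, b = layer
    return np.tanh(x @ w + b)

# Hypothetical sizes: raw soft-sensor channels per domain, shared latent size.
SENSOR_DIM, LATENT_DIM = 8, 4

enc_sim, enc_real = linear(SENSOR_DIM, LATENT_DIM), linear(SENSOR_DIM, LATENT_DIM)
dec_sim, dec_real = linear(LATENT_DIM, SENSOR_DIM), linear(LATENT_DIM, SENSOR_DIM)

def losses(x_sim, x_real):
    z_sim, z_real = forward(x_sim, enc_sim), forward(x_real, enc_real)
    # Self-reconstruction: each domain decodes its own latent code.
    recon = (np.mean((forward(z_sim, dec_sim) - x_sim) ** 2)
             + np.mean((forward(z_real, dec_real) - x_real) ** 2))
    # Cross-reconstruction: the sim latent is decoded as a real signal and
    # vice versa, pushing domain-invariant features into the shared latent.
    cross = (np.mean((forward(z_sim, dec_real) - x_real) ** 2)
             + np.mean((forward(z_real, dec_sim) - x_sim) ** 2))
    # Latent alignment: pull corresponding sim/real codes together.
    align = np.mean((z_sim - z_real) ** 2)
    return recon, cross, align

x_sim = rng.normal(size=(16, SENSOR_DIM))   # simulated sensor batch
x_real = rng.normal(size=(16, SENSOR_DIM))  # real sensor batch
recon, cross, align = losses(x_sim, x_real)
print(recon, cross, align)
```

Minimizing a weighted sum of these three terms (by any gradient method) is what drives the two encoders toward a shared, domain-invariant latent space; the weighting between the terms is a design choice not specified here.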
With the proposed network, soft robots can be controlled multimodally in both simulation and reality, making sim-to-real transfer possible. The results show that the network successfully synchronizes the sensor signals and detects obstructions, providing soft robots with data-driven multimodal control that was previously unachievable. We also demonstrate the method by implementing a reinforcement learning policy in simulation.
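Because the joint representation is shared across domains, downstream tasks can be trained as lightweight heads on the latent code. The sketch below is a hypothetical illustration of the two evaluation tasks named above (kinematics estimation and collision detection); the head shapes, the three pose outputs, and the linear/sigmoid forms are assumptions, not the thesis architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM = 4  # must match the shared latent size of the autoencoder

# Hypothetical task heads on top of the shared latent code:
# a kinematics regressor and a binary collision detector.
W_kin = rng.normal(0.0, 0.5, (LATENT_DIM, 3))  # 3 assumed pose outputs
W_col = rng.normal(0.0, 0.5, (LATENT_DIM, 1))  # contact logit

def kinematics(z):
    # Linear regression head: latent code -> pose estimate.
    return z @ W_kin

def collision_prob(z):
    # Sigmoid classification head: latent code -> contact probability.
    return 1.0 / (1.0 + np.exp(-(z @ W_col)))

z = rng.normal(size=(5, LATENT_DIM))  # latent codes from either domain
pose = kinematics(z)
p_contact = collision_prob(z)
print(pose.shape, p_contact.shape)
```

Since both heads read only the domain-invariant latent code, a head trained on simulated data can, in principle, be applied to real sensor signals unchanged, which is the sim-to-real benefit the abstract claims.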
Advisors
Kim, Jung (김정)
Description
Korea Advanced Institute of Science and Technology (KAIST): Department of Mechanical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: Department of Mechanical Engineering, 2023.2, [v, 45 p.]

Keywords

soft robot; proprioception; soft sensor; domain adaptation; multi-task learning

URI
http://hdl.handle.net/10203/307720
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032257&flag=dissertation
Appears in Collection
ME-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
