Inverse constraint learning and generalization by transferable reward decomposition

We present the problem of inverse constraint learning (ICL), which recovers constraints from demonstrations so that constrained skills can be autonomously reproduced in new scenarios. However, ICL is ill-posed, which leads to inaccurate inference of constraints from demonstrations. To address this, we introduce a transferable constraint learning (TCL) algorithm that jointly infers a task-oriented reward and a task-agnostic constraint, enabling the generalization of learned skills. TCL additively decomposes the overall reward recovered by inverse reinforcement learning (IRL) into a task reward and its residual, treated as a soft constraint, and minimizes the policy divergence between task-oriented policies and the demonstration to obtain a transferable constraint. Evaluating our method against five baselines in three simulated environments, we show that TCL outperforms state-of-the-art IRL and ICL algorithms, achieving up to a 72% higher task-success rate with accurate decomposition compared to the next-best approach in novel scenarios. We further demonstrate the robustness of TCL on two real-world robotic tasks.
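The core idea in the abstract is an additive decomposition of the IRL-recovered reward into a task reward and a residual that acts as a soft constraint. The following is a minimal sketch of that decomposition on toy data; the least-squares projection onto task features, all variable names, and the toy "unsafe state" setup are illustrative assumptions, not the thesis' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 100 states, each described by 3 task-relevant features.
n_states, n_features = 100, 3
phi = rng.normal(size=(n_states, n_features))  # task feature matrix
w_true = np.array([1.0, -0.5, 2.0])            # hypothetical task weights

# Overall reward as if recovered by IRL: a task-feature part plus a
# constraint penalty concentrated on a subset of "unsafe" states.
unsafe = rng.random(n_states) < 0.2
r_total = phi @ w_true - 5.0 * unsafe

# Decomposition sketch: fit the task reward by projecting r_total onto
# the task features (ordinary least squares), and treat the residual
# as the task-agnostic soft-constraint signal.
w_hat, *_ = np.linalg.lstsq(phi, r_total, rcond=None)
r_task = phi @ w_hat
r_constraint = r_total - r_task  # residual = soft constraint term

# Unsafe states should carry a markedly more negative residual, so the
# constraint can be transferred to a new task independently of r_task.
print(r_constraint[unsafe].mean() < r_constraint[~unsafe].mean())
```

In this sketch the residual is recovered per state; the thesis instead obtains a transferable constraint by minimizing policy divergence against the demonstration, which this toy projection does not model.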
Advisors
박대형
Description
KAIST : Kim Jaechul Graduate School of AI
Publisher
KAIST (Korea Advanced Institute of Science and Technology)
Issue Date
2024
Identifier
325007
Language
eng
Description

Thesis (Master's) - KAIST : Kim Jaechul Graduate School of AI, 2024.2, [v, 34 p.]

Keywords

Learning from demonstration; Inverse constraint learning; Constrained motion planning

URI
http://hdl.handle.net/10203/321378
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1096083&flag=dissertation
Appears in Collection
AI-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
