Learning task invariance with analogy making (유추에 의한 과제 불변성 학습 방법)

Learning compact state representations from high-dimensional, noisy observations is a cornerstone of reinforcement learning (RL). However, these representations are often strongly biased toward the current task structure and its associated goals, making it hard to generalize to other tasks. Inspired by the human analogy-making process, we propose a novel representation learning framework for learning task-invariant action features in RL. It consists of task- and action-relevant encoding, hypothetical observation generation, and analogy making between the original and hypothetical observations. Our model introduces an auxiliary objective that maximizes the mutual information between the generated image and the code labels used to generate it. Experiments on several challenging RL environments show that our model helps the RL agent generalize by effectively separating action features from other features. We also interpret the role of our model from an information-theoretic perspective.
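The auxiliary objective described in the abstract — maximizing mutual information between a generated observation and the codes used to generate it — is commonly approximated InfoGAN-style, by training a predictor q(c | x) on the generated observation and minimizing its negative log-likelihood on the true code, which lower-bounds the mutual information. The thesis record does not include the implementation, so the sketch below is a minimal, hypothetical illustration of that bound; the function name and the use of a categorical code are assumptions, not the author's actual code.

```python
import math

def mi_auxiliary_loss(pred_code_probs, true_codes):
    """Variational lower bound on I(c; G(z, c)), InfoGAN-style (hypothetical sketch).

    pred_code_probs: per-sample probability vectors from an auxiliary
                     predictor q(c | generated observation)
    true_codes:      per-sample integer labels of the codes actually
                     used to generate each observation

    Returns the mean negative log-likelihood; minimizing it tightens
    the lower bound on the mutual information between code and image.
    """
    nll = 0.0
    for probs, c in zip(pred_code_probs, true_codes):
        nll -= math.log(probs[c] + 1e-12)  # epsilon guards log(0)
    return nll / len(true_codes)

# A predictor that recovers the code perfectly drives the loss toward 0;
# a uniform (uninformative) predictor over K codes yields log(K).
uniform_loss = mi_auxiliary_loss([[0.25, 0.25, 0.25, 0.25]], [0])
confident_loss = mi_auxiliary_loss([[0.99, 0.005, 0.005]], [0])
```

In practice this scalar would be added, with a weighting coefficient, to the agent's main RL objective; here it is shown standalone only to make the information-theoretic reading concrete.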
Advisors
Lee, Sang Wan (이상완)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2022
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology: Department of Bio and Brain Engineering, 2022.2, [iv, 33 p.]

URI
http://hdl.handle.net/10203/308727
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=997771&flag=dissertation
Appears in Collection
BiS-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
