DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Lee, Sang Wan | - |
dc.contributor.advisor | 이상완 | - |
dc.contributor.author | Joo, Shinyoung | - |
dc.date.accessioned | 2023-06-23T19:30:47Z | - |
dc.date.available | 2023-06-23T19:30:47Z | - |
dc.date.issued | 2022 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=997771&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/308727 | - |
dc.description | Master's thesis - 한국과학기술원 (KAIST) : 바이오및뇌공학과 (Department of Bio and Brain Engineering), 2022.2, [iv, 33 p.] | - |
dc.description.abstract | Learning compact state representations from high-dimensional and noisy observations is a cornerstone of reinforcement learning (RL). However, these representations are often strongly biased toward the current task structure and its associated goals, making it hard to generalize to other tasks. Inspired by the human analogy-making process, we propose a novel representation learning framework for learning task-invariant action features in RL. It consists of task- and action-relevant encoding, hypothetical observation generation, and analogy making between the original and hypothetical observations. Our model introduces an auxiliary objective that maximizes the mutual information between the generated image and the labels of the codes used to generate it. Experiments on various challenging RL environments showed that our model helps the RL agent generalize by effectively separating action features from the others. We also interpreted the role of our model from an information-theoretic perspective. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.title | Learning task invariance with analogy making | - |
dc.title.alternative | 유추에 의한 과제 불변성 학습 방법 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 : 바이오및뇌공학과 | - |
dc.contributor.alternativeauthor | 주신영 | - |
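The auxiliary objective named in the abstract, maximizing mutual information between a generated image and the codes used to produce it, is commonly trained through a variational lower bound: an auxiliary head tries to recover the code from the generated observation, and its cross-entropy becomes the loss. The following is a minimal NumPy sketch of that idea under assumed toy shapes; all dimensions, weight names, and the linear "generator"/"Q-network" are hypothetical illustrations, not the thesis's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (illustrative only, not from the thesis).
N_CODES, Z_DIM, X_DIM = 4, 8, 16

# Toy "generator": maps a noise vector z and a one-hot code c to an observation x.
W_g = rng.normal(size=(Z_DIM + N_CODES, X_DIM))

def generate(z, c_onehot):
    return np.tanh(np.concatenate([z, c_onehot]) @ W_g)

# Auxiliary network Q: tries to predict the code from the generated observation.
W_q = rng.normal(size=(X_DIM, N_CODES))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def aux_loss(z, code):
    """Cross-entropy of Q recovering `code` from the generated image.

    Minimizing this maximizes a variational lower bound on the mutual
    information I(c; G(z, c)) between the code and the generated image.
    """
    c = np.eye(N_CODES)[code]
    x = generate(z, c)
    p = softmax(x @ W_q)
    return -np.log(p[code] + 1e-12)

loss = aux_loss(rng.normal(size=Z_DIM), code=2)
print(float(loss))
```

In a full training loop this scalar would be added to the main RL or generative loss and backpropagated through both the generator and Q, encouraging the generated hypothetical observations to stay faithful to the codes that produced them.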
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.