DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Woojun | ko |
dc.contributor.author | Kim, Jeonghye | ko |
dc.contributor.author | Sung, Youngchul | ko |
dc.date.accessioned | 2023-12-06T06:02:00Z | - |
dc.date.available | 2023-12-06T06:02:00Z | - |
dc.date.created | 2023-11-27 | - |
dc.date.issued | 2023-07 | - |
dc.identifier.citation | 40th International Conference on Machine Learning, ICML 2023, pp.16619 - 16638 | - |
dc.identifier.uri | http://hdl.handle.net/10203/315837 | - |
dc.description.abstract | In this paper, a unified framework for exploration in reinforcement learning (RL) is proposed based on an option-critic model. The proposed framework learns to integrate a set of diverse exploration strategies so that the agent can adaptively select the most effective exploration strategy over time to realize a relevant exploration-exploitation trade-off for each given task. The effectiveness of the proposed exploration framework is demonstrated by various experiments in the MiniGrid and Atari environments. | - |
dc.language | English | - |
dc.publisher | ML Research Press | - |
dc.title | LESSON: Learning to Integrate Exploration Strategies for Reinforcement Learning via an Option Framework | - |
dc.type | Conference | - |
dc.identifier.scopusid | 2-s2.0-85174389080 | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 16619 | - |
dc.citation.endingpage | 16638 | - |
dc.citation.publicationname | 40th International Conference on Machine Learning, ICML 2023 | - |
dc.identifier.conferencecountry | US | - |
dc.identifier.conferencelocation | Honolulu, HI | - |
dc.contributor.localauthor | Sung, Youngchul | - |
dc.contributor.nonIdAuthor | Kim, Woojun | - |
dc.contributor.nonIdAuthor | Kim, Jeonghye | - |
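The abstract describes an agent that treats each exploration strategy as an option and learns to pick the most effective one over time. As a minimal illustrative sketch only (not the paper's LESSON implementation, which uses an option-critic model), the selection idea can be approximated by an epsilon-greedy chooser over the running average return of each strategy; all class and method names here are hypothetical:

```python
import random

class StrategySelector:
    """Hypothetical sketch: pick one exploration strategy (option)
    per episode, epsilon-greedily over its running average return."""

    def __init__(self, strategies, epsilon=0.1, seed=0):
        self.strategies = list(strategies)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {s: 0 for s in self.strategies}
        self.values = {s: 0.0 for s in self.strategies}

    def select(self):
        # With probability epsilon, explore among strategies themselves;
        # otherwise pick the strategy with the best average return so far.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.strategies)
        return max(self.strategies, key=lambda s: self.values[s])

    def update(self, strategy, episode_return):
        # Incremental running-average update of the chosen strategy's value.
        self.counts[strategy] += 1
        n = self.counts[strategy]
        self.values[strategy] += (episode_return - self.values[strategy]) / n
```

The actual framework learns this selection jointly with the policy via option-critic gradients rather than a simple bandit rule; the sketch only conveys the adaptive strategy-selection idea.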