DC Field | Value | Language |
---|---|---|
dc.contributor.author | Han, Seungyul | ko |
dc.contributor.author | Sung, Youngchul | ko |
dc.date.accessioned | 2021-07-23T01:30:14Z | - |
dc.date.available | 2021-07-23T01:30:14Z | - |
dc.date.created | 2021-06-23 | -
dc.date.issued | 2021-07 | - |
dc.identifier.citation | International Conference on Machine Learning (ICML) | - |
dc.identifier.issn | 2640-3498 | - |
dc.identifier.uri | http://hdl.handle.net/10203/286838 | - |
dc.description.abstract | In this paper, sample-aware policy entropy regularization is proposed to enhance conventional policy entropy regularization for better exploration. Exploiting the sample distribution obtainable from the replay buffer, the proposed sample-aware entropy regularization maximizes the entropy of the weighted sum of the policy action distribution and the sample action distribution from the replay buffer, enabling sample-efficient exploration. A practical algorithm named diversity actor-critic (DAC) is developed by applying policy iteration to the objective function with the proposed sample-aware entropy regularization. Numerical results show that DAC significantly outperforms recent reinforcement learning algorithms. | -
dc.language | English | - |
dc.publisher | International Conference on Machine Learning (ICML) | - |
dc.title | Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration | - |
dc.type | Conference | - |
dc.identifier.wosid | 000683104604004 | - |
dc.type.rims | CONF | - |
dc.citation.publicationname | International Conference on Machine Learning (ICML) | - |
dc.identifier.conferencecountry | US | - |
dc.identifier.conferencelocation | Virtual | - |
dc.contributor.localauthor | Sung, Youngchul | - |
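
The abstract above describes the core regularizer: instead of the plain policy entropy, DAC maximizes the entropy of a weighted sum of the policy action distribution and the sample action distribution obtained from the replay buffer. The sketch below illustrates that mixture entropy for a discrete action space; it is a minimal illustration under stated assumptions, not the authors' implementation, and the function name `mixture_entropy`, the fixed mixture weight `alpha`, and the discrete-action setting are assumptions made here for clarity.

```python
import numpy as np

def mixture_entropy(policy_probs, buffer_probs, alpha=0.5):
    """Entropy of the weighted sum of the policy action distribution
    pi(a|s) and the sample action distribution q(a|s) estimated from
    replay-buffer samples.

    policy_probs: pi(a|s), shape (num_actions,)
    buffer_probs: q(a|s), same shape
    alpha: mixture weight (assumed fixed hyperparameter in this sketch)
    """
    mix = alpha * policy_probs + (1.0 - alpha) * buffer_probs
    mix = np.clip(mix, 1e-12, 1.0)  # guard against log(0)
    return -np.sum(mix * np.log(mix))

# Example: a peaked policy mixed with a flatter buffer distribution
pi = np.array([0.7, 0.2, 0.1])
q = np.array([0.3, 0.4, 0.3])
print(mixture_entropy(pi, q, alpha=0.5))
```

Intuitively, maximizing this mixture entropy favors actions that are under-represented in the replay buffer, which matches the abstract's claim of sample-efficient exploration.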