Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration

DC Field: Value (Language)
dc.contributor.author: Han, Seungyul (ko)
dc.contributor.author: Sung, Youngchul (ko)
dc.date.accessioned: 2021-07-23T01:30:14Z
dc.date.available: 2021-07-23T01:30:14Z
dc.date.created: 2021-06-23
dc.date.issued: 2021-07
dc.identifier.citation: International Conference on Machine Learning (ICML)
dc.identifier.issn: 2640-3498
dc.identifier.uri: http://hdl.handle.net/10203/286838
dc.description.abstract: In this paper, sample-aware policy entropy regularization is proposed to enhance the conventional policy entropy regularization for better exploration. Exploiting the sample distribution obtainable from the replay buffer, the proposed sample-aware entropy regularization maximizes the entropy of the weighted sum of the policy action distribution and the sample action distribution from the replay buffer for sample-efficient exploration. A practical algorithm named diversity actor-critic (DAC) is developed by applying policy iteration to the objective function with the proposed sample-aware entropy regularization. Numerical results show that DAC significantly outperforms existing recent algorithms for reinforcement learning.
dc.language: English
dc.publisher: International Conference on Machine Learning (ICML)
dc.title: Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration
dc.type: Conference
dc.identifier.wosid: 000683104604004
dc.type.rims: CONF
dc.citation.publicationname: International Conference on Machine Learning (ICML)
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Virtual
dc.contributor.localauthor: Sung, Youngchul
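
The abstract above describes the sample-aware entropy term as the entropy of the weighted sum of the policy action distribution and the sample action distribution obtained from the replay buffer. The snippet below is a minimal sketch of that term for a discrete action space; the mixture weight `alpha`, the empirical buffer estimate, and all function names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the sample-aware entropy term described in the abstract,
# assuming a discrete action space. The mixture weight `alpha`, the empirical
# buffer estimate, and the function names are illustrative assumptions.
import numpy as np

def sample_action_distribution(buffer_actions, num_actions):
    """Empirical action distribution estimated from replay-buffer actions."""
    counts = np.bincount(buffer_actions, minlength=num_actions).astype(float)
    return counts / counts.sum()

def sample_aware_entropy(policy_probs, buffer_probs, alpha=0.5, eps=1e-12):
    """Entropy of the weighted sum of the policy action distribution and
    the sample action distribution from the replay buffer."""
    mixture = alpha * policy_probs + (1.0 - alpha) * buffer_probs
    return -np.sum(mixture * np.log(mixture + eps))

# Example: a policy that puts mass on actions under-represented in the buffer
# pushes the mixture closer to uniform and so yields a higher entropy value.
pi = np.array([0.7, 0.2, 0.1])
q = sample_action_distribution(np.array([0, 0, 0, 1]), num_actions=3)
print(sample_aware_entropy(pi, q, alpha=0.5))
```

Maximizing this mixture entropy, rather than the policy entropy alone, is what the abstract refers to as sample-efficient exploration: the regularizer rewards the policy for choosing actions that the replay buffer has not yet sampled often.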
Appears in Collection
EE-Conference Papers (학술회의논문, conference papers)
Files in This Item
There are no files associated with this item.
