Understanding Features on Evolutionary Policy Optimizations

DC Field: Value (Language)
dc.contributor.author: Lee, Sangyeop (ko)
dc.contributor.author: Ha, Myoung Hoon (ko)
dc.contributor.author: Moon, Byoungro (ko)
dc.date.accessioned: 2021-10-19T08:50:53Z
dc.date.available: 2021-10-19T08:50:53Z
dc.date.created: 2021-10-19
dc.date.issued: 2020-04
dc.identifier.citation: 35th Annual ACM Symposium on Applied Computing (SAC), pp. 1112-1118
dc.identifier.uri: http://hdl.handle.net/10203/288271
dc.description.abstract: We analyze two deep reinforcement learning algorithms, gradient-based policy optimization and an evolutionary one, using a number of visualization techniques and supplementary experiments. Among such techniques, filter visualization and saliency maps are used to examine whether meaningful features are properly extracted by the two algorithms. In addition to the visual analysis, several experiments are devised to strengthen the validity of the analysis. We observed that evolutionary policy optimization tends to make use of prior knowledge and to learn the prior action distribution of the policy through a powerful exploration ability, which a gradient-based algorithm cannot do easily.
dc.language: English
dc.publisher: ASSOC COMPUTING MACHINERY
dc.title: Understanding Features on Evolutionary Policy Optimizations
dc.type: Conference
dc.identifier.wosid: 000569720900159
dc.identifier.scopusid: 2-s2.0-85083040370
dc.type.rims: CONF
dc.citation.beginningpage: 1112
dc.citation.endingpage: 1118
dc.citation.publicationname: 35th Annual ACM Symposium on Applied Computing (SAC)
dc.identifier.conferencecountry: CS
dc.identifier.conferencelocation: Virtual
dc.identifier.doi: 10.1145/3341105.3373966
dc.contributor.nonIdAuthor: Lee, Sangyeop
dc.contributor.nonIdAuthor: Moon, Byoungro
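The abstract above applies gradient-based saliency maps to examine which input features a trained policy relies on. The following is a minimal sketch of that general technique, not the authors' code: the "policy" here is a hypothetical toy linear-softmax model standing in for a trained deep network, and all names are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over action logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def saliency_map(W, x):
    """Saliency of the greedy action w.r.t. the input features.

    For a toy linear policy with logits z = W @ x, the gradient of the
    chosen action's logit is d z[a] / d x = W[a]. Its absolute value is
    the saliency map: large entries mark input features the policy
    depends on most. (For a deep network one would backpropagate to the
    input instead of reading off a weight row.)
    """
    logits = W @ x
    action = int(np.argmax(logits))
    return np.abs(W[action]), action

# Hypothetical setup: 3 discrete actions, 8 input features.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # stand-in "trained" policy weights
x = rng.normal(size=8)        # one observation
sal, a = saliency_map(W, x)
print("greedy action:", a, "saliency shape:", sal.shape)
```

Comparing such maps between a gradient-trained and an evolution-trained policy is the kind of visual analysis the abstract describes.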
Appears in Collection: RIMS Conference Papers
Files in This Item: There are no files associated with this item.
