Hindsight Goal Ranking on Replay Buffer for Sparse Reward Environment

Cited 2 times in Web of Science · Cited 0 times in Scopus
  • Hits: 257
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Luu, Tung M. | ko
dc.contributor.author | Yoo, Chang-Dong | ko
dc.date.accessioned | 2021-04-26T06:10:14Z | -
dc.date.available | 2021-04-26T06:10:14Z | -
dc.date.created | 2021-04-26 | -
dc.date.issued | 2021-04 | -
dc.identifier.citation | IEEE ACCESS, v.9, pp.51996 - 52007 | -
dc.identifier.issn | 2169-3536 | -
dc.identifier.uri | http://hdl.handle.net/10203/282559 | -
dc.description.abstract | This paper proposes Hindsight Goal Ranking (HGR), a method for prioritizing replay experience that overcomes a limitation of Hindsight Experience Replay (HER), which generates hindsight goals by uniform sampling. HGR samples the states visited in an episode with higher probability when their temporal-difference (TD) error is larger, the TD error serving as a proxy for how much the RL agent can learn from an experience. Sampling is performed in two steps: first, an episode is drawn from the replay buffer according to the average TD error of its experiences; then, within the sampled episode, a hindsight goal is drawn from the future visited states, with larger TD errors given higher probability. Combined with Deep Deterministic Policy Gradient (DDPG), an off-policy model-free actor-critic algorithm, the proposed method learns significantly faster than the same algorithm without prioritization on four challenging simulated robotic manipulation tasks. The empirical results show that HGR uses samples more efficiently than previous methods across all tasks. (A minimal code sketch of the two-step sampling is given after the metadata table below.) | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Hindsight Goal Ranking on Replay Buffer for Sparse Reward Environment | -
dc.type | Article | -
dc.identifier.wosid | 000639862900001 | -
dc.identifier.scopusid | 2-s2.0-85103770178 | -
dc.type.rims | ART | -
dc.citation.volume | 9 | -
dc.citation.beginningpage | 51996 | -
dc.citation.endingpage | 52007 | -
dc.citation.publicationname | IEEE ACCESS | -
dc.identifier.doi | 10.1109/ACCESS.2021.3069975 | -
dc.contributor.localauthor | Yoo, Chang-Dong | -
dc.contributor.nonIdAuthor | Luu, Tung M. | -
dc.description.isOpenAccess | Y | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Task analysis | -
dc.subject.keywordAuthor | Training | -
dc.subject.keywordAuthor | Robots | -
dc.subject.keywordAuthor | Reinforcement learning | -
dc.subject.keywordAuthor | Buffer storage | -
dc.subject.keywordAuthor | Measurement uncertainty | -
dc.subject.keywordAuthor | Computer architecture | -
dc.subject.keywordAuthor | Hindsight goal ranking | -
dc.subject.keywordAuthor | multi-goal reinforcement learning | -
dc.subject.keywordAuthor | reinforcement learning | -
dc.subject.keywordAuthor | sparse reward | -
dc.subject.keywordAuthor | sample efficiency | -
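
The abstract above describes HGR's two-step prioritized sampling: an episode is drawn in proportion to its average TD error, and a hindsight goal is then drawn from that episode's future visited states in proportion to per-state TD error. The Python sketch below is a minimal illustration of that idea, not the authors' implementation; the class name TwoStepPrioritizedBuffer, the priority exponent alpha, the optimistic initial priority, and the uniform choice of which transition to relabel are all assumptions, and the paper's buffer may additionally use sum-tree storage and importance-sampling corrections.

import numpy as np

class TwoStepPrioritizedBuffer:
    """Illustrative two-step TD-error-based sampling for hindsight goals.

    Episodes are stored whole; each transition keeps an |TD error| estimate.
    sample() first draws an episode in proportion to its average TD error,
    then draws a hindsight goal among the future states of that episode in
    proportion to per-state TD error.
    """

    def __init__(self, capacity=1000, alpha=0.6, eps=1e-6):
        self.capacity = capacity   # max number of stored episodes
        self.alpha = alpha         # priority exponent (assumed value)
        self.eps = eps             # keeps every priority strictly positive
        self.episodes = []         # each entry: list/array of transitions
        self.td_errors = []        # per-episode array of |TD error|

    def add_episode(self, transitions):
        # Drop the oldest episode when full; give new transitions a large
        # initial priority so they are sampled at least once.
        if len(self.episodes) >= self.capacity:
            self.episodes.pop(0)
            self.td_errors.pop(0)
        self.episodes.append(transitions)
        self.td_errors.append(np.ones(len(transitions)))

    def sample(self, rng=np.random):
        # Step 1: episode-level sampling by average TD error.
        ep_priorities = np.array([(e.mean() + self.eps) ** self.alpha
                                  for e in self.td_errors])
        ep_idx = rng.choice(len(self.episodes),
                            p=ep_priorities / ep_priorities.sum())

        # Choose which transition to relabel (uniform here, an assumption).
        td = self.td_errors[ep_idx]
        t = rng.randint(len(td))

        # Step 2: goal-level sampling among future states by TD error
        # (the current index is included so the candidate set is never empty).
        future = np.arange(t, len(td))
        g_priorities = (td[future] + self.eps) ** self.alpha
        goal_idx = rng.choice(future, p=g_priorities / g_priorities.sum())
        return ep_idx, t, goal_idx

    def update_priorities(self, ep_idx, indices, new_td_errors):
        # Refresh stored TD errors after the corresponding learning step.
        self.td_errors[ep_idx][indices] = np.abs(new_td_errors)

In a DDPG + HER style training loop, the returned goal_idx would be used to relabel the sampled transition's goal and recompute its reward before forming the TD target, and update_priorities would then be called with the new absolute TD errors.
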
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
