Utilizing Skipped Frames in Action Repeats for Improving Sample Efficiency in Reinforcement Learning

Cited 1 time in Web of Science · Cited 0 times in Scopus
DC Field: Value [Language]
dc.contributor.author: Luu, Tung M. [ko]
dc.contributor.author: Nguyen, Thanh [ko]
dc.contributor.author: Vu, Thang [ko]
dc.contributor.author: Yoo, Chang-Dong [ko]
dc.date.accessioned: 2022-06-26T01:02:00Z
dc.date.available: 2022-06-26T01:02:00Z
dc.date.created: 2022-06-25
dc.date.issued: 2022
dc.identifier.citation: IEEE ACCESS, v.10, pp.64965 - 64975
dc.identifier.issn: 2169-3536
dc.identifier.uri: http://hdl.handle.net/10203/297080
dc.description.abstract: Action repeat has become the de facto mechanism in deep reinforcement learning (RL) for stabilizing training and enhancing exploration. Here, an action is chosen at the action-decision point and executed repeatedly for a designated number of steps until the next decision point. Although this mechanism has several advantages, the intermediate states arising from the repeated actions are discarded when training agents, causing sample inefficiency. Utilizing these discarded states as training data is nontrivial because the action that causes the transition between them is unavailable. This paper proposes to infer the action at the intermediate states via an inverse dynamics model. The proposed method is simple and easily incorporated into existing off-policy RL algorithms; integrating it with SAC shows consistent improvement across various tasks.
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: Utilizing Skipped Frames in Action Repeats for Improving Sample Efficiency in Reinforcement Learning
dc.type: Article
dc.identifier.wosid: 000815504400001
dc.identifier.scopusid: 2-s2.0-85132792201
dc.type.rims: ART
dc.citation.volume: 10
dc.citation.beginningpage: 64965
dc.citation.endingpage: 64975
dc.citation.publicationname: IEEE ACCESS
dc.identifier.doi: 10.1109/access.2022.3182107
dc.contributor.localauthor: Yoo, Chang-Dong
dc.contributor.nonIdAuthor: Luu, Tung M.
dc.contributor.nonIdAuthor: Nguyen, Thanh
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Task analysis
dc.subject.keywordAuthor: Training
dc.subject.keywordAuthor: Heuristic algorithms
dc.subject.keywordAuthor: Benchmark testing
dc.subject.keywordAuthor: Data models
dc.subject.keywordAuthor: Training data
dc.subject.keywordAuthor: Robots
dc.subject.keywordAuthor: Action repeat mechanism
dc.subject.keywordAuthor: off-policy reinforcement learning
dc.subject.keywordAuthor: reinforcement learning
dc.subject.keywordAuthor: sample efficiency
dc.subject.keywordPlus: LEVEL
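The abstract describes relabeling the frames normally skipped by action repeat: each intermediate state is paired with the state one repeat-window later, and the missing connecting action is inferred by an inverse dynamics model. The sketch below illustrates only this data-flow idea; the linear `inverse_dynamics` stand-in, the weight matrix `W`, and the function names are illustrative assumptions, not the paper's actual implementation (which trains a neural inverse model alongside SAC).

```python
import numpy as np

def inverse_dynamics(s, s_next, W):
    # Stand-in for a learned inverse dynamics model g(s, s') -> a.
    # W is a hypothetical fixed weight matrix used purely for illustration;
    # in the paper this would be a trained network.
    return np.tanh(W @ np.concatenate([s, s_next]))

def augment_replay(frames, repeat, W):
    """Build extra transitions that start at the normally-discarded
    intermediate frames observed during action repetition.

    frames: per-environment-step states (decision points at multiples of
    `repeat`). Transitions starting at decision points are assumed to be
    stored by the usual pipeline, so only intermediate starts are added,
    each spanning one repeat window with an action inferred by the
    inverse dynamics model."""
    extra = []
    for t in range(len(frames) - repeat):
        if t % repeat == 0:
            continue  # decision-point transition: true action is known
        s, s_next = frames[t], frames[t + repeat]
        a_hat = inverse_dynamics(s, s_next, W)
        extra.append((s, a_hat, s_next))
    return extra
```

With action repeat k, each decision step yields k - 1 intermediate frames, so this roughly multiplies the number of usable transitions by k, which is the sample-efficiency gain the abstract claims.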
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS