Hierarchical control architecture regulating competition between model-based and context-dependent model-free reinforcement learning strategies

DC Field: Value (Language)
dc.contributor.author: Kim, Dong jae (ko)
dc.contributor.author: Park, Geon Young (ko)
dc.contributor.author: Lee, Sang Wan (ko)
dc.date.accessioned: 2018-11-12T04:24:57Z
dc.date.available: 2018-11-12T04:24:57Z
dc.date.created: 2018-10-23
dc.date.issued: 2018-10
dc.identifier.citation: 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 990-994
dc.identifier.uri: http://hdl.handle.net/10203/246404
dc.description.abstract: Recent evidence in neuroscience and psychology suggests that a single reinforcement learning (RL) algorithm accounts for less than 60% of the variance of human choice behavior in an uncertain and dynamic environment, where the amount of uncertainty in state-action-state transitions drifts over time. The prediction performance decreases further as the size of the state space increases. We propose a hierarchical context-dependent RL control framework that dynamically exerts control weights on model-based (MB) and multiple model-free (MF) RL strategies associated with different task goals. To assess the validity of the proposed method, we considered a two-stage Markov decision task (MDT) in which three different types of context changed over time. We trained 57 different RL control models on a Caltech MDT data set and then assessed their prediction performance using Bayesian model comparison. This large-scale computer simulation analysis revealed that the most accurate model was the version implementing competition between the MB and multiple goal-dependent MF RL strategies. The present study demonstrates the applicability of goal-driven RL control to a variety of real-world human-robot interaction scenarios.
dc.language: English
dc.publisher: IEEE
dc.title: Hierarchical control architecture regulating competition between model-based and context-dependent model-free reinforcement learning strategies
dc.type: Conference
dc.identifier.wosid: 000459884801014
dc.identifier.scopusid: 2-s2.0-85062212221
dc.type.rims: CONF
dc.citation.beginningpage: 990
dc.citation.endingpage: 994
dc.citation.publicationname: 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
dc.identifier.conferencecountry: JA
dc.identifier.conferencelocation: Miyazaki
dc.identifier.doi: 10.1109/SMC.2018.00176
dc.contributor.localauthor: Lee, Sang Wan
dc.contributor.nonIdAuthor: Kim, Dong jae
dc.contributor.nonIdAuthor: Park, Geon Young
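
As an illustration of the arbitration scheme described in the abstract, the following is a minimal sketch, not the authors' implementation, of mixing an MB value estimate with a goal-dependent MF value estimate under a dynamic control weight, followed by softmax action selection. All names and numbers here (`q_mb`, `q_mf_goals`, `w_mb`, the goal labels, the temperature) are hypothetical placeholders; the paper's actual 57 candidate models and weight-update rule are not reproduced.

```python
import numpy as np

def softmax(x, tau=1.0):
    """Softmax action probabilities with temperature tau."""
    z = np.exp((x - x.max()) / tau)
    return z / z.sum()

def integrated_q(q_mb, q_mf, w_mb):
    """Weighted mixture of MB and MF value estimates.

    w_mb in [0, 1] is the control weight on the model-based
    controller; in the paper's framework this weight is updated
    dynamically, which is not modeled here.
    """
    return w_mb * q_mb + (1.0 - w_mb) * q_mf

# Hypothetical Q-values over 2 actions: one model-based learner
# and two goal-dependent model-free learners.
q_mb = np.array([0.8, 0.2])
q_mf_goals = {"goal_A": np.array([0.1, 0.9]),
              "goal_B": np.array([0.5, 0.5])}

# Example: the current context favors the MB controller
# (w_mb = 0.7) and the active goal is goal_A.
q = integrated_q(q_mb, q_mf_goals["goal_A"], w_mb=0.7)
p = softmax(q, tau=0.2)
action = int(np.argmax(p))
```

With a high `w_mb` the MB preference for the first action dominates the MF preference for the second; lowering `w_mb` below 0.5 would flip the chosen action, which is the competition the framework regulates.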
Appears in Collection
BiS-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
