DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Dong jae | ko |
dc.contributor.author | Park, Geon Young | ko |
dc.contributor.author | Lee, Sang Wan | ko |
dc.date.accessioned | 2018-11-12T04:24:57Z | - |
dc.date.available | 2018-11-12T04:24:57Z | - |
dc.date.created | 2018-10-23 | - |
dc.date.issued | 2018-10 | - |
dc.identifier.citation | 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp.990 - 994 | - |
dc.identifier.uri | http://hdl.handle.net/10203/246404 | - |
dc.description.abstract | Recent evidence in neuroscience and psychology suggests that a single reinforcement learning (RL) algorithm accounts for less than 60% of the variance of human choice behavior in an uncertain and dynamic environment, where the amount of uncertainty in state-action-state transitions drifts over time. The prediction performance further decreases as the size of the state space increases. We proposed a hierarchical context-dependent RL control framework that dynamically exerted control weights on model-based (MB) and multiple model-free (MF) RL strategies associated with different task goals. To properly assess the validity of the proposed method, we considered a two-stage Markov decision task (MDT) in which three different types of context changed over time. We trained 57 different RL control models on a Caltech MDT data set; then, we assessed their prediction performance using a Bayesian model comparison. This large-scale computer simulation analysis revealed that the model providing the most accurate prediction was the version that implemented the competition between the MB and multiple goal-dependent MF RL strategies. The present study demonstrates the applicability of the goal-driven RL control to a variety of real-world human-robot interaction scenarios. | - |
dc.language | English | - |
dc.publisher | IEEE | - |
dc.title | Hierarchical control architecture regulating competition between model-based and context-dependent model-free reinforcement learning strategies | - |
dc.type | Conference | - |
dc.identifier.wosid | 000459884801014 | - |
dc.identifier.scopusid | 2-s2.0-85062212221 | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 990 | - |
dc.citation.endingpage | 994 | - |
dc.citation.publicationname | 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC) | - |
dc.identifier.conferencecountry | JA | - |
dc.identifier.conferencelocation | Miyazaki | - |
dc.identifier.doi | 10.1109/SMC.2018.00176 | - |
dc.contributor.localauthor | Lee, Sang Wan | - |
dc.contributor.nonIdAuthor | Kim, Dong jae | - |
dc.contributor.nonIdAuthor | Park, Geon Young | - |
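The abstract describes a controller that dynamically weights model-based (MB) and model-free (MF) RL strategies. As a minimal sketch of that idea (not the authors' actual model), the weight can be driven by each system's recent reliability, e.g. one minus a running average of its absolute prediction error; all names and the update rule below are illustrative assumptions:

```python
class Arbitrator:
    """Illustrative sketch: mix MB and MF action values with a dynamic
    weight derived from each system's recent reliability, where
    reliability = 1 - running average of |prediction error|.
    (Hypothetical simplification, not the paper's published model.)"""

    def __init__(self, lr=0.1):
        self.lr = lr
        # Running mean absolute prediction error per system,
        # initialized to a neutral 0.5 so the initial weight is 0.5.
        self.err_mb = 0.5
        self.err_mf = 0.5

    def update(self, pe_mb, pe_mf):
        """Update reliability estimates from the latest prediction errors."""
        self.err_mb += self.lr * (abs(pe_mb) - self.err_mb)
        self.err_mf += self.lr * (abs(pe_mf) - self.err_mf)

    @property
    def w_mb(self):
        """Weight on the model-based system, in [0, 1]."""
        rel_mb = 1.0 - self.err_mb
        rel_mf = 1.0 - self.err_mf
        return rel_mb / (rel_mb + rel_mf + 1e-12)

    def mix(self, q_mb, q_mf):
        """Blend the two systems' Q-values for action selection."""
        w = self.w_mb
        return [w * a + (1.0 - w) * b for a, b in zip(q_mb, q_mf)]


# Usage: if the MB system's predictions are consistently more accurate,
# the arbitration weight shifts toward it.
arb = Arbitrator()
for _ in range(50):
    arb.update(pe_mb=0.1, pe_mf=0.8)
blended = arb.mix(q_mb=[1.0, 0.0], q_mf=[0.0, 1.0])
```

After the updates above, `arb.w_mb` exceeds 0.5, so `blended` favors the model-based system's preferred action.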
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.