Context-dependent meta-control for reinforcement learning using a Dirichlet process Gaussian mixture model

Arbitration between model-based (MB) and model-free (MF) reinforcement learning (RL) is a key feature of human reinforcement learning. Computational models of arbitration control have been shown to outperform conventional reinforcement learning algorithms in accounting for both behavioral data and neural signals. However, this arbitration process does not take full account of contextual changes in the environment during learning. By incorporating a Dirichlet process Gaussian mixture model into the arbitration process, we propose a meta-controller for RL that quickly adapts to contextual changes in the environment. The proposed model outperforms conventional model-free RL, model-based RL, and the arbitration model.
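The paper itself provides no code, but the core idea of the context detector can be illustrated. The sketch below is a minimal, self-contained approximation of Dirichlet-process-style context inference: a Chinese-restaurant-process prior over contexts combined with a per-context Gaussian likelihood over an observed feature (e.g., a prediction error), so that a sufficiently surprising observation spawns a new context. The class name, the fixed noise scale, and the crude base-measure scaling are illustrative assumptions, not the authors' implementation; in the full model each inferred context would index its own MB/MF arbitration state.

```python
import math


class ContextDetector:
    """Illustrative CRP-style context detector (assumption, not the paper's code).

    Each context is modeled as a 1-D Gaussian with a running mean and a fixed
    noise scale `sigma`.  The Dirichlet-process concentration `alpha` controls
    how readily a brand-new context is created for a surprising observation.
    """

    def __init__(self, alpha=1.0, sigma=1.0):
        self.alpha = alpha   # DP concentration parameter
        self.sigma = sigma   # fixed observation noise (simplifying assumption)
        self.means = []      # running mean per context
        self.counts = []     # observation count per context

    def _likelihood(self, x, mu):
        # Gaussian density of x under a context centered at mu
        z = (x - mu) / self.sigma
        return math.exp(-0.5 * z * z) / (self.sigma * math.sqrt(2 * math.pi))

    def assign(self, x):
        """Assign observation x to the most responsible context (MAP rule)."""
        n = sum(self.counts)
        # Existing contexts: CRP prior (count / (n + alpha)) times likelihood
        scores = [c / (n + self.alpha) * self._likelihood(x, m)
                  for c, m in zip(self.counts, self.means)]
        # New context: alpha / (n + alpha) times a crude base-measure density
        # (peak likelihood down-weighted by 0.1 -- an illustrative stand-in
        # for integrating over the base measure)
        new_score = self.alpha / (n + self.alpha) * self._likelihood(x, x) * 0.1
        if not scores or new_score > max(scores):
            self.means.append(x)
            self.counts.append(1)
            return len(self.means) - 1
        k = max(range(len(scores)), key=scores.__getitem__)
        self.counts[k] += 1
        self.means[k] += (x - self.means[k]) / self.counts[k]  # running mean
        return k


# Usage sketch: observations near 0 and near 10 split into two contexts
detector = ContextDetector(alpha=1.0, sigma=1.0)
labels = [detector.assign(x) for x in [0.0, 0.1, 10.0, 10.2, -0.05, 9.9]]
```

A meta-controller would then route each context label to its own learner, so that returning to a previously seen context restores the values learned there instead of overwriting them.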
Publisher
IEEE
Issue Date
2018-01
Language
English
Citation
The 6th International Winter Conference on Brain-Computer Interface (IEEE BCI 2018), pp. 112-114
DOI
10.1109/IWW-BCI.2018.8311512
URI
http://hdl.handle.net/10203/244484
Appears in Collection
BiS-Conference Papers (Conference Papers)