Reward hierarchical temporal memory: Model for memorizing and computing reward prediction error by neocortex

In humans and animals, the reward prediction error encoded by dopaminergic systems is thought to drive the temporal-difference class of reinforcement learning (RL). Many brain models built on RL algorithms have described the function of dopamine and related areas, including the basal ganglia and frontal cortex. Despite this importance, how the reward prediction error itself is computed remains poorly understood, including how current states are assigned to memorized states and how the values of those states are stored. In this paper, we describe a neocortical model for memorizing a state space and computing the reward prediction error, which we call ‘reward hierarchical temporal memory’ (rHTM). In this model, the temporal relationships among events are stored hierarchically. Using this memory, rHTM computes reward prediction errors by associating the memorized sequences with rewards and inhibiting the predicted reward. In simulation, our model behaved similarly to dopaminergic neurons. We suggest that our model provides a hypothetical framework for the interaction between the cortex and dopamine neurons.
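
The reward prediction error the abstract refers to is the temporal-difference error delta_t = r_t + gamma * V(s_{t+1}) - V(s_t). The Python sketch below illustrates that textbook quantity only; it is not the authors' rHTM implementation, and the state names, reward schedule, discount factor, and learning rate are assumptions made for the example. It shows how, with learning, the error at reward delivery shrinks because the reward becomes predicted, and is thus "inhibited", the dopamine-like behaviour the abstract describes.

    # Minimal TD(0) sketch of the reward prediction error
    #   delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    # Illustration only: not the authors' rHTM model. State names,
    # reward schedule, GAMMA, and ALPHA are assumed for the example.

    GAMMA = 0.9   # temporal discount factor (assumed)
    ALPHA = 0.1   # learning rate (assumed)

    values = {"cue": 0.0, "delay": 0.0, "reward": 0.0, "end": 0.0}

    def td_error(state, next_state, r):
        """Reward received plus discounted future prediction, minus current prediction."""
        return r + GAMMA * values[next_state] - values[state]

    # One trial: a cue is followed, after a delay, by a unit reward.
    trial = [("cue", "delay", 0.0), ("delay", "reward", 1.0), ("reward", "end", 0.0)]

    for _ in range(200):
        for state, next_state, r in trial:
            delta = td_error(state, next_state, r)
            values[state] += ALPHA * delta  # value update driven by the error

    # After training, the error at reward delivery is near zero: the reward
    # is predicted by the value of the preceding state and hence suppressed.
    print(values)
    print(td_error("delay", "reward", 1.0))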
Publisher
IEEE World Congress on Computational Intelligence
Issue Date
2012-06-10
Language
English
Citation
IEEE World Congress on Computational Intelligence
URI
http://hdl.handle.net/10203/169472
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
