Bayesian Reinforcement Learning with Behavioral Feedback

Abstract
In the standard reinforcement learning setting, the agent learns an optimal policy solely from state transitions and rewards from the environment. We consider an extended setting in which a trainer additionally provides feedback on the actions executed by the agent. The agent must incorporate this feedback appropriately, even when it is not necessarily accurate. In this paper, we present a Bayesian approach to this extended reinforcement learning setting. Specifically, we extend Kalman Temporal Difference learning to compute the posterior distribution over Q-values given the state transitions and rewards from the environment as well as the feedback from the trainer. Through experiments on standard reinforcement learning tasks, we show that learning performance can be significantly improved even with inaccurate feedback.
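The abstract does not spell out the update equations, but Kalman Temporal Difference learning admits a compact sketch. The snippet below is a minimal illustration assuming a linear Q-function Q(s, a) = phi(s, a)^T theta: the TD observation is handled by a standard scalar Kalman update, and the trainer's binary feedback is folded in as a second noisy Gaussian observation. The class name KTDWithFeedback, the feedback likelihood, and all noise parameters are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

class KTDWithFeedback:
    """Kalman Temporal Difference learning over linear Q-value weights,
    extended with a hypothetical Gaussian observation model for trainer
    feedback. Illustrative sketch only; the paper's feedback likelihood
    may differ."""

    def __init__(self, dim, gamma=0.99, process_noise=1e-3,
                 reward_noise=1.0, feedback_noise=4.0, feedback_scale=1.0):
        self.theta = np.zeros(dim)                  # posterior mean over weights
        self.P = np.eye(dim)                        # posterior covariance
        self.gamma = gamma
        self.Q_noise = process_noise * np.eye(dim)  # random-walk drift on weights
        self.R_r = reward_noise                     # reward observation variance
        self.R_f = feedback_noise                   # feedback observation variance
        self.c = feedback_scale                     # assumed feedback target magnitude

    def _kalman_update(self, h, y, R):
        """Scalar-observation Kalman update for y ~ N(h @ theta, R)."""
        self.P += self.Q_noise                      # prediction (random-walk) step
        s = h @ self.P @ h + R                      # innovation variance
        k = (self.P @ h) / s                        # Kalman gain
        self.theta += k * (y - h @ self.theta)      # correct the mean
        self.P -= np.outer(k, h @ self.P)           # shrink the covariance

    def update_transition(self, phi_sa, phi_next, r):
        """TD observation: r ~ N((phi_sa - gamma * phi_next) @ theta, R_r)."""
        self._kalman_update(phi_sa - self.gamma * phi_next, r, self.R_r)

    def update_feedback(self, phi_sa, f):
        """Trainer feedback f in {+1, -1}, modeled here (as an assumption)
        as a noisy observation that Q(s, a) is near +c or -c. A large R_f
        keeps the update weak, reflecting possibly inaccurate feedback."""
        self._kalman_update(phi_sa, self.c * f, self.R_f)
```

In use, one would call update_transition after each environment step and update_feedback whenever the trainer signals approval or disapproval; because both are Bayesian updates on the same Gaussian posterior, inaccurate feedback is discounted naturally through its larger observation variance.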
Publisher
International Joint Conferences on Artificial Intelligence Organization (IJCAI)
Issue Date
2016-07-14
Language
English
Citation
25th International Joint Conference on Artificial Intelligence (IJCAI 2016), pp. 1571-1577
URI
http://hdl.handle.net/10203/214342
Appears in Collection
RIMS Conference Papers