DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Donghwan | ko |
dc.contributor.author | He, Niao | ko |
dc.date.accessioned | 2020-12-18T07:30:23Z | - |
dc.date.available | 2020-12-18T07:30:23Z | - |
dc.date.created | 2020-11-24 | - |
dc.date.issued | 2020-06-11 | - |
dc.identifier.citation | 2nd Annual Conference on Learning for Dynamics and Control (L4DC) | - |
dc.identifier.uri | http://hdl.handle.net/10203/278704 | - |
dc.description.abstract | The use of target networks is common practice in deep reinforcement learning for stabilizing training; however, theoretical understanding of this technique is still limited. In this paper, we study the so-called periodic Q-learning algorithm (PQ-learning for short), which resembles the technique used in deep Q-learning, for solving infinite-horizon discounted Markov decision processes (DMDPs) in the tabular setting. PQ-learning maintains two separate Q-value estimates: the online estimate and the target estimate. The online estimate follows the standard Q-learning update, while the target estimate is updated only periodically (see the sketch after this record). In contrast to standard Q-learning, PQ-learning enjoys a simple finite-time analysis and achieves better sample complexity for finding an epsilon-optimal policy. Our result provides a preliminary justification for the effectiveness of using target estimates or networks in Q-learning algorithms. | - |
dc.language | English | - |
dc.publisher | UC Berkeley | - |
dc.title | Periodic Q-learning | - |
dc.type | Conference | - |
dc.type.rims | CONF | - |
dc.citation.publicationname | 2nd Annual Conference on Learning for Dynamics and Control (L4DC) | - |
dc.identifier.conferencecountry | US | - |
dc.identifier.conferencelocation | Online | - |
dc.contributor.localauthor | Lee, Donghwan | - |
dc.contributor.nonIdAuthor | He, Niao | - |
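
The abstract describes the two-estimate update scheme only at a high level. Below is a minimal sketch of tabular PQ-learning as the abstract characterizes it: the online Q-table is updated at every step but bootstraps from a target Q-table that is refreshed only periodically. The environment interface (`reset()`/`step()`), the constant step size, the epsilon-greedy behavior policy, and the sync period are illustrative assumptions, not the paper's exact schedule.

```python
import numpy as np

def pq_learning(env, n_states, n_actions, gamma=0.99, alpha=0.1,
                epsilon=0.1, total_steps=100_000, sync_period=1_000):
    """Sketch of tabular PQ-learning (assumed hyperparameters).

    `q_online` follows the standard Q-learning update at every step,
    except that the bootstrap term uses `q_target`, which is copied
    from `q_online` only every `sync_period` steps (the 'periodic' part).
    Assumes env.reset() -> state and env.step(a) -> (state, reward, done).
    """
    q_online = np.zeros((n_states, n_actions))
    q_target = np.zeros((n_states, n_actions))
    s = env.reset()
    for t in range(total_steps):
        # epsilon-greedy behavior policy on the online estimate
        if np.random.rand() < epsilon:
            a = np.random.randint(n_actions)
        else:
            a = int(np.argmax(q_online[s]))
        s_next, r, done = env.step(a)
        # standard Q-learning update, but bootstrapping from the
        # periodically frozen target estimate rather than q_online
        td_target = r + gamma * np.max(q_target[s_next]) * (not done)
        q_online[s, a] += alpha * (td_target - q_online[s, a])
        # periodic synchronization of the target estimate
        if (t + 1) % sync_period == 0:
            q_target = q_online.copy()
        s = env.reset() if done else s_next
    return q_online
```

Freezing the bootstrap target means that, between synchronizations, the online updates drive `q_online` toward a fixed Bellman backup of `q_target` rather than a moving one; this decoupling is, roughly, what underlies the simple finite-time analysis claimed in the abstract.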