Deep reinforcement learning based finite-horizon optimal control for a discrete-time affine nonlinear system

Cited 2 times in Web of Science; cited 2 times in Scopus
Approximate dynamic programming (ADP) aims to obtain an approximate numerical solution to the discrete-time Hamilton-Jacobi-Bellman (HJB) equation. Heuristic dynamic programming (HDP) is a two-stage iterative ADP scheme that separates the HJB equation into two equations: one for the value function and one for the policy function, referred to as the critic and the actor, respectively. Previous ADP implementations have been limited by the choice of function approximator, which requires either significant prior domain knowledge or a large number of parameters to be fitted. Recent advances in deep learning, however, enable deep neural networks (DNNs) to approximate high-dimensional nonlinear functions without prior domain knowledge. Motivated by this, we examine the potential of DNNs as function approximators for the critic and the actor. In contrast to the infinite-horizon optimal control problem, the critic and the actor in the finite-horizon optimal control (FHOC) problem are time-varying functions and must satisfy a boundary condition. A DNN structure and a training algorithm suitable for FHOC are presented, and illustrative examples demonstrate the validity of the proposed method.
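The backward, two-stage critic/actor iteration described in the abstract can be sketched for a minimal case. The sketch below is illustrative, not the paper's implementation: it assumes a scalar linear system x_{k+1} = a·x_k + b·u_k (an affine system with f(x) = a·x and g(x) = b) and a quadratic cost, so a quadratic critic V_t(x) = p_t·x² fitted by least squares stands in for the paper's DNN critic, and the actor minimization has a closed form. All constants are assumed values chosen for illustration.

```python
import numpy as np

# Illustrative HDP-style backward recursion for a finite-horizon
# linear-quadratic problem. The time-varying critic V_t(x) = p_t * x^2
# is fitted by least squares at each stage, starting from the boundary
# condition at the terminal time; a DNN critic/actor would replace the
# closed-form steps in the general nonlinear case.

a, b = 0.9, 0.5            # dynamics: x_{k+1} = a*x + b*u (assumed values)
q, r, qf = 1.0, 1.0, 1.0   # stage cost q*x^2 + r*u^2, terminal cost qf*x^2
N = 20                     # horizon length

def hdp_backward(n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    p = np.empty(N + 1)
    p[N] = qf                       # boundary condition: V_N(x) = qf*x^2
    for t in range(N - 1, -1, -1):  # backward in time (time-varying critic)
        xs = rng.uniform(-2.0, 2.0, n_samples)
        # Actor step: minimize q*x^2 + r*u^2 + p[t+1]*(a*x + b*u)^2 over u.
        # This is quadratic in u, so the minimizer is available in closed
        # form; in the general setting a DNN actor would be trained here.
        us = -(a * b * p[t + 1]) / (r + b * b * p[t + 1]) * xs
        # Critic targets: one-step cost plus next-stage value.
        targets = q * xs**2 + r * us**2 + p[t + 1] * (a * xs + b * us) ** 2
        # Critic step: least-squares fit of p[t] in V_t(x) = p[t]*x^2.
        p[t] = np.sum(targets * xs**2) / np.sum(xs**4)
    return p

p = hdp_backward()
```

Because both the targets and the critic are quadratic here, the fitted coefficients coincide with the exact backward (Riccati) recursion; the sketch only illustrates the structure of the critic/actor alternation and the terminal boundary condition.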
Publisher
IEEE
Issue Date
2018-09
Language
English
Citation

57th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), pp. 567-572

DOI
10.23919/SICE.2018.8492653
URI
http://hdl.handle.net/10203/274838
Appears in Collection
CBE - Conference Papers
