A model-based deep reinforcement learning method applied to finite-horizon optimal control of nonlinear control-affine system

Cited 38 times in Web of Science; cited 31 times in Scopus
  • Hits: 352
  • Downloads: 0
DC Field: Value (Language)
dc.contributor.author: Kim, Jong Woo (ko)
dc.contributor.author: Park, Byung Jun (ko)
dc.contributor.author: Yoo, Haeun (ko)
dc.contributor.author: Oh, Tae Hoon (ko)
dc.contributor.author: Lee, Jay H. (ko)
dc.contributor.author: Lee, Jong Min (ko)
dc.date.accessioned: 2020-04-02T08:20:06Z
dc.date.available: 2020-04-02T08:20:06Z
dc.date.created: 2020-03-30
dc.date.issued: 2020-03
dc.identifier.citation: JOURNAL OF PROCESS CONTROL, v.87, pp.166 - 178
dc.identifier.issn: 0959-1524
dc.identifier.uri: http://hdl.handle.net/10203/273800
dc.description.abstract: The Hamilton-Jacobi-Bellman (HJB) equation can be solved to obtain optimal closed-loop control policies for general nonlinear systems. As it is seldom possible to solve the HJB equation exactly for nonlinear systems, either analytically or numerically, methods that build approximate solutions through simulation-based learning have been studied under various names, such as neuro-dynamic programming (NDP) and approximate dynamic programming (ADP). This learning aspect connects these methods to reinforcement learning (RL), which also seeks to learn optimal decision policies through trial-and-error learning. This study develops a model-based RL method that iteratively learns the solution to the HJB equation and its associated equations. We focus particularly on control-affine systems with a quadratic objective function and the finite-horizon optimal control (FHOC) problem with time-varying reference trajectories. The HJB solutions for such systems involve time-varying value, costate, and policy functions subject to boundary conditions. To represent the time-varying HJB solution in a high-dimensional state space in a general and efficient way, deep neural networks (DNNs) are employed. It is shown that the use of DNNs, compared to shallow neural networks (SNNs), can significantly improve the performance of the learned policy in the presence of uncertain initial states and state noise. Examples involving a batch chemical reactor and a one-dimensional diffusion-convection-reaction system are used to demonstrate this and other key aspects of the method.
dc.language: English
dc.publisher: ELSEVIER SCI LTD
dc.title: A model-based deep reinforcement learning method applied to finite-horizon optimal control of nonlinear control-affine system
dc.type: Article
dc.identifier.wosid: 000518872200014
dc.identifier.scopusid: 2-s2.0-85079376230
dc.type.rims: ART
dc.citation.volume: 87
dc.citation.beginningpage: 166
dc.citation.endingpage: 178
dc.citation.publicationname: JOURNAL OF PROCESS CONTROL
dc.identifier.doi: 10.1016/j.jprocont.2020.02.003
dc.contributor.localauthor: Lee, Jay H.
dc.contributor.nonIdAuthor: Kim, Jong Woo
dc.contributor.nonIdAuthor: Park, Byung Jun
dc.contributor.nonIdAuthor: Oh, Tae Hoon
dc.contributor.nonIdAuthor: Lee, Jong Min
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Reinforcement learning
dc.subject.keywordAuthor: Approximate dynamic programming
dc.subject.keywordAuthor: Deep neural networks
dc.subject.keywordAuthor: Globalized dual heuristic programming
dc.subject.keywordAuthor: Finite horizon optimal control problem
dc.subject.keywordAuthor: Hamilton-Jacobi-Bellman equation
dc.subject.keywordPlus: APPROXIMATE OPTIMAL-CONTROL
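
The abstract above formulates the problem as the HJB equation for a control-affine system with a quadratic objective over a finite horizon. As a minimal sketch of what those equations look like, assuming dynamics \dot{x} = f(x) + g(x)u, state and input weights Q and R, a reference trajectory x_{ref}(t), and a terminal cost \phi (these symbols are standard choices, not taken from this record):

  V(x,t) = \min_{u(\cdot)} \Big\{ \int_t^{t_f} \big[ (x - x_{ref}(\tau))^{\top} Q \, (x - x_{ref}(\tau)) + u^{\top} R \, u \big] \, d\tau + \phi(x(t_f)) \Big\}

  -\frac{\partial V}{\partial t} = \min_{u} \Big[ (x - x_{ref}(t))^{\top} Q \, (x - x_{ref}(t)) + u^{\top} R \, u + (\nabla_x V)^{\top} \big( f(x) + g(x) u \big) \Big], \qquad V(x, t_f) = \phi(x)

  u^{*}(x,t) = -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla_x V(x,t), \qquad \lambda(x,t) := \nabla_x V(x,t)

Because the input enters the dynamics affinely and the input cost is quadratic, the minimization over u has a closed form, giving the policy u^{*} above. The value V, costate \lambda = \nabla_x V, and policy u^{*} are the time-varying functions the abstract describes representing with deep neural networks, with V(x, t_f) = \phi(x) supplying the boundary condition.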
Appears in Collection
CBE-Journal Papers (저널논문)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
