Reinforcement Learning - Overview of recent progress and implications for process control

Cited 151 times in Web of Science · Cited 88 times in Scopus
  • Hits: 587
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Shin, Joohyun | ko
dc.contributor.author | Badgwell, Thomas A. | ko
dc.contributor.author | Liu, Kuang-Hung | ko
dc.contributor.author | Lee, Jay Hyung | ko
dc.date.accessioned | 2019-07-05T02:30:07Z | -
dc.date.available | 2019-07-05T02:30:07Z | -
dc.date.created | 2019-07-01 | -
dc.date.issued | 2019-08 | -
dc.identifier.citation | COMPUTERS & CHEMICAL ENGINEERING, v.127, pp.282 - 294 | -
dc.identifier.issn | 0098-1354 | -
dc.identifier.uri | http://hdl.handle.net/10203/262943 | -
dc.description.abstract | This paper provides an introduction to Reinforcement Learning (RL) technology, summarizes recent developments in this area, and discusses their potential implications for the field of process control and, more generally, operational decision-making. The paper begins with an introduction to RL, a technology that allows an agent to learn, through trial and error, the best way to accomplish a task. We then highlight new developments in RL that have led to the recent wave of applications and media interest. A comparison of the key features of RL and mathematical programming based methods (e.g., model predictive control) is then presented to clarify their similarities and differences. This is followed by an assessment of several ways that RL technology can potentially be used in process control and operational decision applications. A final section summarizes our conclusions and lists directions for future RL research that may improve its relevance for the process systems engineering field. (C) 2019 Elsevier Ltd. All rights reserved. | -
dc.language | English | -
dc.publisher | PERGAMON-ELSEVIER SCIENCE LTD | -
dc.title | Reinforcement Learning - Overview of recent progress and implications for process control | -
dc.type | Article | -
dc.identifier.wosid | 000470829700023 | -
dc.identifier.scopusid | 2-s2.0-85066311169 | -
dc.type.rims | ART | -
dc.citation.volume | 127 | -
dc.citation.beginningpage | 282 | -
dc.citation.endingpage | 294 | -
dc.citation.publicationname | COMPUTERS & CHEMICAL ENGINEERING | -
dc.identifier.doi | 10.1016/j.compchemeng.2019.05.029 | -
dc.contributor.localauthor | Lee, Jay Hyung | -
dc.contributor.nonIdAuthor | Badgwell, Thomas A. | -
dc.contributor.nonIdAuthor | Liu, Kuang-Hung | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Reinforcement Learning | -
dc.subject.keywordAuthor | Mathematical programming | -
dc.subject.keywordAuthor | Model predictive control | -
dc.subject.keywordAuthor | Process control | -
dc.subject.keywordAuthor | Strategic/operational decision-making | -
dc.subject.keywordPlus | OPTIMIZATION | -
dc.subject.keywordPlus | UNCERTAINTY | -
dc.subject.keywordPlus | SYSTEMS | -
dc.subject.keywordPlus | FORMULATION | -
dc.subject.keywordPlus | MODEL | -
dc.subject.keywordPlus | GAME | -
dc.subject.keywordPlus | GO | -
Appears in Collection
CBE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
