Algorithm for Autonomous Power-Increase Operation Using Deep Reinforcement Learning and a Rule-Based System

Cited 31 times in Web of Science; cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Lee Daeil | ko
dc.contributor.author | Arigi Awwal Mohammed | ko
dc.contributor.author | Kim Jonghyun | ko
dc.date.accessioned | 2024-03-06T11:00:16Z | -
dc.date.available | 2024-03-06T11:00:16Z | -
dc.date.created | 2024-03-06 | -
dc.date.issued | 2020 | -
dc.identifier.citation | IEEE ACCESS, v.8, pp.196727 - 196746 | -
dc.identifier.issn | 2169-3536 | -
dc.identifier.uri | http://hdl.handle.net/10203/318444 | -
dc.description.abstract | The power start-up operation of a nuclear power plant (NPP) increases the reactor power to the full-power condition for electricity generation. Compared to full-power operation, the power-increase operation requires significantly more decision-making and therefore increases the potential for human errors. While previous studies have investigated the use of artificial intelligence (AI) techniques for NPP control, none of them have addressed making the relatively complicated power-increase operation fully autonomous. This study focused on developing an algorithm for converting all the currently manual activities in the NPP power-increase process to autonomous operations. An asynchronous advantage actor-critic, which is a type of deep reinforcement learning method, and a long short-term memory network were applied to the operator tasks for which establishing clear rules or logic was challenging, while a rule-based system was developed for those actions that could be described by simple logic (such as if-then logic). The proposed autonomous power-increase control algorithm was trained and validated using a compact nuclear simulator (CNS). The simulation results were used to evaluate the algorithm's ability to control the parameters within allowable limits, and the proposed power-increase control algorithm was proven capable of identifying an acceptable operation path for increasing the reactor power from 2% to 100% at a specified rate of power increase. In addition, the pattern of operation that resulted from the autonomous control simulation was found to be identical to that of the established operation strategy. These results demonstrate the potential feasibility of fully autonomous control of the NPP power-increase operation. | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Algorithm for Autonomous Power-Increase Operation Using Deep Reinforcement Learning and a Rule-Based System | -
dc.type | Article | -
dc.identifier.wosid | 000589776200001 | -
dc.identifier.scopusid | 2-s2.0-85096334589 | -
dc.type.rims | ART | -
dc.citation.volume | 8 | -
dc.citation.beginningpage | 196727 | -
dc.citation.endingpage | 196746 | -
dc.citation.publicationname | IEEE ACCESS | -
dc.identifier.doi | 10.1109/ACCESS.2020.3034218 | -
dc.contributor.localauthor | Kim Jonghyun | -
dc.contributor.nonIdAuthor | Lee Daeil | -
dc.contributor.nonIdAuthor | Arigi Awwal Mohammed | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Inductors | -
dc.subject.keywordAuthor | Task analysis | -
dc.subject.keywordAuthor | Reinforcement learning | -
dc.subject.keywordAuthor | Neural networks | -
dc.subject.keywordAuthor | Automation | -
dc.subject.keywordAuthor | Control systems | -
dc.subject.keywordAuthor | Nuclear power plant | -
dc.subject.keywordAuthor | autonomous operation | -
dc.subject.keywordAuthor | power-increase operation | -
dc.subject.keywordAuthor | reinforcement learning | -
dc.subject.keywordAuthor | asynchronous advantage actor-critic | -
dc.subject.keywordPlus | NUCLEAR-REACTOR CORE | -
dc.subject.keywordPlus | NEURAL-NETWORK | -
dc.subject.keywordPlus | LEVEL CONTROL | -
dc.subject.keywordPlus | CONTROLLER | -
dc.subject.keywordPlus | PLANT | -
dc.subject.keywordPlus | DESIGN | -
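The abstract describes a hybrid architecture: if-then rules handle the operator actions that can be stated as simple logic, while a deep reinforcement learning policy (A3C with an LSTM network in the paper) handles the tasks for which clear rules are hard to write. The following is a minimal, hypothetical Python sketch of that dispatch idea only; all names, thresholds, and the stubbed policy are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of the hybrid rule-based / learned-policy dispatch
# described in the abstract. Thresholds and action names are invented;
# the learned policy is a stub standing in for the paper's A3C/LSTM agent.
from dataclasses import dataclass


@dataclass
class PlantState:
    power_pct: float    # current reactor power, % of full power
    target_rate: float  # demanded rate of power increase, %/min


def rule_based_action(state: PlantState) -> str:
    """Actions describable by simple if-then logic (illustrative)."""
    if state.power_pct < 2.0:
        return "hold"                 # below the start-up band
    if state.power_pct >= 100.0:
        return "maintain_full_power"  # target reached
    return "delegate"                 # no simple rule applies


def learned_policy(state: PlantState) -> str:
    """Placeholder for the trained policy: a real implementation would
    map the state (and its history, via an LSTM) to a control action."""
    return "withdraw_control_rods" if state.target_rate > 0 else "hold"


def autonomous_step(state: PlantState) -> str:
    """Route the state to the rules first, then to the policy if needed."""
    action = rule_based_action(state)
    return learned_policy(state) if action == "delegate" else action


print(autonomous_step(PlantState(power_pct=50.0, target_rate=1.0)))
# → withdraw_control_rods
```

The point of the split, as the abstract frames it, is that only the genuinely hard decisions are delegated to the learned component, keeping the simple ones auditable as explicit rules.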
Appears in Collection
NE - Journal Papers (저널논문)
Files in This Item
There are no files associated with this item.