A deep reinforcement learning framework for optimizing fuel economy of hybrid electric vehicles

Cited 32 times in Web of Science · Cited 27 times in Scopus
  • Hit : 167
  • Download : 0
DC Field | Value | Language
dc.contributor.author | Zhao, Pu | ko
dc.contributor.author | Wang, Yanzhi | ko
dc.contributor.author | Chang, Naehyuck | ko
dc.contributor.author | Zhu, Qi | ko
dc.contributor.author | Lin, Xue | ko
dc.date.accessioned | 2020-03-19T03:32:03Z | -
dc.date.available | 2020-03-19T03:32:03Z | -
dc.date.created | 2019-11-19 | -
dc.date.issued | 2018-01-22 | -
dc.identifier.citation | 2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC), pp.196 - 202 | -
dc.identifier.uri | http://hdl.handle.net/10203/272917 | -
dc.description.abstract | Hybrid electric vehicles (HEVs) employ a hybrid propulsion system to combine the energy efficiency of an electric motor (EM) with the long driving range of an internal combustion engine (ICE), thereby achieving higher fuel economy as well as greater convenience compared with conventional ICE vehicles. However, the relatively complicated powertrain structures of HEVs necessitate an effective power management policy to determine the power split between the ICE and the EM. In this work, we propose a deep reinforcement learning (DRL) framework for HEV power management with the aim of improving fuel economy. The DRL technique comprises an offline deep neural network construction phase and an online deep Q-learning phase. Unlike traditional reinforcement learning, DRL can handle the high-dimensional state and action spaces of the actual decision-making process, making it suitable for the HEV power management problem. Enabled by the DRL technique, the derived HEV power management policy is close to optimal, fully model-free, and independent of prior knowledge of driving cycles. Simulation results based on an actual vehicle setup over real-world and testing driving cycles demonstrate the effectiveness of the proposed framework in optimizing HEV fuel economy. | -
dc.language | English | -
dc.publisher | Asia and South Pacific Design Automation Conference (ASP-DAC) | -
dc.title | A deep reinforcement learning framework for optimizing fuel economy of hybrid electric vehicles | -
dc.type | Conference | -
dc.identifier.wosid | 000426987100032 | -
dc.identifier.scopusid | 2-s2.0-85045336796 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 196 | -
dc.citation.endingpage | 202 | -
dc.citation.publicationname | 2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC) | -
dc.identifier.conferencecountry | KO | -
dc.identifier.conferencelocation | International Convention Center, Jeju | -
dc.identifier.doi | 10.1109/ASPDAC.2018.8297305 | -
dc.contributor.localauthor | Chang, Naehyuck | -
dc.contributor.nonIdAuthor | Zhao, Pu | -
dc.contributor.nonIdAuthor | Wang, Yanzhi | -
dc.contributor.nonIdAuthor | Zhu, Qi | -
dc.contributor.nonIdAuthor | Lin, Xue | -
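The abstract describes an online deep Q-learning phase in which an agent learns an ICE/EM power split. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation: the state variables (battery state of charge, power demand, vehicle speed), the discrete power-split actions, the fuel-cost reward proxy, the small one-hidden-layer Q-network, and all hyperparameter values are invented here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 5      # discrete power-split ratios: 0%, 25%, ..., 100% from the ICE
STATE_DIM = 3      # (SoC, power demand, speed), normalized to [0, 1]
HIDDEN = 16        # hidden units; a stand-in for the paper's deeper network
GAMMA = 0.95       # discount factor
ALPHA = 0.01       # learning rate
EPSILON = 0.1      # exploration probability

# One-hidden-layer Q-network: state -> Q-value per power-split action.
W1 = rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))

def q_values(state):
    """Forward pass; returns Q-values and the hidden activations."""
    h = np.tanh(state @ W1)
    return h @ W2, h

def select_action(state):
    """Epsilon-greedy choice over the discrete power-split actions."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    q, _ = q_values(state)
    return int(np.argmax(q))

def td_update(state, action, reward, next_state):
    """One semi-gradient deep Q-learning step toward the Bellman target."""
    global W1, W2
    q, h = q_values(state)
    q_next, _ = q_values(next_state)
    target = reward + GAMMA * np.max(q_next)
    td_error = target - q[action]
    # Backpropagate through both layers for the chosen action only.
    grad_W2 = np.outer(h, np.eye(N_ACTIONS)[action])
    grad_h = W2[:, action] * (1.0 - h ** 2)
    grad_W1 = np.outer(state, grad_h)
    W2 += ALPHA * td_error * grad_W2
    W1 += ALPHA * td_error * grad_W1
    return td_error

# Toy rollout with invented dynamics: the reward penalizes burning fuel
# (high ICE share) more heavily when power demand is low.
state = rng.random(STATE_DIM)
for step in range(200):
    action = select_action(state)
    ice_share = action / (N_ACTIONS - 1)
    reward = -ice_share * (1.0 - state[1])   # invented fuel-cost proxy
    next_state = rng.random(STATE_DIM)       # invented state transition
    td_update(state, action, reward, next_state)
    state = next_state
```

The model-free character noted in the abstract shows up here in that `td_update` never consults a powertrain model: it learns only from observed (state, action, reward, next state) tuples, which is what makes the approach independent of prior knowledge of the driving cycle.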
Appears in Collection: EE-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.
