OmniDRL: An Energy-Efficient Deep Reinforcement Learning Processor With Dual-Mode Weight Compression and Sparse Weight Transposer

DC Field | Value | Language
dc.contributor.author | Lee, Juhyoung | ko
dc.contributor.author | Kim, Sangyeob | ko
dc.contributor.author | Kim, Sangjin | ko
dc.contributor.author | Jo, Wooyoung | ko
dc.contributor.author | Kim, Ji-Hoon | ko
dc.contributor.author | Han, Donghyeon | ko
dc.contributor.author | Yoo, Hoi-Jun | ko
dc.date.accessioned | 2022-04-13T06:49:42Z | -
dc.date.available | 2022-04-13T06:49:42Z | -
dc.date.created | 2022-02-06 | -
dc.date.issued | 2022-04 | -
dc.identifier.citation | IEEE JOURNAL OF SOLID-STATE CIRCUITS, v.57, no.4, pp.999 - 1012 | -
dc.identifier.issn | 0018-9200 | -
dc.identifier.uri | http://hdl.handle.net/10203/292574 | -
dc.description.abstract | In this article, we present an energy-efficient deep reinforcement learning (DRL) processor, OmniDRL, for DRL training on edge devices. The need for on-device DRL training is growing because DRL can adapt a policy to each individual user; however, the massive amount of external and internal memory access limits DRL training on resource-constrained platforms. OmniDRL proposes four key features that reduce external memory access by compressing as much data as possible and reduce internal memory access by processing compressed data directly. Group-sparse training (GST) enables a high weight compression ratio (CR) at every DRL iteration through selective use of weight grouping and weight pruning. A group-sparse training core fully exploits the compressed weights from GST by skipping redundant operations and reusing duplicated data. An exponent-mean-delta encoding additionally compresses the exponents of both weights and feature maps for a higher CR and lower memory power consumption. A world-first on-chip sparse weight transposer enables DRL training on compressed weights without an off-chip transposer. OmniDRL is fabricated in a 28-nm CMOS technology and occupies a 3.6 × 3.6 mm² die. It shows a state-of-the-art peak performance of 4.18 TFLOPS and a peak energy efficiency of 29.3 TFLOPS/W, and it achieves 7.42 TFLOPS/W when training a robot agent (MuJoCo HalfCheetah, TD3), 2.4× higher than the previous state of the art. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | OmniDRL: An Energy-Efficient Deep Reinforcement Learning Processor With Dual-Mode Weight Compression and Sparse Weight Transposer | -
dc.type | Article | -
dc.identifier.wosid | 000745482800001 | -
dc.identifier.scopusid | 2-s2.0-85122858788 | -
dc.type.rims | ART | -
dc.citation.volume | 57 | -
dc.citation.issue | 4 | -
dc.citation.beginningpage | 999 | -
dc.citation.endingpage | 1012 | -
dc.citation.publicationname | IEEE JOURNAL OF SOLID-STATE CIRCUITS | -
dc.identifier.doi | 10.1109/JSSC.2021.3138520 | -
dc.contributor.localauthor | Yoo, Hoi-Jun | -
dc.contributor.nonIdAuthor | Jo, Wooyoung | -
dc.contributor.nonIdAuthor | Kim, Ji-Hoon | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Training | -
dc.subject.keywordAuthor | Memory management | -
dc.subject.keywordAuthor | Reinforcement learning | -
dc.subject.keywordAuthor | Power demand | -
dc.subject.keywordAuthor | Task analysis | -
dc.subject.keywordAuthor | Computational modeling | -
dc.subject.keywordAuthor | Bandwidth | -
dc.subject.keywordAuthor | Data compression | -
dc.subject.keywordAuthor | deep reinforcement learning (DRL) | -
dc.subject.keywordAuthor | energy-efficient deep neural network (DNN) application-specific integrated circuit (ASIC) | -
dc.subject.keywordAuthor | structured weight | -
dc.subject.keywordAuthor | transposer | -
dc.subject.keywordAuthor | weight pruning | -
dc.subject.keywordPlus | LEVEL | -
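Two of the techniques named in the abstract above are concrete enough to sketch. First, the weight-grouping/pruning step behind GST: the minimal NumPy sketch below prunes whole groups of consecutive weights by L2 norm, so the surviving nonzeros share a group structure that hardware can skip and reuse. The group size, keep ratio, and helper name (group_prune) are illustrative assumptions; the paper's GST additionally selects between grouping and pruning per DRL iteration, which is not modeled here.

import numpy as np

def group_prune(w, group=4, keep_ratio=0.25):
    """Keep only the strongest weight groups in each row (a sketch).
    w: 2-D weight matrix whose column count is divisible by `group`.
    Keeps the top `keep_ratio` fraction of groups per row by L2 norm
    and zeroes the rest, yielding group-structured sparsity."""
    rows, cols = w.shape
    g = w.reshape(rows, cols // group, group)          # split each row into groups
    norms = np.linalg.norm(g, axis=2)                  # one L2 norm per group
    k = max(1, int(round(keep_ratio * norms.shape[1])))
    kth = np.partition(norms, -k, axis=1)[:, -k]       # k-th largest norm per row
    mask = norms >= kth[:, None]                       # groups that survive
    return (g * mask[:, :, None]).reshape(rows, cols)

Second, exponent-mean-delta encoding: weights (and feature maps) within a block tend to cluster in magnitude, so their IEEE-754 exponent fields cluster around a mean, and storing one shared mean plus a small signed delta per value needs far fewer bits than the full 8-bit exponent. The block granularity, delta width, and absence of an escape code for outliers below are assumptions for illustration, not the chip's actual bitstream format.

import struct

def fp32_exponent(x):
    """Return the 8-bit IEEE-754 exponent field of a float, viewed as FP32."""
    return (struct.unpack("<I", struct.pack("<f", x))[0] >> 23) & 0xFF

def emd_encode(block):
    """Encode a block's exponents as one mean plus per-value deltas."""
    exps = [fp32_exponent(x) for x in block]
    mean = round(sum(exps) / len(exps))
    return mean, [e - mean for e in exps]

def emd_decode(mean, deltas):
    """Recover the original exponent fields."""
    return [mean + d for d in deltas]

# Weights of similar magnitude produce deltas that fit in ~2 bits each.
w = [0.031, 0.052, -0.044, 0.028]
mean, deltas = emd_encode(w)        # mean = 122, deltas = [-1, 0, 0, -1]
assert emd_decode(mean, deltas) == [fp32_exponent(x) for x in w]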
Appears in Collection
EE-Journal Papers (저널논문)
Files in This Item
There are no files associated with this item.
