Reinforcement learning-based mixed-precision quantization for lightweight deep neural networks

DC Field | Value | Language
dc.contributor.advisor | Kim, Changick | -
dc.contributor.advisor | 김창익 | -
dc.contributor.author | Jung, Juri | -
dc.date.accessioned | 2022-04-27T19:31:06Z | -
dc.date.available | 2022-04-27T19:31:06Z | -
dc.date.issued | 2021 | -
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=948978&flag=dissertation | en_US
dc.identifier.uri | http://hdl.handle.net/10203/295961 | -
dc.description | Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST) : School of Electrical Engineering, 2021.2, [iv, 40 p.] | -
dc.description.abstract | Network quantization has been widely studied as a way to compress deep neural networks for deployment on mobile devices. Conventional methods quantize the parameters of every layer at the same fixed precision, regardless of how many parameters each layer holds. However, quantizing the weights of layers with many parameters is more effective at reducing model size. Accordingly, we propose a novel mixed-precision quantization method based on reinforcement learning. Specifically, we use the number of parameters in each layer as a prior for our framework. With the accuracy and the bit-width as the reward, the framework determines an optimal quantization policy for each layer. Applying this policy sequentially, we achieve a weighted average of 2.97 bits for the VGG-16 model on the CIFAR-10 dataset with no accuracy degradation compared with the full-precision baseline. We also show that our framework provides optimal quantization policies for VGG-Net and ResNet, minimizing storage while preserving accuracy. (An illustrative sketch of the parameter-weighted reward follows this record.) | -
dc.language | eng | -
dc.publisher | Korea Advanced Institute of Science and Technology (KAIST) | -
dc.subject | Deep neural network; Reinforcement learning; Model compression; Quantization; Embedded system | -
dc.subject | 심층신경망; 강화학습; 모델압축; 양자화; 임베디드 시스템 | -
dc.title | Reinforcement learning-based mixed-precision quantization for lightweight deep neural networks | -
dc.title.alternative | 경량 심층신경망을 위한 강화학습 기반 혼합정밀도 양자화 | -
dc.type | Thesis (Master) | -
dc.identifier.CNRN | 325007 | -
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST) : School of Electrical Engineering | -
dc.contributor.alternativeauthor | 정주리 | -
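
The abstract describes the mechanism only at a high level: per-layer parameter counts act as a prior, and accuracy together with bit-width forms the reward that drives a per-layer bit-width policy. No code accompanies this record, so the Python sketch below is a hypothetical illustration of that reward formulation; the layer sizes, candidate bit-widths, the trade-off coefficient lam, the fake evaluate() accuracy model, and the random-search stand-in for the RL agent are all assumptions, not the thesis's actual implementation.

    import random

    # Hypothetical per-layer parameter counts (a small VGG-like model).
    param_counts = [1728, 36864, 73728, 147456, 294912]
    bit_choices = [2, 3, 4, 6, 8]  # assumed candidate bit-widths per layer

    def weighted_avg_bits(policy):
        # Weighted-average bit-width, weighting each layer by its parameter
        # count (the per-layer "prior" the abstract mentions).
        total = sum(param_counts)
        return sum(b * n for b, n in zip(policy, param_counts)) / total

    def reward(accuracy, policy, lam=0.1):
        # Toy reward: favor accuracy, penalize average bit-width. The exact
        # reward in the thesis is not given here; lam is an assumed weight.
        return accuracy - lam * weighted_avg_bits(policy)

    def evaluate(policy):
        # Stand-in for "quantize, then measure validation accuracy": a fake
        # accuracy that decays as the average bit-width shrinks, only so the
        # sketch runs end to end.
        return 0.93 - 0.01 * max(0.0, 4.0 - weighted_avg_bits(policy))

    # Random search stands in for the reinforcement learning agent.
    best_policy, best_r = None, float("-inf")
    for _ in range(1000):
        policy = [random.choice(bit_choices) for _ in param_counts]
        r = reward(evaluate(policy), policy)
        if r > best_r:
            best_policy, best_r = policy, r

    print("per-layer bits:", best_policy)
    print("weighted-average bits:", round(weighted_avg_bits(best_policy), 2))

With these made-up numbers the search collapses toward the lowest bit-widths, because the fake accuracy penalty is milder than the bit-width penalty; in the actual framework the accuracy term comes from validating the quantized network, which is what keeps the policy from degenerating.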
Appears in Collection
EE-Theses_Master (석사논문, Master's theses)
Files in This Item
There are no files associated with this item.
