Deep neural network pruning for self-supervised transfer learning

Recent advances in self-supervised learning (SSL) show promising results on various downstream tasks, producing visual representations on par with or better than supervised ones, but at the cost of extensive pretraining and high model complexity. One way to address these costs is to sparsify the network before training via Pruning-at-Initialization (PaI), which aims to downsize neural networks without a significant loss of accuracy relative to dense, overparameterized models. However, our understanding of PaI methods is limited to supervised learning, where models learn from large amounts of carefully labeled data. In this work, we first investigate how sparse networks obtained from the criteria of different PaI methods perform under self-supervised pretraining, and how comparable they are to the supervised setup. We then find that sparse networks trained with self-supervised frameworks outperform their supervised counterparts both quantitatively and qualitatively on various downstream tasks, especially in transfer learning.
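
To make the PaI step concrete, below is a minimal, hypothetical PyTorch sketch of one such pruning criterion (a SNIP-style connection saliency |dL/dw * w| computed at initialization). It is shown only for illustration: the thesis compares several PaI criteria, and the toy model, loss, and sparsity level here are placeholders rather than the thesis's actual setup.

# Minimal sketch of Pruning-at-Initialization (PaI) with a SNIP-style criterion.
# Hypothetical example: model, loss, and sparsity level are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pai_masks(model: nn.Module, inputs, targets, sparsity: float = 0.9):
    """Score each weight at initialization and keep the top (1 - sparsity) fraction."""
    model.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)
    loss.backward()

    scores = {}
    for name, p in model.named_parameters():
        if p.dim() > 1 and p.grad is not None:            # score weight matrices/convs, skip biases
            scores[name] = (p.grad * p).abs().detach()    # SNIP saliency |dL/dw * w|

    all_scores = torch.cat([s.flatten() for s in scores.values()])
    k = int((1.0 - sparsity) * all_scores.numel())        # number of weights to keep
    threshold = torch.topk(all_scores, k, largest=True).values.min()

    # Binary masks: 1 keeps a weight, 0 prunes it; applied as (w * mask) before each forward pass.
    return {name: (s >= threshold).float() for name, s in scores.items()}

if __name__ == "__main__":
    net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256), nn.ReLU(), nn.Linear(256, 10))
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    masks = pai_masks(net, x, y, sparsity=0.9)
    kept = sum(int(m.sum()) for m in masks.values())
    total = sum(m.numel() for m in masks.values())
    print(f"kept {kept}/{total} weights ({kept / total:.1%})")

The resulting sparse network would then be pretrained with a self-supervised objective instead of the labeled loss used here for scoring; label-free PaI criteria are equally possible and are among the variants such a comparison would cover.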
Advisors
Yoo, Changdong
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2023.2, [vi, 40 p.]

Keywords

Self-supervised learning; Network pruning; Pruning-at-initialization; Unsupervised representation learning; Transfer learning; Computer vision

URI
http://hdl.handle.net/10203/309976
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032939&flag=dissertation
Appears in Collection
EE-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
