DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kim, Changick | - |
dc.contributor.advisor | 김창익 | - |
dc.contributor.author | Park, Keunchul | - |
dc.date.accessioned | 2022-04-27T19:31:06Z | - |
dc.date.available | 2022-04-27T19:31:06Z | - |
dc.date.issued | 2021 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=963408&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/295962 | - |
dc.description | Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2021.8, [iii, 25 p.] | - |
dc.description.abstract | The key challenge of few-shot learning is to recognize novel classes from only a few examples. Most existing few-shot learning models represent a class as a single prototype when training a model that can adapt to novel classes. However, these methods force the model to ignore the detailed characteristics of each class. In this paper, we propose MPLNet, which task-adaptively divides a class into sub-classes to account for its detailed features. Our key idea is to use the prototype of each sub-class as a class prototype, so that a class is represented by multiple prototypes, and to exploit unlabeled data in addition to labeled data. To extract information from the unlabeled data, we introduce a pseudo-labeling scheme for multimodal distributions that assigns a pseudo-label to each unlabeled example. Experimental results on miniImageNet and tieredImageNet show that our method is comparable to, and in some settings outperforms, state-of-the-art methods. (An illustrative sketch of the multi-prototype idea follows the table.) | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 (KAIST) | - |
dc.subject | Few-shot learning; Prototype; Transductive; Pseudo-labeling; Multimodal distribution | - |
dc.subject | 퓨샷 러닝 (few-shot learning); 원형 (prototype); Transductive; 유사-레이블링 (pseudo-labeling); 다봉분포 (multimodal distribution) | - |
dc.title | Task-adaptive class division for transductive few-shot learning | - |
dc.title.alternative | Transductive 퓨샷 러닝을 위한 과제 적응형 집단 분할 기법 | - |
dc.type | Thesis (Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 (KAIST): School of Electrical Engineering | - |
dc.contributor.alternativeauthor | 박근철 | - |
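
The abstract's two core ideas are representing each class by several sub-class prototypes and pseudo-labeling unlabeled data against that multimodal prototype set. The sketch below is a minimal illustration under assumed details only: it uses plain k-means as the sub-class splitter and a softmax over negative distances as the pseudo-labeler, and the names `sub_class_prototypes` and `pseudo_label` are hypothetical. MPLNet's actual procedure is defined in the thesis itself, not reproduced here.

```python
# Hedged sketch of the multi-prototype + pseudo-labeling idea from the abstract.
# All helper names are hypothetical; this is not the thesis implementation.
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means used to split one class's support embeddings into sub-classes."""
    k = min(k, len(X))  # a 1-shot class cannot be split further
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)  # (n, k)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = X[assign == j]
            if len(members) > 0:
                centers[j] = members.mean(axis=0)
    return centers

def sub_class_prototypes(support, labels, k=2):
    """Represent each class by up to k sub-class prototypes instead of a single mean."""
    protos, proto_labels = [], []
    for c in np.unique(labels):
        centers = kmeans(support[labels == c], k)
        protos.append(centers)
        proto_labels.extend([c] * len(centers))
    return np.concatenate(protos, axis=0), np.array(proto_labels)

def pseudo_label(unlabeled, protos, proto_labels, temperature=1.0):
    """Assign each unlabeled embedding the class of its nearest prototype,
    with a softmax confidence computed over the (multimodal) prototype set."""
    dists = ((unlabeled[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    logits = -dists / temperature
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    idx = probs.argmax(axis=1)
    return proto_labels[idx], probs[np.arange(len(idx)), idx]

# Toy 5-way 5-shot episode with random stand-in "embeddings".
rng = np.random.default_rng(1)
support = rng.normal(size=(25, 64))
labels = np.repeat(np.arange(5), 5)
unlabeled = rng.normal(size=(30, 64))

protos, proto_labels = sub_class_prototypes(support, labels, k=2)
pl, conf = pseudo_label(unlabeled, protos, proto_labels)
print(protos.shape, pl[:5], conf[:5].round(2))
```

Returning a confidence score alongside each pseudo-label makes it easy to keep only high-confidence unlabeled examples when refining the prototypes, which is one common transductive refinement pattern; whether MPLNet filters this way is not stated in the abstract.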