Stochastic subset selection (확률론적 부분 집합 선택)

DC Field: Value
dc.contributor.advisor: Hwang, Sung Ju
dc.contributor.advisor: 황성주
dc.contributor.author: Nguyen, Anh Tuan
dc.date.accessioned: 2021-05-13T19:38:28Z
dc.date.available: 2021-05-13T19:38:28Z
dc.date.issued: 2020
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=925168&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/285007
dc.description: Master's thesis - KAIST (한국과학기술원): School of Computing, 2020.8, [iv, 17 p.]
dc.description.abstract: Current machine learning algorithms are designed to work with huge volumes of high-dimensional data such as images, yet they are increasingly deployed on resource-constrained systems such as mobile devices and embedded systems. Even when large computing infrastructure is available, the size of each data instance, as well as of entire datasets, can become a major bottleneck for data transfer across communication channels, and there are strong incentives, in both energy and monetary terms, to reduce the computational and memory requirements of these algorithms. For non-parametric models, which must leverage the stored training data at inference time, the increased cost in memory and computation can be even more problematic. In this work, we aim to reduce the volume of data these algorithms must process through an end-to-end, two-stage neural subset selection model: the first stage selects a set of candidate points using a conditionally independent Bernoulli mask, and the second performs iterative coreset selection via a conditional Categorical distribution (see the illustrative sketch after the metadata listing). The subset selection model is trained by meta-learning over a distribution of sets. We validate our method on set reconstruction and classification tasks with feature selection, as well as on the selection of representative samples from a given dataset, where it outperforms relevant baselines. Our experiments also show that our method improves the scalability of non-parametric models such as Neural Processes.
dc.language: eng
dc.publisher: 한국과학기술원 (Korea Advanced Institute of Science and Technology)
dc.subject: Set representation learning; non-parametric model; stochastic model; compression; coreset
dc.subject: stochastic; subset; compression; non-parametric; machine learning (translated from Korean)
dc.title: Stochastic subset selection
dc.title.alternative: 확률론적 부분 집합 선택 (Korean title)
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: KAIST (한국과학기술원): School of Computing
dc.contributor.alternativeauthor: 응우옌 안트완 (Nguyen Anh Tuan in Korean)
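
The abstract above describes a two-stage pipeline: a conditionally independent Bernoulli mask first proposes candidate elements, and a conditional Categorical distribution then picks a coreset iteratively. Below is a minimal PyTorch sketch of that general idea only; the network shapes, the relaxed-Bernoulli sampling, the mean-pooled conditioning context, and all names (TwoStageSubsetSelector, mask_net, score_net) are illustrative assumptions, not the thesis implementation.

import torch
import torch.nn as nn

class TwoStageSubsetSelector(nn.Module):
    """Hypothetical two-stage selector: Bernoulli mask, then iterative Categorical picks."""
    def __init__(self, dim, hidden=64, k=10, tau=0.5):
        super().__init__()
        self.k = k      # number of coreset elements to draw (assumed fixed)
        self.tau = tau  # temperature for the relaxed Bernoulli in stage 1
        # Stage 1: one logit per element -> conditionally independent Bernoulli mask.
        self.mask_net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        # Stage 2: scores each element given a summary of what was already picked.
        self.score_net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        # x: (batch, n, dim), a set of n elements per instance.
        b, n, d = x.shape
        # Stage 1: relaxed Bernoulli sample (sigmoid of logits plus logistic noise).
        logits = self.mask_net(x).squeeze(-1)                       # (b, n)
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        mask = torch.sigmoid((logits + torch.logit(u)) / self.tau)  # soft mask in (0, 1)
        # Stage 2: draw k elements one at a time from a conditional Categorical,
        # conditioning each step on the mean of the elements selected so far.
        context = torch.zeros(b, d, device=x.device)
        selected = torch.zeros(b, n, device=x.device)
        for _ in range(self.k):
            pair = torch.cat([x, context.unsqueeze(1).expand(-1, n, -1)], dim=-1)
            scores = self.score_net(pair).squeeze(-1)               # (b, n)
            # Down-weight elements the stage-1 mask rejected; forbid repeats.
            scores = scores + torch.log(mask + 1e-6) - selected * 1e9
            probs = torch.softmax(scores, dim=-1)                   # conditional Categorical
            idx = torch.multinomial(probs, 1)                       # (b, 1)
            selected = selected.scatter(1, idx, 1.0)
            context = (x * selected.unsqueeze(-1)).sum(1) / selected.sum(1, keepdim=True)
        return selected, mask                                       # hard picks + soft mask

Note that the hard multinomial draw in stage 2 is not differentiable as written; an actual training setup would need a relaxed Categorical (e.g. Gumbel-softmax) or a score-function estimator, and the meta-learning objective over a distribution of sets mentioned in the abstract is not modeled here.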
Appears in Collection
CS-Theses_Master (석사논문, Master's theses)
Files in This Item
There are no files associated with this item.
