Sparsity-aware autoencoder-based embedding for graph-structured data

DC Field: Value (Language)
dc.contributor.advisor: Lee, Jae-Gil
dc.contributor.advisor: 이재길
dc.contributor.author: Park, Dongmin
dc.date.accessioned: 2021-05-12T19:35:46Z
dc.date.available: 2021-05-12T19:35:46Z
dc.date.issued: 2020
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=910740&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/283955
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): Graduate School of Knowledge Service Engineering, 2020.2, [v, 37 p.]
dc.description.abstract: Finding low-dimensional embeddings of sparse, high-dimensional data objects is important in various fields such as recommendation, graph mining, and natural language processing. Recently, autoencoder (AE)-based embedding approaches have achieved state-of-the-art performance in many tasks, especially in top-$K$ recommendation with user embeddings and node classification with node embeddings. However, we find that because many real-world datasets follow a power law with respect to object sparsity, AE-based embedding suffers severely from a problem we call polarization: dense data objects move away from sparse ones in the embedding space even when they are highly correlated. In this paper, we propose TRAP, which leverages two-level regularizers to effectively alleviate this problem. (i) The "macroscopic regularizer" adds a regularization term to the loss function that, in general, prevents dense input objects from becoming distant from sparse input objects. (ii) The "microscopic regularizer" introduces a new object-wise parameter that individually pulls each object toward its correlated neighbors rather than toward uncorrelated ones. Importantly, TRAP is a meta-algorithm that can easily be coupled with existing AE-based embedding methods through a simple modification (an illustrative sketch of a regularized AE loss appears after the metadata listing below). In extensive experiments on two representative embedding tasks using six real-world datasets, TRAP boosted the performance of state-of-the-art algorithms by up to 31.53% and 94.99%, respectively.
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: Power-law distribution; Low-dimensional embedding; Autoencoder; Recommender system; Graph mining
dc.title: Sparsity-aware autoencoder-based embedding for graph-structured data
dc.title.alternative: 그래프 구조 데이터를 위한 희박성 인지 오토인코더 기반 임베딩
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST): Graduate School of Knowledge Service Engineering
dc.contributor.alternativeauthor: 박동민
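
Illustrative sketch (not part of the record): the abstract describes TRAP's macroscopic regularizer as a loss term that keeps dense objects from drifting away from sparse ones in the embedding space. The minimal PyTorch sketch below shows one way a density-weighted drift penalty could be attached to a plain autoencoder loss. The model architecture, the centroid-drift penalty, and the lambda_reg weight are assumptions made for illustration only; the thesis defines TRAP's actual formulation.

    # A minimal sketch, assuming a density-weighted centroid-drift penalty;
    # this is NOT the thesis's exact loss, only an illustration of the idea.
    import torch
    import torch.nn as nn

    class RegularizedAE(nn.Module):
        def __init__(self, n_items, dim=64):
            super().__init__()
            self.encoder = nn.Linear(n_items, dim)
            self.decoder = nn.Linear(dim, n_items)

        def forward(self, x):
            z = torch.tanh(self.encoder(x))  # low-dimensional embedding
            return self.decoder(z), z

    def loss_fn(x, x_hat, z, lambda_reg=0.1):  # lambda_reg is an assumed hyperparameter
        recon = ((x_hat - x) ** 2).sum(dim=1)            # per-object reconstruction error
        density = x.sum(dim=1)                           # nonzero mass ~ object density
        weight = density / density.max().clamp(min=1.0)  # denser objects get a larger penalty
        drift = ((z - z.mean(dim=0)) ** 2).sum(dim=1)    # squared distance from embedding centroid
        return (recon + lambda_reg * weight * drift).mean()

    # Toy usage on a sparse binary interaction matrix (e.g., user-item).
    torch.manual_seed(0)
    x = (torch.rand(32, 100) < 0.05).float()
    model = RegularizedAE(n_items=100)
    x_hat, z = model(x)
    loss = loss_fn(x, x_hat, z)
    loss.backward()

The design choice mirrors the abstract's description: the penalty scales with each object's density, so dense objects are discouraged from moving far from the bulk of (mostly sparse) objects, while sparse objects are left largely unconstrained.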
Appears in Collection
KSE-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
