Robust unsupervised representation learning for relational data

Many real-world data are relational. Examples include collaborations among researchers, network traffic, and online social interactions. Such relational data are modeled in multiple forms, including graphs, tensors, and hypergraphs. Recently, representation learning on relational data has drawn considerable attention due to its superiority in various machine learning tasks. Representation learning embeds entities (e.g., users, items, vertices, and relationships) into a low-dimensional vector space while preserving their relational and contextual information, yielding representations that are meaningful and useful for various purposes. For example, tensor decomposition extracts the underlying latent structure of a given tensor into a low-dimensional vector space, and (hyper)graph neural networks extract low-dimensional vectors for the nodes and (hyper)edges of a given (hyper)graph.

Many studies on representation learning have shown state-of-the-art performance on clean, refined data. However, real-world relational data are often incomplete, with missing observations caused by unintended problems, and they are easily corrupted by natural or adversarial outliers arising from unexpected events during data collection. Although recent representation learning models have demonstrated their superiority, many of them are vulnerable to such noise. Given noisy relational data, how can we design representation learning approaches that remain robust?

This thesis focuses on developing robust unsupervised representation learning models for three target scenarios, ordered from simple to complex: (1) a robust linear model against random noise, (2) a robust non-linear model against random noise, and (3) a robust non-linear model against adversarial noise. First, we propose a robust linear representation model against random noise (e.g., data input corruption).
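As background for the first scenario, a plain (non-robust) CP decomposition fitted by alternating least squares — the kind of linear factorization such methods build on — can be sketched as follows. The rank `R`, the function names, and the ALS update form are generic textbook choices for illustration, not the thesis's method, which additionally handles outliers, missing entries, and streams.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product: rows indexed by (j, k), k fastest."""
    J, R = B.shape
    K, _ = C.shape
    return np.einsum('jr,kr->jkr', B, C).reshape(J * K, R)

def unfold(X, mode):
    """Mode-n matricization (C-order: trailing indices vary fastest)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def cp_als(X, R, n_iters=50, seed=0):
    """Rank-R CP decomposition of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    for _ in range(n_iters):
        # each update is an exact least-squares solve for one factor,
        # using (KR^T KR) = (B^T B) * (C^T C) (elementwise product)
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

def reconstruct(A, B, C):
    """Rebuild the tensor from its factor matrices."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

Each factor's rows are exactly the low-dimensional entity embeddings the abstract refers to; a robust variant must additionally down-weight or remove outlying entries before these least-squares solves trust them.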
Specifically, we develop a robust tensor factorization method that integrates tensor factorization, outlier removal, and temporal-pattern detection smoothly and tightly. The method is designed to handle tensor streams and can impute missing entries, detect outliers, and predict future entries accurately in an online manner. Second, we propose a robust non-linear representation model against random noise (e.g., data input and label corruption). In particular, we develop a hypergraph contrastive learning approach that exploits a novel contrastive loss fully utilizing the constituents of hypergraphs (i.e., nodes, hyperedges, and memberships). We demonstrate the robustness of this method in various noisy situations. Lastly, we propose a robust non-linear representation model against adversarial noise (a.k.a. adversarial attacks). Specifically, we introduce a way to make temporal graph neural networks robust to noisy interactions (i.e., edge streams). To verify its robustness in harsh settings, we also propose a simple and effective adversarial attack that generates more detrimental noise than randomly generated noise.
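To illustrate the contrastive-learning ingredient of the second scenario, a standard InfoNCE-style loss over two augmented views of node embeddings can be sketched as below. The temperature `tau` and the node-only positives are generic illustrative assumptions; the thesis's actual loss additionally exploits hyperedges and node–hyperedge memberships.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss: row i of z1 and row i of z2 are a positive pair
    (the same node seen under two augmented views); all other rows
    of z2 act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # pairwise cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stabilization
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal
```

The loss is small when each node's two views are more similar to each other than to other nodes' views; corrupting the positive pairing (e.g., misaligning the views) drives it up, which is the property contrastive objectives exploit for robustness to input noise.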
Advisors
신기정 (Kijung Shin)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2024
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology : School of Electrical Engineering, 2024.2, [v, 93 p.]

Keywords

Relational data; tensors; graphs; hypergraphs; graph streams; representation learning; tensor factorization; graph neural networks; hypergraph neural networks; temporal graph neural networks; robustness; contrastive learning; adversarial learning

URI
http://hdl.handle.net/10203/322160
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1100066&flag=dissertation
Appears in Collection
EE-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
