I’m Me, We’re Us, and I’m Us: Tri-directional Contrastive Learning on Hypergraphs

DC Field: Value (Language)
dc.contributor.author: Lee, Dongjin (ko)
dc.contributor.author: Shin, Kijung (ko)
dc.date.accessioned: 2023-12-08T03:02:00Z
dc.date.available: 2023-12-08T03:02:00Z
dc.date.created: 2023-12-08
dc.date.issued: 2023-02-09
dc.identifier.citation: 37th AAAI Conference on Artificial Intelligence, AAAI 2023, pp. 8456-8464
dc.identifier.uri: http://hdl.handle.net/10203/316064
dc.description.abstract: Although machine learning on hypergraphs has attracted considerable attention, most existing work has focused on (semi-)supervised learning, which can incur heavy labeling costs and lead to poor generalization. Recently, contrastive learning has emerged as a successful unsupervised representation learning method. Despite its prosperous development in other domains, contrastive learning on hypergraphs remains little explored. In this paper, we propose TriCL (Tri-directional Contrastive Learning), a general framework for contrastive learning on hypergraphs. Its main idea is tri-directional contrast: across two augmented views, it maximizes the agreement (a) between the embeddings of the same node, (b) between the embeddings of the same group of nodes, and (c) between each group and its member nodes. Together with simple but surprisingly effective data augmentation and negative sampling schemes, these three forms of contrast enable TriCL to capture both node- and group-level structural information in node embeddings. Our extensive experiments with 14 baseline approaches, 10 datasets, and two tasks demonstrate the effectiveness of TriCL; most notably, for node classification, TriCL almost consistently outperforms not only unsupervised competitors but also (semi-)supervised ones, often by significant margins. The code and datasets are available at https://github.com/wooner49/TriCL.
dc.language: English
dc.publisher: Association for the Advancement of Artificial Intelligence (AAAI)
dc.title: I’m Me, We’re Us, and I’m Us: Tri-directional Contrastive Learning on Hypergraphs
dc.type: Conference
dc.identifier.scopusid: 2-s2.0-85167341993
dc.type.rims: CONF
dc.citation.beginningpage: 8456
dc.citation.endingpage: 8464
dc.citation.publicationname: 37th AAAI Conference on Artificial Intelligence, AAAI 2023
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Washington
dc.identifier.doi: 10.1609/aaai.v37i7.26019
dc.contributor.localauthor: Shin, Kijung
dc.contributor.nonIdAuthor: Lee, Dongjin
Appears in Collection
AI-Conference Papers (학술대회논문, Conference Papers)
Files in This Item
There are no files associated with this item.
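
The abstract describes three forms of contrast but gives no equations. Below is a minimal, illustrative PyTorch sketch of how such a tri-directional objective could be composed from standard InfoNCE terms. It is not the authors' implementation (see the linked GitHub repository for that): the encoder, the augmentation scheme, and TriCL's actual negative sampling are omitted, and the names info_nce, tri_directional_loss, and the weight parameters are assumptions made here for illustration.

import torch
import torch.nn.functional as F


def info_nce(a: torch.Tensor, b: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE loss: row i of `a` and row i of `b` form a positive pair;
    every other row of `b` serves as a negative for row i of `a`."""
    a = F.normalize(a, dim=1)
    b = F.normalize(b, dim=1)
    logits = a @ b.t() / tau                       # cosine similarities scaled by temperature
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)


def tri_directional_loss(z1, z2, e1, e2, membership, tau=0.5,
                         w_node=1.0, w_group=1.0, w_mem=1.0):
    """Combine the three contrasts described in the abstract:
    (a) node-level  ("I'm me"):   the same node across the two augmented views,
    (b) group-level ("we're us"): the same hyperedge (group) across the two views,
    (c) membership-level ("I'm us"): each group paired with its member nodes.
    z1, z2: node embeddings of the two views (num_nodes x d);
    e1, e2: hyperedge embeddings of the two views (num_edges x d);
    membership: (node_idx, edge_idx) index tensors listing node-hyperedge incidences."""
    node_idx, edge_idx = membership
    loss_node = info_nce(z1, z2, tau)                      # (a) node vs. node
    loss_group = info_nce(e1, e2, tau)                     # (b) group vs. group
    loss_mem = info_nce(z1[node_idx], e2[edge_idx], tau)   # (c) member node vs. its group, cross-view
    return w_node * loss_node + w_group * loss_group + w_mem * loss_mem


# Toy usage: 5 nodes, 2 hyperedges, 8-dimensional embeddings from two hypothetical views.
z1, z2 = torch.randn(5, 8), torch.randn(5, 8)
e1, e2 = torch.randn(2, 8), torch.randn(2, 8)
membership = (torch.tensor([0, 1, 2, 2, 3, 4]), torch.tensor([0, 0, 0, 1, 1, 1]))
print(tri_directional_loss(z1, z2, e1, e2, membership))

In this sketch, the membership term simply treats every other node-hyperedge incidence as a negative; the paper's negative sampling scheme is more deliberate, so this should be read only as a structural outline of the tri-directional idea.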
