Disentangling Degree-related Biases and Interest for Out-of-Distribution Generalized Directed Network Embedding

The goal of directed network embedding is to represent the nodes of a given directed network as embeddings that preserve the asymmetric relationships between nodes. While a number of directed network embedding methods have been proposed, we empirically show that the existing methods lack out-of-distribution generalization ability against degree-related distributional shifts. To mitigate this problem, we propose ODIN (Out-of-Distribution Generalized Directed Network Embedding), a new directed network embedding method that models multiple factors in the formation of directed edges. For each node, ODIN learns multiple embeddings, each of which preserves its corresponding factor, by disentangling the interest factor from biases related to the in- and out-degrees of nodes. Our experiments on four real-world directed networks demonstrate that disentangling these factors enables ODIN to yield out-of-distribution generalized embeddings that remain consistently effective under various degrees of shift in the degree distributions. Specifically, ODIN universally outperforms 9 state-of-the-art competitors in 2 link prediction (LP) tasks on 4 real-world datasets under both identical distribution (ID) and non-ID settings. The code is available at https://github.com/hsyoo32/odin.
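To make the factor-disentangling idea in the abstract concrete, the sketch below shows one plausible way to keep an "interest" factor and degree-related bias factors in separate embedding spaces for directed edge scoring. This is not the authors' implementation (see the linked repository for the actual code); the class name, factor decomposition, dimensions, and loss are illustrative assumptions only.

```python
# Minimal, illustrative sketch of factor-separated directed edge scoring.
# Assumption: an edge (u, v) is scored by an "interest" term plus a term
# driven by u's out-degree bias and v's in-degree bias.
import torch
import torch.nn as nn

class FactorizedDirectedEmbedding(nn.Module):
    def __init__(self, num_nodes: int, dim: int = 32):
        super().__init__()
        # Interest factor: asymmetric source/target embeddings.
        self.src_interest = nn.Embedding(num_nodes, dim)
        self.tgt_interest = nn.Embedding(num_nodes, dim)
        # Degree-related bias factors, kept in separate embedding spaces so
        # they can be disentangled from the interest factor.
        self.out_bias = nn.Embedding(num_nodes, dim)
        self.in_bias = nn.Embedding(num_nodes, dim)

    def factor_scores(self, u: torch.Tensor, v: torch.Tensor):
        # Per-factor scores for directed edges u -> v.
        interest = (self.src_interest(u) * self.tgt_interest(v)).sum(-1)
        degree = (self.out_bias(u) * self.in_bias(v)).sum(-1)
        return interest, degree

    def forward(self, u, v):
        interest, degree = self.factor_scores(u, v)
        return interest + degree  # combined edge logit

# Toy usage: binary cross-entropy on a random batch of node pairs and labels.
model = FactorizedDirectedEmbedding(num_nodes=100)
u = torch.randint(0, 100, (256,))
v = torch.randint(0, 100, (256,))
labels = torch.randint(0, 2, (256,)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(u, v), labels)
loss.backward()
```

Under this kind of factorization, degree-driven regularities are absorbed by the bias embeddings, so the interest embeddings can be used on their own when the degree distribution at test time differs from training; the actual training objective and factor definitions used by ODIN are given in the paper.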
Publisher
Association for Computing Machinery, Inc
Issue Date
2023-05-02
Language
English
Citation
2023 World Wide Web Conference, WWW 2023, pp. 231-239
DOI
10.1145/3543507.3583271
URI
http://hdl.handle.net/10203/316059
Appears in Collection
AI-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.