Topological Insights into Sparse Neural Networks

DC Field | Value | Language
dc.contributor.author | Liu, Shiwei | ko
dc.contributor.author | van der Lee, Tim | ko
dc.contributor.author | Yaman, Anil | ko
dc.contributor.author | Atashgahi, Zahra | ko
dc.contributor.author | Ferraro, Davide | ko
dc.contributor.author | Sokar, Ghada | ko
dc.contributor.author | Pechenizkiy, Mykola | ko
dc.contributor.author | Mocanu, Decebal Constantin | ko
dc.date.accessioned | 2021-11-04T06:42:22Z | -
dc.date.available | 2021-11-04T06:42:22Z | -
dc.date.created | 2021-10-26 | -
dc.date.issued | 2021-09 | -
dc.identifier.citation | European Conference on Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2020, pp. 279-294 | -
dc.identifier.issn | 0302-9743 | -
dc.identifier.uri | http://hdl.handle.net/10203/288760 | -
dc.description.abstract | Sparse neural networks are effective approaches to reduce the resource requirements for deploying deep neural networks. Recently, the concept of adaptive sparse connectivity has emerged, allowing sparse neural networks to be trained from scratch by optimizing the sparse structure during training. However, comparing different sparse topologies and determining how sparse topologies evolve during training, especially when the sparse structure itself is being optimized, remain challenging open questions. This comparison becomes increasingly complex as the number of possible topological comparisons grows exponentially with the size of the network. In this work, we introduce an approach to understand and compare sparse neural network topologies from the perspective of graph theory. We first propose Neural Network Sparse Topology Distance (NNSTD) to measure the distance between different sparse neural networks. Further, we demonstrate that sparse neural networks can outperform over-parameterized dense models, even without any further structure optimization. Finally, by quantifying and comparing their topological evolution during training, we show that adaptive sparse connectivity consistently unveils a plenitude of sparse sub-networks with very different topologies that outperform the dense model. The latter findings complement the Lottery Ticket Hypothesis by showing that there is a much more efficient and robust way to find "winning tickets". Altogether, our results enable a better theoretical understanding of sparse neural networks and demonstrate the utility of graph theory in analyzing them. | -
dc.language | English | -
dc.publisher | Springer Science and Business Media Deutschland GmbH | -
dc.title | Topological Insights into Sparse Neural Networks | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85103233455 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 279 | -
dc.citation.endingpage | 294 | -
dc.citation.publicationname | European Conference on Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2020 | -
dc.identifier.conferencecountry | BE | -
dc.identifier.conferencelocation | Gent | -
dc.identifier.doi | 10.1007/978-3-030-67664-3_17 | -
dc.contributor.localauthor | Yaman, Anil | -
dc.contributor.nonIdAuthor | Liu, Shiwei | -
dc.contributor.nonIdAuthor | van der Lee, Tim | -
dc.contributor.nonIdAuthor | Atashgahi, Zahra | -
dc.contributor.nonIdAuthor | Ferraro, Davide | -
dc.contributor.nonIdAuthor | Sokar, Ghada | -
dc.contributor.nonIdAuthor | Pechenizkiy, Mykola | -
dc.contributor.nonIdAuthor | Mocanu, Decebal Constantin | -
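
Note: the abstract above proposes NNSTD, a graph-theoretic distance between sparse network topologies. The full paper is not attached to this record, so the following Python sketch only illustrates the general idea under simple assumptions: each sparse layer is treated as a bipartite graph encoded by a boolean connectivity mask, output neurons of the two networks are matched to each other, and the remaining dissimilarity of their incoming edge sets is averaged. The function name sparse_topology_distance, the Jaccard-based cost, and the Hungarian matching are illustrative assumptions, not the authors' NNSTD definition.

# Minimal, illustrative sketch of a topology distance between two sparse
# layers (NOT the NNSTD algorithm from the paper; names are hypothetical).
import numpy as np
from scipy.optimize import linear_sum_assignment


def sparse_topology_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Distance between two sparse layer topologies given as boolean masks of
    shape (n_inputs, n_outputs). 0 means identical up to a permutation of the
    output neurons; values near 1 mean completely different connections."""
    assert mask_a.shape == mask_b.shape
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    n_out = a.shape[1]

    # Pairwise Jaccard distance between the incoming edge sets of output neurons.
    cost = np.zeros((n_out, n_out))
    for i in range(n_out):
        for j in range(n_out):
            inter = np.logical_and(a[:, i], b[:, j]).sum()
            union = np.logical_or(a[:, i], b[:, j]).sum()
            cost[i, j] = 1.0 - inter / union if union > 0 else 0.0

    # Optimal one-to-one matching of output neurons (Hungarian algorithm).
    rows, cols = linear_sum_assignment(cost)
    return float(cost[rows, cols].mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((784, 100)) < 0.05          # ~5% random sparse topology
    permuted = base[:, rng.permutation(100)]      # same graph, relabelled units
    other = rng.random((784, 100)) < 0.05         # independent random topology
    print(sparse_topology_distance(base, permuted))  # ~0.0
    print(sparse_topology_distance(base, other))     # close to 1.0

Matching neurons before comparing edges makes the distance invariant to the arbitrary ordering of hidden units, which is why a naive element-wise comparison of the two masks would not be a meaningful topology distance.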
Appears in Collection
RIMS Conference Papers
Files in This Item
There are no files associated with this item.
