TENAS: Using Taylor Expansion and Channel-level Skip Connection for Neural Architecture Search

DC Field | Value | Language
dc.contributor.author | Lim, Heechul | ko
dc.contributor.author | Kim, Min-Soo | ko
dc.date.accessioned | 2022-09-06T03:00:58Z | -
dc.date.available | 2022-09-06T03:00:58Z | -
dc.date.created | 2022-09-06 | -
dc.date.issued | 2022-08 | -
dc.identifier.citation | IEEE ACCESS, v.10, pp.84790 - 84798 | -
dc.identifier.issn | 2169-3536 | -
dc.identifier.uri | http://hdl.handle.net/10203/298371 | -
dc.description.abstract | There is growing interest in automating the design of good neural network architectures. Recently proposed neural architecture search (NAS) methods have significantly reduced the architecture search cost by sharing parameters, but designing the search space remains challenging. Existing operation-level architecture search methods require either a large amount of computing power or a very carefully designed operation search space; for example, the types of operations in the search space (e.g., convolutions with 3 x 3 filters) must be selected carefully. In this paper, we investigate whether performance competitive with these methods can be achieved using only a small amount of computing power and without careful search-space design. We propose TENAS, which uses Taylor expansion and only a single fixed type of operation. The resulting architecture is sparse at the channel level and has a different topology in each cell. Experimental results on CIFAR-10 and ImageNet show that the fine-grained, sparse models searched by TENAS achieve performance highly competitive with the dense models searched by existing methods. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | TENAS: Using Taylor Expansion and Channel-level Skip Connection for Neural Architecture Search | -
dc.type | Article | -
dc.identifier.wosid | 000842085900001 | -
dc.identifier.scopusid | 2-s2.0-85135747464 | -
dc.type.rims | ART | -
dc.citation.volume | 10 | -
dc.citation.beginningpage | 84790 | -
dc.citation.endingpage | 84798 | -
dc.citation.publicationname | IEEE ACCESS | -
dc.identifier.doi | 10.1109/ACCESS.2022.3195208 | -
dc.contributor.localauthor | Kim, Min-Soo | -
dc.contributor.nonIdAuthor | Lim, Heechul | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Neural architecture search | -
dc.subject.keywordAuthor | convolutional neural network | -
dc.subject.keywordAuthor | deep learning | -
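
The abstract states that TENAS uses a Taylor expansion to decide which channel-level connections to keep. As a rough illustration of that general idea only (this is a minimal sketch of the standard first-order Taylor channel-importance criterion from the pruning literature, not the paper's exact algorithm), the code below scores each channel by the estimated loss change from zeroing it, |activation * gradient|. The function name taylor_channel_importance and all tensor shapes are hypothetical.

```python
import torch
import torch.nn as nn

def taylor_channel_importance(activation, gradient):
    """First-order Taylor estimate of how much the loss would change if
    each channel were zeroed: |activation * gradient|, averaged over the
    batch and spatial dimensions.

    activation, gradient: (N, C, H, W) tensors for one feature map.
    Returns a (C,) importance score per channel.
    """
    return (activation * gradient).abs().mean(dim=(0, 2, 3))

# Usage on a dummy conv layer: score its 8 output channels.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(4, 3, 32, 32)
out = conv(x)
out.retain_grad()            # keep the gradient of this non-leaf tensor
loss = out.pow(2).mean()     # stand-in for a real task loss
loss.backward()
print(taylor_channel_importance(out.detach(), out.grad))  # shape: (8,)
```

Under this criterion, channels with low scores would be candidates for removal, yielding the kind of channel-sparse, per-cell topology the abstract describes.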
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
