A Convergence Monitoring Method for DNN Training of On-Device Task Adaptation

Cited 2 times in Web of Science; cited 0 times in Scopus
DC Field: Value (Language)
dc.contributor.author: Choi, Seungkyu (ko)
dc.contributor.author: Shin, Jaekang (ko)
dc.contributor.author: Kim, Lee-Sup (ko)
dc.date.accessioned: 2021-12-09T06:52:33Z
dc.date.available: 2021-12-09T06:52:33Z
dc.date.created: 2021-11-25
dc.date.issued: 2021-11
dc.identifier.citation: 40th IEEE/ACM International Conference on Computer Aided Design (ICCAD)
dc.identifier.issn: 1933-7760
dc.identifier.uri: http://hdl.handle.net/10203/290336
dc.description.abstract: DNN training has become a major on-device workload for executing various vision tasks with high performance. Accordingly, training architectures that incorporate approximate computing have been steadily studied for efficient acceleration. However, most such works evaluate their schemes on from-scratch training, where inaccurate computation is not tolerable. Moreover, previous solutions are mostly extended versions of inference-oriented techniques, e.g., sparsity/pruning, quantization, and dataflow. Therefore, unresolved issues that hinder the overall speed of practical DNN training workloads still remain. In this work, targeting transfer-learning-based task adaptation, a practical on-device training workload, we propose a convergence monitoring method that removes the redundancy in massive training iterations. By utilizing the network's output values, we detect the training intensity of an incoming task and monitor the prediction convergence at that intensity to provide early exits from the scheduled training iterations. As a result, accurate approximation over various tasks is performed with minimal overhead. Unlike sparsity-driven approximation, our method enables runtime optimization and is easily applicable to off-the-shelf accelerators, achieving significant speedup. Evaluation results on various datasets show a geomean speedup of 2.2× over the baseline and 1.8× over the latest convergence-related training method.
dc.language: English
dc.publisher: IEEE/ACM
dc.title: A Convergence Monitoring Method for DNN Training of On-Device Task Adaptation
dc.type: Conference
dc.identifier.wosid: 000747493600087
dc.identifier.scopusid: 2-s2.0-85124144114
dc.type.rims: CONF
dc.citation.publicationname: 40th IEEE/ACM International Conference on Computer Aided Design (ICCAD)
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Virtual
dc.identifier.doi: 10.1109/ICCAD51958.2021.9643522
dc.contributor.localauthor: Kim, Lee-Sup
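The abstract's early-exit idea, monitoring a convergence signal derived from the network's output and stopping the scheduled iterations once it stalls, can be sketched as follows. This is a purely illustrative reconstruction: the function names, the moving-average stopping criterion, and the thresholds are assumptions, not the paper's actual algorithm.

```python
def train_with_early_exit(loss_per_step, scheduled_iters, window=5, tol=1e-3):
    """Run up to `scheduled_iters` training steps and exit early once
    convergence is detected.

    `loss_per_step(step)` performs one training step and returns the
    current loss (a stand-in for the paper's output-based signal).
    Convergence is declared when the moving-average loss improvement
    over `window` steps drops below `tol`.
    Returns (steps_executed, loss_history).
    """
    history = []
    for step in range(scheduled_iters):
        history.append(loss_per_step(step))
        if len(history) >= 2 * window:
            prev = sum(history[-2 * window:-window]) / window  # older window
            curr = sum(history[-window:]) / window             # recent window
            if prev - curr < tol:  # improvement stalled: exit early
                return step + 1, history
    return scheduled_iters, history  # budget exhausted without early exit
```

In this sketch, a task that converges quickly (a "low-intensity" task, in the abstract's terms) trips the criterion early and skips the remaining scheduled iterations, while a harder task runs longer; the `window`/`tol` pair plays the role of the detected training intensity.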
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
