Distributed fine-tuning of CNNs for image retrieval on multiple mobile devices

The increasing performance of mobile devices has made it possible to extend deep learning to run directly on such devices. However, because their computing power is still insufficient for on-device training, a pre-trained model is usually downloaded to mobile devices and only inference is performed on them. As a result, accuracy may degrade when the characteristics of the training data differ substantially from those of the data seen at inference time. Fine-tuning generally allows a pre-trained model to adapt to a given data set, but it has also been perceived as too costly for mobile devices. In this paper, we introduce our ongoing effort to improve the quality of mobile deep learning by enabling fine-tuning on mobile devices. To reduce its cost to a level that mobile devices can handle, we propose a light-weight fine-tuning method and further reduce the cost through distributed computing across mobile devices. The proposed technique has been applied to LetsPic-DL, a group photoware application under development in our research group. Fine-tuning a pre-trained MobileNet with 100 photos on five Galaxy S8 units took only 24 seconds and improved image retrieval accuracy by 27-35%.
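The full light-weight fine-tuning method and the distribution scheme are described in the paper itself; as a rough illustration only, the sketch below assumes (this is not stated in the abstract) that the pre-trained MobileNet backbone is frozen and only a small classifier head is trained on the photos held by the devices. The class count, image size, and dataset variable are hypothetical.

# Minimal sketch of light-weight fine-tuning of MobileNet, assuming a frozen
# backbone and a trainable classifier head (an assumption, not the paper's
# confirmed method).
import tensorflow as tf

NUM_CLASSES = 10          # hypothetical number of photo categories
IMG_SIZE = (224, 224)     # MobileNet's default input resolution

# Pre-trained backbone, downloaded once and kept frozen so that on-device
# training only updates the lightweight head.
backbone = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,),
    include_top=False,
    weights="imagenet",
    pooling="avg",
)
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # trainable head
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# `local_photos` stands in for a small tf.data.Dataset built from the ~100
# photos held on (or sharded across) the participating devices.
# model.fit(local_photos, epochs=5)

In a distributed setting, each device could run a few epochs of this head-only training on its shard of the photos and the resulting head weights could be merged; the exact aggregation used by the paper is not specified in the abstract.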
Publisher
ELSEVIER
Issue Date
2020-04
Language
English
Article Type
Article
Citation
PERVASIVE AND MOBILE COMPUTING, v.64
ISSN
1574-1192
DOI
10.1016/j.pmcj.2020.101134
URI
http://hdl.handle.net/10203/274293
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.