Sparsification on Different Federated Learning Schemes: Comparative Analysis

Abstract
High communication overhead is a major bottleneck in federated learning (FL). To reduce it, sparsification is used in various compression frameworks. In the standard scheme, local clients upload their updated weights to the server; under sparsification, however, clients instead upload the difference between the updated weights and the original weights. This study confirms the importance of uploading the weight difference in sparsification and measures how much the accuracy of the two schemes differs.
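For illustration only, the following Python sketch shows top-k sparsification applied to the weight difference, the upload scheme the abstract describes. It is not the paper's implementation; NumPy and all function names here are assumptions. The client transmits only the k largest-magnitude entries of (updated - original) as index/value pairs, and the server applies them to its copy of the model.

# A minimal sketch (assumptions: NumPy, illustrative names) of top-k
# sparsification of the weight *difference* before upload.
import numpy as np

def sparsify_delta(updated_w: np.ndarray, original_w: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries of (updated - original)."""
    delta = (updated_w - original_w).ravel()
    # Indices of the k entries with the largest absolute value (unordered).
    idx = np.argpartition(np.abs(delta), -k)[-k:]
    return idx, delta[idx]  # the client uploads only these index/value pairs

def apply_sparse_update(server_w: np.ndarray, idx, values):
    """Server reconstructs the update from the sparse index/value pairs."""
    flat = server_w.ravel().copy()
    flat[idx] += values
    return flat.reshape(server_w.shape)

# Example: a client compresses its update to 1% of the coordinates.
rng = np.random.default_rng(0)
w0 = rng.standard_normal(10_000)          # original weights
w1 = w0 + 0.01 * rng.standard_normal(10_000)  # locally updated weights
idx, vals = sparsify_delta(w1, w0, k=100)
w_server = apply_sparse_update(w0, idx, vals)

Sparsifying the difference rather than the raw weights matters because the per-round change is typically small and concentrated, so most of its entries can be dropped with little loss; the raw weights themselves have no such structure.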
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2022-10-20
Language
English
Citation

The 13th International Conference on ICT Convergence, ICTC 2022, pp. 2044-2047

DOI
10.1109/ICTC55196.2022.9952431
URI
http://hdl.handle.net/10203/301578
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
