SplitGP: Achieving Both Generalization and Personalization in Federated Learning

dc.contributor.author: Han, Dong-Jun (ko)
dc.contributor.author: Kim, Do-Yeon (ko)
dc.contributor.author: Choi, Minseok (ko)
dc.contributor.author: Brinton, Christopher G. (ko)
dc.contributor.author: Moon, Jaekyun (ko)
dc.date.accessioned: 2023-12-05T09:00:50Z
dc.date.available: 2023-12-05T09:00:50Z
dc.date.created: 2023-11-24
dc.date.issued: 2023-05-18
dc.identifier.citation: 42nd IEEE International Conference on Computer Communications, INFOCOM 2023
dc.identifier.uri: http://hdl.handle.net/10203/315760
dc.description.abstract: A fundamental challenge to providing edge-AI services is the need for a machine learning (ML) model that achieves personalization (i.e., to individual clients) and generalization (i.e., to unseen data) properties concurrently. Existing techniques in federated learning (FL) have encountered a steep tradeoff between these objectives and impose large computational requirements on edge devices during training and inference. In this paper, we propose SplitGP, a new split learning solution that can simultaneously capture generalization and personalization capabilities for efficient inference across resource-constrained clients (e.g., mobile/IoT devices). Our key idea is to split the full ML model into client-side and server-side components and assign different roles to each: the client-side model is trained to have strong personalization capability optimized to each client's main task, while the server-side model is trained to have strong generalization capability for handling all clients' out-of-distribution tasks. We analytically characterize the convergence behavior of SplitGP, revealing that all client models approach stationary points asymptotically. Further, we analyze the inference time in SplitGP and provide bounds for determining model split ratios. Experimental results show that SplitGP outperforms existing baselines by wide margins in inference time and test accuracy for varying amounts of out-of-distribution samples.
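The split-inference idea described in the abstract — a personalized client-side split that answers its own main-task inputs locally and forwards the intermediate features of uncertain, likely out-of-distribution inputs to a generalized server-side split — can be sketched as follows. This is not the authors' code: the model shapes, class names, and the confidence-threshold routing rule are illustrative assumptions.

```python
# Minimal sketch of split inference in the SplitGP style (illustrative,
# not the paper's implementation). Assumption: the client routes an input
# to the server-side model whenever its local head's top-class confidence
# falls below a threshold.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class ClientModel:
    """Client-side split: small feature extractor plus a personalized head."""
    def __init__(self, d_in, d_feat, n_classes):
        self.W_feat = rng.normal(size=(d_feat, d_in)) * 0.1
        self.W_head = rng.normal(size=(n_classes, d_feat)) * 0.1

    def features(self, x):
        return np.tanh(self.W_feat @ x)

    def predict_proba(self, x):
        return softmax(self.W_head @ self.features(x))

class ServerModel:
    """Server-side split: generalized classifier over client features."""
    def __init__(self, d_feat, n_classes):
        self.W = rng.normal(size=(n_classes, d_feat)) * 0.1

    def predict_proba(self, feat):
        return softmax(self.W @ feat)

def split_inference(client, server, x, conf_threshold=0.8):
    """Answer locally if the personalized head is confident; otherwise
    send only the intermediate features across the split to the server."""
    p_local = client.predict_proba(x)
    if p_local.max() >= conf_threshold:
        return int(p_local.argmax()), "client"
    feat = client.features(x)          # only features, not raw data, leave the device
    p_server = server.predict_proba(feat)
    return int(p_server.argmax()), "server"

client = ClientModel(d_in=16, d_feat=8, n_classes=4)
server = ServerModel(d_feat=8, n_classes=4)
label, where = split_inference(client, server, rng.normal(size=16))
```

Note that only the intermediate feature vector crosses the split, which is what keeps per-inference client computation and communication small for resource-constrained devices.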
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.title: SplitGP: Achieving Both Generalization and Personalization in Federated Learning
dc.type: Conference
dc.identifier.scopusid: 2-s2.0-85170508346
dc.type.rims: CONF
dc.citation.publicationname: 42nd IEEE International Conference on Computer Communications, INFOCOM 2023
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: New York
dc.identifier.doi: 10.1109/INFOCOM53939.2023.10229027
dc.contributor.localauthor: Moon, Jaekyun
dc.contributor.nonIdAuthor: Choi, Minseok
dc.contributor.nonIdAuthor: Brinton, Christopher G.
Appears in Collection
EE-Conference Papers (Conference Papers)
