Federated Split Learning With Joint Personalization-Generalization for Inference-Stage Optimization in Wireless Edge Networks

Cited 0 times in Web of Science · Cited 0 times in Scopus
  • Hits: 22
  • Downloads: 0
DC Field: Value (Language)
dc.contributor.author: Han, Dong-Jun (ko)
dc.contributor.author: Kim, Do-Yeon (ko)
dc.contributor.author: Choi, Minseok (ko)
dc.contributor.author: Nickel, David (ko)
dc.contributor.author: Moon, Jaekyun (ko)
dc.contributor.author: Chiang, Mung (ko)
dc.contributor.author: Brinton, Christopher G. (ko)
dc.date.accessioned: 2024-07-02T11:00:06Z
dc.date.available: 2024-07-02T11:00:06Z
dc.date.created: 2023-11-24
dc.date.issued: 2024-06
dc.identifier.citation: IEEE TRANSACTIONS ON MOBILE COMPUTING, v.23, no.6, pp.7048 - 7065
dc.identifier.issn: 1536-1233
dc.identifier.uri: http://hdl.handle.net/10203/320118
dc.description.abstract: The demand for intelligent services at the network edge has introduced several research challenges. One is the need for a machine learning architecture that achieves personalization (to individual clients) and generalization (to unseen data) properties concurrently across different applications. Another is the need for an inference strategy that can satisfy network resource and latency constraints at test time. Existing techniques in federated learning have encountered a steep trade-off between personalization and generalization, and have not explicitly considered the resource requirements of the inference stage. In this paper, we propose SplitGP, a joint edge-AI training and inference strategy that simultaneously captures generalization and personalization for efficient inference across resource-constrained clients. The training process of SplitGP is based on federated split learning, with the key idea of optimizing the client-side model to have personalization capability tailored to its main task, while training the server-side model to have generalization capability for handling out-of-distribution tasks. At test time, each client selectively offloads inference tasks to the server based on an uncertainty threshold that is tunable according to network resource availability. Through formal convergence analysis and inference-time analysis, we provide guidelines on the selection of key meta-parameters in SplitGP. Experimental results confirm the advantage of SplitGP over existing baselines.
dc.language: English
dc.publisher: IEEE COMPUTER SOC
dc.title: Federated Split Learning With Joint Personalization-Generalization for Inference-Stage Optimization in Wireless Edge Networks
dc.type: Article
dc.identifier.wosid: 001216462000031
dc.identifier.scopusid: 2-s2.0-85177078408
dc.type.rims: ART
dc.citation.volume: 23
dc.citation.issue: 6
dc.citation.beginningpage: 7048
dc.citation.endingpage: 7065
dc.citation.publicationname: IEEE TRANSACTIONS ON MOBILE COMPUTING
dc.identifier.doi: 10.1109/TMC.2023.3331690
dc.contributor.localauthor: Moon, Jaekyun
dc.contributor.nonIdAuthor: Han, Dong-Jun
dc.contributor.nonIdAuthor: Choi, Minseok
dc.contributor.nonIdAuthor: Nickel, David
dc.contributor.nonIdAuthor: Chiang, Mung
dc.contributor.nonIdAuthor: Brinton, Christopher G.
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: edge-AI
dc.subject.keywordAuthor: inference
dc.subject.keywordAuthor: Federated learning
dc.subject.keywordAuthor: split learning
dc.subject.keywordAuthor: personalization
dc.subject.keywordAuthor: wireless edge network
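The test-time policy described in the abstract (each client selectively offloads inference to the server-side model when its local prediction is too uncertain) can be sketched as below. This is a minimal illustration only: the entropy-based uncertainty measure and the function names are assumptions for the sketch, not the paper's exact formulation.

```python
import math

def entropy(probs):
    """Shannon entropy (natural log) of a softmax output vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def route_inference(client_probs, threshold):
    """Keep the sample on-device if the client-side model is confident;
    otherwise offload it to the server-side (generalized) model.
    The threshold would be tuned to network resource availability."""
    return "offload" if entropy(client_probs) > threshold else "local"

# Confident local prediction -> handled on-device
route_inference([0.95, 0.03, 0.02], threshold=0.5)   # -> "local"
# Uncertain prediction -> sent to the server-side model
route_inference([0.4, 0.35, 0.25], threshold=0.5)    # -> "offload"
```

A larger threshold keeps more traffic on-device (lower network load, more reliance on the personalized client model); a smaller one offloads more aggressively to the generalized server model.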
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
