FedDefender: Client-Side Attack-Tolerant Federated Learning

DC Field / Value / Language

dc.contributor.author: Park, Sungwon (ko)
dc.contributor.author: Han, Sungwon (ko)
dc.contributor.author: Wu, Fangzhao (ko)
dc.contributor.author: Kim, Sundong (ko)
dc.contributor.author: Zhu, Bin (ko)
dc.contributor.author: Xie, Xing (ko)
dc.contributor.author: Cha, Meeyoung (ko)
dc.date.accessioned: 2023-11-21T02:01:34Z
dc.date.available: 2023-11-21T02:01:34Z
dc.date.created: 2023-11-20
dc.date.issued: 2023-08-08
dc.identifier.citation: KDD '23: The 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 1850-1861
dc.identifier.uri: http://hdl.handle.net/10203/314909
dc.description.abstract: Federated learning enables learning from decentralized data sources without compromising privacy, which makes it a crucial technique. However, it is vulnerable to model poisoning attacks, where malicious clients interfere with the training process. Previous defense mechanisms have focused on the server-side by using careful model aggregation, but this may not be effective when the data is not identically distributed or when attackers can access the information of benign clients. In this paper, we propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models and avoid the adverse impact of malicious model updates from attackers, even when a server-side defense cannot identify or remove adversaries. Our method consists of two main components: (1) attack-tolerant local meta update and (2) attack-tolerant global knowledge distillation. These components are used to find noise-resilient model parameters while accurately extracting knowledge from a potentially corrupted global model. Our client-side defense strategy has a flexible structure and can work in conjunction with any existing server-side strategies. Evaluations of real-world scenarios across multiple datasets show that the proposed method enhances the robustness of federated learning against model poisoning attacks.
dc.language: English
dc.publisher: ACM
dc.title: FedDefender: Client-Side Attack-Tolerant Federated Learning
dc.type: Conference
dc.identifier.scopusid: 2-s2.0-85171391136
dc.type.rims: CONF
dc.citation.beginningpage: 1850
dc.citation.endingpage: 1861
dc.citation.publicationname: KDD '23: The 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Long Beach
dc.identifier.doi: 10.1145/3580305.3599346
dc.contributor.localauthor: Cha, Meeyoung
dc.contributor.nonIdAuthor: Wu, Fangzhao
dc.contributor.nonIdAuthor: Kim, Sundong
dc.contributor.nonIdAuthor: Zhu, Bin
dc.contributor.nonIdAuthor: Xie, Xing
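The abstract's two client-side components can be illustrated with a minimal toy sketch. This is not the authors' implementation: the quadratic loss, the noise scale, and the confidence threshold are all illustrative assumptions. Component (1) is approximated by averaging gradients over noise-perturbed copies of the parameters (a noise-resilient, meta-style update), and component (2) by distilling from the global model only when it does not look corrupted.

```python
import random

# Hedged sketch of the two components described in the abstract.
# Parameters are plain lists of floats; the loss is a toy quadratic.

def toy_loss(params, batch):
    # Toy quadratic loss: squared distance of params to the batch "targets".
    return sum((p - t) ** 2 for p, t in zip(params, batch))

def toy_grad(params, batch):
    # Gradient of the toy quadratic loss.
    return [2 * (p - t) for p, t in zip(params, batch)]

def attack_tolerant_local_update(params, batch, lr=0.1, noise=0.05, n_perturb=4):
    """Component (1), sketched: average gradients over noise-perturbed
    copies of the parameters so the resulting update tolerates small
    corruptions of the aggregate."""
    grads = [0.0] * len(params)
    for _ in range(n_perturb):
        perturbed = [p + random.uniform(-noise, noise) for p in params]
        for i, g in enumerate(toy_grad(perturbed, batch)):
            grads[i] += g / n_perturb
    return [p - lr * g for p, g in zip(params, grads)]

def selective_distillation(local_params, global_params, batch, lr=0.1, tol=1.0):
    """Component (2), sketched: pull local parameters toward the global
    model only when the global model's loss on local data is low, i.e.,
    it does not look poisoned."""
    if toy_loss(global_params, batch) > tol:
        return local_params  # skip knowledge from a suspicious global model
    return [l + lr * (g - l) for l, g in zip(local_params, global_params)]
```

In one simulated round, a benign client would first call `attack_tolerant_local_update` on its batch, then `selective_distillation` against the received global parameters; a heavily poisoned global model fails the loss check and is simply ignored, which mirrors the abstract's claim that the defense works even when the server cannot remove adversaries.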
Appears in Collection
CS-Conference Papers (Academic Conference Papers)
Files in This Item
There are no files associated with this item.
