Exploring Representational Similarity Analysis to Protect Federated Learning from Data Poisoning
Ref: CISTER-TR-240303 Publication Date: 13–17 March 2024
Abstract:
As a privacy-preserving paradigm, Federated Learning (FL) enables distributed clients to cooperatively train global models on their local datasets. However, this approach also gives adversaries the opportunity to compromise system stability by contaminating local data, for example through Label-Flipping Attacks (LFAs). Most existing defenses against these attacks presume an independent and identically distributed (IID) environment, and thus perform suboptimally under Non-IID conditions. This paper introduces RSim-FL, a novel and pragmatic defense mechanism that incorporates Representational Similarity Analysis (RSA) into the detection of malicious updates by measuring the similarity between uploaded local models and the global model. An evaluation against five state-of-the-art baselines demonstrates that RSim-FL accurately identifies malicious local models and effectively mitigates divergent LFAs in a Non-IID setting.
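To illustrate the kind of comparison RSA enables, the following is a minimal sketch, not the paper's implementation: it assumes we can extract layer activations of the global and a local model on a shared probe batch, builds a representational dissimilarity matrix (RDM) for each, and correlates the RDMs' upper triangles. All function names and the synthetic activations are illustrative assumptions.

```python
import numpy as np

def rdm(activations):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activation vectors of every pair of probe samples."""
    a = activations - activations.mean(axis=1, keepdims=True)
    a /= np.linalg.norm(a, axis=1, keepdims=True) + 1e-12
    return 1.0 - a @ a.T

def rsa_similarity(global_acts, local_acts):
    """Standard RSA score: correlate the upper triangles of the two RDMs."""
    g, l = rdm(global_acts), rdm(local_acts)
    iu = np.triu_indices_from(g, k=1)
    return np.corrcoef(g[iu], l[iu])[0, 1]

# Synthetic stand-ins for model activations on a probe batch.
rng = np.random.default_rng(0)
globe = rng.normal(size=(16, 32))                    # "global model" activations
honest = globe + 0.01 * rng.normal(size=globe.shape) # benign client: similar reps
poisoned = rng.normal(size=globe.shape)              # label-flipped client: unrelated reps
print(rsa_similarity(globe, honest) > rsa_similarity(globe, poisoned))
```

A benign client's representations stay close to the global model's, yielding a high RSA score, while a poisoned update's representations diverge; a server could flag uploads whose score falls below a threshold.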
The Web Conference (WWW, formerly the International World Wide Web Conference), Short Papers Track.
Singapore, Singapore.
Record Date: 7 March 2024

Gengxiang Chen
Kai Li