File(s) under permanent embargo

Shielding Federated Learning: A New Attack Approach and Its Defense

conference contribution
posted on 2021-01-01, 00:00 authored by Wei Wan, Jianrong Lu, Shengshan Hu, Leo Zhang, Xiaobing Pei
Federated learning (FL) is a newly emerging distributed learning framework that is communication-efficient and offers user privacy guarantees. Wireless end-user devices can collaboratively train a global model while keeping their local training data private. Nevertheless, recent studies show that FL is highly susceptible to attacks from malicious users, since the server cannot directly access and audit a user's local training data. In this work, we identify a new kind of attack surface that is much easier to exploit while maintaining a high attack success rate. By exploiting an inherent flaw in the weight assignment strategy of the standard federated learning process, our attack can bypass existing defense methods and effectively degrade the performance of the global model. We then propose a new density-based detection strategy that defends against this attack by modeling the problem as anomaly detection, effectively identifying anomalous updates. Experimental results on two typical datasets, MNIST and CIFAR-10, show that our attack significantly affects the convergence of the aggregated model and reduces the accuracy of the global model, even when state-of-the-art defense strategies are deployed, while our newly proposed defense effectively mitigates the attack.
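The paper does not publish code, but the weight-assignment flaw it references can be illustrated with a minimal sketch. In standard federated averaging, each client's update is weighted by its self-reported number of training samples, w_k = n_k / Σ n_j, and the server cannot audit those reports. The function and scenario below are illustrative assumptions, not the authors' implementation: a single client that misreports an inflated sample count can dominate the aggregate.

```python
# Sketch of standard federated-averaging aggregation, where each client's
# update is weighted by its self-reported sample count n_k.
# All names and numbers here are illustrative; the paper does not publish code.

def fedavg(updates, reported_sizes):
    """Weighted average of client updates: w_k = n_k / sum(n_j)."""
    total = sum(reported_sizes)
    dim = len(updates[0])
    agg = [0.0] * dim
    for upd, n in zip(updates, reported_sizes):
        weight = n / total
        for i, v in enumerate(upd):
            agg[i] += weight * v
    return agg

# Nine honest clients each push a (one-dimensional) model toward +1.0 with
# 100 real samples; one malicious client pushes toward -1.0 but *claims*
# 10,000 samples, so its weight swamps the honest majority.
honest = [[1.0]] * 9
malicious = [[-1.0]]
truthful = fedavg(honest + malicious, [100] * 9 + [100])    # ≈ +0.8
inflated = fedavg(honest + malicious, [100] * 9 + [10000])  # ≈ -0.83
```

Because the poisoned weight is set by a single unverifiable scalar rather than by the update values themselves, magnitude-based robust aggregators can miss it, which is consistent with the abstract's claim that the attack bypasses existing defenses; the proposed density-based defense instead flags such updates as anomalies relative to the cluster of honest ones.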



Conference

Wireless Communications and Networking. Conference (2021 : Nanjing, China)

Pagination

1 - 7

Location

Nanjing, China

Place of publication

Piscataway, N.J.

Publication classification

E1 Full written paper - refereed

Title of proceedings

WCNC 2021 : Proceedings of the IEEE Wireless Communications and Networking Conference