Check the Wiki to get started with the project.
A PyTorch implementation of poisoning attacks and defenses in federated learning.
Category | Details |
---|---|
FL Algorithms | FedAvg, FedSGD, FedOpt (see fl/algorithms) |
Data Distribution | Balanced IID, Class-imbalanced IID, Quantity-imbalanced Dirichlet Non-IID, Quantity-balanced/-imbalanced Pathological Non-IID (see data_utils.py and the partition sketch below) |
Datasets | MNIST, FashionMNIST, EMNIST, CIFAR10, CINIC10, CIFAR100 (see dataset_config.py) |
Models | Logistic Regression, SimpleCNN, LeNet5, ResNet-series, VGG-series |
For supported dataset and model pairs, see datamodel.pdf.
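To make the Dirichlet non-IID option above concrete, the sketch below partitions sample indices across clients with a per-class Dirichlet(alpha) draw; smaller alpha yields more heterogeneous client datasets. The helper name and signature are hypothetical, for illustration only; see data_utils.py for the framework's actual partitioning code.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    """Split sample indices across clients via a per-class Dirichlet(alpha) draw.

    Hypothetical helper for illustration; not the data_utils.py implementation.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        # Fraction of this class assigned to each client
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client_id, part in enumerate(np.split(cls_idx, splits)):
            client_indices[client_id].extend(part.tolist())
    return [np.array(idx) for idx in client_indices]

# Example: partition 10,000 dummy labels (10 classes) across 20 clients
labels = np.random.randint(0, 10, size=10_000)
client_parts = dirichlet_partition(labels, num_clients=20, alpha=0.5)
```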
Name | Source File | Paper |
---|---|---|
FedSGD | fedsgd.py | Communication-Efficient Learning of Deep Networks from Decentralized Data - AISTATS '17 |
FedAvg | fedavg.py | Communication-Efficient Learning of Deep Networks from Decentralized Data - AISTATS '17 |
FedOpt | fedopt.py | Adaptive Federated Optimization - arXiv '20, ICLR '21 |
Applicable algorithms include the base algorithm used in the original paper, as well as other algorithms that are not explicitly mentioned but are applicable based on the described principles. [ ] indicates that modifications are necessary for compatibility; these modifications are also implemented within this framework. In summary, we implemented and adapted the attacks and defenses to be compatible with three commonly used FL algorithms: FedSGD, FedAvg, and FedOpt.
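For reference, the sketch below shows the weighted averaging at the heart of FedAvg: the server averages client models weighted by local dataset size. FedSGD applies the same weighting to client gradients, and FedOpt feeds the averaged update into a server-side optimizer. This is a minimal sketch under those assumptions, not the code in fl/algorithms.

```python
import torch

def fedavg_aggregate(client_states, client_sizes):
    """Weighted average of client state_dicts, weighted by local dataset size.

    Minimal FedAvg-style sketch; see fl/algorithms for the framework's versions.
    """
    total = float(sum(client_sizes))
    global_state = {}
    for key in client_states[0].keys():
        weighted = torch.stack(
            [state[key].float() * (n / total)
             for state, n in zip(client_states, client_sizes)]
        )
        global_state[key] = weighted.sum(dim=0)
    return global_state

# Usage sketch: global_model.load_state_dict(fedavg_aggregate(states, sizes))
```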
Data poisoning attacks here, mainly targeted attacks, refer to attacks that aim to embed a backdoor or bias into the model, misleading it into producing the attacker's intended prediction. A minimal trigger-poisoning sketch follows the table below.
Name | Source File | Paper | Base Algorithm | Applicable Algorithms |
---|---|---|---|---|
Neurotoxin | neurotoxin.py | Neurotoxin: Durable Backdoors in Federated Learning - ICML '22 | FedOpt | FedOpt, [FedSGD, FedAvg] |
Edge-case Backdoor | edgecase.py | Attack of the Tails: Yes, You Really Can Backdoor Federated Learning - NeurIPS '20 | FedOpt | FedSGD, FedOpt, [FedAvg] |
Model Replacement Attack (Scaling Attack) | modelreplacement.py | How to Backdoor Federated Learning - AISTATS '20 | FedOpt | FedOpt, [FedSGD, FedAvg] |
Alternating Minimization | altermin.py | Analyzing Federated Learning Through an Adversarial Lens - ICML '19 | FedOpt | FedSGD, FedOpt, [FedAvg] |
DBA | dba.py | DBA: Distributed Backdoor Attacks Against Federated Learning - ICLR '20 | FedOpt | FedSGD, FedOpt, [FedAvg] |
BadNets | badnets.py | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain - NIPS-WS '17 | Centralized ML | [FedSGD, FedOpt, FedAvg] |
Label Flipping Attack | labelflipping.py | Poisoning Attacks against Support Vector Machines - ICML '12 | Centralized ML | [FedSGD, FedOpt, FedAvg] |
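As a concrete illustration of the table above, the sketch below shows the core of a BadNets-style poisoning step: stamp a small trigger patch onto a fraction of the training images and relabel them to the attacker's target class. The function and parameter names are hypothetical; this is a hedged sketch, not the badnets.py implementation.

```python
import torch

def poison_batch(images, labels, target_class=0, trigger_value=1.0,
                 patch_size=3, poison_frac=0.5):
    """Stamp a trigger on a fraction of the batch and relabel to the target class.

    images: (B, C, H, W) tensor in [0, 1]; labels: (B,) class indices.
    Illustrative sketch only; see badnets.py for the framework's implementation.
    """
    images, labels = images.clone(), labels.clone()
    num_poison = int(poison_frac * images.size(0))
    idx = torch.randperm(images.size(0))[:num_poison]
    # Solid square trigger in the bottom-right corner
    images[idx, :, -patch_size:, -patch_size:] = trigger_value
    labels[idx] = target_class
    return images, labels
```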
Name | Source File | Paper | Base Algorithm | Applicable Algorithms |
---|---|---|---|---|
FLAME | flame.py | FLAME: Taming Backdoors in Federated Learning - USENIX Security '22 | FedOpt | FedOpt, [FedSGD, FedAvg] |
DeepSight | deepsight.py | DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection - NDSS '22 | FedOpt | FedOpt, [FedSGD, FedAvg] |
CRFL | crfl.py | CRFL: Certifiably Robust Federated Learning against Backdoor Attacks - ICML '21 | FedOpt | FedOpt, [FedSGD, FedAvg] |
NormClipping | normclipping.py | Can You Really Backdoor Federated Learning - NeurIPS '20 | FedOpt | FedOpt, [FedSGD, FedAvg] |
FoolsGold | foolsgold.py | The Limitations of Federated Learning in Sybil Settings - RAID '20 | FedSGD | FedSGD, [FedOpt, FedAvg] |
Auror | auror.py | Auror: Defending against poisoning attacks in collaborative deep learning systems - ACSAC '16 | FedSGD | FedSGD, [FedOpt, FedAvg] |
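Many of these defenses operate on client updates before aggregation. As one example, the sketch below illustrates the norm-clipping idea from the table above: scale each client's update so its L2 norm is bounded. This is a minimal sketch of the general technique, not the normclipping.py implementation.

```python
import torch

def clip_update(update, clip_norm=1.0):
    """Scale a client's model update so its global L2 norm is at most clip_norm.

    update: dict mapping parameter names to delta tensors (client model minus global model).
    Minimal sketch of the norm-clipping defense; see normclipping.py for the framework's version.
    """
    total_norm = torch.sqrt(sum((delta.float() ** 2).sum() for delta in update.values()))
    scale = min(1.0, clip_norm / (total_norm.item() + 1e-12))
    return {key: delta * scale for key, delta in update.items()}
```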
Model poisoning attacks here, mainly untargeted attacks, refer to attacks that aim to prevent the model from converging, thus degrading its performance.
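A generic instance of this idea is a sign-flipping update, in which the malicious client submits the negated (and optionally amplified) honest update so that the aggregate moves away from a descent direction. The sketch below is a hedged illustration of this general attack, not a specific attack file in this repository.

```python
import torch

def sign_flip_update(update, scale=1.0):
    """Untargeted model poisoning: negate (and optionally amplify) the honest update.

    update: dict mapping parameter names to delta tensors.
    Illustrative sketch of a generic untargeted attack.
    """
    return {key: -scale * delta for key, delta in update.items()}
```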
Bug reports, feature suggestions, and code contributions are welcome. Please open an issue or submit a pull request if you encounter any problems or have suggestions.
If you use FLPoison in your work, please cite our paper:
@misc{sokflpoison,
title={SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning},
author={Heyi Zhang and Yule Liu and Xinlei He and Jun Wu and Tianshuo Cong and Xinyi Huang},
year={2025},
eprint={2502.03801},
archivePrefix={arXiv},
primaryClass={cs.CR},
url={https://arxiv.org/abs/2502.03801},
}