
# Welcome to FLPoison


Check the Wiki to get started with the project.

## Features

A PyTorch implementation of poisoning attacks and defenses in federated learning.

| Category | Details |
| --- | --- |
| FL Algorithms | FedAvg, FedSGD, FedOpt (see `fl/algorithms`) |
| Data Distribution | Balanced IID, Class-imbalanced IID, Quantity-imbalanced Dirichlet Non-IID, (Quantity-Balanced/-Imbalanced) Pathological Non-IID (see `data_utils.py`) |
| Datasets | MNIST, FashionMNIST, EMNIST, CIFAR10, CINIC10, CIFAR100 (see `dataset_config.py`) |
| Models | Logistic Regression, SimpleCNN, LeNet5, ResNet series, VGG series |

For the supported dataset-model pairs, see datamodel.pdf.
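
As an illustration of the quantity-imbalanced Dirichlet non-IID distribution listed above, here is a minimal, self-contained sketch of Dirichlet label partitioning (a hypothetical helper; the framework's actual implementation lives in `data_utils.py` and may differ):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    """Split sample indices across clients with a Dirichlet prior (hypothetical helper).
    Smaller alpha -> more skewed per-client class proportions (more non-IID)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        # Draw this class's proportion for each client, then cut the index list accordingly
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for cid, part in enumerate(np.split(cls_idx, cuts)):
            client_indices[cid].extend(part.tolist())
    return [np.array(idx) for idx in client_indices]

# Example: partition a toy 10-class label vector across 10 clients
toy_labels = np.random.randint(0, 10, size=5000)
parts = dirichlet_partition(toy_labels, num_clients=10, alpha=0.5)
print([len(p) for p in parts])  # quantity-imbalanced client shards
```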

## Federated Learning Algorithms

| Name | Source File | Paper |
| --- | --- | --- |
| FedSGD | `fedsgd.py` | Communication-Efficient Learning of Deep Networks from Decentralized Data - AISTATS '17 |
| FedAvg | `fedavg.py` | Communication-Efficient Learning of Deep Networks from Decentralized Data - AISTATS '17 |
| FedOpt | `fedopt.py` | Adaptive Federated Optimization - arXiv '20, ICLR '21 |
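
For orientation, here is a minimal sketch of the FedAvg aggregation step, weighting each client's model by its local dataset size; this illustrates the algorithm itself, not the exact interface of `fedavg.py`:

```python
import torch

def fedavg_aggregate(client_states, client_sizes):
    """Return the size-weighted average of client state_dicts (minimal sketch).
    Note: integer buffers (e.g., BatchNorm counters) are averaged as floats here."""
    total = float(sum(client_sizes))
    weights = [n / total for n in client_sizes]
    return {
        key: sum(w * state[key].float() for w, state in zip(weights, client_states))
        for key in client_states[0]
    }

# Usage sketch: global_model.load_state_dict(fedavg_aggregate(states, sizes))
```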

## Attacks and Defenses

For each attack and defense, the applicable algorithms include the base algorithm used in the original paper, plus others the paper does not mention explicitly but that apply based on the described principles. Brackets [ ] mark algorithms that require modifications for compatibility; these modifications are also implemented in this framework. In short, we implemented and adapted all attacks and defenses to be compatible with three commonly used FL algorithms: FedSGD, FedAvg, and FedOpt (see the sketch below).
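
The adaptation hinges on what clients send: raw gradients under FedSGD, updated weights under FedAvg, and weight deltas (pseudo-gradients) under FedOpt. A hedged sketch of the normalization that lets one attack/defense implementation operate on all three, assuming a single local step for clarity (function and argument names are illustrative, not the framework's API):

```python
def to_pseudo_gradient(update, global_state, algorithm, lr=1.0):
    """Normalize a client update into a gradient-like form (hypothetical helper).

    FedSGD clients send gradients g; FedAvg clients send updated weights
    w_new = w_global - lr * g; FedOpt clients send deltas w_new - w_global.
    """
    if algorithm == "FedSGD":   # already a gradient
        return update
    if algorithm == "FedAvg":   # recover g = (w_global - w_new) / lr
        return {k: (global_state[k] - update[k]) / lr for k in update}
    if algorithm == "FedOpt":   # recover g = -(w_new - w_global) / lr
        return {k: -update[k] / lr for k in update}
    raise ValueError(f"unknown algorithm: {algorithm}")
```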

### Data Poisoning Attacks (DPAs)

Data poisoning attacks, here mainly targeted attacks, aim to embed backdoors or bias into the model, misleading it into producing the attacker's intended prediction. A minimal label-flipping sketch follows the table.

| Name | Source File | Paper | Base Algorithm | Applicable Algorithms |
| --- | --- | --- | --- | --- |
| Neurotoxin | `neurotoxin.py` | Neurotoxin: Durable Backdoors in Federated Learning - ICML '22 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| Edge-case Backdoor | `edgecase.py` | Attack of the Tails: Yes, You Really Can Backdoor Federated Learning - NeurIPS '20 | FedOpt | FedSGD, FedOpt, [FedAvg] |
| Model Replacement Attack (Scaling Attack) | `modelreplacement.py` | How to Backdoor Federated Learning - AISTATS '20 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| Alternating Minimization | `altermin.py` | Analyzing Federated Learning Through an Adversarial Lens - ICML '19 | FedOpt | FedSGD, FedOpt, [FedAvg] |
| DBA | `dba.py` | DBA: Distributed Backdoor Attacks Against Federated Learning - ICLR '20 | FedOpt | FedSGD, FedOpt, [FedAvg] |
| BadNets | `badnets.py` | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain - NIPS-WS '17 | Centralized ML | [FedSGD, FedOpt, FedAvg] |
| Label Flipping Attack | `labelflipping.py` | Poisoning Attacks against Support Vector Machines - ICML '12 | Centralized ML | [FedSGD, FedOpt, FedAvg] |
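
As promised above, here is a minimal sketch of the simplest DPA in the table, label flipping; this is a hypothetical wrapper for illustration, not the framework's `labelflipping.py`:

```python
from torch.utils.data import Dataset

class LabelFlippedDataset(Dataset):
    """Wrap a client's dataset so every source-class sample is relabeled
    as the attacker's target class (hypothetical wrapper)."""

    def __init__(self, base, source_class=1, target_class=7):
        self.base = base
        self.source_class = source_class
        self.target_class = target_class

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        x, y = self.base[i]
        if int(y) == self.source_class:
            y = self.target_class
        return x, y
```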

### Defenses Against DPAs

| Name | Source File | Paper | Base Algorithm | Applicable Algorithms |
| --- | --- | --- | --- | --- |
| FLAME | `flame.py` | FLAME: Taming Backdoors in Federated Learning - USENIX Security '22 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| DeepSight | `deepsight.py` | DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection - NDSS '22 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| CRFL | `crfl.py` | CRFL: Certifiably Robust Federated Learning against Backdoor Attacks - ICML '21 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| NormClipping | `normclipping.py` | Can You Really Backdoor Federated Learning - NeurIPS '20 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| FoolsGold | `foolsgold.py` | The Limitations of Federated Learning in Sybil Settings - RAID '20 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Auror | `auror.py` | Auror: Defending Against Poisoning Attacks in Collaborative Deep Learning Systems - ACSAC '16 | FedSGD | FedSGD, [FedOpt, FedAvg] |
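
As a concrete example of the lightest-weight defense in this table, here is a hedged sketch of server-side norm clipping, which bounds each client update's L2 norm before aggregation; illustrative only, not the exact `normclipping.py` interface:

```python
import torch

def clip_update(update, max_norm=1.0):
    """Scale a client update so its global L2 norm is at most max_norm (sketch)."""
    flat = torch.cat([v.flatten().float() for v in update.values()])
    scale = min(1.0, max_norm / (flat.norm(p=2).item() + 1e-12))
    return {k: v * scale for k, v in update.items()}
```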

### Model Poisoning Attacks (MPAs)

Model poisoning attacks, here mainly untargeted attacks, aim to prevent the model from converging, thus degrading its performance. A minimal sign-flipping sketch follows the table.

| Name | Source File | Paper | Base Algorithm | Applicable Algorithms |
| --- | --- | --- | --- | --- |
| Mimic Attack | `mimic.py` | Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing - ICLR '22 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Min-Max Attack | `min.py` | Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning - NDSS '21 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Min-Sum Attack | `min.py` | Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning - NDSS '21 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Fang Attack (Adaptive Attack) | `fangattack.py` | Local Model Poisoning Attacks to Byzantine-Robust Federated Learning - USENIX Security '20 | FedAvg | [FedSGD, FedOpt], FedAvg |
| IPM Attack | `ipm.py` | Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation - UAI '20 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| ALIE Attack | `alie.py` | A Little Is Enough: Circumventing Defenses For Distributed Learning - NeurIPS '19 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Sign Flipping Attack | `signflipping.py` | Asynchronous Byzantine Machine Learning (the Case of SGD) - ICML '18 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Gaussian (Noise) Attack | `gaussian.py` | Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent - NeurIPS '17 | FedSGD | FedSGD, [FedOpt, FedAvg] |
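
To make the MPA pattern concrete, here is the sketch promised above for the simplest entry, the sign-flipping attack: a Byzantine client sends the negated (optionally scaled) honest update, pushing the aggregate away from the descent direction. Illustrative only, not the framework's `signflipping.py`:

```python
def sign_flipping_attack(honest_update, scale=1.0):
    """Negate (and optionally scale) the honestly computed update (sketch)."""
    return {k: -scale * v for k, v in honest_update.items()}
```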

### Defenses Against MPAs

| Name | Source File | Paper | Base Algorithm | Applicable Algorithms |
| --- | --- | --- | --- | --- |
| LASA | `lasa.py` | Achieving Byzantine-Resilient Federated Learning via Layer-Adaptive Sparsified Model Aggregation - WACV '25 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| FLDetector | `fldetector.py` | FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients - KDD '22 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| SignGuard | `signguard.py` | Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering - ICDCS '22 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Bucketing | `bucketing.py` | Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing - ICLR '22 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| DnC | `dnc.py` | Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning - NDSS '21 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| CenteredClipping | `centeredclipping.py` | Learning from History for Byzantine Robust Optimization - ICML '21 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| FLTrust | `fltrust.py` | FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping - arXiv '20, NDSS '21 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| RFA (Geometric Median) | `rfa.py` | Robust Aggregation for Federated Learning - arXiv '19, TSP '22 | FedAvg | [FedSGD, FedOpt], FedAvg |
| Bulyan | `bulyan.py` | The Hidden Vulnerability of Distributed Learning in Byzantium - ICML '18 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Coordinate-wise Median | `median.py` | Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates - ICML '18 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Trimmed Mean | `trimmedmean.py` | Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates - ICML '18 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Multi-Krum | `multikrum.py` | Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent - NeurIPS '17 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Krum | `krum.py` | Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent - NeurIPS '17 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| SimpleClustering | `simpleclustering.py` | Simple majority-based clustering (no paper) | FedSGD, FedAvg, FedOpt | FedSGD, FedAvg, FedOpt |
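
For reference, here is a hedged sketch of the coordinate-wise median aggregator from the table, which is robust to a minority of outliers in every coordinate; illustrative only, not the exact `median.py` interface:

```python
import torch

def coordinate_wise_median(client_updates):
    """Aggregate by taking the per-coordinate median across client updates (sketch)."""
    return {
        key: torch.stack([u[key].float() for u in client_updates]).median(dim=0).values
        for key in client_updates[0]
    }
```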

## Contributing

Bug reports, feature suggestions, and code contributions are welcome. Please open an issue or submit a pull request if you encounter any problems or have suggestions.

## Citation

If you use FLPoison in your work, please cite our paper:

```bibtex
@misc{sokflpoison,
      title={SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning},
      author={Heyi Zhang and Yule Liu and Xinlei He and Jun Wu and Tianshuo Cong and Xinyi Huang},
      year={2025},
      eprint={2502.03801},
      archivePrefix={arXiv},
      primaryClass={cs.CR},
      url={https://arxiv.org/abs/2502.03801},
}
```
