
Commit 00bc36b

Initial commit
0 parents  commit 00bc36b

27 files changed, +3979 -0 lines changed

.gitignore

+5
@@ -0,0 +1,5 @@
**/.DS_Store
*/__pycache__/
results/*
base_models/*
test.sh

LICENSE.txt

+21
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 Yatong Bai

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md

+99
@@ -0,0 +1,99 @@
# MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers

This is the official code implementation of the preprint paper \
*MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers* \
by [Yatong Bai](https://bai-yt.github.io), [Mo Zhou](https://cdluminate.github.io), [Vishal M. Patel](https://engineering.jhu.edu/faculty/vishal-patel), and [Somayeh Sojoudi](https://www2.eecs.berkeley.edu/Faculty/Homepages/sojoudi.html).

**TL;DR:** MixedNUTS balances clean-data classification accuracy and adversarial robustness without any additional training,
using a mixed classifier that applies nonlinear transformations to the base models' logits.

<img src="main_figure.jpg" alt="MixedNUTS Results" title="Results" width="800"/>

#### Citing our work (BibTeX)

```bibtex
@article{MixedNUTS,
  title={MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers},
  author={Bai, Yatong and Zhou, Mo and Patel, Vishal M. and Sojoudi, Somayeh},
  year={2024}
}
```
## Getting Started

### Model Checkpoints

All robust base classifiers are available on [RobustBench](https://robustbench.github.io).

The ImageNet accurate base classifier is from
the [ConvNeXt-V2](https://github.com/facebookresearch/ConvNeXt-V2) repository and can be downloaded
[here](https://dl.fbaipublicfiles.com/convnext/convnextv2/im22k/convnextv2_large_22k_224_ema.pt).

The CIFAR-10 and -100 accurate base classifiers are fine-tuned from
[BiT](https://github.com/google-research/big_transfer) checkpoints and will be released soon.
Create a `base_models` directory and organize it as follows:
```
base_models
└───cifar10
│   └───cifar10_std_rn152.pt
└───cifar100
│   └───cifar100_std_rn152.pt
└───imagenet
    └───imagenet_std_convnext_v2-l_224.pt
```
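If convenient, the layout above can be prepared with a few lines of Python. This is a minimal sketch: it assumes the ConvNeXt-V2 checkpoint linked above can simply be saved under the file name shown in the tree; adjust it if the scripts expect a converted or differently named file.

```python
# Sketch: create the expected base_models layout and fetch the ImageNet accurate
# base classifier. Assumption: the raw ConvNeXt-V2 EMA checkpoint is stored directly
# under the file name used in the directory listing above.
import urllib.request
from pathlib import Path

root = Path("base_models")
for subdir in ("cifar10", "cifar100", "imagenet"):
    (root / subdir).mkdir(parents=True, exist_ok=True)

url = ("https://dl.fbaipublicfiles.com/convnext/convnextv2/"
       "im22k/convnextv2_large_22k_224_ema.pt")
target = root / "imagenet" / "imagenet_std_convnext_v2-l_224.pt"
if not target.exists():
    urllib.request.urlretrieve(url, str(target))
```

The CIFAR-10 and CIFAR-100 accurate checkpoints can be dropped into the corresponding folders once they are released.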
### Environment
Run the following to install the environment:
```
conda env create -f environment.yml
```

Two additional pip packages are not available on PyPI and need to be installed manually:
```
conda activate nlmc
pip install git+https://github.com/fra31/auto-attack
pip install git+https://github.com/RobustBench/robustbench.git
```
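For reference, the two packages installed above are ordinarily used as follows. This is a generic sketch of their public APIs (loading a RobustBench checkpoint and running standard AutoAttack), not this repository's adaptive evaluation pipeline; the model name matches the CIFAR-10 command below, and 8/255 is the usual RobustBench Linf budget for CIFAR-10.

```python
# Generic usage of the git-installed dependencies (illustrative only; this repository
# ships modified versions of these components, see the Third-party Code section).
import torch
from autoattack import AutoAttack
from robustbench.data import load_cifar10
from robustbench.utils import load_model

device = "cuda" if torch.cuda.is_available() else "cpu"

x_test, y_test = load_cifar10(n_examples=64)          # small CIFAR-10 test batch
model = load_model(model_name="Peng2023Robust",       # robust base model used below
                   dataset="cifar10", threat_model="Linf").to(device).eval()

adversary = AutoAttack(model, norm="Linf", eps=8 / 255, version="standard", device=device)
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=32)
```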
### Replicating Our Results
CIFAR-10:
```
python run_robustbench.py --root_dir base_models --dataset_name cifar10 \
    --rob_model_name Peng2023Robust --std_model_arch rn152 --map_type best \
    --adaptive --n_examples 10000 --batch_size_per_gpu 40 --disable_nonlin_for_grad
```

CIFAR-100:
```
python run_robustbench.py --root_dir base_models --dataset_name cifar100 \
    --rob_model_name Wang2023Better_WRN-70-16 --std_model_arch rn152 --map_type best \
    --adaptive --n_examples 10000 --batch_size_per_gpu 40 --disable_nonlin_for_grad
```

ImageNet:
```
python run_robustbench.py --root_dir base_models --dataset_name imagenet \
    --rob_model_name Liu2023Comprehensive_Swin-L --std_model_arch convnext_v2-l_224 --map_type best \
    --adaptive --n_examples 5000 --batch_size_per_gpu 20 --disable_nonlin_for_grad
```
### Building MixedNUTS with Your Base Classifiers
Please refer to `scripts.sh` for the workflow of constructing MixedNUTS.
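As a rough picture of what the constructed model looks like, the sketch below mixes an accurate classifier and a robust classifier at the probability level, applying a placeholder nonlinear transformation to the robust model's logits first. The transformation and mixing weight here are illustrative stand-ins, not the formulation MixedNUTS actually selects; `scripts.sh` and the paper describe the real construction.

```python
# Illustrative sketch of a nonlinearly mixed classifier (NOT the exact MixedNUTS recipe):
# the robust model's logits pass through a placeholder nonlinearity before the two
# base models' output probabilities are averaged with a fixed weight.
import torch
import torch.nn as nn


class MixedClassifierSketch(nn.Module):
    def __init__(self, accurate_model: nn.Module, robust_model: nn.Module,
                 mix_weight: float = 0.5, scale: float = 1.0, exponent: float = 3.0):
        super().__init__()
        self.accurate_model = accurate_model
        self.robust_model = robust_model
        self.mix_weight = mix_weight    # weight on the robust branch (illustrative)
        self.scale = scale              # placeholder transformation parameters
        self.exponent = exponent

    def transform(self, logits: torch.Tensor) -> torch.Tensor:
        # Placeholder monotone nonlinearity; MixedNUTS tunes its own training-free
        # transformation, which differs from this example.
        z = logits - logits.amax(dim=-1, keepdim=True)
        return self.scale * z.clamp(min=-10.0).abs().pow(self.exponent) * z.sign()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p_accurate = self.accurate_model(x).softmax(dim=-1)
        p_robust = self.transform(self.robust_model(x)).softmax(dim=-1)
        mixed = (1.0 - self.mix_weight) * p_accurate + self.mix_weight * p_robust
        return mixed.log()  # log-probabilities, usable like ordinary logits downstream
```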
## Third-party Code
`adaptive_autoattack.py`, `autopgd_base.py`, and `fab_pt.py` in `adaptive_autoattack` are adapted from [AutoAttack](https://github.com/fra31/auto-attack).

`robust_bench.py` is adapted from [RobustBench](https://github.com/RobustBench/robustbench).

adaptive_autoattack/__init__.py

+1
@@ -0,0 +1 @@
from .adaptive_autoattack import AdaptiveAutoAttack
