create a script to train autoencoderkl #10605

Merged 15 commits on Jan 27, 2025

examples/research_projects/autoencoderkl/README.md (new file, 59 additions, 0 deletions)

# AutoencoderKL training example

## Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then `cd` into the example folder and run:
```bash
pip install -r requirements.txt
```


And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```
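
If you would rather skip the interactive prompts, a default single-process configuration can also be written programmatically. This is a minimal sketch using Accelerate's `write_basic_config` helper (the `mixed_precision` value is only an example; pick whatever matches your hardware):

```python
# Minimal sketch: write a default 🤗 Accelerate config without the interactive prompts.
# The mixed_precision value below is an example; adjust it to your hardware.
from accelerate.utils import write_basic_config

write_basic_config(mixed_precision="no")  # writes the config file to the default Accelerate location
```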

## Training on CIFAR10

The `--validation_image` paths in the command below (`images/bird.jpg`, `images/car.jpg`, `images/dog.jpg`, `images/frog.jpg`) are placeholders and are not shipped with this example; replace them with paths to your own validation images.

```bash
accelerate launch train_autoencoderkl.py \
--pretrained_model_name_or_path stabilityai/sd-vae-ft-mse \
--dataset_name=cifar10 \
--image_column=img \
--validation_image images/bird.jpg images/car.jpg images/dog.jpg images/frog.jpg \
--num_train_epochs 100 \
--gradient_accumulation_steps 2 \
--learning_rate 4.5e-6 \
--lr_scheduler cosine \
--report_to wandb
```
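
After training, it can be useful to sanity-check reconstructions with the fine-tuned autoencoder. The sketch below is one way to do that and makes a few assumptions: the run saved the model with `save_pretrained` to the directory you passed as `--output_dir` (here a placeholder `./autoencoderkl-cifar10`), and `path/to/your_image.png` is any image you want to reconstruct:

```python
# Minimal sketch: encode and decode an image with the fine-tuned AutoencoderKL.
# "./autoencoderkl-cifar10" and "path/to/your_image.png" are placeholders.
import torch
from diffusers import AutoencoderKL
from diffusers.utils import load_image
from torchvision import transforms
from torchvision.utils import save_image

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained("./autoencoderkl-cifar10").to(device)
vae.eval()

image = load_image("path/to/your_image.png").convert("RGB").resize((256, 256))
x = transforms.ToTensor()(image).unsqueeze(0).to(device) * 2.0 - 1.0  # scale to [-1, 1]

with torch.no_grad():
    latents = vae.encode(x).latent_dist.sample()  # posterior sample in latent space
    recon = vae.decode(latents).sample            # decoded image in [-1, 1]

save_image((recon + 1) / 2, "reconstruction.png")  # rescale to [0, 1] before saving
```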

## Training on ImageNet

```bash
accelerate launch train_autoencoderkl.py \
--pretrained_model_name_or_path stabilityai/sd-vae-ft-mse \
--num_train_epochs 100 \
--gradient_accumulation_steps 2 \
--learning_rate 4.5e-6 \
--lr_scheduler cosine \
--report_to wandb \
--mixed_precision bf16 \
--train_data_dir /path/to/ImageNet/train \
--validation_image ./image.png \
--decoder_only
```
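
Since the example starts from `stabilityai/sd-vae-ft-mse`, a common follow-up is to drop the fine-tuned autoencoder into a Stable Diffusion pipeline in place of the original VAE. The sketch below assumes the run saved the model to a placeholder `./autoencoderkl-imagenet` directory and uses `stable-diffusion-v1-5/stable-diffusion-v1-5` as an example base model:

```python
# Minimal sketch: swap the fine-tuned VAE into a Stable Diffusion pipeline.
# "./autoencoderkl-imagenet" and the base model id are placeholders/examples.
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

vae = AutoencoderKL.from_pretrained("./autoencoderkl-imagenet", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("sample.png")
```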

examples/research_projects/autoencoderkl/requirements.txt (new file, 15 additions, 0 deletions)

accelerate>=0.16.0
bitsandbytes
datasets
huggingface_hub
lpips
numpy
packaging
Pillow
taming_transformers
torch
torchvision
tqdm
transformers
wandb
xformers