Commit f4fc750

[Docs] Fix typos (huggingface#7131)
* Add copyright notice to relevant files and fix typos
* Set `timestep_spacing` parameter of `StableDiffusionXLPipeline`'s scheduler to `'trailing'`.
* Update `StableDiffusionXLPipeline.from_single_file` by including `EulerAncestralDiscreteScheduler` with the `timestep_spacing="trailing"` param.
* Update model loading method in SDXL Turbo documentation
1 parent 8f2d13c commit f4fc750

4 files changed (+39 -13 lines changed)

Diff for: docs/source/en/api/models/consistency_decoder_vae.md (+13 -1)

@@ -1,6 +1,18 @@
+<!--Copyright 2024 The HuggingFace Team. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License.
+-->
+
 # Consistency Decoder

-Consistency decoder can be used to decode the latents from the denoising UNet in the [`StableDiffusionPipeline`]. This decoder was introduced in the [DALL-E 3 technical report](https://openai.com/dall-e-3).
+Consistency decoder can be used to decode the latents from the denoising UNet in the [`StableDiffusionPipeline`]. This decoder was introduced in the [DALL-E 3 technical report](https://openai.com/dall-e-3).

 The original codebase can be found at [openai/consistencydecoder](https://github.com/openai/consistencydecoder).

Diff for: docs/source/en/api/pipelines/stable_diffusion/sdxl_turbo.md (+1 -1)

@@ -21,7 +21,7 @@ The abstract from the paper is:
 ## Tips

 - SDXL Turbo uses the exact same architecture as [SDXL](./stable_diffusion_xl), which means it also has the same API. Please refer to the [SDXL](./stable_diffusion_xl) API reference for more details.
-- SDXL Turbo should disable guidance scale by setting `guidance_scale=0.0`
+- SDXL Turbo should disable guidance scale by setting `guidance_scale=0.0`.
 - SDXL Turbo should use `timestep_spacing='trailing'` for the scheduler and use between 1 and 4 steps.
 - SDXL Turbo has been trained to generate images of size 512x512.
 - SDXL Turbo is open-access, but not open-source meaning that one might have to buy a model license in order to use it for commercial applications. Make sure to read the [official model card](https://huggingface.co/stabilityai/sdxl-turbo) to learn more.
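The second and third tips correspond to a concrete pipeline configuration. As a minimal illustrative sketch (not part of this commit; the prompt is arbitrary and a CUDA device is assumed):

```py
import torch
from diffusers import AutoPipelineForText2Image, EulerAncestralDiscreteScheduler

# Load SDXL Turbo with the same API as SDXL (first tip); assumes a CUDA device.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Trailing timestep spacing for the scheduler, as the tips recommend
# (this mirrors the from_single_file change made elsewhere in this commit).
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(
    pipeline.scheduler.config, timestep_spacing="trailing"
)

# Guidance disabled, 1-4 steps, and the trained 512x512 resolution.
image = pipeline(
    "an astronaut riding a horse", guidance_scale=0.0, num_inference_steps=1, height=512, width=512
).images[0]
```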

Diff for: docs/source/en/api/schedulers/consistency_decoder.md (+14 -2)

@@ -1,9 +1,21 @@
+<!--Copyright 2024 The HuggingFace Team. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License.
+-->
+
 # ConsistencyDecoderScheduler

-This scheduler is a part of the [`ConsistencyDecoderPipeline`] and was introduced in [DALL-E 3](https://openai.com/dall-e-3).
+This scheduler is a part of the [`ConsistencyDecoderPipeline`] and was introduced in [DALL-E 3](https://openai.com/dall-e-3).

 The original codebase can be found at [openai/consistency_models](https://github.com/openai/consistency_models).


 ## ConsistencyDecoderScheduler
-[[autodoc]] schedulers.scheduling_consistency_decoder.ConsistencyDecoderScheduler
+[[autodoc]] schedulers.scheduling_consistency_decoder.ConsistencyDecoderScheduler

Diff for: docs/source/en/using-diffusers/sdxl_turbo.md (+11 -9)
@@ -31,29 +31,31 @@ Before you begin, make sure you have the following libraries installed:
 Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the [`~StableDiffusionXLPipeline.from_pretrained`] method:

 ```py
-from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image
+from diffusers import AutoPipelineForText2Image
 import torch

 pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16")
 pipeline = pipeline.to("cuda")
 ```

-You can also use the [`~StableDiffusionXLPipeline.from_single_file`] method to load a model checkpoint stored in a single file format (`.ckpt` or `.safetensors`) from the Hub or locally:
+You can also use the [`~StableDiffusionXLPipeline.from_single_file`] method to load a model checkpoint stored in a single file format (`.ckpt` or `.safetensors`) from the Hub or locally. For this loading method, you need to set `timestep_spacing="trailing"` (feel free to experiment with the other scheduler config values to get better results):

 ```py
-from diffusers import StableDiffusionXLPipeline
+from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler
 import torch

 pipeline = StableDiffusionXLPipeline.from_single_file(
-    "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors", torch_dtype=torch.float16)
+    "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors",
+    torch_dtype=torch.float16, variant="fp16")
 pipeline = pipeline.to("cuda")
+pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config, timestep_spacing="trailing")
 ```

 ## Text-to-image

 For text-to-image, pass a text prompt. By default, SDXL Turbo generates a 512x512 image, and that resolution gives the best results. You can try setting the `height` and `width` parameters to 768x768 or 1024x1024, but you should expect quality degradations when doing so.

-Make sure to set `guidance_scale` to 0.0 to disable, as the model was trained without it. A single inference step is enough to generate high quality images.
+Make sure to set `guidance_scale` to 0.0 to disable, as the model was trained without it. A single inference step is enough to generate high quality images.
 Increasing the number of steps to 2, 3 or 4 should improve image quality.

 ```py
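The body of the text-to-image code block above is unchanged by this commit, so the diff collapses it after the opening fence. Purely as an illustration of the paragraph above (guidance disabled, a single step, and an optional larger resolution at some quality cost), a minimal sketch that reuses the `pipeline` object from the preceding snippet; the prompt is arbitrary:

```py
# Illustrative only: `pipeline` is the SDXL Turbo pipeline loaded above.
prompt = "a low-poly render of a mountain village at sunrise"

# guidance_scale=0.0 disables classifier-free guidance; one step already gives good results.
image = pipeline(prompt, guidance_scale=0.0, num_inference_steps=1).images[0]

# Optional: a larger canvas with a few more steps, expecting some quality degradation
# since the model was trained at 512x512.
image_hires = pipeline(prompt, guidance_scale=0.0, num_inference_steps=4, height=768, width=768).images[0]
```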
@@ -75,7 +77,7 @@ image

 ## Image-to-image

-For image-to-image generation, make sure that `num_inference_steps * strength` is larger or equal to 1.
+For image-to-image generation, make sure that `num_inference_steps * strength` is larger or equal to 1.
 The image-to-image pipeline will run for `int(num_inference_steps * strength)` steps, e.g. `0.5 * 2.0 = 1` step in
 our example below.
@@ -84,14 +86,14 @@ from diffusers import AutoPipelineForImage2Image
 from diffusers.utils import load_image, make_image_grid

 # use from_pipe to avoid consuming additional memory when loading a checkpoint
-pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda")
+pipeline_image2image = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda")

 init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
 init_image = init_image.resize((512, 512))

 prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"

-image = pipeline(prompt, image=init_image, strength=0.5, guidance_scale=0.0, num_inference_steps=2).images[0]
+image = pipeline_image2image(prompt, image=init_image, strength=0.5, guidance_scale=0.0, num_inference_steps=2).images[0]
 make_image_grid([init_image, image], rows=1, cols=2)
 ```

@@ -101,7 +103,7 @@ make_image_grid([init_image, image], rows=1, cols=2)

 ## Speed-up SDXL Turbo even more

-- Compile the UNet if you are using PyTorch version 2 or better. The first inference run will be very slow, but subsequent ones will be much faster.
+- Compile the UNet if you are using PyTorch version 2.0 or higher. The first inference run will be very slow, but subsequent ones will be much faster.

 ```py
 pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
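The file's snippet ends at the compile call shown above. As a minimal sketch of how that line sits in a full script (the `pipe` name follows the doc; the loading code and prompt are assumptions, and PyTorch 2.0+ plus a CUDA device are required):

```py
import torch
from diffusers import AutoPipelineForText2Image

# Loading mirrors the earlier from_pretrained example; any SDXL Turbo pipeline works here.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Compile the UNet once; the first call pays the compilation cost, later calls are much faster.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

# First run triggers compilation (slow); subsequent runs reuse the compiled graph.
image = pipe("an owl perched on a mossy branch", guidance_scale=0.0, num_inference_steps=1).images[0]
```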
