
Commit 8a21245: Add SDXL turbo example.
1 parent 3c76203

File tree: 4 files changed (+14, -1)

README.md (+2)

@@ -38,6 +38,8 @@ Here are some more advanced examples:
 
 [LCM](lcm)
 
+[SDXL Turbo](sdturbo)
+
 [Video Models](video)
 
 #### The [Node Guide (WIP)](https://blenderneko.github.io/ComfyUI-docs/) documents what each node does.

sdturbo/README.md (+11, new file)

@@ -0,0 +1,11 @@
+# SDXL Turbo Examples
+
+SDXL Turbo is an SDXL model that can generate consistent images in a single step. You can use more steps to increase the quality. The proper way to use it is with the new SDTurboScheduler node, but it might also work with the regular schedulers.
+
+Here is the link to [download the official SDXL Turbo checkpoint](https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors).
+
+Here is a workflow for using it:
+
+![Example](sdxlturbo_example.png)
+
+Save this image, then load it or drag it onto ComfyUI to get the workflow. I then recommend enabling Extra Options -> Auto Queue in the interface. Then press "Queue Prompt" once and start writing your prompt.
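The "Queue Prompt" flow described above can also be driven programmatically. Here is a rough stdlib-only sketch, assuming a ComfyUI server on its default port and ComfyUI's standard `/prompt` HTTP endpoint; the `workflow_stub` dict is a hypothetical placeholder, not the real SDXL Turbo graph (export the actual graph from the interface instead):

```python
import json
import urllib.request

def build_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow dict the way ComfyUI's /prompt endpoint expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> None:
    """Queue one generation -- the programmatic equivalent of pressing "Queue Prompt"."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Placeholder fragment only (node id and inputs are illustrative): a real
# graph exported from ComfyUI would also contain loader, prompt, and sampler
# nodes wired to the SDTurboScheduler mentioned in the README above.
workflow_stub = {"3": {"class_type": "SDTurboScheduler",
                       "inputs": {"steps": 1, "denoise": 1.0}}}
```

Calling `queue_prompt` in a loop would approximate the Auto Queue behavior the README recommends.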

sdturbo/sdxlturbo_example.png (354 KB, new file)

video/README.md (+1, -1)

@@ -5,7 +5,7 @@
 As of writing this there are two image to video checkpoints. Here are the official checkpoints for [the one tuned to generate 14 frame videos](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid/blob/main/svd.safetensors) and [the one for 25 frame videos](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/blob/main/svd_xt.safetensors). Put them in the ComfyUI/models/checkpoints folder.
 
 
-The most basic way of using the image to video model is by giving it an init image like in the folowing workflow that uses the 14 frame model.
+The most basic way of using the image to video model is by giving it an init image like in the following workflow that uses the 14 frame model.
 
 You can download this webp animated image and load it or drag it on [ComfyUI](https://github.com/comfyanonymous/ComfyUI) to get the workflow.
 
 ![Example](image_to_video.webp)
