SDXL Turbo is an SDXL model that can generate consistent images in a single step. You can use more steps to increase the quality. The proper way to use it is with the new SDTurboScheduler node, but it might also work with the regular schedulers.
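
If you want to experiment with the same single-step idea outside ComfyUI, here is a minimal sketch using the Hugging Face diffusers library (an assumption; this README's workflow uses ComfyUI nodes and the SDTurboScheduler instead). It only illustrates the single-step, guidance-free sampling that SDXL Turbo is distilled for.

```python
# Minimal sketch, not part of the ComfyUI workflow: assumes the `diffusers`
# and `torch` packages are installed and a CUDA GPU is available.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# SDXL Turbo is distilled for very few steps and does not use classifier-free
# guidance, so the guidance scale is set to 0.0.
image = pipe(
    prompt="a cinematic photo of a red fox in the snow",  # example prompt
    num_inference_steps=1,   # a single step; increase for more quality
    guidance_scale=0.0,
).images[0]
image.save("sdxl_turbo_output.png")
```
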
Here is the link to [download the official SDXL turbo checkpoint](https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors)

Here is a workflow for using it:

Save this image, then load it or drag it on ComfyUI to get the workflow. I then recommend enabling Extra Options -> Auto Queue in the interface. Then press "Queue Prompt" once and start writing your prompt.
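
The reason dragging the image in restores the workflow is that ComfyUI embeds the workflow JSON in the image's metadata. A rough sketch of inspecting it (the filename is hypothetical, and Pillow is assumed to be installed):

```python
# Rough sketch: reads the workflow JSON that ComfyUI stores in PNG metadata.
# "sdxl_turbo_example.png" is a hypothetical filename for the saved image.
import json
from PIL import Image

img = Image.open("sdxl_turbo_example.png")
workflow = json.loads(img.info["workflow"])  # ComfyUI's UI-format workflow
print(f"workflow contains {len(workflow['nodes'])} nodes")
```
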
As of this writing there are two image to video checkpoints. Here are the official checkpoints for [the one tuned to generate 14 frame videos](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid/blob/main/svd.safetensors) and [the one for 25 frame videos](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/blob/main/svd_xt.safetensors). Put them in the ComfyUI/models/checkpoints folder.
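
If you prefer to fetch them from a script instead of the browser, a sketch along these lines should work (assuming the huggingface_hub package is installed and that ComfyUI sits in the current directory; adjust the path to your install):

```python
# Sketch: downloads both official SVD checkpoints into the ComfyUI
# checkpoints folder. Adjust "ComfyUI/models/checkpoints" to your install.
from huggingface_hub import hf_hub_download

checkpoints = [
    ("stabilityai/stable-video-diffusion-img2vid", "svd.safetensors"),        # 14 frame model
    ("stabilityai/stable-video-diffusion-img2vid-xt", "svd_xt.safetensors"),  # 25 frame model
]

for repo_id, filename in checkpoints:
    hf_hub_download(repo_id=repo_id, filename=filename,
                    local_dir="ComfyUI/models/checkpoints")
```
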
The most basic way of using the image to video model is by giving it an init image like in the following workflow that uses the 14 frame model.
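
For reference, here is a minimal sketch of the same idea outside ComfyUI, using the diffusers StableVideoDiffusionPipeline (an assumption; the init image path and output settings are placeholders, not part of the workflow above):

```python
# Minimal sketch, not the ComfyUI workflow: assumes `diffusers`, `torch`,
# and a GPU with enough VRAM. Uses the 14 frame img2vid checkpoint.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

init_image = load_image("init_image.png").resize((1024, 576))  # placeholder init image

# decode_chunk_size trades VRAM for speed when decoding the video frames.
frames = pipe(init_image, decode_chunk_size=4).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```
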
You can download this webp animated image and load it or drag it on [ComfyUI](https://github.com/comfyanonymous/ComfyUI) to get the workflow.