README.md (+2 −2)
@@ -77,7 +77,7 @@ Please refer to the [How to use Stable Diffusion in Apple Silicon](https://huggi
## Quickstart
- Generating outputs is super easy with 🤗 Diffusers. To generate an image from text, use the `from_pretrained` method to load any pretrained diffusion model (browse the [Hub](https://huggingface.co/models?library=diffusers&sort=downloads) for 22000+ checkpoints):
+ Generating outputs is super easy with 🤗 Diffusers. To generate an image from text, use the `from_pretrained` method to load any pretrained diffusion model (browse the [Hub](https://huggingface.co/models?library=diffusers&sort=downloads) for 25.000+ checkpoints):
```python
from diffusers import DiffusionPipeline
@@ -219,7 +219,7 @@ Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz9
- TCD-LoRA also supports other LoRAs trained on different styles. For example, let's load the [TheLastBen/Papercut_SDXL](https://huggingface.co/TheLastBen/Papercut_SDXL) LoRA and fuse it with the TCD-LoRA with the [`~loaders.UNet2DConditionLoadersMixin.set_adapters`] method.
+ TCD-LoRA also supports other LoRAs trained on different styles. For example, let's load the [TheLastBen/Papercut_SDXL](https://huggingface.co/TheLastBen/Papercut_SDXL) LoRA and fuse it with the TCD-LoRA with the [`~loaders.UNet2DConditionLoadersMixin.set_adapters`] method.
> [!TIP]
> Check out the [Merge LoRAs](merge_loras) guide to learn more about efficient merging methods.
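Conceptually, the `set_adapters` method referenced above blends multiple LoRAs by scaling each adapter's weight deltas and summing them. A toy illustration of that weighted combination (plain Python, not the actual PEFT implementation; the adapter names and numbers are made up):

```python
def combine_adapters(deltas, weights):
    """Blend per-adapter weight deltas into one update, scaled per adapter."""
    n = len(next(iter(deltas.values())))
    combined = [0.0] * n
    for name, delta in deltas.items():
        for i, d in enumerate(delta):
            combined[i] += weights[name] * d
    return combined

# e.g. an equal-weight blend of a "tcd" and a "papercut" adapter
blended = combine_adapters(
    {"tcd": [0.2, -0.1], "papercut": [0.4, 0.3]},
    {"tcd": 1.0, "papercut": 1.0},
)
```

Lowering one adapter's weight shifts the result toward the other style, which is why the guide suggests experimenting with the per-adapter weights.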
- The inference parameters in this example might not work for all examples, so we recommend you to try different values for `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale` and `cross_attention_kwargs` parameters and choose the best one.
+ The inference parameters in this example might not work for all examples, so we recommend you to try different values for `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale` and `cross_attention_kwargs` parameters and choose the best one.
</Tip>
</hfoption>
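The parameter sweep suggested in the tip above can be automated with a small grid helper (a hypothetical utility, not part of Diffusers; the candidate values are placeholders):

```python
from itertools import product

def parameter_grid(**param_values):
    """Yield a dict for every combination of the given parameter values."""
    names = list(param_values)
    for combo in product(*param_values.values()):
        yield dict(zip(names, combo))

# every combination to pass into the pipeline call, one at a time
grid = list(parameter_grid(
    num_inference_steps=[4, 8],
    guidance_scale=[0.0, 0.5],
    controlnet_conditioning_scale=[0.5, 1.0],
))
# 2 * 2 * 2 = 8 candidate settings
```

Each dict in `grid` can then be unpacked into the pipeline call with `pipe(prompt, **params)` to compare outputs side by side.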
@@ -350,7 +350,7 @@ from diffusers import StableDiffusionXLPipeline
from diffusers.utils import load_image, make_image_grid
# processing_res=768, # (optional) Maximum resolution of processing. If set to 0: will not resize at all. Defaults to 768.
# match_input_res=True, # (optional) Resize depth prediction to match input resolution.
# batch_size=0, # (optional) Inference batch size, no bigger than `num_ensemble`. If set to 0, the script will automatically decide the proper batch size. Defaults to 0.
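The `batch_size=0` behavior described in the comments above (automatic selection, never larger than `num_ensemble`) could be sketched like this (a hypothetical reconstruction, not the script's actual code; the default of 4 is an assumption):

```python
def resolve_batch_size(batch_size, num_ensemble, default=4):
    """Return an effective batch size capped by num_ensemble; 0 means auto."""
    if batch_size == 0:
        # auto mode: pick a default, but never exceed the ensemble size
        return min(default, num_ensemble)
    # explicit values are also capped, per "no bigger than num_ensemble"
    return min(batch_size, num_ensemble)
```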
@@ -1032,7 +1032,7 @@ image = pipe().images[0]
Make sure you have @crowsonkb's <https://github.com/crowsonkb/k-diffusion> installed:
-```
+```sh
pip install k-diffusion
```
@@ -1854,13 +1854,13 @@ To use this pipeline, you need to:
You can simply use pip to install IPEX with the latest version.
-```python
+```sh
python -m pip install intel_extension_for_pytorch
```
**Note:** To install a specific version, run with the following command: