Describe the bug
I am running the same ControlNet inpainting task in both the Stable Diffusion Web UI and Hugging Face Diffusers. With the same model, prompts, and settings, the Diffusers pipeline produces noticeably worse results. I would like the Diffusers output to match what the Stable Diffusion Web UI produces.
Reproduction
```python
import torch
from diffusers import (ControlNetModel, DDIMScheduler,
                       StableDiffusionControlNetInpaintPipeline)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16)
pipe.to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# generate image (prompts, generator, and input images are defined elsewhere)
output = pipe(prompt=pos_prompt, negative_prompt=neg_prompt, num_inference_steps=45,
              generator=generator, image=init_image, mask_image=mask_image,
              control_image=ref_image, strength=0.5, guidance_scale=8.0,
              controlnet_conditioning_scale=1.0, padding_mask_crop=None)
```
Logs
The code runs without errors, so there are no logs to share; the problem is that the output quality is noticeably worse than the Web UI result.
System Info
- 🤗 Diffusers version: 0.33.1
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Running on Google Colab?: Yes
- Python version: 3.11.12
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): 0.10.6 (gpu)
- Jax version: 0.5.2
- JaxLib version: 0.5.1
- Huggingface_hub version: 0.31.2
- Transformers version: 4.51.3
- Accelerate version: 1.6.0
- PEFT version: 0.15.2
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: Tesla T4, 15360 MiB
- Using GPU in script?: Yes (Tesla T4, via `pipe.to("cuda")`)
- Using distributed or parallel set-up in script?: No
Who can help?
No response