v2.5.20: Merge pull request #402 from AInVFX/main

Released by @adrientoupet on 12 Dec 05:48 · a1486a3
  • ⚡ Expanded attention backends - Full support for Flash Attention 2 (Ampere+), Flash Attention 3 (Hopper+), SageAttention 2, and SageAttention 3 (Blackwell/RTX 50xx), with an automatic fallback chain to PyTorch SDPA when a backend is unavailable (based on a PR by @naxci1 - thank you!); see the backend-selection sketch after this list
  • 🍎 macOS/Apple Silicon compatibility - Replaced MPS autocast with explicit dtype conversion throughout the VAE and DiT pipelines, resolving hangs and crashes on M-series Macs (see the dtype sketch after this list). BlockSwap now auto-disables with a warning, since offloading blocks gains nothing when CPU and GPU share unified memory
  • 🛡️ Flash Attention graceful fallback - Added compatibility shims for corrupted or partially installed flash_attn/xformers DLLs, so a broken binary no longer crashes startup (see the guarded-import sketch after this list)
  • 🛡️ AMD ROCm: bitsandbytes conflict fix - Prevents kernel registration errors when diffusers attempts to re-import a broken bitsandbytes installation (see the import-blacklist sketch after this list)
  • 📦 ComfyUI Manager: macOS classifier fix - Removed the NVIDIA CUDA classifier that caused false "GPU not supported" warnings on macOS
  • 📚 Documentation updates - Updated the README with attention backend details and BlockSwap macOS notes, and clarified the model caching descriptions
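
Backend selection: a minimal sketch of how a fallback chain like the one above could be wired up. The function name, the priority order, and the module names (`flash_attn_interface` for FA3, `flash_attn` for FA2, `sageattention`) are assumptions; the compute-capability gates mirror the bullet (FA2: Ampere/sm80+, FA3: Hopper/sm90+), but the pack's actual checks may differ.

```python
import importlib.util
import torch

def pick_attention_backend() -> str:
    """Return the best importable attention backend, ending at PyTorch
    SDPA, which is always available. Hypothetical sketch: labels, module
    names, and capability gates are assumptions, not the pack's code."""
    cc = torch.cuda.get_device_capability() if torch.cuda.is_available() else (0, 0)
    candidates = [
        # (label, importable module, minimum compute capability)
        ("flash_attn_3", "flash_attn_interface", (9, 0)),  # Hopper+
        ("flash_attn_2", "flash_attn", (8, 0)),            # Ampere+
        ("sage_attention", "sageattention", (8, 0)),       # SageAttention 2/3
    ]
    for label, module, min_cc in candidates:
        if cc >= min_cc and importlib.util.find_spec(module) is not None:
            return label
    return "sdpa"  # torch.nn.functional.scaled_dot_product_attention
```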
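Explicit dtype conversion on MPS: the idea is to cast the module and its inputs once, so every op runs in a single well-defined dtype instead of letting autocast decide per-op. A minimal sketch, assuming placeholder `vae` and `latents` objects:

```python
import torch

def decode_on_mps(vae, latents, dtype=torch.float16):
    """Cast explicitly instead of wrapping the call in
    torch.autocast(device_type="mps"), which the release notes say
    hung or crashed on M-series Macs. `vae`/`latents` are placeholders."""
    device = torch.device("mps")
    vae = vae.to(device=device, dtype=dtype)
    latents = latents.to(device=device, dtype=dtype)
    with torch.no_grad():
        return vae.decode(latents)
```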
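Guarded import: the graceful fallback amounts to catching everything a broken install can raise (ImportError for a missing package, OSError for a corrupted or partial DLL) and routing attention through SDPA instead. A sketch under those assumptions; the `attention` wrapper and its layout handling are illustrative, not the pack's API:

```python
import torch.nn.functional as F

try:
    # Can raise ImportError (missing) or OSError (corrupted/partial DLL).
    from flash_attn import flash_attn_func
except Exception:
    flash_attn_func = None

def attention(q, k, v):
    """q, k, v: (batch, heads, seq, dim); fp16/bf16 on CUDA for flash_attn."""
    if flash_attn_func is not None:
        # flash_attn expects (batch, seq, heads, dim), so transpose in and out.
        out = flash_attn_func(q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2))
        return out.transpose(1, 2)
    return F.scaled_dot_product_attention(q, k, v)
```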
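Import blacklist: one way to stop diffusers from re-importing a broken bitsandbytes (and re-triggering its kernel registration) is to probe the import once and, on failure, set the sys.modules entry to None, which makes any later import raise a plain ImportError. A hypothetical sketch of that approach; the function name and the exact guard are assumptions:

```python
import sys

def block_broken_bitsandbytes():
    """Probe bitsandbytes once; if it cannot import cleanly, blacklist it
    so downstream libraries (e.g. diffusers) get an immediate ImportError
    instead of re-running kernel registration."""
    if "bitsandbytes" in sys.modules:
        return  # already loaded (or already blacklisted); leave it alone
    try:
        import bitsandbytes  # noqa: F401
    except Exception as err:
        print(f"bitsandbytes unusable ({err}); disabling it for this session")
        # A None entry makes `import bitsandbytes` raise ImportError (Python 3).
        sys.modules["bitsandbytes"] = None
```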