
Releases: mit-han-lab/nunchaku

Nunchaku v0.1.4 Release

08 Mar 00:34

🚀 Nunchaku v0.1.4 Released!

This update brings significant improvements and bug fixes:

  • Enhanced Low-Memory Inference: Added support for a 4-bit text encoder and per-layer CPU offloading, reducing FLUX’s minimum memory requirement to just 4 GiB while maintaining a 2–3× speedup (see the usage sketch below). (#131, #144, #119)
  • Improved Generation Stability: Fixed issues affecting any-resolution generation. (#129, #43)
  • LoRA Compatibility Fixes: Resolved various LoRA-related issues. (#135, #108)
  • Memory Management Improvements: Addressed pin memory issues for better stability. (#49, #141, #154)
  • ComfyUI Integration Fixes: Resolved directory-related issues. (#145)

Latest Linux wheels are available at https://huggingface.co/mit-han-lab/nunchaku/tree/main.
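
A minimal sketch of how the low-memory path could be wired up with diffusers. The `NunchakuFluxTransformer2dModel` / `NunchakuT5EncoderModel` class names, the `offload` flag, and the model repository ids are assumptions and may differ from the actual API:

```python
# Hedged sketch: INT4 FLUX transformer with per-layer CPU offloading plus a
# 4-bit T5 text encoder, with diffusers offloading the remaining modules.
# The nunchaku class/argument names below are assumptions, not confirmed here.
import torch
from diffusers import FluxPipeline
from nunchaku import NunchakuFluxTransformer2dModel, NunchakuT5EncoderModel  # assumed names

transformer = NunchakuFluxTransformer2dModel.from_pretrained(
    "mit-han-lab/svdq-int4-flux.1-dev",
    offload=True,  # per-layer CPU offloading (assumed flag)
)
text_encoder_2 = NunchakuT5EncoderModel.from_pretrained(
    "mit-han-lab/svdq-flux.1-t5"  # 4-bit T5 text encoder (assumed repo id)
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    text_encoder_2=text_encoder_2,
    torch_dtype=torch.bfloat16,
)
pipe.enable_sequential_cpu_offload()  # keep non-quantized modules off the GPU until needed

image = pipe("A cat holding a sign that says hello", num_inference_steps=28).images[0]
image.save("flux_lowmem.png")
```

The combination of 4-bit weights for both the transformer and the text encoder with per-layer offloading is what brings the minimum GPU memory down to roughly 4 GiB.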

Nunchaku v0.1.3 Release

25 Feb 18:51

Added automatic LoRA format detection for LoRA conversion.
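
A brief sketch of what this looks like from the user side, under the assumption of a `NunchakuFluxTransformer2dModel` class with `update_lora_params` / `set_lora_strength` methods (all names here are assumptions, not confirmed by this note). The point is that the LoRA file's layout is detected automatically, so only a path and a strength are supplied:

```python
from nunchaku import NunchakuFluxTransformer2dModel  # assumed class name

# Load the INT4 FLUX transformer, then attach a LoRA without declaring its format;
# the converter/loader is expected to detect the layout (e.g. diffusers vs. kohya) itself.
transformer = NunchakuFluxTransformer2dModel.from_pretrained("mit-han-lab/svdq-int4-flux.1-dev")
transformer.update_lora_params("path/to/your_flux_lora.safetensors")  # assumed method
transformer.set_lora_strength(0.8)  # assumed method for the LoRA scale
```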

Nunchaku v0.0.1 Release

08 Dec 06:21

Optimized RAM usage when loading the model.

Nunchaku v0.0.0 Release

27 Nov 05:11

Initial release:

  • Support INT4 inference of FLUX.1-schnell and FLUX.1-dev on NVIDIA GPUs (see the usage sketch below).
  • Deliver text-to-image and image-to-image demos.
  • Release the quality and latency benchmarks.
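
For reference, a hedged sketch of what INT4 text-to-image inference with FLUX.1-schnell might look like through diffusers; the `NunchakuFluxTransformer2dModel` class name and the `mit-han-lab/svdq-int4-flux.1-schnell` repository id are assumptions:

```python
import torch
from diffusers import FluxPipeline
from nunchaku import NunchakuFluxTransformer2dModel  # assumed class name

# Swap the BF16 transformer for the INT4 one; everything else is a stock diffusers pipeline.
transformer = NunchakuFluxTransformer2dModel.from_pretrained("mit-han-lab/svdq-int4-flux.1-schnell")
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

# FLUX.1-schnell is a few-step model, hence 4 steps and no guidance.
image = pipe("A serene mountain lake at sunrise", num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("flux_schnell_int4.png")
```

Apart from swapping in the quantized transformer, the call is identical to the standard BF16 diffusers pipeline.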