ComfyUI-LightVAE is a collection of LightX2V VAE custom nodes designed for ComfyUI, supporting high-performance video VAE models including LightVAE and LightTAE.
The LightX2V team has deeply optimized the VAE stage, creating two major series, LightVAE and LightTAE, which significantly reduce memory usage and improve inference speed while maintaining high quality.
- LightVAE: Best Balance ⚖️
- LightTAE: Ultra-fast + High Quality 🏆
Test Environment: H100 GPU, BF16, 81-frame video (480P)
| Model | Encode Time | Decode Time | Encode Memory | Decode Memory | Quality |
|---|---|---|---|---|---|
| lightvaew2_1 | 1.5s | 2.1s | 4.8GB | 5.6GB | ⭐⭐⭐⭐⭐ |
| lighttaew2_1 | 0.4s | 0.25s | 0.009GB | 0.4GB | ⭐⭐⭐⭐ |
| Wan2.1_VAE | 4.2s | 5.5s | 8.5GB | 10.1GB | ⭐⭐⭐⭐ |
| taew2_1 | 0.4s | 0.25s | 0.009GB | 0.4GB | ⭐⭐⭐ |
Performance Improvements:
- 🚀 LightVAE is 2-3x faster than the official VAE and uses ~50% less memory
- ⚡ LightTAE is 10+ times faster than the official VAE and uses 95%+ less memory
- 🎨 Quality stays close to the official VAE and surpasses the open-source TAE
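For reference, these numbers follow from the benchmark table above: LightVAE cuts encode time from 4.2 s to 1.5 s (≈2.8x) and decode time from 5.5 s to 2.1 s (≈2.6x), while LightTAE's 0.4 s encode is a ≈10.5x speedup and its decode memory drops from 10.1 GB to 0.4 GB (≈96% less).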
```bash
# Clone LightX2V repository
git clone https://github.com/ModelTC/LightX2V
cd LightX2V
python setup_vae.py install
```

LightVAE nodes depend on WanVideoWrapper for main model support:

```bash
cd ComfyUI/custom_nodes
git clone https://github.com/kijai/ComfyUI-WanVideoWrapper
```

Then clone this node pack into the same directory:

```bash
cd ComfyUI/custom_nodes
git clone https://github.com/YOUR_USERNAME/ComfyUI-LightVAE
```

Option 1: Distilled Models (Recommended, 4-step)
- 🔗 Wan2.1 Distilled Models and Wan2.2 Distilled Models
- ✅ Supports BF16 format
- ✅ Supports FP8 format (requires models with the `_comfyui.safetensors` suffix)
Option 2: Original Models (20-step)
- 🔗 Wan2.1 Official Models and Wan2.2 Official Models
- ✅ Supports BF16 format
- ✅ Supports FP8 format (requires models with the `_comfyui.safetensors` suffix)
```bash
# Download to ComfyUI/models/diffusion_models/
huggingface-cli download lightx2v/Wan2.1/2-Distill-Models \
  --local-dir ./ComfyUI/models/diffusion_models/
```

All VAE Models (Required):
```bash
# Download all VAE models
huggingface-cli download lightx2v/Autoencoders \
  --local-dir ./ComfyUI/models/vae/

# Or download only what you need (Recommended)
huggingface-cli download lightx2v/Autoencoders lightvaew2_1.pth \
  --local-dir ./ComfyUI/models/vae/
```

Supported VAE Models:
- `Wan2.1_VAE.pth/.safetensors` - Official VAE 2.1
- `Wan2.2_VAE.pth/.safetensors` - Official VAE 2.2
- `lightvaew2_1.pth/.safetensors` - Optimized VAE 2.1 ⭐ Recommended
- `taew2_1.pth/.safetensors` - Open-source TAE 2.1
- `taew2_2.pth/.safetensors` - Open-source TAE 2.2
- `lighttaew2_1.pth/.safetensors` - Optimized TAE 2.1 ⚡ Fastest
- `lighttaew2_2.pth/.safetensors` - Optimized TAE 2.2
Input Parameters:
- `vae_filename` - VAE model filename (automatically listed from `./models/vae/`)
- `dtype` - Data type (bfloat16 / float16 / float32)
- `device` - Compute device (cuda / cpu)
Output:
- `vae` - VAE model object
Features:
- ✅ Automatically identifies the VAE type from the filename (see the sketch below)
- ✅ Supports all LightX2V VAE models
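The filename-based detection can be pictured as a simple prefix match over the supported model names listed above. This is a minimal sketch for illustration only; the function name and returned labels are assumptions, not the node's actual code:

```python
# Illustrative sketch of filename-based VAE type detection.
# Function name and labels are assumptions, not the node's actual code.
def detect_vae_type(filename: str) -> str:
    name = filename.lower()
    if name.startswith("lighttae"):
        return "LightTAE"  # e.g. lighttaew2_1.pth
    if name.startswith("lightvae"):
        return "LightVAE"  # e.g. lightvaew2_1.pth
    if name.startswith("tae"):
        return "TAE"       # e.g. taew2_2.safetensors
    if name.startswith("wan"):
        return "WanVAE"    # e.g. Wan2.1_VAE.pth
    raise ValueError(f"Unrecognized VAE file: {filename}")

assert detect_vae_type("lightvaew2_1.pth") == "LightVAE"
assert detect_vae_type("Wan2.2_VAE.safetensors") == "WanVAE"
```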
Input Parameters:
- `vae` - VAE object from the Loader
- `latent` - Latent representation
Output:
- `IMAGE` - Decoded video frames (see the shape sketch below)
Supports:
- ✅ All VAE series (WanVAE, LightVAE)
- ✅ All TAE series (TAE, LightTAE)
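For orientation on tensor shapes: Wan-family VAEs compress video roughly 8x spatially and 4x temporally into a 16-channel latent, so the 81-frame 480P benchmark clip corresponds to a latent of about [1, 16, 21, 60, 104]. A minimal sketch, assuming an 832x480 clip and a `vae` object exposing a `decode()` method (both assumptions, not this node pack's documented API):

```python
import torch

# Shape sketch only: assumes Wan-style compression (16 latent channels,
# 4x temporal, 8x8 spatial) and a vae.decode() method -- both are
# assumptions for illustration, not this node pack's documented API.
frames, height, width = 81, 480, 832            # benchmark clip
t_latent = (frames - 1) // 4 + 1                # 81 frames -> 21 latent frames
latent = torch.randn(1, 16, t_latent, height // 8, width // 8)
print(latent.shape)                             # torch.Size([1, 16, 21, 60, 104])

# decoded = vae.decode(latent)  # -> frames shaped roughly [1, 3, 81, 480, 832]
```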
High-performance configuration using a 4-step distilled model + the LightVAE optimized decoder.
Workflow File: `example/workflows/wan2.1_I2V_4step_fp8_lightvae.json`
Wan2.2 Text-Image-to-Video + LightVAE decoding.
Workflow File: `example/workflows/wan2.2_TI2V_lightvae.json`
- ⚠️ Wan2.1 VAE can only be used with Wan2.1 / Wan2.2-A14B backbone models
- ⚠️ Wan2.2 VAE can only be used with Wan2.2 TI2V backbone models
- ❌ Do not mix different versions of VAE and backbone models (see the check sketch below)
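If you script workflow generation yourself, these rules reduce to a small lookup. The helper below is a hypothetical illustration; the dictionary keys and backbone labels are assumptions, not identifiers from this node pack:

```python
# Hypothetical compatibility check mirroring the warnings above.
# Keys and labels are illustrative, not identifiers from this node pack.
COMPATIBLE_BACKBONES = {
    "Wan2.1_VAE": {"Wan2.1", "Wan2.2-A14B"},
    "Wan2.2_VAE": {"Wan2.2-TI2V"},
}

def check_pairing(vae_name: str, backbone: str) -> None:
    """Raise if the VAE/backbone combination is not allowed."""
    allowed = COMPATIBLE_BACKBONES.get(vae_name, set())
    if backbone not in allowed:
        raise ValueError(
            f"{vae_name} cannot be paired with {backbone}; "
            f"allowed backbones: {sorted(allowed)}"
        )

check_pairing("Wan2.2_VAE", "Wan2.2-TI2V")    # OK
# check_pairing("Wan2.1_VAE", "Wan2.2-TI2V")  # would raise ValueError
```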
- Project Homepage: https://github.com/ModelTC/LightX2V
- VAE Models: https://huggingface.co/lightx2v/Autoencoders
- Video Generation Models: https://huggingface.co/lightx2v/
- ComfyUI-WanVideoWrapper: https://github.com/kijai/ComfyUI-WanVideoWrapper
- TAE Series Models: https://github.com/madebyollin/taesd
- Wan-AI: https://huggingface.co/Wan-AI
If this project helps you, please give a ⭐ to LightX2V and this repository!
- GitHub Issues: Issues page of this repository
- LightX2V Issues: https://github.com/ModelTC/LightX2V/issues
- HuggingFace: https://huggingface.co/lightx2v
Enjoy using LightX2V VAE! 🚀


