# Which GPU should I buy for ComfyUI
This is a tier list of the consumer GPUs we would recommend for use with ComfyUI.
In AI the most important thing is the software stack, which is why the list is ranked the way it is.
## Nvidia

All Nvidia GPUs from the last 10 years (since Maxwell/GTX 900) are supported in PyTorch and work very well.
3000 series and above are recommended for best performance. More VRAM is always preferable.
### Why you should avoid older generations if you can
Older generations of cards will work, but performance may be worse than expected because they lack native support for certain operations.
Here is a quick summary of what is supported on each generation:
- 50 series (Blackwell): fp16, bf16, fp8, fp4
- 40 series (Ada Lovelace): fp16, bf16, fp8
- 30 series (Ampere): fp16, bf16
- 20 series (Turing): fp16
- 10 series (Pascal) and below: only slow full-precision fp32
Models are inferenced in fp16 or bf16 (depending on the model) for best quality, with the option of fp8 on some models for lower memory use and more speed at lower quality.
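The memory saving from dropping precision is easy to estimate: each parameter takes 2 bytes in fp16/bf16 and 1 byte in fp8. A minimal sketch (the parameter count below is a made-up example, not any specific model):

```python
def model_size_gb(num_params: float, bits: int) -> float:
    """Approximate VRAM needed for the weights alone
    (ignores activations, caches, and framework overhead)."""
    return num_params * bits / 8 / 1024**3

# Hypothetical 12-billion-parameter model:
fp16_gb = model_size_gb(12e9, 16)  # ~22.4 GB of weights
fp8_gb = model_size_gb(12e9, 8)    # exactly half of that, ~11.2 GB
```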
Note that this table doesn't mean fp16 is completely unsupported on the 10 series, for example; it just means it will be slower because the GPU can't handle it natively.
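If you want to check where your own card falls, the generation table above can be written as a small helper keyed on CUDA compute capability. The capability thresholds below are my assumed mapping of the consumer series to SM versions, not something stated in this guide; on a real system PyTorch reports the capability via `torch.cuda.get_device_capability()`.

```python
def native_dtypes(major: int, minor: int) -> set[str]:
    """Reduced-precision formats with native hardware support,
    following the generation table above."""
    dtypes = {"fp32"}                # every generation runs full precision
    if (major, minor) >= (7, 5):     # Turing (20 series), assumed sm_75
        dtypes.add("fp16")
    if (major, minor) >= (8, 0):     # Ampere (30 series), assumed sm_80
        dtypes.add("bf16")
    if (major, minor) >= (8, 9):     # Ada (40 series), assumed sm_89
        dtypes.add("fp8")
    if (major, minor) >= (10, 0):    # Blackwell (50 series), assumed sm_100+
        dtypes.add("fp4")
    return dtypes

# On an Nvidia system you would call it as:
#   native_dtypes(*torch.cuda.get_device_capability())
```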
Don't be tempted by the cheap Pascal workstation cards with lots of VRAM: your performance will be bad.
Anything older than the 2000 series (Turing), such as Volta or Pascal, should be avoided because it is about to be deprecated in CUDA 13.
## AMD Radeon (Linux)

Officially supported in PyTorch.
Works well if the card is officially supported by ROCm, but AMD cards are slow compared to price-equivalent Nvidia GPUs, mainly because there is no optimized implementation of torch.nn.functional.scaled_dot_product_attention for consumer GPUs.
Unsupported cards might be a real pain to get running.
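For reference, scaled dot product attention is just softmax(QKᵀ/√d)V; what those cards are missing is a fast fused kernel, not the math itself. A plain-NumPy sketch of what the op computes (an unoptimized reference, nothing like the real kernel):

```python
import numpy as np

def sdpa(q, k, v):
    """Reference scaled dot product attention: softmax(q @ k^T / sqrt(d)) @ v."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # subtract max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ v

# Tiny example: 2 queries attending over 3 key/value pairs of width 4.
rng = np.random.default_rng(0)
q = rng.standard_normal((2, 4))
k = rng.standard_normal((3, 4))
v = rng.standard_normal((3, 4))
out = sdpa(q, k, v)   # shape (2, 4)
```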
## Intel

Officially supported in PyTorch. People seem to get it working fine, but I had trouble with my integrated Intel GPU.
## AMD Radeon (Windows)

Unofficial PyTorch ROCm builds for Windows have come out that work decently, but they are still a bit of a pain to get working properly.
Things might improve in the future once official PyTorch ROCm support is available on Windows.
## Mac (Apple Silicon)

Officially supported in PyTorch. It works, but Apple loves randomly breaking things with OS updates.
Very slow. A lot of ops are not properly supported, and there is no fp8 support at all.
## Qualcomm

PyTorch doesn't work at all.
They are "working on it"; until they actually get it working, I recommend avoiding them completely, because it might take so long that the current hardware will be completely obsolete by then.