Support for RTX 2060 Super 8GB VRAM? #159
Does this support the second generation of NVIDIA RTX cards, like the RTX 2060 Super, which has 8 GB of VRAM? Will it work with less than 8 GB of VRAM?

Comments
We currently support only NVIDIA GPUs with architectures sm_86 (Ampere: RTX 3090, A6000), sm_89 (Ada: RTX 4090), and sm_80 (A100). See #1 for more details.
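For anyone unsure which architecture their card reports, here is a minimal sketch using PyTorch's standard CUDA API (nothing nunchaku-specific) to print the sm_XX code and compare it against the list above:

```python
# Minimal sketch: report the GPU's compute capability (its sm_XX code)
# via PyTorch's public CUDA API and compare it against the supported list.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{torch.cuda.get_device_name(0)}: sm_{major}{minor}")
    # Supported per the comment above: sm_80, sm_86, sm_89.
    print("supported:", (major, minor) in [(8, 0), (8, 6), (8, 9)])
else:
    print("No CUDA device visible to PyTorch.")
```

A Turing card such as the RTX 2060 Super reports sm_75, which is not in the supported list above.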
Seems like once again I'm missing a great opportunity to use Flux.
Does your GPU support the CUDA 12.6 drivers and torch 2.5.1+cu124?
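One quick way to compare an existing install against that pair, using only standard PyTorch attributes:

```python
# Minimal sketch: print the installed torch build and the CUDA runtime it
# was compiled against, to compare with the requested 2.5.1+cu124 pair.
import torch

print("torch:", torch.__version__)          # e.g. "2.5.1+cu124"
print("CUDA runtime:", torch.version.cuda)  # e.g. "12.4"
print("CUDA available:", torch.cuda.is_available())
```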
I am not sure whether 20-series GPUs work. They do appear to have INT4 tensor cores, per the NVIDIA Turing architecture whitepaper: https://images.nvidia.com/aem-dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf. I will do a quick check.
You can also check using this repo. If it works for you (see whether you are able to generate a few images), I will integrate the INT4 and GGUF versions of Flux.
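The repo linked above is not reproduced here. As a generic stand-in, here is a minimal smoke test using Hugging Face diffusers' FluxPipeline; this is an assumption on my part, not the linked repo's code, though the model id and API calls are standard diffusers usage:

```python
# Minimal smoke test (a sketch, not the linked repo's exact code):
# try to generate a single image with Flux via Hugging Face diffusers.
# Assumes `pip install diffusers transformers accelerate` and enough
# RAM/VRAM; enable_model_cpu_offload() helps on 8 GB cards.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trade speed for lower peak VRAM

image = pipe(
    "a photo of a red fox in the snow",
    num_inference_steps=4,  # schnell is distilled for few steps
    guidance_scale=0.0,
).images[0]
image.save("flux_smoke_test.png")
```

If this runs to completion on your card, generating a few images as suggested above should be a reasonable test.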
Yes, it supports this: I am using ComfyUI, and it shows me the same torch/CUDA pair at startup.
Yes, RTX 2000-series cards (Turing, sm_75) are supported, even in recent releases such as NVIDIA CUDA 12.8 Update 1.
If you are using ComfyUI, you are set; there are many low-VRAM workflows.
But my question was: how can I use nunchaku with an RTX 2060?