The FP16 version is impossible to run on a local GPU. I hope there will be GPTQ and AWQ versions, please!
The quantized models for Qwen2.5-VL are coming soon.
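In the meantime, on-the-fly 4-bit quantization with bitsandbytes may be enough to fit the model on a local GPU. A minimal sketch, assuming a transformers build that already ships `Qwen2_5_VLForConditionalGeneration` and that bitsandbytes is installed; the checkpoint name is only an example, pick the size that fits your hardware:

```python
# Interim workaround: load the FP16 checkpoint with on-the-fly 4-bit (NF4) quantization.
# Assumes a recent transformers build that includes Qwen2_5_VLForConditionalGeneration.
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, Qwen2_5_VLForConditionalGeneration

model_id = "Qwen/Qwen2.5-VL-7B-Instruct"  # example checkpoint; choose the size that fits your GPU

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # quantize weights at load time instead of using a GPTQ/AWQ repo
    device_map="auto",               # spread layers across available devices
)
```

This is not a substitute for proper GPTQ/AWQ checkpoints, but it usually cuts the weight memory to roughly a quarter of FP16.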
Thanks!! How soon: one week or one month?
Please add vLLM support for Qwen2.5-VL too.
Well, I think the vLLM developers are responsible for that; they said v0.7.2 will support Qwen2.5-VL.
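For reference, once a vLLM release with Qwen2.5-VL support lands, offline inference should look roughly like the sketch below. The AWQ repository name and the `quantization="awq"` setting are assumptions until official quantized weights are published:

```python
# Sketch of offline inference with vLLM once Qwen2.5-VL support is released.
# The quantized repo name below is hypothetical; drop quantization= to use FP16 weights.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-VL-7B-Instruct-AWQ",  # hypothetical AWQ checkpoint name
    quantization="awq",
    max_model_len=8192,
)

outputs = llm.generate(
    ["Describe the image in one sentence."],
    SamplingParams(temperature=0.2, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```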