Using CUDA #35
cuDNN libs + `--device cuda`. On my LXC setup I followed this article: https://sluijsjes.nl/2024/05/18/coral-and-nvidia-passthrough-for-proxmox-lxc-to-install-frigate-video-surveillance-server/ You just have to include the correct libs in the container, or map them from the host into the container. I ended up building the container image myself with a custom Dockerfile; it took some time to find the right versions to make it run.
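The passthrough approach from the linked guide is typically done with device mount entries in the Proxmox container config. A minimal sketch, assuming a Proxmox host with an NVIDIA GPU; the container ID and the exact device list are illustrative and vary per host:

```
# /etc/pve/lxc/<CTID>.conf  -- <CTID> is your container ID (hypothetical)
# Allow access to NVIDIA character devices (major number 195)
lxc.cgroup2.devices.allow: c 195:* rwm
# Bind the host's NVIDIA device nodes into the container
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
# /dev/nvidia-uvm has a host-specific major number; check it with
#   ls -l /dev/nvidia-uvm
# and add a matching lxc.cgroup2.devices.allow line for it.
```

Inside the container you still need userspace driver libraries (and the cuDNN libs mentioned above) that match the host driver version.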
👍 For official GPU support; it would be nice to speed things up :)
👍 for this feature. I already have a GPU in my server for other applications :)
Also, is it possible to make it unload the model based on some
Simple and works out of the box: https://github.com/linuxserver/docker-faster-whisper
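With the linuxserver image, GPU access is usually granted through the NVIDIA container runtime. A hedged docker-compose sketch; the image tag, model name, port, and environment values below are assumptions to be checked against that image's documentation, not details from this thread:

```yaml
services:
  faster-whisper:
    image: lscr.io/linuxserver/faster-whisper:gpu   # tag is an assumption; verify in the repo
    environment:
      - PUID=1000
      - PGID=1000
      - WHISPER_MODEL=tiny-int8                     # illustrative model choice
    volumes:
      - ./config:/config
    ports:
      - "10300:10300"                               # Wyoming protocol port (assumption)
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

The `deploy.resources.reservations.devices` block is Compose's standard way to request a GPU from the NVIDIA runtime; `docker run --gpus all` is the CLI equivalent.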
I made PR #44 to allow building it directly from this repository and using CUDA.
Working with 2.4.0.
Using cuDNN 9.6 with 2.4.0 does not work properly. I hope someone can give me a hint. Do I have to use cuDNN 9.1? #35 (comment)
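When versions mismatch like this, it helps to check what the dynamic linker can actually see inside the container. A small diagnostic sketch; the commands are guarded so they don't fail on machines without NVIDIA tooling:

```shell
# List cuDNN libraries visible to the dynamic linker. An empty result means
# CTranslate2 will fail to load its cuDNN dependency at runtime.
ldconfig -p | grep -i cudnn || echo "no cuDNN libraries found"

# Show the driver version if the NVIDIA tools are present in this environment.
command -v nvidia-smi >/dev/null 2>&1 \
  && nvidia-smi --query-gpu=driver_version --format=csv,noheader \
  || echo "nvidia-smi not available"
```

Comparing the `.so` major version reported here against the cuDNN version your faster-whisper/CTranslate2 build was compiled for is usually enough to spot the mismatch.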
Thanks to @rufinus, I got this working on a GTX 970M. Sadly this poxy little GPU only supports CUDA Compute Capability 5.2, and I couldn't get it working on CUDA 12. The CUDA 11 workaround discussed here worked a treat. Getting <1s translation using
How do I run faster-whisper using CUDA?
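The question above can be sketched with the faster-whisper Python API, where `WhisperModel` takes `device` and `compute_type` arguments. The model name, audio path, and the cuDNN probe below are illustrative assumptions (Linux library names), not details from this thread:

```python
import ctypes


def pick_device() -> tuple[str, str]:
    """Return (device, compute_type): CUDA with float16 if a cuDNN shared
    library can be loaded, otherwise CPU with int8. Library names assume Linux."""
    for lib in ("libcudnn.so.9", "libcudnn.so.8"):
        try:
            ctypes.CDLL(lib)
            return "cuda", "float16"
        except OSError:
            continue
    return "cpu", "int8"


if __name__ == "__main__":
    from faster_whisper import WhisperModel

    device, compute_type = pick_device()
    # "small" and "audio.wav" are placeholders; substitute your own.
    model = WhisperModel("small", device=device, compute_type=compute_type)
    segments, info = model.transcribe("audio.wav")
    for seg in segments:
        print(f"[{seg.start:.2f} -> {seg.end:.2f}] {seg.text}")
```

The cuDNN probe is only a convenience fallback; passing `device="cuda"` directly fails with a load error when the libraries discussed above are missing from the container.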