Run whisper.cpp in a Docker container with GPU support.
```shell
docker compose up
```

or

```shell
MODEL=large-v2 LANGUAGE=ru docker compose up
```
```shell
docker compose build --progress=plain
```

You may want to download the model manually in order to see the download progress:

```shell
./models/download.sh large-v2
```

This script is a plain copy of `download-ggml-model.sh` from the whisper.cpp repository, where you can find additional information and configuration options.
Place all the files in the `./volume/input/` directory.
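Staging the inputs might look like this (the audio file name is an illustrative assumption, not part of the project):

```shell
# Create the input directory if it does not exist yet.
mkdir -p ./volume/input

# Copy your audio files into it; the name below is only an example.
# cp ./interview.mp3 ./volume/input/
```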
Then run:

```shell
docker compose up
```
Configure the defaults by setting the `MODEL` and `LANGUAGE` environment variables:

```shell
MODEL=large-v2 LANGUAGE=ru docker compose up
MODEL=large-v3 LANGUAGE=ru docker compose up
MODEL=large-v3-turbo LANGUAGE=ru docker compose up
```
Argument | Values | Default |
---|---|---|
`model` | `base`, `medium`, `large`, other options | `large-v2` |
`language` | `en`, `ru`, `fr`, etc. (depends on the model) | `ru` |
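Instead of prefixing every command, you can also persist these defaults in a `.env` file, which `docker compose` reads automatically from the project directory. A minimal sketch (values are examples):

```shell
# .env (read automatically by docker compose from the project directory)
MODEL=large-v3-turbo
LANGUAGE=ru
```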
You can find the results in the `./volume/output/` directory.
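Once a run finishes, the transcripts can be listed directly. The directory is normally created by the compose volume mount, so the `mkdir` below is only a safeguard for inspecting it before the first run:

```shell
# The volume mount usually creates this directory on first run.
mkdir -p ./volume/output

# One transcript per input file.
ls ./volume/output
```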