
stellarbear/whisper.cpp.docker


Run whisper.cpp in a Docker container with GPU support.

TLDR

docker compose up

or

MODEL=large-v2 LANGUAGE=ru docker compose up
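For orientation, here is a minimal sketch of what the compose service behind these commands might look like. The repository's own docker-compose.yml is authoritative; the service name, mount paths, and variable defaults below are assumptions, shown only to illustrate how `MODEL` and `LANGUAGE` flow from the shell into the container and how the GPU is reserved.

```yaml
# Hypothetical sketch; the repository's docker-compose.yml is authoritative.
services:
  whisper:
    build: .
    environment:
      - MODEL=${MODEL:-large-v2}       # overridden by the shell, as in the examples above
      - LANGUAGE=${LANGUAGE:-ru}
    volumes:
      - ./models:/models               # downloaded ggml models
      - ./volume/input:/input          # files to transcribe
      - ./volume/output:/output        # transcripts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

The `deploy.resources.reservations.devices` block is the standard Compose syntax for exposing NVIDIA GPUs to a service; it requires the NVIDIA Container Toolkit on the host.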

Step by step

1. Build the CUDA image (one-time step)

docker compose build --progress=plain

2. Download models (one-time step)

You may want to run this step manually in order to watch the download progress:

./models/download.sh large-v2 

This script is a plain copy of download-ggml-model.sh from the upstream whisper.cpp repository, where you can find additional information and configuration options.
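If you prefer not to use the helper script, the ggml models it fetches are hosted on Hugging Face under ggerganov/whisper.cpp, and the filenames follow a simple pattern. The sketch below builds the URL for a given model; fetching it directly with curl is an alternative to the script, not necessarily what the script does internally.

```shell
# Build the download URL for a ggml model (hosted on Hugging Face under
# ggerganov/whisper.cpp; the filename pattern is ggml-<model>.bin).
MODEL=large-v2
URL="https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-${MODEL}.bin"
echo "$URL"
# To fetch it directly instead of via the script:
#   curl -L -o "./models/ggml-${MODEL}.bin" "$URL"
```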

3. Prepare your files

Place all your audio files in the ./volume/input/ directory.
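The staging step can be sketched as follows. The filename is a placeholder of my choosing, not one used by the repository; the commented `cp` line shows what you would do with real recordings.

```shell
# Stage files for transcription (interview.wav is a placeholder name).
mkdir -p ./volume/input ./volume/output
# In practice you would copy real recordings, e.g.:
#   cp ~/recordings/interview.wav ./volume/input/
touch ./volume/input/interview.wav   # stand-in file for illustration
ls ./volume/input
```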

4. Run docker compose

docker compose up

Override the defaults

MODEL=large-v2 LANGUAGE=ru docker compose up
MODEL=large-v3 LANGUAGE=ru docker compose up
MODEL=large-v3-turbo LANGUAGE=ru docker compose up
| Argument | Values | Default |
| --- | --- | --- |
| `MODEL` | `base`, `medium`, `large`, and other model names | `large-v2` |
| `LANGUAGE` | `en`, `ru`, `fr`, etc. (depends on the model) | `ru` |
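Because docker compose automatically loads a `.env` file from the project directory, you can persist your preferred settings there instead of prefixing every command. A small sketch, with variable names matching the table above and example values of my choosing:

```shell
# .env — docker compose loads this file from the project directory automatically
MODEL=large-v3-turbo
LANGUAGE=en
```

With this file in place, a plain `docker compose up` uses these values.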

5. Result

You can find the results in the ./volume/output/ directory.
