No, the current version does not support batch inference. The reason is that this repo is based on the official OpenAI Whisper repo, which does not support batch inference.
However, when evaluating Whisper-AT on large datasets (e.g., AudioSet), we do use batch inference. The implementation pre-extracts the Whisper encoder features and stores them on disk (which is done one file at a time), then feeds a batch of these features to the TLTR module for training or inference (which supports batch input).
There are third-party implementations of Whisper that do support batch inference; as long as their encoder features are the same as the official Whisper's, you can use them to extract features in batches.
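Below is a minimal sketch of the two-stage workflow described above, using only the official `openai-whisper` API. Note the simplifications: it extracts just the final encoder output, whereas the actual Whisper-AT pipeline uses intermediate representations from multiple encoder layers, and the `tltr_model` call at the end is a hypothetical placeholder for the real TLTR module.

```python
# Sketch only: final-layer features, hypothetical TLTR interface.
import torch
import whisper

device = "cuda" if torch.cuda.is_available() else "cpu"
model = whisper.load_model("large-v1", device=device)

# Stage 1: extract and store encoder features one file at a time.
audio_files = ["clip_0.wav", "clip_1.wav"]  # placeholder paths
for i, path in enumerate(audio_files):
    audio = whisper.load_audio(path)
    audio = whisper.pad_or_trim(audio)  # pad/trim to 30 s
    mel = whisper.log_mel_spectrogram(audio).to(device)
    with torch.no_grad():
        feat = model.encoder(mel.unsqueeze(0))  # (1, n_frames, n_state)
    torch.save(feat.squeeze(0).cpu(), f"feat_{i}.pt")

# Stage 2: load the pre-extracted features and run the downstream
# module in batch.
batch = torch.stack([torch.load(f"feat_{i}.pt") for i in range(len(audio_files))])
# logits = tltr_model(batch.to(device))  # hypothetical TLTR forward pass
```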
Hello, thank you for sharing this really nice code.
However, I cannot find batch-wise inference code for transcribing
(I referred to the quick start example code in the README).
Is there any batch-wise code for inference?
Best regards