App Submitted - whisper-wrapper.v10 #179

Merged (1 commit) on Aug 29, 2024
139 changes: 139 additions & 0 deletions docs/_apps/whisper-wrapper/v10/index.md
@@ -0,0 +1,139 @@
---
layout: posts
classes: wide
title: "Whisper Wrapper (v10)"
date: 2024-08-29T22:13:54+00:00
---
## About this version

- Submitter: [keighrim](https://github.com/keighrim)
- Submission Time: 2024-08-29T22:13:54+00:00
- Prebuilt Container Image: [ghcr.io/clamsproject/app-whisper-wrapper:v10](https://github.com/clamsproject/app-whisper-wrapper/pkgs/container/app-whisper-wrapper/v10)
- Release Notes

> This version adds some delegation parameters to whisper.transcribe
> - `task`: delegate to `--task`
> - `initialPrompt`: delegate to `--initial-prompt`
> - `conditionOnPreviousText`: delegate to `--condition-on-previous-text`
> - `noSpeechThreshold`: delegate to `--no-speech-threshold`
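The delegation described in the release notes can be sketched as a simple parameter-name mapping from the app's camelCase parameters to `whisper.transcribe()` keyword arguments. This is a hypothetical illustration based only on the notes above, not the wrapper's actual code:

```python
# Hypothetical sketch: forward app parameters (camelCase) to
# whisper.transcribe() keyword arguments (snake_case).
# Names follow the release notes; this is not the wrapper's actual code.
DELEGATED = {
    "task": "task",
    "initialPrompt": "initial_prompt",
    "conditionOnPreviousText": "condition_on_previous_text",
    "noSpeechThreshold": "no_speech_threshold",
}

def delegate(app_params: dict) -> dict:
    """Translate delegated app-level parameters into transcribe() kwargs,
    silently dropping parameters that are not delegated."""
    return {DELEGATED[k]: v for k, v in app_params.items() if k in DELEGATED}

print(delegate({"task": "translate", "noSpeechThreshold": 0.4}))
```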

## About this app (See raw [metadata.json](metadata.json))

**A CLAMS wrapper for Whisper-based ASR software originally developed by OpenAI.**

- App ID: [http://apps.clams.ai/whisper-wrapper/v10](http://apps.clams.ai/whisper-wrapper/v10)
- App License: Apache 2.0
- Source Repository: [https://github.com/clamsproject/app-whisper-wrapper](https://github.com/clamsproject/app-whisper-wrapper) ([source tree of the submitted version](https://github.com/clamsproject/app-whisper-wrapper/tree/v10))
- Analyzer Version: 20231117
- Analyzer License: MIT


#### Inputs
(**Note**: "*" as a property value means that the property is required but can be any value.)

One of the following is required:

- [http://mmif.clams.ai/vocabulary/AudioDocument/v1](http://mmif.clams.ai/vocabulary/AudioDocument/v1)
(of any properties)

- [http://mmif.clams.ai/vocabulary/VideoDocument/v1](http://mmif.clams.ai/vocabulary/VideoDocument/v1)
(of any properties)


#### Configurable Parameters
(**Note**: _Multivalued_ means the parameter can have one or more values.)

- `modelSize`: optional, defaults to `tiny`

- Type: string
- Multivalued: False
- Choices: **_`tiny`_**, `t`, `base`, `b`, `small`, `s`, `medium`, `m`, `large`, `l`, `large-v2`, `l2`, `large-v3`, `l3`


> The size of the model to use. When `modelLang=en` is given, English-only models are used instead of multilingual models for the non-`large` sizes, for speed and accuracy. (English-only variants are not available for the `large` models.) Each size can also be given by its alias: tiny=t, base=b, small=s, medium=m, large=l, large-v2=l2, large-v3=l3.
- `modelLang`: optional, defaults to `""`

- Type: string
- Multivalued: False


> Language of the model to use. Accepts two- or three-letter ISO 639 language codes; however, Whisper only supports a subset of languages, and an error is raised for unsupported ones. For the full list of supported languages, see https://github.com/openai/whisper/blob/20231117/whisper/tokenizer.py . In addition, a two-letter region code can be appended to the language code, e.g. "en-US" for US English. Note that the region code is recorded only for compatibility purposes; Whisper neither detects regional dialects nor uses the given one for transcription. When no language code is given, Whisper runs in language detection mode and uses the first few seconds of the audio to detect the language.
- `task`: optional, defaults to `transcribe`

- Type: string
- Multivalued: False
- Choices: **_`transcribe`_**, `translate`


> (from whisper CLI) whether to perform X->X speech recognition ('transcribe') or X->English translation ('translate')
- `initialPrompt`: optional, defaults to `""`

- Type: string
- Multivalued: False


> (from whisper CLI) optional text to provide as a prompt for the first window.
- `conditionOnPreviousText`: optional, defaults to `true`

- Type: boolean
- Multivalued: False
- Choices: `false`, **_`true`_**


> (from whisper CLI) if True, provide the previous output of the model as a prompt for the next window; disabling may make the text inconsistent across windows, but the model becomes less prone to getting stuck in a failure loop
- `noSpeechThreshold`: optional, defaults to `0.6`

- Type: number
- Multivalued: False


> (from whisper CLI) if the probability of the <|nospeech|> token is higher than this value AND the decoding has failed due to `logprob_threshold`, consider the segment as silence
- `pretty`: optional, defaults to `false`

- Type: boolean
- Multivalued: False
- Choices: **_`false`_**, `true`


> The JSON body of the HTTP response will be re-formatted with 2-space indentation
- `runningTime`: optional, defaults to `false`

- Type: boolean
- Multivalued: False
- Choices: **_`false`_**, `true`


> The running time of the app will be recorded in the view metadata
- `hwFetch`: optional, defaults to `false`

- Type: boolean
- Multivalued: False
- Choices: **_`false`_**, `true`


> The hardware information (architecture, GPU and vRAM) will be recorded in the view metadata
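Like other CLAMS apps, the wrapper runs as an HTTP service, and the parameters above are passed as URL query parameters on the request. A minimal sketch of building such a request URL follows; the host and port are assumptions for a locally running instance, not part of this app's metadata:

```python
from urllib.parse import urlencode

# Build a query string from the app parameters documented above.
# localhost:5000 is an assumed address for a locally running instance.
params = {
    "modelSize": "small",
    "modelLang": "en-US",
    "task": "transcribe",
    "noSpeechThreshold": 0.6,
    "pretty": "true",
}
url = "http://localhost:5000/?" + urlencode(params)
print(url)
# A MMIF file containing an AudioDocument or VideoDocument would then be
# POSTed to this URL (e.g. with curl or the requests library).
```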


#### Outputs
(**Note**: "*" as a property value means that the property is required but can be any value.)

(**Note**: Not all output annotations are always generated.)

- [http://mmif.clams.ai/vocabulary/TextDocument/v1](http://mmif.clams.ai/vocabulary/TextDocument/v1)
(of any properties)

- [http://mmif.clams.ai/vocabulary/TimeFrame/v5](http://mmif.clams.ai/vocabulary/TimeFrame/v5)
- _timeUnit_ = "milliseconds"

- [http://mmif.clams.ai/vocabulary/Alignment/v1](http://mmif.clams.ai/vocabulary/Alignment/v1)
(of any properties)

- [http://vocab.lappsgrid.org/Token](http://vocab.lappsgrid.org/Token)
(of any properties)

- [http://vocab.lappsgrid.org/Sentence](http://vocab.lappsgrid.org/Sentence)
(of any properties)
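In the output view, the transcript's text units (Token, Sentence, TextDocument) are tied to TimeFrame annotations via Alignment annotations. The sketch below pulls frame times and sentence text out of a simplified, hand-made stand-in for the output JSON; real MMIF output has more structure (views, `contains` metadata, document anchors), and the mmif-python package is the proper way to consume it:

```python
import json

# Simplified, hypothetical stand-in for a fragment of the app's output;
# real MMIF wraps annotations in views and links them with Alignments.
sample = json.loads("""
{
  "annotations": [
    {"@type": "http://mmif.clams.ai/vocabulary/TimeFrame/v5",
     "properties": {"id": "tf1", "start": 0, "end": 4500, "timeUnit": "milliseconds"}},
    {"@type": "http://vocab.lappsgrid.org/Sentence",
     "properties": {"id": "s1", "text": "Hello world."}}
  ]
}
""")

def by_type(annotations, type_suffix):
    """Select annotations whose @type URI ends with the given suffix."""
    return [a for a in annotations if a["@type"].endswith(type_suffix)]

frames = by_type(sample["annotations"], "TimeFrame/v5")
sentences = by_type(sample["annotations"], "Sentence")
print(frames[0]["properties"]["start"], frames[0]["properties"]["end"])
print(sentences[0]["properties"]["text"])
```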

128 changes: 128 additions & 0 deletions docs/_apps/whisper-wrapper/v10/metadata.json
@@ -0,0 +1,128 @@
{
"name": "Whisper Wrapper",
"description": "A CLAMS wrapper for Whisper-based ASR software originally developed by OpenAI.",
"app_version": "v10",
"mmif_version": "1.0.5",
"analyzer_version": "20231117",
"app_license": "Apache 2.0",
"analyzer_license": "MIT",
"identifier": "http://apps.clams.ai/whisper-wrapper/v10",
"url": "https://github.com/clamsproject/app-whisper-wrapper",
"input": [
[
{
"@type": "http://mmif.clams.ai/vocabulary/AudioDocument/v1",
"required": true
},
{
"@type": "http://mmif.clams.ai/vocabulary/VideoDocument/v1",
"required": true
}
]
],
"output": [
{
"@type": "http://mmif.clams.ai/vocabulary/TextDocument/v1"
},
{
"@type": "http://mmif.clams.ai/vocabulary/TimeFrame/v5",
"properties": {
"timeUnit": "milliseconds"
}
},
{
"@type": "http://mmif.clams.ai/vocabulary/Alignment/v1"
},
{
"@type": "http://vocab.lappsgrid.org/Token"
},
{
"@type": "http://vocab.lappsgrid.org/Sentence"
}
],
"parameters": [
{
"name": "modelSize",
"description": "The size of the model to use. When `modelLang=en` is given, for non-`large` models, English-only models will be used instead of multilingual models for speed and accuracy. (For `large` models, English-only models are not available.) (also can be given as alias: tiny=t, base=b, small=s, medium=m, large=l, large-v2=l2, large-v3=l3)",
"type": "string",
"choices": [
"tiny",
"t",
"base",
"b",
"small",
"s",
"medium",
"m",
"large",
"l",
"large-v2",
"l2",
"large-v3",
"l3"
],
"default": "tiny",
"multivalued": false
},
{
"name": "modelLang",
"description": "Language of the model to use, accepts two- or three-letter ISO 639 language codes, however Whisper only supports a subset of languages. If the language is not supported, error will be raised.For the full list of supported languages, see https://github.com/openai/whisper/blob/20231117/whisper/tokenizer.py . In addition to the langauge code, two-letter region codes can be added to the language code, e.g. \"en-US\" for US English. Note that the region code is only for compatibility and recording purpose, and Whisper neither detects regional dialects, nor use the given one for transcription. When the langauge code is not given, Whisper will run in langauge detection mode, and will use first few seconds of the audio to detect the language.",
"type": "string",
"default": "",
"multivalued": false
},
{
"name": "task",
"description": "(from whisper CLI) whether to perform X->X speech recognition ('transcribe') or X->English translation ('translate')",
"type": "string",
"choices": [
"transcribe",
"translate"
],
"default": "transcribe",
"multivalued": false
},
{
"name": "initialPrompt",
"description": "(from whisper CLI) optional text to provide as a prompt for the first window.",
"type": "string",
"default": "",
"multivalued": false
},
{
"name": "conditionOnPreviousText",
"description": "(from whisper CLI) if True, provide the previous output of the model as a prompt for the next window; disabling may make the text inconsistent across windows, but the model becomes less prone to getting stuck in a failure loop",
"type": "boolean",
"default": true,
"multivalued": false
},
{
"name": "noSpeechThreshold",
"description": "(from whisper CLI) if the probability of the <|nospeech|> token is higher than this value AND the decoding has failed due to `logprob_threshold`, consider the segment as silence",
"type": "number",
"default": 0.6,
"multivalued": false
},
{
"name": "pretty",
"description": "The JSON body of the HTTP response will be re-formatted with 2-space indentation",
"type": "boolean",
"default": false,
"multivalued": false
},
{
"name": "runningTime",
"description": "The running time of the app will be recorded in the view metadata",
"type": "boolean",
"default": false,
"multivalued": false
},
{
"name": "hwFetch",
"description": "The hardware information (architecture, GPU and vRAM) will be recorded in the view metadata",
"type": "boolean",
"default": false,
"multivalued": false
}
]
}
6 changes: 6 additions & 0 deletions docs/_apps/whisper-wrapper/v10/submission.json
@@ -0,0 +1,6 @@
{
"time": "2024-08-29T22:13:54+00:00",
"submitter": "keighrim",
"image": "ghcr.io/clamsproject/app-whisper-wrapper:v10",
"releasenotes": "This version adds some delegation parameters to whisper.transcribe\n\n- `task`: delegate to `--task`\n- `initialPrompt`: delegate to `--initial-prompt`\n- `conditionOnPreviousText`: delegate to `--condition-on-previous-text`\n- `noSpeechThreshold`: delegate to `--no-speech-threshold`\n\n"
}
6 changes: 5 additions & 1 deletion docs/_data/app-index.json
@@ -1,8 +1,12 @@
{
"http://apps.clams.ai/whisper-wrapper": {
"description": "A CLAMS wrapper for Whisper-based ASR software originally developed by OpenAI.",
"latest_update": "2024-08-16T15:05:09+00:00",
"latest_update": "2024-08-29T22:13:54+00:00",
"versions": [
[
"v10",
"keighrim"
],
[
"v9",
"keighrim"
2 changes: 1 addition & 1 deletion docs/_data/apps.json
