
Commit 1f15c3e

Merge pull request #175 from clamsproject/register/0-swt-detection.v6.1
App Submitted - swt-detection.v6.1
2 parents: 8559f65 + d4328d1

File tree: 5 files changed, +335 −2 lines
New file (+154 lines)

@@ -0,0 +1,154 @@
---
layout: posts
classes: wide
title: "Scenes-with-text Detection (v6.1)"
date: 2024-07-30T01:06:00+00:00
---

## About this version

- Submitter: [keighrim](https://github.com/keighrim)
- Submission Time: 2024-07-30T01:06:00+00:00
- Prebuilt Container Image: [ghcr.io/clamsproject/app-swt-detection:v6.1](https://github.com/clamsproject/app-swt-detection/pkgs/container/app-swt-detection/v6.1)
- Release Notes

  > SDK update to fix warning view bugs
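The prebuilt image above runs the app as an HTTP service. The sketch below is not part of the submission; it is a minimal check, in Python, that a locally started container is responding, assuming the service is published on host port 5000 (the usual CLAMS convention, e.g. `docker run --rm -p 5000:5000 ghcr.io/clamsproject/app-swt-detection:v6.1`). A GET request to the root should return the same app metadata recorded in metadata.json below.

```python
import json
import urllib.request

# Base URL of a locally running container; assumes the CLAMS HTTP service
# was published on host port 5000 (adjust if you mapped a different port).
APP_URL = "http://localhost:5000"

# A GET request to a running CLAMS app returns its metadata as JSON, which
# should match the metadata.json recorded in this commit.
with urllib.request.urlopen(APP_URL) as resp:
    metadata = json.load(resp)

print(metadata["identifier"])   # expected: http://apps.clams.ai/swt-detection/v6.1
print(metadata["app_version"])  # expected: v6.1
```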
## About this app (See raw [metadata.json](metadata.json))

**Detects scenes with text, like slates, chyrons and credits.**

- App ID: [http://apps.clams.ai/swt-detection/v6.1](http://apps.clams.ai/swt-detection/v6.1)
- App License: Apache 2.0
- Source Repository: [https://github.com/clamsproject/app-swt-detection](https://github.com/clamsproject/app-swt-detection) ([source tree of the submitted version](https://github.com/clamsproject/app-swt-detection/tree/v6.1))

#### Inputs

(**Note**: "*" as a property value means that the property is required but can be any value.)

- [http://mmif.clams.ai/vocabulary/VideoDocument/v1](http://mmif.clams.ai/vocabulary/VideoDocument/v1) (required)
  (of any properties)
#### Configurable Parameters

(**Note**: _Multivalued_ means the parameter can have one or more values.)

- `startAt`: optional, defaults to `0`
  - Type: integer
  - Multivalued: False

  > Number of milliseconds into the video to start processing

- `stopAt`: optional, defaults to `9223372036854775807`
  - Type: integer
  - Multivalued: False

  > Number of milliseconds into the video to stop processing

- `sampleRate`: optional, defaults to `1000`
  - Type: integer
  - Multivalued: False

  > Milliseconds between sampled frames

- `minFrameScore`: optional, defaults to `0.01`
  - Type: number
  - Multivalued: False

  > Minimum score for a still frame to be included in a TimeFrame

- `minTimeframeScore`: optional, defaults to `0.5`
  - Type: number
  - Multivalued: False

  > Minimum score for a TimeFrame

- `minFrameCount`: optional, defaults to `2`
  - Type: integer
  - Multivalued: False

  > Minimum number of sampled frames required for a TimeFrame

- `modelName`: optional, defaults to `convnext_lg`
  - Type: string
  - Multivalued: False
  - Choices: `convnext_tiny`, **_`convnext_lg`_**

  > model name to use for classification

- `usePosModel`: optional, defaults to `true`
  - Type: boolean
  - Multivalued: False
  - Choices: `false`, **_`true`_**

  > Use the model trained with positional features

- `useStitcher`: optional, defaults to `true`
  - Type: boolean
  - Multivalued: False
  - Choices: `false`, **_`true`_**

  > Use the stitcher after classifying the TimePoints

- `allowOverlap`: optional, defaults to `true`
  - Type: boolean
  - Multivalued: False
  - Choices: `false`, **_`true`_**

  > Allow overlapping time frames

- `map`: optional, defaults to `['B:bars', 'S:slate', 'I:chyron', 'N:chyron', 'Y:chyron', 'C:credits', 'R:credits', 'W:other_opening', 'L:other_opening', 'O:other_opening', 'M:other_opening', 'E:other_text', 'K:other_text', 'G:other_text', 'T:other_text', 'F:other_text']`
  - Type: map
  - Multivalued: True

  > Mapping of a label in the input annotations to a new label. Must be formatted as IN_LABEL:OUT_LABEL (with a colon). To pass multiple mappings, use this parameter multiple times. By default, all the input labels are passed as is, including any negative labels (with default value being no remapping at all). However, when at least one label is remapped, all the other "unset" labels are discarded as a negative label.

- `pretty`: optional, defaults to `false`
  - Type: boolean
  - Multivalued: False
  - Choices: **_`false`_**, `true`

  > The JSON body of the HTTP response will be re-formatted with 2-space indentation

- `runningTime`: optional, defaults to `false`
  - Type: boolean
  - Multivalued: False
  - Choices: **_`false`_**, `true`

  > The running time of the app will be recorded in the view metadata

- `hwFetch`: optional, defaults to `false`
  - Type: boolean
  - Multivalued: False
  - Choices: **_`false`_**, `true`

  > The hardware information (architecture, GPU and vRAM) will be recorded in the view metadata
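To make the parameter list concrete, here is a minimal sketch (not part of the submission) of calling a locally running instance with a few of these parameters. It assumes the service is reachable at http://localhost:5000 and that `input.mmif` is a hypothetical MMIF file containing a VideoDocument; runtime parameters are passed as query parameters on the POST request, and a multivalued parameter such as `map` is simply repeated. Note that, per the description above, remapping only `S` and `B` means the remaining labels are then treated as negative.

```python
import urllib.parse
import urllib.request

# Hypothetical input file: a MMIF document containing a VideoDocument.
with open("input.mmif", "rb") as f:
    mmif_body = f.read()

# Runtime parameters go on the query string of the POST request.
# Multivalued parameters such as `map` are repeated, one key=value pair
# per mapping; the values below are illustrative, not recommendations.
params = [
    ("sampleRate", "500"),
    ("minTimeframeScore", "0.6"),
    ("map", "S:slate"),
    ("map", "B:bars"),
]
query = urllib.parse.urlencode(params)

req = urllib.request.Request(
    f"http://localhost:5000/?{query}",
    data=mmif_body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    output_mmif = resp.read().decode("utf-8")

# The response body is the enriched MMIF document.
print(len(output_mmif))
```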
#### Outputs

(**Note**: "*" as a property value means that the property is required but can be any value.)

(**Note**: Not all output annotations are always generated.)

- [http://mmif.clams.ai/vocabulary/TimeFrame/v5](http://mmif.clams.ai/vocabulary/TimeFrame/v5)
  - _timeUnit_ = "milliseconds"

- [http://mmif.clams.ai/vocabulary/TimePoint/v4](http://mmif.clams.ai/vocabulary/TimePoint/v4)
  - _timeUnit_ = "milliseconds"
  - _labelset_ = a list of ["B", "S", "W", "L", "O", "M", "I", "N", "E", "P", "Y", "K", "G", "T", "F", "C", "R"]
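Because MMIF is plain JSON, the output annotations listed above can be inspected with nothing more than the standard library (the mmif-python package offers a richer interface). A small sketch, assuming the response from the previous example was saved to a hypothetical `output.mmif`:

```python
import json

# Hypothetical file name: the MMIF returned by the app, saved locally.
with open("output.mmif") as f:
    mmif = json.load(f)

# Each view in a MMIF document holds a list of annotations, each with an
# "@type" URI and a "properties" object.  Collect the TimeFrame annotations
# of the v5 type listed under Outputs and print their properties.
TIMEFRAME = "http://mmif.clams.ai/vocabulary/TimeFrame/v5"
for view in mmif.get("views", []):
    for ann in view.get("annotations", []):
        if ann.get("@type") == TIMEFRAME:
            print(ann.get("properties", {}))
```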
New file (+169 lines)

@@ -0,0 +1,169 @@
```json
{
  "name": "Scenes-with-text Detection",
  "description": "Detects scenes with text, like slates, chyrons and credits.",
  "app_version": "v6.1",
  "mmif_version": "1.0.5",
  "app_license": "Apache 2.0",
  "identifier": "http://apps.clams.ai/swt-detection/v6.1",
  "url": "https://github.com/clamsproject/app-swt-detection",
  "input": [
    {
      "@type": "http://mmif.clams.ai/vocabulary/VideoDocument/v1",
      "required": true
    }
  ],
  "output": [
    {
      "@type": "http://mmif.clams.ai/vocabulary/TimeFrame/v5",
      "properties": {
        "timeUnit": "milliseconds"
      }
    },
    {
      "@type": "http://mmif.clams.ai/vocabulary/TimePoint/v4",
      "properties": {
        "timeUnit": "milliseconds",
        "labelset": [
          "B",
          "S",
          "W",
          "L",
          "O",
          "M",
          "I",
          "N",
          "E",
          "P",
          "Y",
          "K",
          "G",
          "T",
          "F",
          "C",
          "R"
        ]
      }
    }
  ],
  "parameters": [
    {
      "name": "startAt",
      "description": "Number of milliseconds into the video to start processing",
      "type": "integer",
      "default": 0,
      "multivalued": false
    },
    {
      "name": "stopAt",
      "description": "Number of milliseconds into the video to stop processing",
      "type": "integer",
      "default": 9223372036854775807,
      "multivalued": false
    },
    {
      "name": "sampleRate",
      "description": "Milliseconds between sampled frames",
      "type": "integer",
      "default": 1000,
      "multivalued": false
    },
    {
      "name": "minFrameScore",
      "description": "Minimum score for a still frame to be included in a TimeFrame",
      "type": "number",
      "default": 0.01,
      "multivalued": false
    },
    {
      "name": "minTimeframeScore",
      "description": "Minimum score for a TimeFrame",
      "type": "number",
      "default": 0.5,
      "multivalued": false
    },
    {
      "name": "minFrameCount",
      "description": "Minimum number of sampled frames required for a TimeFrame",
      "type": "integer",
      "default": 2,
      "multivalued": false
    },
    {
      "name": "modelName",
      "description": "model name to use for classification",
      "type": "string",
      "choices": [
        "convnext_tiny",
        "convnext_lg"
      ],
      "default": "convnext_lg",
      "multivalued": false
    },
    {
      "name": "usePosModel",
      "description": "Use the model trained with positional features",
      "type": "boolean",
      "default": true,
      "multivalued": false
    },
    {
      "name": "useStitcher",
      "description": "Use the stitcher after classifying the TimePoints",
      "type": "boolean",
      "default": true,
      "multivalued": false
    },
    {
      "name": "allowOverlap",
      "description": "Allow overlapping time frames",
      "type": "boolean",
      "default": true,
      "multivalued": false
    },
    {
      "name": "map",
      "description": "Mapping of a label in the input annotations to a new label. Must be formatted as IN_LABEL:OUT_LABEL (with a colon). To pass multiple mappings, use this parameter multiple times. By default, all the input labels are passed as is, including any negative labels (with default value being no remapping at all). However, when at least one label is remapped, all the other \"unset\" labels are discarded as a negative label.",
      "type": "map",
      "default": [
        "B:bars",
        "S:slate",
        "I:chyron",
        "N:chyron",
        "Y:chyron",
        "C:credits",
        "R:credits",
        "W:other_opening",
        "L:other_opening",
        "O:other_opening",
        "M:other_opening",
        "E:other_text",
        "K:other_text",
        "G:other_text",
        "T:other_text",
        "F:other_text"
      ],
      "multivalued": true
    },
    {
      "name": "pretty",
      "description": "The JSON body of the HTTP response will be re-formatted with 2-space indentation",
      "type": "boolean",
      "default": false,
      "multivalued": false
    },
    {
      "name": "runningTime",
      "description": "The running time of the app will be recorded in the view metadata",
      "type": "boolean",
      "default": false,
      "multivalued": false
    },
    {
      "name": "hwFetch",
      "description": "The hardware information (architecture, GPU and vRAM) will be recorded in the view metadata",
      "type": "boolean",
      "default": false,
      "multivalued": false
    }
  ]
}
```
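The parameter section of the app-directory page above corresponds to this file. As a quick sketch (assuming a local copy named `metadata.json`), the same information can be listed directly:

```python
import json

# Read a local copy of the metadata.json recorded in this commit and list
# each parameter with its type and default, mirroring the page above.
with open("metadata.json") as f:
    metadata = json.load(f)

for param in metadata["parameters"]:
    default = param.get("default", "(no default)")
    print(f'{param["name"]}: {param["type"]}, default={default!r}')
```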
New file (+6 lines)

@@ -0,0 +1,6 @@
```json
{
  "time": "2024-07-30T01:06:00+00:00",
  "submitter": "keighrim",
  "image": "ghcr.io/clamsproject/app-swt-detection:v6.1",
  "releasenotes": "SDK update to fix warning view bugs\n\n"
}
```

docs/_data/app-index.json (+5 −1)

@@ -1,8 +1,12 @@
```diff
 {
   "http://apps.clams.ai/swt-detection": {
     "description": "Detects scenes with text, like slates, chyrons and credits.",
-    "latest_update": "2024-07-25T16:11:42+00:00",
+    "latest_update": "2024-07-30T01:06:00+00:00",
     "versions": [
+      [
+        "v6.1",
+        "keighrim"
+      ],
       [
         "v6.0",
         "keighrim"
```

docs/_data/apps.json (+1 −1)

Large diffs are not rendered by default.

0 commit comments
