Commit 8c454d4 (parent: 8d38227)

clams-bot committed: adding metadata of swt-detection.v5.0

File tree

5 files changed: +350 -43 lines
New file (+137 lines):
---
layout: posts
classes: wide
title: "Scenes-with-text Detection (v5.0)"
date: 2024-05-22T14:32:58+00:00
---

## About this version

- Submitter: [marcverhagen](https://github.com/marcverhagen)
- Submission Time: 2024-05-22T14:32:58+00:00
- Prebuilt Container Image: [ghcr.io/clamsproject/app-swt-detection:v5.0](https://github.com/clamsproject/app-swt-detection/pkgs/container/app-swt-detection/v5.0)
- Release Notes

> This release adds a script to run a server-less app from the command line, adds and fixes parameters, and updates dependencies.
> - Added CLI capabilities
> - Added `allowOverlap` parameter
> - Added `map` parameter for postbin mapping
> - Updated to clams-python 1.2.2
> - Updated base container to clams-python-opencv4-torch2:1.2.2
> - Simplified model names
> - Documentation updates

## About this app (See raw [metadata.json](metadata.json))

**Detects scenes with text, like slates, chyrons and credits.**

- App ID: [http://apps.clams.ai/swt-detection/v5.0](http://apps.clams.ai/swt-detection/v5.0)
- App License: Apache 2.0
- Source Repository: [https://github.com/clamsproject/app-swt-detection](https://github.com/clamsproject/app-swt-detection) ([source tree of the submitted version](https://github.com/clamsproject/app-swt-detection/tree/v5.0))

#### Inputs

(**Note**: "*" as a property value means that the property is required but can be any value.)

- [http://mmif.clams.ai/vocabulary/VideoDocument/v1](http://mmif.clams.ai/vocabulary/VideoDocument/v1) (required)
  (of any properties)
#### Configurable Parameters

(**Note**: _Multivalued_ means the parameter can have one or more values.)

- `startAt`: optional, defaults to `0`
  - Type: integer
  - Multivalued: False

  > Number of milliseconds into the video to start processing

- `stopAt`: optional, defaults to `9223372036854775807`
  - Type: integer
  - Multivalued: False

  > Number of milliseconds into the video to stop processing

- `sampleRate`: optional, defaults to `1000`
  - Type: integer
  - Multivalued: False

  > Milliseconds between sampled frames
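The three timing parameters above determine which still frames are classified. A minimal sketch of the sampling arithmetic (the helper below is illustrative only, not part of the app's code):

```python
def sample_times(start_at=0, stop_at=9223372036854775807,
                 sample_rate=1000, duration_ms=None):
    """Timestamps (in ms) that would be sampled: every sample_rate ms,
    from start_at up to (but not including) stop_at, clipped to the
    video duration when it is known."""
    end = stop_at if duration_ms is None else min(stop_at, duration_ms)
    return list(range(start_at, end, sample_rate))

# A 5.5-second video sampled with the defaults:
sample_times(duration_ms=5500)  # [0, 1000, 2000, 3000, 4000, 5000]
```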
- `minFrameScore`: optional, defaults to `0.01`
  - Type: number
  - Multivalued: False

  > Minimum score for a still frame to be included in a TimeFrame

- `minTimeframeScore`: optional, defaults to `0.5`
  - Type: number
  - Multivalued: False

  > Minimum score for a TimeFrame

- `minFrameCount`: optional, defaults to `2`
  - Type: integer
  - Multivalued: False

  > Minimum number of sampled frames required for a TimeFrame
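These three thresholds govern how classified frames are assembled into TimeFrames. The sketch below only illustrates how the thresholds interact; the app's actual stitcher is more involved, so treat this as an assumption-laden simplification:

```python
def stitch(frame_scores, min_frame_score=0.01,
           min_timeframe_score=0.5, min_frame_count=2):
    """Group consecutive sampled frames scoring >= min_frame_score into
    candidate TimeFrames, then keep candidates that have enough frames
    and a high enough representative (max) score.
    Returns (start_index, end_index, score) tuples."""
    candidates, run = [], []
    for i, score in enumerate(frame_scores + [0.0]):  # sentinel flushes last run
        if score >= min_frame_score:
            run.append((i, score))
        elif run:
            best = max(s for _, s in run)
            if len(run) >= min_frame_count and best >= min_timeframe_score:
                candidates.append((run[0][0], run[-1][0], best))
            run = []
    return candidates

stitch([0.0, 0.2, 0.9, 0.8, 0.0, 0.6])  # [(1, 3, 0.9)] -- lone 0.6 frame dropped
```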
- `modelName`: optional, defaults to `20240409-091401.convnext_lg`
  - Type: string
  - Multivalued: False
  - Choices: `20240212-132306.convnext_lg`, `20240409-093229.convnext_tiny`, **_`20240409-091401.convnext_lg`_**, `20240126-180026.convnext_lg`, `20240212-131937.convnext_tiny`

  > model name to use for classification

- `useStitcher`: optional, defaults to `true`
  - Type: boolean
  - Multivalued: False
  - Choices: `false`, **_`true`_**

  > Use the stitcher after classifying the TimePoints

- `allowOverlap`: optional, defaults to `true`
  - Type: boolean
  - Multivalued: False
  - Choices: `false`, **_`true`_**

  > Allow overlapping time frames

- `map`: optional, defaults to `['B:bars', 'S:slate', 'S-H:slate', 'S-C:slate', 'S-D:slate', 'S-G:slate', 'W:other_opening', 'L:other_opening', 'O:other_opening', 'M:other_opening', 'I:chyron', 'N:chyron', 'Y:chyron', 'C:credit', 'R:credit', 'E:other_text', 'K:other_text', 'G:other_text', 'T:other_text', 'F:other_text']`
  - Type: map
  - Multivalued: True

  > Mapping of a label in the input annotations to a new label. Must be formatted as IN_LABEL:OUT_LABEL (with a colon). To pass multiple mappings, use this parameter multiple times. By default, all the input labels are passed as is, including any negative labels (with the default value being no remapping at all). However, when at least one label is remapped, all the other "unset" labels are discarded as a negative label.
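The discard-when-remapped behavior described for `map` can be sketched as follows. This is an illustrative helper, not the app's actual code, and `NEGATIVE` is an assumed stand-in for whatever negative label the app uses internally:

```python
NEGATIVE = "-"  # assumption: placeholder for the app's negative label

def remap(labels, mapping=None):
    """With no mapping, pass labels through unchanged; once any label is
    remapped, every label absent from the mapping is discarded as negative."""
    if not mapping:
        return list(labels)
    return [mapping.get(lbl, NEGATIVE) for lbl in labels]

remap(["B", "S", "W"])                  # ['B', 'S', 'W']  (no remapping at all)
remap(["B", "S", "W"], {"S": "slate"})  # ['-', 'slate', '-']
```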
- `pretty`: optional, defaults to `false`
  - Type: boolean
  - Multivalued: False
  - Choices: **_`false`_**, `true`

  > The JSON body of the HTTP response will be re-formatted with 2-space indentation
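Parameters are passed to the running app as HTTP query parameters, and a multivalued parameter such as `map` is repeated once per value. A sketch of building such a query string with the standard library (the host, port, and root endpoint are assumptions for illustration):

```python
from urllib.parse import urlencode

# A sequence of (key, value) pairs lets the same key appear more than once,
# which is how a multivalued parameter like `map` is passed.
params = [
    ("pretty", "true"),
    ("map", "B:bars"),
    ("map", "S:slate"),
]
query = urlencode(params)
url = "http://localhost:5000/?" + query  # assumed local deployment
print(query)  # pretty=true&map=B%3Abars&map=S%3Aslate
```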
#### Outputs

(**Note**: "*" as a property value means that the property is required but can be any value.)

(**Note**: Not all output annotations are always generated.)

- [http://mmif.clams.ai/vocabulary/TimeFrame/v5](http://mmif.clams.ai/vocabulary/TimeFrame/v5)
  - _timeUnit_ = "milliseconds"

- [http://mmif.clams.ai/vocabulary/TimePoint/v4](http://mmif.clams.ai/vocabulary/TimePoint/v4)
  - _timeUnit_ = "milliseconds"
  - _labelset_ = a list of ["B", "S", "S:H", "S:C", "S:D", "S:B", "S:G", "W", "L", "O", "M", "I", "N", "E", "P", "Y", "K", "G", "T", "F", "C", "R"]
New file (+160 lines):

{
  "name": "Scenes-with-text Detection",
  "description": "Detects scenes with text, like slates, chyrons and credits.",
  "app_version": "v5.0",
  "mmif_version": "1.0.4",
  "app_license": "Apache 2.0",
  "identifier": "http://apps.clams.ai/swt-detection/v5.0",
  "url": "https://github.com/clamsproject/app-swt-detection",
  "input": [
    {
      "@type": "http://mmif.clams.ai/vocabulary/VideoDocument/v1",
      "required": true
    }
  ],
  "output": [
    {
      "@type": "http://mmif.clams.ai/vocabulary/TimeFrame/v5",
      "properties": {
        "timeUnit": "milliseconds"
      }
    },
    {
      "@type": "http://mmif.clams.ai/vocabulary/TimePoint/v4",
      "properties": {
        "timeUnit": "milliseconds",
        "labelset": ["B", "S", "S:H", "S:C", "S:D", "S:B", "S:G", "W", "L", "O",
                     "M", "I", "N", "E", "P", "Y", "K", "G", "T", "F", "C", "R"]
      }
    }
  ],
  "parameters": [
    {
      "name": "startAt",
      "description": "Number of milliseconds into the video to start processing",
      "type": "integer",
      "default": 0,
      "multivalued": false
    },
    {
      "name": "stopAt",
      "description": "Number of milliseconds into the video to stop processing",
      "type": "integer",
      "default": 9223372036854775807,
      "multivalued": false
    },
    {
      "name": "sampleRate",
      "description": "Milliseconds between sampled frames",
      "type": "integer",
      "default": 1000,
      "multivalued": false
    },
    {
      "name": "minFrameScore",
      "description": "Minimum score for a still frame to be included in a TimeFrame",
      "type": "number",
      "default": 0.01,
      "multivalued": false
    },
    {
      "name": "minTimeframeScore",
      "description": "Minimum score for a TimeFrame",
      "type": "number",
      "default": 0.5,
      "multivalued": false
    },
    {
      "name": "minFrameCount",
      "description": "Minimum number of sampled frames required for a TimeFrame",
      "type": "integer",
      "default": 2,
      "multivalued": false
    },
    {
      "name": "modelName",
      "description": "model name to use for classification",
      "type": "string",
      "choices": [
        "20240212-132306.convnext_lg",
        "20240409-093229.convnext_tiny",
        "20240409-091401.convnext_lg",
        "20240126-180026.convnext_lg",
        "20240212-131937.convnext_tiny"
      ],
      "default": "20240409-091401.convnext_lg",
      "multivalued": false
    },
    {
      "name": "useStitcher",
      "description": "Use the stitcher after classifying the TimePoints",
      "type": "boolean",
      "default": true,
      "multivalued": false
    },
    {
      "name": "allowOverlap",
      "description": "Allow overlapping time frames",
      "type": "boolean",
      "default": true,
      "multivalued": false
    },
    {
      "name": "map",
      "description": "Mapping of a label in the input annotations to a new label. Must be formatted as IN_LABEL:OUT_LABEL (with a colon). To pass multiple mappings, use this parameter multiple times. By default, all the input labels are passed as is, including any negative labels (with default value being no remapping at all). However, when at least one label is remapped, all the other \"unset\" labels are discarded as a negative label.",
      "type": "map",
      "default": [
        "B:bars",
        "S:slate", "S-H:slate", "S-C:slate", "S-D:slate", "S-G:slate",
        "W:other_opening", "L:other_opening", "O:other_opening", "M:other_opening",
        "I:chyron", "N:chyron", "Y:chyron",
        "C:credit", "R:credit",
        "E:other_text", "K:other_text", "G:other_text", "T:other_text", "F:other_text"
      ],
      "multivalued": true
    },
    {
      "name": "pretty",
      "description": "The JSON body of the HTTP response will be re-formatted with 2-space indentation",
      "type": "boolean",
      "default": false,
      "multivalued": false
    }
  ]
}
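Records like the one above are machine-readable, so tooling can introspect an app's parameters without running it. A sketch of reading such a record (using an abbreviated inline copy of the metadata rather than fetching the real file):

```python
import json

# Abbreviated excerpt of the metadata record, inlined for illustration.
metadata = json.loads("""
{
  "name": "Scenes-with-text Detection",
  "parameters": [
    {"name": "sampleRate", "type": "integer", "default": 1000, "multivalued": false},
    {"name": "useStitcher", "type": "boolean", "default": true, "multivalued": false}
  ]
}
""")
defaults = {p["name"]: p["default"] for p in metadata["parameters"]}
print(defaults)  # {'sampleRate': 1000, 'useStitcher': True}
```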
New file (+6 lines):

{
  "time": "2024-05-22T14:32:58+00:00",
  "submitter": "marcverhagen",
  "image": "ghcr.io/clamsproject/app-swt-detection:v5.0",
  "releasenotes": "This release adds a script to run a server-less app from the command line, adds and fixes parameters, and updates dependencies.\n\n- Added CLI capabilities\n- Added allowOverlap parameter\n- Added map parameter for postbin mapping\n- Updated to clams-python 1.2.2\n- Updated base container to clams-python-opencv4-torch2:1.2.2\n- Simplified model names\n- Documentation updates\n\n"
}
