
Commit 7dbcfa6: initial commit
Signed-off-by: Yuan Yao <[email protected]>
1 parent: 976a142

File tree: 6 files changed (+9, -42 lines)


docs/Changelog.md

Lines changed: 3 additions & 37 deletions

````diff
@@ -21477,7 +21477,7 @@ This version of the operator has been available since version 18 of the default
 <dd>Constrain input and output types to all numeric tensor types.</dd>
 </dl>
 
-### <a name="GroupNormalization-18"></a>**GroupNormalization-18**</a>
+### <a name="GroupNormalization-18"></a>**GroupNormalization-18** (deprecated)</a>
 
 A GroupNormalization function. Carries out group normalization as described in
 the paper https://arxiv.org/abs/1803.08494
@@ -21497,41 +21497,7 @@ This version of the operator has been available since version 18 of the default
 
 #### Version
 
-This version of the operator has been available since version 18 of the default ONNX operator set.
-
-#### Attributes
-
-<dl>
-<dt><tt>epsilon</tt> : float (default is 1e-05)</dt>
-<dd>The epsilon value to use to avoid division by zero.</dd>
-<dt><tt>num_groups</tt> : int (required)</dt>
-<dd>The number of groups of channels. It should be a divisor of the number of channels `C`.</dd>
-</dl>
-
-#### Inputs
-
-<dl>
-<dt><tt>X</tt> (differentiable) : T</dt>
-<dd>Input data tensor. Dimensions for image cases are `(N x C x H x W)`, where `N` is the batch size, `C` is the number of channels, and `H` and `W` are the height and width of the data. Statistics are computed for every group of channels over `C`, `H`, and `W`. For non-image cases, the dimensions are in the form of `(N x C x D1 x D2 ... Dn)`.</dd>
-<dt><tt>scale</tt> (differentiable) : T</dt>
-<dd>Scale tensor of shape `(num_groups)`.</dd>
-<dt><tt>bias</tt> (differentiable) : T</dt>
-<dd>Bias tensor of shape `(num_groups)`.</dd>
-</dl>
-
-#### Outputs
-
-<dl>
-<dt><tt>Y</tt> (differentiable) : T</dt>
-<dd>The output tensor of the same shape as `X`.</dd>
-</dl>
-
-#### Type Constraints
-
-<dl>
-<dt><tt>T</tt> : tensor(float16), tensor(float), tensor(double), tensor(bfloat16)</dt>
-<dd>Constrain input and output types to float tensors.</dd>
-</dl>
+This version of the operator has been deprecated since version 18 of the default ONNX operator set.
 
 ### <a name="LpPool-18"></a>**LpPool-18**</a>
 
@@ -24864,7 +24830,7 @@ This version of the operator has been available since version 21 of the default
 y = scale * (x - mean) / sqrt(variance + epsilon) + bias,
 ```
 where the mean and variance are computed per instance per group of channels, and
-`scale` and `bias` should be specified for each group of channels. The number of
+`scale` and `bias` should be specified for each channel. The number of
 groups `num_groups` should be divisible by the number of channels so that there are
 an equal number of channels per group.
 
````
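The wording fix above matters in practice: mean and variance are computed per instance per group, while `scale` and `bias` are applied per channel. A minimal NumPy sketch of these semantics (not the ONNX reference implementation), with shapes borrowed from the updated upgrade test:

```python
import numpy as np

# Sketch of GroupNormalization as documented after this change:
# statistics are per instance per group; scale and bias are per channel,
# so both have shape (C,).
def group_norm(x, scale, bias, num_groups, epsilon=1e-5):
    n, c = x.shape[0], x.shape[1]
    g = x.reshape(n, num_groups, -1)           # fold each group's channels together
    mean = g.mean(axis=-1, keepdims=True)      # per instance, per group
    var = g.var(axis=-1, keepdims=True)
    y = ((g - mean) / np.sqrt(var + epsilon)).reshape(x.shape)
    per_channel = (1, c) + (1,) * (x.ndim - 2)  # broadcast over N and spatial dims
    return scale.reshape(per_channel) * y + bias.reshape(per_channel)

# Shapes mirror the updated test case: X (3, 4, 2, 2), scale/bias (4,), 2 groups.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4, 2, 2)).astype(np.float32)
y = group_norm(x, np.ones(4, np.float32), np.zeros(4, np.float32), num_groups=2)
```

With unit scale and zero bias, each per-instance group of `y` has mean approximately 0 and variance approximately 1.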

docs/Operators.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -11736,7 +11736,7 @@ expect(
 y = scale * (x - mean) / sqrt(variance + epsilon) + bias,
 ```
 where the mean and variance are computed per instance per group of channels, and
-`scale` and `bias` should be specified for each group of channels. The number of
+`scale` and `bias` should be specified for each channel. The number of
 groups `num_groups` should be divisible by the number of channels so that there are
 an equal number of channels per group.
 
````

onnx/defs/nn/defs.cc

Lines changed: 1 addition & 1 deletion

````diff
@@ -2699,7 +2699,7 @@ This operator transforms input according to
 y = scale * (x - mean) / sqrt(variance + epsilon) + bias,
 ```
 where the mean and variance are computed per instance per group of channels, and
-`scale` and `bias` should be specified for each group of channels. The number of
+`scale` and `bias` should be specified for each channel. The number of
 groups `num_groups` should be divisible by the number of channels so that there are
 an equal number of channels per group.
 
````

onnx/defs/nn/old.cc

Lines changed: 1 addition & 0 deletions

````diff
@@ -4020,6 +4020,7 @@ ONNX_OPERATOR_SET_SCHEMA(
     GroupNormalization,
     18,
     OpSchema()
+        .Deprecate()
         .SetDoc(GroupNormalization_ver18_doc)
         .Attr("epsilon", "The epsilon value to use to avoid division by zero.", AttributeProto::FLOAT, 1e-5f)
         .Attr(
````

onnx/reference/ops/op_quantize_linear.py

Lines changed: 1 addition & 1 deletion

````diff
@@ -209,7 +209,7 @@ def _run( # noqa: PLR0911
         if tensor_type == TensorProto.FLOAT4E2M1:
             x += zero_point
             f4 = subbyte.float32_to_float4e2m1_unpacked(x)
-            return (f4,)  # type: ignore[attr-defined]
+            return (f4.astype(float4e2m1),)  # type: ignore[attr-defined]
 
         raise ValueError(
             f"Unexpected type: output_dtype={tensor_type} is not a supported quantized type."
````
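For context on the fix above: the reference op rounds values to the FLOAT4E2M1 grid (1 sign, 2 exponent, 1 mantissa bit), and the change makes it return the result tagged with the custom `float4e2m1` dtype instead of plain float32. A rough, illustrative sketch of the rounding step alone, using only NumPy (tie-breaking here is nearest-match, not the spec's exact behavior, and the dtype tagging is omitted):

```python
import numpy as np

# FLOAT4E2M1 can represent only these magnitudes (plus their negatives).
F4E2M1_VALUES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0], dtype=np.float32)

def to_float4e2m1(x):
    """Round each element to the nearest FLOAT4E2M1-representable value."""
    x = np.asarray(x, dtype=np.float32)
    mag = np.clip(np.abs(x), 0.0, 6.0)  # saturate to the largest finite value
    idx = np.abs(mag[..., None] - F4E2M1_VALUES).argmin(axis=-1)
    return np.sign(x) * F4E2M1_VALUES[idx]

print(to_float4e2m1([0.7, -2.4, 10.0]))  # [ 0.5 -2.   6. ]
```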

onnx/test/version_converter/automatic_upgrade_test.py

Lines changed: 2 additions & 2 deletions

````diff
@@ -1718,8 +1718,8 @@ def test_BitwiseXor(self) -> None:
     def test_GroupNormalization(self) -> None:
         self._test_op_upgrade(
             "GroupNormalization",
-            18,
-            [[3, 4, 2, 2], [1], [1]],
+            21,
+            [[3, 4, 2, 2], [4], [4]],
             [[3, 4, 2, 2]],
             attrs={"epsilon": 1e-5, "num_groups": 2},
         )
````
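The test now starts the upgrade at opset 21, where GroupNormalization takes `scale` and `bias` of shape `(C,)` rather than the opset-18 shape `(num_groups,)`, hence `[4], [4]` for four channels. Since channels are grouped contiguously, an old per-group parameter can be expanded to the per-channel layout by repeating each group's value; a sketch (illustrative only, not the actual version-converter adapter):

```python
import numpy as np

# Expand per-group parameters (old opset-18 layout, shape (num_groups,))
# to the per-channel layout (shape (C,)) used from opset 21 on.
num_groups, C = 2, 4
scale_v18 = np.array([1.5, 2.0], dtype=np.float32)   # one value per group
scale_v21 = np.repeat(scale_v18, C // num_groups)    # one value per channel
print(scale_v21)  # [1.5 1.5 2.  2. ]
```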
