docs/SpecCodingConventions.md (+4 lines)
@@ -80,6 +80,9 @@ Example:
 * Commonly used punctuation and symbol characters include:
     * « » (U+00AB / U+00BB Left/Right Pointing Double Angle Quotation Marks) used for [list literals](https://infra.spec.whatwg.org/#lists)
     * → (U+2192 Rightwards Arrow) used for [map iteration](https://infra.spec.whatwg.org/#map-iterate)
+    * In expressions:
+        * Use * (U+002A Asterisk) for multiplication, / (U+002F Solidus) for division, and - (U+002D Hyphen-Minus) for subtraction, to reduce friction for implementers. Don't use × (U+00D7 Multiplication Sign), ∗ (U+2217 Asterisk Operator), ÷ (U+00F7 Division Sign), or − (U+2212 Minus Sign).
+        * Use named functions like _floor(x)_ and _ceil(x)_ rather than syntax like ⌊_x_⌋ and ⌈_x_⌉.
### Formatting
@@ -88,6 +91,7 @@ Example:
 * Outside of examples, which should be appropriately styled automatically, literals such as numbers within spec prose are not JavaScript values and should not be styled as code.
 * Strings used internally (e.g. operator names) should not be styled as code.
 * When concisely defining a list's members or a tensor's layout, use the syntax `*[ ... ]*` (e.g. _"nchw" means the input tensor has the layout *[batches, inputChannels, height, width]*_)
+* In Web IDL `<pre class=idl>` blocks, wrap long lines to avoid horizontal scrollbars. 88 characters seems to be the magic number.
 1. [=list/For each=] |dimension| of |desc|.{{MLOperandDescriptor/dimensions}}:
-    1. Set |elementLength| to |elementLength| × |dimension|.
+    1. Set |elementLength| to |elementLength| * |dimension|.
 1. Let |elementSize| be the [=element size=] of one of the {{ArrayBufferView}} types that matches |desc|.{{MLOperandDescriptor/dataType}} according to [this table](#appendices-mloperanddatatype-arraybufferview-compatibility).
-1. Return |elementLength| × |elementSize|.
+1. Return |elementLength| * |elementSize|.
 </details>
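The byte-length steps in this hunk reduce to multiplying all dimensions together and scaling by the element size. A minimal Python sketch follows; the data-type table here is an assumed subset of the spec's MLOperandDataType/ArrayBufferView compatibility table, not the full mapping:

```python
# Assumed subset of the spec's data type → element size (bytes) table.
ELEMENT_SIZE = {"float32": 4, "float16": 2, "int32": 4, "uint8": 1}

def byte_length(data_type, dimensions):
    """Mirror the algorithm above: elementLength * elementSize."""
    element_length = 1
    for dimension in dimensions:  # For each dimension of desc.dimensions:
        element_length *= dimension  # Set elementLength to elementLength * dimension.
    return element_length * ELEMENT_SIZE[data_type]
```

For example, a `float32` tensor of shape `[2, 3, 4]` occupies `2 * 3 * 4 * 4 = 96` bytes.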
<details open algorithm>
@@ -1254,7 +1254,7 @@ Create a named {{MLOperand}} based on a descriptor, that can be used as an input
 **Arguments:**
 - *name*: a [=string=] name of the input.
 - *descriptor*: an {{MLOperandDescriptor}} object.
-**Returns:**: an {{MLOperand}} object.
+**Returns:** an {{MLOperand}} object.
 </div>
<details open algorithm>
@@ -1280,7 +1280,7 @@ Create a constant {{MLOperand}} of the specified data type and shape that contai
 **Arguments:**
 - *descriptor*: an {{MLOperandDescriptor}}. The descriptor of the output tensor.
 - *bufferView*: an {{ArrayBufferView}}. The view of the buffer containing the initializing data.
-**Returns:**: an {{MLOperand}}. The constant output tensor.
+**Returns:** an {{MLOperand}}. The constant output tensor.
 </div>
<details open algorithm>
@@ -1307,7 +1307,7 @@ Data truncation will occur when the specified value exceeds the range of the spe
 **Arguments:**
 - *value*: a {{float}} number. The value of the constant.
 - *type*: an optional {{MLOperandDataType}}. If not specified, it is assumed to be {{MLOperandDataType/"float32"}}.
-**Returns:**: an {{MLOperand}}. The constant output.
+**Returns:** an {{MLOperand}}. The constant output.
 </div>
<details open algorithm>
@@ -1336,7 +1336,7 @@ Data truncation will occur when the values in the range exceed the range of the
 - *end*: a {{float}} scalar. The ending value of the range.
 - *step*: a {{float}} scalar. The gap value between two data points in the range.
 - *type*: an optional {{MLOperandDataType}}. If not specified, it is assumed to be {{MLOperandDataType/"float32"}}.
-**Returns:**: an {{MLOperand}}. The constant 1-D output tensor of size `max(0, ceil((end - start)/step))`.
+**Returns:** an {{MLOperand}}. The constant 1-D output tensor of size `max(0, ceil((end - start)/step))`.
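The `max(0, ceil((end - start)/step))` size formula is easy to sanity-check in isolation; a one-line Python sketch of just that formula:

```python
import math

def range_size(start, end, step):
    # Size of the constant 1-D output tensor: max(0, ceil((end - start) / step)).
    return max(0, math.ceil((end - start) / step))
```

Note the `max(0, ...)` clamp covers empty ranges, e.g. `start=5, end=0, step=1` yields size 0 rather than a negative count.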
-    The 1-D tensor of the scaling values whose [=list/size=] is equal to the number of channels, i.e. the size of the feature dimension of the input. For example, for an |input| tensor with `nchw` layout, the [=list/size=] is equal to |input|'s [=MLOperand/shape=][1].
+    The 1-D tensor of the scaling values whose [=list/size=] is equal to the number of channels, i.e. the size of the feature dimension of the input. For example, for an |input| tensor with {{MLInputOperandLayout/"nchw"}} layout, the [=list/size=] is equal to |input|'s [=MLOperand/shape=][1].

 : <dfn>bias</dfn>
 ::
-    The 1-D tensor of the bias values whose [=list/size=] is equal to the size of the feature dimension of the input. For example, for an |input| tensor with `nchw` layout, the [=list/size=] is equal to |input|'s [=MLOperand/shape=][1].
+    The 1-D tensor of the bias values whose [=list/size=] is equal to the size of the feature dimension of the input. For example, for an |input| tensor with {{MLInputOperandLayout/"nchw"}} layout, the [=list/size=] is equal to |input|'s [=MLOperand/shape=][1].
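The "feature dimension" indexing that both descriptions rely on can be sketched as below. The `"nhwc"` branch is an assumption based on the usual channels-last convention; only the `"nchw"` case (`shape[1]`) is stated in the hunk above:

```python
def feature_dimension_size(shape, layout):
    # For "nchw" the channel (feature) dimension is shape[1];
    # for "nhwc" (assumed here) it is the last dimension, shape[3].
    return shape[1] if layout == "nchw" else shape[3]
```

The 1-D `scale` and `bias` tensors must both have exactly this many elements, e.g. 3 for an input of shape `[1, 3, 224, 224]` in `"nchw"` layout.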
-    Calculate the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Leaky_ReLU">leaky version of rectified linear function</a> on the input tensor element-wise. The calculation follows the expression `max(0, x) + alpha ∗ min(0, x)`.
+    Calculate the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Leaky_ReLU">leaky version of rectified linear function</a> on the input tensor element-wise. The calculation follows the expression `max(0, x) + alpha * min(0, x)`.
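The leaky ReLU expression applied to a single element is a one-liner; a Python sketch of the formula the hunk above rewrites:

```python
def leaky_relu(x, alpha=0.01):
    # max(0, x) + alpha * min(0, x): identity for x >= 0,
    # a small negative slope (alpha) for x < 0.
    return max(0.0, x) + alpha * min(0.0, x)
```

For positive inputs the second term is zero and the function is the identity; for negative inputs the first term is zero and the output is `alpha * x`.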
 To <dfn dfn-for=MLGraphBuilder>calculate matmul output sizes</dfn>, given |a| and |b| run the following steps:
 </summary>
 1. Let |shapeA| be a [=list/clone=] of |a|'s [=MLOperand/shape=].
-1. Let |sizeA| be the [=list/size=] of |shapeA|.
+1. Let |sizeA| be |shapeA|'s [=list/size=].
 1. Let |shapeB| be a [=list/clone=] of |b|'s [=MLOperand/shape=].
-1. Let |sizeB| be the [=list/size=] of |shapeB|.
+1. Let |sizeB| be |shapeB|'s [=list/size=].
 1. If either |sizeA| or |sizeB| is less than 2, then [=exception/throw=] a {{TypeError}}.
 1. Let |colsA| be |shapeA|[|sizeA| - 1].
 1. Let |rowsA| be |shapeA|[|sizeA| - 2].
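The algorithm is truncated here, so the following Python sketch covers only the visible steps plus the standard matmul shape rule as an assumption: the inner-dimension check (`colsA == rowsB`) and the `[rowsA, colsB]` result are how matmul conventionally works, not a quotation of the remaining spec steps:

```python
def matmul_output_sizes(shape_a, shape_b):
    """Sketch of the core of 'calculate matmul output sizes'.
    Broadcasting of leading batch dimensions is omitted."""
    size_a, size_b = len(shape_a), len(shape_b)
    # If either sizeA or sizeB is less than 2, throw a TypeError.
    if size_a < 2 or size_b < 2:
        raise TypeError("matmul inputs must be at least 2-D")
    cols_a, rows_a = shape_a[size_a - 1], shape_a[size_a - 2]
    cols_b, rows_b = shape_b[size_b - 1], shape_b[size_b - 2]
    # Assumed continuation: inner dimensions must agree.
    if cols_a != rows_b:
        raise TypeError("inner dimensions do not match")
    return [rows_a, cols_b]
```

For example, multiplying shapes `[3, 4]` and `[4, 5]` yields `[3, 5]`.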
@@ -4616,7 +4621,7 @@ Apply the L2 norm function to a region of the input feature map. The L2 norm is
 Calculate the maximum value for patches of a feature map, and use it to create a pooled feature map. See [[#api-mlgraphbuilder-pool2d]] for more detail.

 ### prelu ### {#api-mlgraphbuilder-prelu}
-Calculate the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Parametric_ReLU">parametric version of rectified linear function (Parametric ReLU)</a> on the input tensor element-wise. Parametric ReLU is a type of leaky ReLU that, instead of using a fixed scalar slope like 0.01, makes the slope (coefficient of leakage) a parameter that is learned during the model training phase. The calculation follows the expression `max(0, x) + slope ∗ min(0, x)`.
+Calculate the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Parametric_ReLU">parametric version of rectified linear function (Parametric ReLU)</a> on the input tensor element-wise. Parametric ReLU is a type of leaky ReLU that, instead of using a fixed scalar slope like 0.01, makes the slope (coefficient of leakage) a parameter that is learned during the model training phase. The calculation follows the expression `max(0, x) + slope * min(0, x)`.
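PReLU differs from leaky ReLU only in where the slope comes from: it is a learned tensor rather than a fixed scalar. A Python sketch over flat lists (the spec operates on tensors; per-element pairing of input and slope is the assumption here):

```python
def prelu(xs, slopes):
    # max(0, x) + slope * min(0, x), with a learned slope paired
    # with each input element rather than one fixed scalar.
    return [max(0.0, x) + s * min(0.0, x) for x, s in zip(xs, slopes)]
```

With all slopes equal to 0.01 this reduces to the default leaky ReLU above.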