Commit edc9e52

Merge pull request #618 from inexorabletash/conventions-misc-tidying
Conventions: Add and apply a few more spec coding conventions
2 parents 44a8674 + 6fa9bf0 commit edc9e52

2 files changed: +37 -27 lines changed

docs/SpecCodingConventions.md (+4)
@@ -80,6 +80,9 @@ Example:
 * Commonly used punctuation and symbol characters include:
   * « » (U+00AB / U+00BB Left/Right Pointing Double Angle Quotation Marks) used for [list literals](https://infra.spec.whatwg.org/#lists)
   * → (U+2192 Rightwards Arrow) used for [map iteration](https://infra.spec.whatwg.org/#map-iterate)
+* In expressions:
+  * Use * (U+002A Asterisk) for multiplication, / (U+002F Solidus) for division, and - (U+002D Hyphen-Minus) for subtraction, to reduce friction for implementers. Don't use × (U+00D7 Multiplication Sign), ∗ (U+2217 Asterisk Operator), ÷ (U+00F7 Division Sign), or − (U+2212 Minus Sign).
+  * Use named functions like _floor(x)_ and _ceil()_ rather than syntax like ⌊_x_⌋ and ⌈_x_⌉.
 
 
 ### Formatting

@@ -88,6 +91,7 @@ Example:
 * Outside of examples, which should be appropriately styled automatically, literals such as numbers within spec prose are not JavaScript values and should not be styled as code.
 * Strings used internally (e.g. operator names) should not be styled as code.
 * When concisely defining a list's members or a tensor's layout, use the syntax `*[ ... ]*` (e.g. _"nchw" means the input tensor has the layout *[batches, inputChannels, height, width]*_)
+* In Web IDL `<pre class=idl>` blocks, wrap long lines to avoid horizontal scrollbars. 88 characters seems to be the magic number.
 
 
 ### Algorithms

index.bs (+33 -27)
@@ -887,7 +887,7 @@ When the {{MLContext/[[contextType]]}} is set to [=context type/default=] with t
 1. [=map/For each=] |name| → |view| of |views|:
     1. Let |transferredBuffer| be the result of [=ArrayBuffer/transfer|transferring=] |view|'s [=BufferSource/underlying buffer=].
     1. Let |constructor| be the appropriate [=view constructor=] for the type of {{ArrayBufferView}} |view| from |realm|.
-    1. Let |elementsNumber| be the result of |view|'s [=BufferSource/byte length=] ÷ |view|'s [=element size=].
+    1. Let |elementsNumber| be the result of |view|'s [=BufferSource/byte length=] / |view|'s [=element size=].
    1. Let |transferredView| be [$Construct$](|constructor|, |transferredBuffer|, |view|.\[[ByteOffset]], |elementsNumber|).
     1. Set |transferredViews|[|name|] to |transferredView|.
 1. Return |transferredViews|.
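
A rough JavaScript sketch of what these per-view steps amount to (illustrative only: `transferViews` is a made-up helper, not spec text, and `ArrayBuffer.prototype.transfer()` requires a recent engine):

```js
// Sketch of the "transfer views" steps above: for each named view, transfer its
// underlying buffer, then rebuild a view of the same type. The element count is
// byte length / element size, captured before the transfer detaches the buffer.
function transferViews(views) {
  const transferredViews = {};
  for (const [name, view] of Object.entries(views)) {
    const Constructor = view.constructor;            // e.g. Float32Array
    const byteOffset = view.byteOffset;
    const elementsNumber = view.byteLength / Constructor.BYTES_PER_ELEMENT;
    const transferredBuffer = view.buffer.transfer(); // detaches the original buffer
    transferredViews[name] = new Constructor(transferredBuffer, byteOffset, elementsNumber);
  }
  return transferredViews;
}

const views = {input: new Float32Array([1, 2, 3, 4])};
console.log(transferViews(views).input);  // Float32Array [ 1, 2, 3, 4 ]
```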
@@ -1030,9 +1030,9 @@ dictionary MLOperandDescriptor {
 </summary>
 1. Let |elementLength| be 1.
 1. [=list/For each=] |dimension| of |desc|.{{MLOperandDescriptor/dimensions}}:
-    1. Set |elementLength| to |elementLength| × |dimension|.
+    1. Set |elementLength| to |elementLength| * |dimension|.
 1. Let |elementSize| be the [=element size=] of one of the {{ArrayBufferView}} types that matches |desc|.{{MLOperandDescriptor/dataType}} according to [this table](#appendices-mloperanddatatype-arraybufferview-compatibility).
-1. Return |elementLength| × |elementSize|.
+1. Return |elementLength| * |elementSize|.
 </details>
 
 <details open algorithm>
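
The byte-length calculation in this hunk is the product of the dimensions times the element size. A minimal sketch (the `ELEMENT_SIZE` map below covers only a subset of the data types in the spec's compatibility table, and `byteLength` is an illustrative helper, not a spec method):

```js
// Byte length of an MLOperandDescriptor-like object: product of dimensions * element size.
const ELEMENT_SIZE = {float32: 4, float16: 2, int32: 4, uint32: 4, int8: 1, uint8: 1};

function byteLength(desc) {
  let elementLength = 1;
  for (const dimension of desc.dimensions) {
    elementLength = elementLength * dimension;
  }
  return elementLength * ELEMENT_SIZE[desc.dataType];
}

console.log(byteLength({dataType: 'float32', dimensions: [2, 3, 4]}));  // 96
```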
@@ -1254,7 +1254,7 @@ Create a named {{MLOperand}} based on a descriptor, that can be used as an input
 **Arguments:**
 - *name*: a [=string=] name of the input.
 - *descriptor*: an {{MLOperandDescriptor}} object.
-**Returns:**: an {{MLOperand}} object.
+**Returns:** an {{MLOperand}} object.
 </div>
 
 <details open algorithm>
@@ -1280,7 +1280,7 @@ Create a constant {{MLOperand}} of the specified data type and shape that contai
 **Arguments:**
 - *descriptor*: an {{MLOperandDescriptor}}. The descriptor of the output tensor.
 - *bufferView*: an {{ArrayBufferView}}. The view of the buffer containing the initializing data.
-**Returns:**: an {{MLOperand}}. The constant output tensor.
+**Returns:** an {{MLOperand}}. The constant output tensor.
 </div>
 
 <details open algorithm>
@@ -1307,7 +1307,7 @@ Data truncation will occur when the specified value exceeds the range of the spe
 **Arguments:**
 - *value*: a {{float}} number. The value of the constant.
 - *type*: an optional {{MLOperandDataType}}. If not specified, it is assumed to be {{MLOperandDataType/"float32"}}.
-**Returns:**: an {{MLOperand}}. The constant output.
+**Returns:** an {{MLOperand}}. The constant output.
 </div>
 
 <details open algorithm>
@@ -1336,7 +1336,7 @@ Data truncation will occur when the values in the range exceed the range of the
 - *end*: a {{float}} scalar. The ending value of the range.
 - *step*: a {{float}} scalar. The gap value between two data points in the range.
 - *type*: an optional {{MLOperandDataType}}. If not specified, it is assumed to be {{MLOperandDataType/"float32"}}.
-**Returns:**: an {{MLOperand}}. The constant 1-D output tensor of size `max(0, ceil((end - start)/step))`.
+**Returns:** an {{MLOperand}}. The constant 1-D output tensor of size `max(0, ceil((end - start)/step))`.
 </div>
 
 <details open algorithm>
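
A quick worked example of the size formula `max(0, ceil((end - start)/step))` quoted in this hunk (`rangeSize` is an illustrative helper, not a spec method):

```js
// Size of the 1-D range constant: max(0, ceil((end - start)/step)).
const rangeSize = (start, end, step) => Math.max(0, Math.ceil((end - start) / step));

console.log(rangeSize(0, 10, 3));  // 4 -> values 0, 3, 6, 9
console.log(rangeSize(5, 5, 1));   // 0 -> empty range
```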
@@ -1495,7 +1495,7 @@ dictionary MLBatchNormalizationOptions {
 
 partial interface MLGraphBuilder {
   MLOperand batchNormalization(MLOperand input, MLOperand mean, MLOperand variance,
-      optional MLBatchNormalizationOptions options = {});
+                               optional MLBatchNormalizationOptions options = {});
 };
 </script>
 
@@ -1778,7 +1778,9 @@ dictionary MLConv2dOptions {
 };
 
 partial interface MLGraphBuilder {
-  MLOperand conv2d(MLOperand input, MLOperand filter, optional MLConv2dOptions options = {});
+  MLOperand conv2d(MLOperand input,
+                   MLOperand filter,
+                   optional MLConv2dOptions options = {});
 };
 </script>
 
@@ -2670,7 +2672,9 @@ dictionary MLGatherOptions {
 };
 
 partial interface MLGraphBuilder {
-  MLOperand gather(MLOperand input, MLOperand indices, optional MLGatherOptions options = {});
+  MLOperand gather(MLOperand input,
+                   MLOperand indices,
+                   optional MLGatherOptions options = {});
 };
 </script>
 
@@ -2843,9 +2847,9 @@ partial interface MLGraphBuilder {
 </summary>
 1. If [=MLGraphBuilder/validating operand=] with [=this=] and any of |a| and |b| returns false, then [=exception/throw=] a {{TypeError}}.
 1. Let |shapeA| be a [=list/clone=] of |a|'s [=MLOperand/shape=].
-1. Let |sizeA| be the [=list/size=] of |shapeA|.
+1. Let |sizeA| be |shapeA|'s [=list/size=].
 1. Let |shapeB| be a [=list/clone=] of |b|'s [=MLOperand/shape=].
-1. Let |sizeB| be the [=list/size=] of |shapeB|.
+1. Let |sizeB| be |shapeB|'s [=list/size=].
 1. If |sizeA| is not 2 or |sizeB| is not 2, then [=exception/throw=] a {{TypeError}}.
 1. If |options|.{{MLGemmOptions/aTranspose}} is true, then reverse the order of the items in |shapeA|.
 1. If |options|.{{MLGemmOptions/bTranspose}} is true, then reverse the order of the items in |shapeB|.
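
A rough JavaScript transliteration of the gemm output-size steps shown in this hunk, up to the transpose handling. The function and error messages are illustrative; the inner-dimension check and the final `[rows, cols]` result follow the usual gemm definition and are not part of this hunk:

```js
// Shape handling for gemm: clone shapes, require rank 2, reverse on transpose.
function calculateGemmOutputSizes(aShape, bShape, options = {}) {
  const shapeA = aShape.slice();
  const sizeA = shapeA.length;
  const shapeB = bShape.slice();
  const sizeB = shapeB.length;
  if (sizeA !== 2 || sizeB !== 2) throw new TypeError('gemm inputs must be 2-D');
  if (options.aTranspose) shapeA.reverse();
  if (options.bTranspose) shapeB.reverse();
  if (shapeA[1] !== shapeB[0]) throw new TypeError('inner dimensions must match');
  return [shapeA[0], shapeB[1]];
}

console.log(calculateGemmOutputSizes([3, 4], [4, 5]));                      // [3, 5]
console.log(calculateGemmOutputSizes([4, 3], [4, 5], {aTranspose: true}));  // [3, 5]
```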
@@ -3436,19 +3440,19 @@ dictionary MLInstanceNormalizationOptions {
 
 partial interface MLGraphBuilder {
   MLOperand instanceNormalization(MLOperand input,
-      optional MLInstanceNormalizationOptions options = {});
+                                  optional MLInstanceNormalizationOptions options = {});
 };
 </script>
 
 {{MLInstanceNormalizationOptions}} has the following members:
 <dl dfn-type=dict-member dfn-for=MLInstanceNormalizationOptions>
 : <dfn>scale</dfn>
 ::
-    The 1-D tensor of the scaling values whose [=list/size=] is equal to the number of channels, i.e. the size of the feature dimension of the input. For example, for an |input| tensor with `nchw` layout, the [=list/size=] is equal to |input|'s [=MLOperand/shape=][1].
+    The 1-D tensor of the scaling values whose [=list/size=] is equal to the number of channels, i.e. the size of the feature dimension of the input. For example, for an |input| tensor with {{MLInputOperandLayout/"nchw"}} layout, the [=list/size=] is equal to |input|'s [=MLOperand/shape=][1].
 
 : <dfn>bias</dfn>
 ::
-    The 1-D tensor of the bias values whose [=list/size=] is equal to the size of the feature dimension of the input. For example, for an |input| tensor with `nchw` layout, the [=list/size=] is equal to |input|'s [=MLOperand/shape=][1].
+    The 1-D tensor of the bias values whose [=list/size=] is equal to the size of the feature dimension of the input. For example, for an |input| tensor with {{MLInputOperandLayout/"nchw"}} layout, the [=list/size=] is equal to |input|'s [=MLOperand/shape=][1].
 
 : <dfn>epsilon</dfn>
 ::
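
A small illustration of the "size equals the number of channels" rule in this hunk: for an "nchw" input the channel count is shape[1], so scale and bias are 1-D tensors of that size (the shapes below are placeholder arrays, not MLOperands):

```js
// For an "nchw" input, the feature (channel) dimension is index 1.
const inputShape = [1, 3, 224, 224];           // nchw
const channels = inputShape[1];                // 3
const scaleShape = [channels];                 // [3]
const biasShape = [channels];                  // [3]
console.log(channels, scaleShape, biasShape);  // 3 [ 3 ] [ 3 ]
```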
@@ -3535,7 +3539,8 @@ dictionary MLLayerNormalizationOptions {
 };
 
 partial interface MLGraphBuilder {
-  MLOperand layerNormalization(MLOperand input, optional MLLayerNormalizationOptions options = {});
+  MLOperand layerNormalization(MLOperand input,
+                               optional MLLayerNormalizationOptions options = {});
 };
 </script>
 
@@ -3626,7 +3631,7 @@ partial interface MLGraphBuilder {
 </div>
 
 ### leakyRelu ### {#api-mlgraphbuilder-leakyrelu}
-Calculate the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Leaky_ReLU"> leaky version of rectified linear function</a> on the input tensor element-wise. The calculation follows the expression `max(0, x) + alpha min(0, x)`.
+Calculate the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Leaky_ReLU"> leaky version of rectified linear function</a> on the input tensor element-wise. The calculation follows the expression `max(0, x) + alpha * min(0, x)`.
 
 <script type=idl>
 dictionary MLLeakyReluOptions {
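
A scalar sketch of the leakyRelu expression quoted in this hunk, `max(0, x) + alpha * min(0, x)`. The `leakyRelu` below is a plain illustrative function, not the MLGraphBuilder method, and its default alpha of 0.01 is just the example's choice:

```js
// Leaky ReLU on a single value: positive inputs pass through,
// negative inputs are scaled by alpha.
const leakyRelu = (x, alpha = 0.01) => Math.max(0, x) + alpha * Math.min(0, x);

console.log(leakyRelu(2.5));      // 2.5
console.log(leakyRelu(-4));       // -0.04
console.log(leakyRelu(-4, 0.2));  // -0.8  (larger leakage slope)
```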
@@ -4254,9 +4259,9 @@ partial interface MLGraphBuilder {
 To <dfn dfn-for=MLGraphBuilder>calculate matmul output sizes</dfn>, given |a| and |b| run the following steps:
 </summary>
 1. Let |shapeA| be a [=list/clone=] of |a|'s [=MLOperand/shape=]
-1. Let |sizeA| be the [=list/size=] of |shapeA|.
+1. Let |sizeA| be |shapeA|'s [=list/size=].
 1. Let |shapeB| be a [=list/clone=] of |b|'s [=MLOperand/shape=]
-1. Let |sizeB| be the [=list/size=] of |shapeB|.
+1. Let |sizeB| be |shapeB|'s [=list/size=].
 1. If either |sizeA| or |sizeB| is less than 2, then [=exception/throw=] a {{TypeError}}.
 1. Let |colsA| be |shapeA|[|sizeA| - 1].
 1. Let |rowsA| be |shapeA|[|sizeA| - 2].
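
A sketch of these matmul output-size steps for the simplest (2-D) case. The function name is illustrative; the inner-dimension check and `[rows, cols]` result follow the usual matrix-multiply rule, and the broadcasting steps for batched (>2-D) inputs that follow in the spec are omitted:

```js
// Matmul output shape for the 2-D case: [rowsA, colsB], with colsA === rowsB.
function calculateMatmulOutputSizes2d(aShape, bShape) {
  const shapeA = aShape.slice(), sizeA = shapeA.length;
  const shapeB = bShape.slice(), sizeB = shapeB.length;
  if (sizeA < 2 || sizeB < 2) throw new TypeError('matmul inputs must be at least 2-D');
  const colsA = shapeA[sizeA - 1];
  const rowsA = shapeA[sizeA - 2];
  const colsB = shapeB[sizeB - 1];
  const rowsB = shapeB[sizeB - 2];
  if (colsA !== rowsB) throw new TypeError('inner dimensions must match');
  return [rowsA, colsB];
}

console.log(calculateMatmulOutputSizes2d([2, 3], [3, 5]));  // [2, 5]
```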
@@ -4616,7 +4621,7 @@ Apply the L2 norm function to a region of the input feature map. The L2 norm is
 Calculate the maximum value for patches of a feature map, and use it to create a pooled feature map. See [[#api-mlgraphbuilder-pool2d]] for more detail.
 
 ### prelu ### {#api-mlgraphbuilder-prelu}
-Calculate the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Parametric_ReLU">parametric version of rectified linear function (Parametric ReLU)</a> on the input tensor element-wise. Parametric ReLU is a type of leaky ReLU that, instead of having a scalar slope like 0.01, making the slope (coefficient of leakage) into a parameter that is learned during the model training phase of this operation. The calculation follows the expression `max(0, x) + slope min(0, x)`.
+Calculate the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Parametric_ReLU">parametric version of rectified linear function (Parametric ReLU)</a> on the input tensor element-wise. Parametric ReLU is a type of leaky ReLU that, instead of having a scalar slope like 0.01, makes the slope (coefficient of leakage) a parameter that is learned during the model training phase of this operation. The calculation follows the expression `max(0, x) + slope * min(0, x)`.
 <script type=idl>
 partial interface MLGraphBuilder {
 MLOperand prelu(MLOperand input, MLOperand slope);
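
A scalar sketch of the prelu expression quoted in this hunk, `max(0, x) + slope * min(0, x)`, applied element-wise with a per-element slope. Plain arrays stand in for the input and slope tensors, and for simplicity the slope is the same shape as the input rather than broadcast:

```js
// Parametric ReLU: like leaky ReLU, but with a learned per-element slope.
const prelu = (input, slope) =>
    input.map((x, i) => Math.max(0, x) + slope[i] * Math.min(0, x));

console.log(prelu([1, -2, -3], [0.1, 0.1, 0.25]));  // [1, -0.2, -0.75]
```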
@@ -5388,9 +5393,10 @@ dictionary MLSplitOptions {
 };
 
 partial interface MLGraphBuilder {
-  sequence<MLOperand> split(MLOperand input,
-                            ([EnforceRange] unsigned long or sequence<[EnforceRange] unsigned long>) splits,
-                            optional MLSplitOptions options = {});
+  sequence<MLOperand> split(
+      MLOperand input,
+      ([EnforceRange] unsigned long or sequence<[EnforceRange] unsigned long>) splits,
+      optional MLSplitOptions options = {});
 };
 </script>
 
@@ -5421,7 +5427,7 @@ partial interface MLGraphBuilder {
 1. Otherwise, let |splitCount| be |splits|.
 1. If |splits| is a sequence of {{unsigned long}}:
     1. If the sum of its elements is not equal to |input|'s [=MLOperand/shape=][|axis|], then [=exception/throw=] a {{TypeError}}.
-    1. Otherwise, let |splitCount| be the [=list/size=] of |splits|.
+    1. Otherwise, let |splitCount| be |splits|'s [=list/size=].
 1. *Make graph connections:*
     1. Let |operator| be an [=operator=] for the split operation, given |splits| and |options|.
     1. Let |outputs| be a new [=/list=].
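
A sketch of the splitCount logic in this hunk: a single number is used as the count directly, while a sequence lists the output sizes along the split axis and must sum to the input's size on that axis. The helper and error message are illustrative, not spec text:

```js
// Determine how many outputs split() produces, validating sequence splits.
function splitCount(inputShape, splits, axis = 0) {
  if (typeof splits === 'number') return splits;
  const sum = splits.reduce((a, b) => a + b, 0);
  if (sum !== inputShape[axis]) {
    throw new TypeError('sum of splits must equal the input size along axis');
  }
  return splits.length;
}

console.log(splitCount([12, 4], 3));          // 3
console.log(splitCount([12, 4], [2, 4, 6]));  // 3 (2 + 4 + 6 === 12)
```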
@@ -5813,11 +5819,11 @@ const context = await navigator.ml.createContext({powerPreference: 'low-power'})
 Given the following build graph:
 <pre>
 constant1 ---+
-           +--- Add ---> intermediateOutput1 ---+
+             +--- Add ---> intermediateOutput1 ---+
 input1    ---+                                    |
-             +--- Mul---> output
+                                                  +--- Mul---> output
 constant2 ---+                                    |
-           +--- Add ---> intermediateOutput2 ---+
+             +--- Add ---> intermediateOutput2 ---+
 input2    ---+
 </pre>
 <details open>
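
The spec's own example inside this details block (not shown in the diff) builds the pictured graph with MLGraphBuilder: two adds feeding a mul. A rough sketch of that construction, assuming a `context` from the createContext call quoted in the hunk header; the descriptor values and constant data are placeholders:

```js
// Build the pictured graph: output = (constant1 + input1) * (constant2 + input2).
const desc = {dataType: 'float32', dimensions: [2, 2]};
const builder = new MLGraphBuilder(context);

const constant1 = builder.constant(desc, new Float32Array(4).fill(0.5));
const constant2 = builder.constant(desc, new Float32Array(4).fill(0.5));
const input1 = builder.input('input1', desc);
const input2 = builder.input('input2', desc);

const intermediateOutput1 = builder.add(constant1, input1);
const intermediateOutput2 = builder.add(constant2, input2);
const output = builder.mul(intermediateOutput1, intermediateOutput2);

const graph = await builder.build({output});
```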
