Commit 2ee6825

Complain about RFC2119 terms, and fix usage
1 parent 4e80205 commit 2ee6825

File tree: 1 file changed, index.bs (+30, -23 lines)

@@ -21,6 +21,7 @@ Markup Shorthands: css no
 Logo: https://webmachinelearning.github.io/webmachinelearning-logo.png
 Deadline: 2023-10-01
 Assume Explicit For: yes
+Complain About: accidental-2119 yes
 Status Text: <p>
 Since the <a href="https://www.w3.org/TR/2023/CR-webnn-20230330/">initial Candidate Recommendation Snapshot</a> the Working Group has gathered further <a href="https://webmachinelearning.github.io/webnn-status/">implementation experience</a> and added new operations and data types needed for well-known <a href="https://github.com/webmachinelearning/webnn/issues/375">transformers to support generative AI use cases</a>. In addition, informed by this implementation experience, the group removed <code>MLCommandEncoder</code>, support for synchronous execution, and higher-level operations that can be expressed in terms of lower-level primitives in a performant manner. The group has also updated the specification to use modern authoring conventions to improve interoperability and precision of normative definitions.
 The group is developing a new feature, a <a href="https://github.com/webmachinelearning/webnn/issues/482">backend-agnostic storage type</a>, to improve performance and interoperability between the WebNN, WebGPU APIs and purpose-built hardware for ML and expects to republish this document as a Candidate Recommendation Snapshot when ready for implementation.
@@ -401,7 +402,7 @@ This section illustrates application-level use cases for neural network
 inference hardware acceleration. All applications in those use cases can be
 built on top of pre-trained deep neural network (DNN) [[models]].

-Note: Please be aware that some of the use cases described here, are by their very nature, privacy-invasive. Developers who are planning to use the API for such use cases should ensure that the API is being used to benefit users, for purposes that users understand, and approve. They should apply the Ethical Principles for Web Machine Learning [[webmachinelearning-ethics]] and implement appropriate privacy risk mitigations such as transparency, data minimisation, and users controls.
+Note: Please be aware that some of the use cases described here, are by their very nature, privacy-invasive. Developers who are planning to use the API for such use cases <span class=allow-2119>should</span> ensure that the API is being used to benefit users, for purposes that users understand, and approve. They <span class=allow-2119>should</span> apply the Ethical Principles for Web Machine Learning [[webmachinelearning-ethics]] and implement appropriate privacy risk mitigations such as transparency, data minimisation, and users controls.

 ### Person Detection ### {#usecase-person-detection}

@@ -630,22 +631,28 @@ Purpose-built Web APIs for measuring high-resolution time mitigate against timin

 ## Guidelines for new operations ## {#security-new-ops}

+*This section is non-normative.*
+
+<div class=informative>
+
 To ensure operations defined in this specification are shaped in a way they can be implemented securely, this section includes guidelines on how operations are expected to be defined to reduce potential for implementation problems. These guidelines are expected to evolve over time to align with industry best practices:

 - Prefer simplicity of arguments
 - Don't use parsers for complex data formats
 - If an operation can be decomposed to low level primitives:
 - Add an informative emulation path
 - Prefer primitives over new high level operations but consider performance consequences
-- Operations should follow a consistent style for inputs and attributes
-- Operation families such as pooling and reduction should share API shape and options
+- Follow a consistent style for operation inputs and attributes
+- Share API shape and options for operation families such as pooling and reduction
 - Formalize failure cases into test cases whenever possible
-- When in doubt, leave it out: API surface should be as small as possible required to satisfy the use cases, but no smaller
+- When in doubt, leave it out: keep the API surface as small as possible to satisfy the use cases, but no smaller
 - Try to keep the API free of implementation details that might inhibit future evolution, do not overspecify
 - Fail fast: the sooner the web developer is informed of an issue, the better

 In general, always consider the security and privacy implications as documented in [[security-privacy-questionnaire]] by the Technical Architecture Group and the Privacy Interest Group when adding new features.

+</div>
+
 Privacy Considerations {#privacy}
 ===================================

@@ -665,7 +672,7 @@ The WebNN API defines two developer-settable preferences to help inform [[#progr

 Issue(623): {{MLContextOptions}} is under active development, and the design is expected to change, informed by further implementation experience and new use cases from the wider web community.

-If a future version of this specification introduces support for a new {{MLDeviceType}} that can only support a subset of {{MLOperandDataType}}s, that may introduce a new fingerprint.
+If a future version of this specification introduces support for a new {{MLDeviceType}} that can only support a subset of {{MLOperandDataType}}s, that could introduce a new fingerprint.

 In general, implementers of this API are expected to apply <a href="https://gpuweb.github.io/gpuweb/#privacy-considerations">WebGPU Privacy Considerations</a> to their implementations where applicable.

@@ -951,7 +958,7 @@ Schedules the computational workload of a compiled {{MLGraph}} on the {{MLContex
 **Returns:** {{undefined}}.
 </div>

-Note: `dispatch()` itself provides no signal that graph execution has completed. Rather, callers should await the results of reading back the output tensors. See [[#api-mlcontext-dispatch-examples]] below.
+Note: `dispatch()` itself provides no signal that graph execution has completed. Rather, callers can `await` the results of reading back the output tensors. See [[#api-mlcontext-dispatch-examples]] below.

 <details open algorithm>
 <summary>
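For context, a minimal JavaScript sketch of the readback pattern this note describes; the `context`, `graph`, `inputTensor`, and `outputTensor` names are assumed to have been set up beforehand:

```js
// Sketch only: assumes an MLContext `context`, a compiled MLGraph `graph`,
// and MLTensors `inputTensor` / `outputTensor` created earlier.
context.dispatch(graph, {input: inputTensor}, {output: outputTensor});

// dispatch() returns without signaling completion; awaiting readback of the
// output tensor is what observes that execution has finished.
const resultBytes = await context.readTensor(outputTensor);
const result = new Float32Array(resultBytes);
```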
@@ -1108,7 +1115,7 @@ Bring-your-own-buffer variant of {{MLContext/readTensor(tensor)}}. Reads back th
 1. Otherwise, [=queue an ML task=] with |global| and the following steps:
 1. If |outputData| is [=BufferSource/detached=], [=reject=] |promise| with a {{TypeError}}, and abort these steps.

-Note: [=Validating buffer with descriptor=] above will fail if |outputData| is detached, but it's possible |outputData| may detach between then and now.
+Note: [=Validating buffer with descriptor=] above will fail if |outputData| is detached, but it is possible that |outputData| could detach between then and now.

 1. [=ArrayBuffer/Write=] |bytes| to |outputData|.
 1. [=Resolve=] |promise| with {{undefined}}.
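A hedged sketch of the bring-your-own-buffer variant discussed in this hunk; the tensor, its element count, and its data type are assumed:

```js
// Sketch only: assumes `context` and a float32 MLTensor `outputTensor`
// holding 1024 elements.
const outputData = new Float32Array(1024);

// Readback fills the caller-provided buffer. If `outputData` is detached
// (e.g. transferred) before the read completes, the promise rejects with a
// TypeError, which is the situation the note above covers.
await context.readTensor(outputTensor, outputData);
```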
@@ -1145,7 +1152,7 @@ Writes data to the {{MLTensor/[[data]]}} of an {{MLTensor}} on the {{MLContext}}
 1. Return {{undefined}}.
 </details>

-Note: Similar to `dispatch()`, `writeTensor()` itself provides no signal that the write has completed. To inspect the contents of a tensor, callers should await the results of reading back the tensor.
+Note: Similar to `dispatch()`, `writeTensor()` itself provides no signal that the write has completed. To inspect the contents of a tensor, callers can `await` the results of reading back the tensor.

 ### {{MLContext/opSupportLimits()}} ### {#api-mlcontext-opsupportlimits}
 The {{MLContext/opSupportLimits()}} exposes level of support that differs across implementations at operator level. Consumers of the WebNN API are encouraged to probe feature support level by using {{MLContext/opSupportLimits()}} to determine the optimal model architecture to be deployed for each target platform.
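A minimal sketch of the `writeTensor()` pattern the note refers to, assuming `context` and a four-element float32 `tensor` already exist:

```js
// Sketch only: write data into the tensor; writeTensor() gives no completion
// signal of its own.
context.writeTensor(tensor, new Float32Array([1, 2, 3, 4]));

// Reading the tensor back is how a caller can confirm its contents.
const contents = new Float32Array(await context.readTensor(tensor));
```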
@@ -1325,7 +1332,7 @@ Issue(391): Should 0-size dimensions be supported?

 An {{MLOperand}} represents an intermediary graph being constructed as a result of compositing parts of an operation into a fully composed operation.

-For instance, an {{MLOperand}} may represent a constant feeding to an operation or the result from combining multiple constants together into an operation. See also [[#programming-model]].
+For instance, an {{MLOperand}} can represent a constant feeding to an operation or the result from combining multiple constants together into an operation. See also [[#programming-model]].

 <script type=idl>
 [SecureContext, Exposed=(Window, DedicatedWorker)]
@@ -1446,7 +1453,7 @@ dictionary MLTensorDescriptor : MLOperandDescriptor {

 The {{MLTensor}} interface represents a tensor which may be used as an input or output to an {{MLGraph}}. The memory backing an {{MLTensor}} should be allocated in an [=implementation-defined=] fashion according to the requirements of the {{MLContext}} and the {{MLTensorDescriptor}} used to create it. Operations involving the {{MLTensor/[[data]]}} of an {{MLTensor}} occur on the {{MLContext/[[timeline]]}} of its associated {{MLContext}}.

-Note: The [=implementation-defined=] requirements of how an {{MLTensor}} is allocated may include constraints such as that the memory is allocated with a particular byte alignment or in a particular memory pool.
+The [=implementation-defined=] requirements of how an {{MLTensor}} is allocated may include constraints such as that the memory is allocated with a particular byte alignment or in a particular memory pool.

 <script type=idl>
 [SecureContext, Exposed=(Window, DedicatedWorker)]
@@ -1614,7 +1621,7 @@ Create a named {{MLOperand}} based on a descriptor, that can be used as an input
 </details>

 <div class="note">
-The {{MLGraphBuilder}} API allows creating an {{MLGraph}} without input operands. If the underlying platform doesn't support that, implementations may add a stub input, or pass constants as inputs to the graph.
+The {{MLGraphBuilder}} API allows creating an {{MLGraph}} without input operands. If the underlying platform doesn't support that, implementations can add a stub input, or pass constants as inputs to the graph.
 </div>

 ### constant operands ### {#api-mlgraphbuilder-constant}
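To illustrate the note above (building an {{MLGraph}} with no input operands), a hedged sketch; `context` is assumed to exist, and the descriptor member names follow the editor's draft at the time of writing:

```js
// Sketch only: a graph built from constants alone, with no builder.input() calls.
const builder = new MLGraphBuilder(context);
const a = builder.constant({dataType: 'float32', shape: [2, 2]},
                           new Float32Array([1, 2, 3, 4]));
const b = builder.constant({dataType: 'float32', shape: [2, 2]},
                           new Float32Array([5, 6, 7, 8]));

// Per the note, the platform may internally add a stub input or treat the
// constants themselves as graph inputs.
const graph = await builder.build({sum: builder.add(a, b)});
```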
@@ -2676,7 +2683,7 @@ partial dictionary MLOpSupportLimits {
 interpreted according to the value of *options*.{{MLConvTranspose2dOptions/filterLayout}} and {{MLConvTranspose2dOptions/groups}}.
 - <dfn>options</dfn>: an optional {{MLConvTranspose2dOptions}}.

-**Returns:** an {{MLOperand}}. The output 4-D tensor that contains the transposed convolution result. The output shape is interpreted according to the *options*.{{MLConvTranspose2dOptions/inputLayout}} value. More specifically, unless the *options*.{{MLConvTranspose2dOptions/outputSizes}} values are explicitly specified, the *options*.{{MLConvTranspose2dOptions/outputPadding}} may be needed to compute the spatial dimension values of the output tensor as follows:
+**Returns:** an {{MLOperand}}. The output 4-D tensor that contains the transposed convolution result. The output shape is interpreted according to the *options*.{{MLConvTranspose2dOptions/inputLayout}} value. More specifically, unless the *options*.{{MLConvTranspose2dOptions/outputSizes}} values are explicitly specified, the *options*.{{MLConvTranspose2dOptions/outputPadding}} is needed to compute the spatial dimension values of the output tensor as follows:

 `outputSize = (inputSize - 1) * stride + (filterSize - 1) * dilation + 1 - beginningPadding - endingPadding + outputPadding`
 </div>
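To make the formula concrete, a small worked example using assumed values (not taken from the spec):

```js
// Illustrative values plugged into the outputSize formula above.
const inputSize = 4, stride = 2, filterSize = 3, dilation = 1;
const beginningPadding = 0, endingPadding = 0, outputPadding = 1;
const outputSize = (inputSize - 1) * stride + (filterSize - 1) * dilation + 1
                 - beginningPadding - endingPadding + outputPadding;
// (4 - 1) * 2 + (3 - 1) * 1 + 1 - 0 - 0 + 1 = 10
```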
@@ -3587,7 +3594,7 @@ partial dictionary MLOpSupportLimits {
 </div>

 <div class="note">
-The {{MLGraphBuilder/gather(input, indices, options)/indices}} parameter to {{MLGraphBuilder/gather()}} can not be clamped to the allowed range when the graph is built because the inputs are not known until execution. Implementations can introduce {{MLGraphBuilder/clamp()}} in the compiled graph if the required clamping behavior is not provided by the underlying platform. Similarly, if the underlying platform does not support negative indices, the implementation can introduce operations in the compiled graph to transform a negative index from the end of the dimension into a positive index.
+The {{MLGraphBuilder/gather(input, indices, options)/indices}} parameter to {{MLGraphBuilder/gather()}} can not be clamped to the allowed range when the graph is built because the inputs are not known until execution. Implementations can introduce {{MLGraphBuilder/clamp()}} in the compiled graph if the specified clamping behavior is not provided by the underlying platform. Similarly, if the underlying platform does not support negative indices, the implementation can introduce operations in the compiled graph to transform a negative index from the end of the dimension into a positive index.
 </div>

 <table id=constraints-gather class='data' link-for="MLGraphBuilder/gather(input, indices, options)">
@@ -3811,7 +3818,7 @@ partial dictionary MLOpSupportLimits {
 </div>

 ### gemm ### {#api-mlgraphbuilder-gemm}
-Calculate the [general matrix multiplication of the Basic Linear Algebra Subprograms](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3). The calculation follows the expression `alpha * A * B + beta * C`, where `A` is a 2-D tensor with shape *[M, K]* or *[K, M]*, `B` is a 2-D tensor with shape *[K, N]* or *[N, K]*, and `C` is [=unidirectionally broadcastable=] to the shape *[M, N]*. `A` and `B` may optionally be transposed prior to the calculation.
+Calculate the [general matrix multiplication of the Basic Linear Algebra Subprograms](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3). The calculation follows the expression `alpha * A * B + beta * C`, where `A` is a 2-D tensor with shape *[M, K]* or *[K, M]*, `B` is a 2-D tensor with shape *[K, N]* or *[N, K]*, and `C` is [=unidirectionally broadcastable=] to the shape *[M, N]*. `A` and `B` can optionally be transposed prior to the calculation.

 <script type=idl>
 dictionary MLGemmOptions : MLOperatorOptions {
@@ -3854,11 +3861,11 @@ partial dictionary MLOpSupportLimits {

 : <dfn>aTranspose</dfn>
 ::
-Indicates if the first input should be transposed prior to calculating the output.
+Indicates if the first input is transposed prior to calculating the output.

 : <dfn>bTranspose</dfn>
 ::
-Indicates if the second input should be transposed prior to calculating the output.
+Indicates if the second input is transposed prior to calculating the output.
 </dl>

 <div dfn-for="MLGraphBuilder/gemm(a, b, options)" dfn-type=argument>
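A hedged JavaScript sketch of how the transpose options interact with the `alpha * A * B + beta * C` expression; the shapes, scalars, and `builder` variable are assumed for illustration:

```js
// Sketch only: A is [K, M] = [3, 2] and gets transposed to [M, K] = [2, 3];
// B is [K, N] = [3, 4]; C is unidirectionally broadcastable to [M, N] = [2, 4].
const A = builder.input('A', {dataType: 'float32', shape: [3, 2]});
const B = builder.input('B', {dataType: 'float32', shape: [3, 4]});
const C = builder.constant({dataType: 'float32', shape: [1, 4]}, new Float32Array(4));

// Computes 0.5 * transpose(A) * B + 1.0 * C, yielding a [2, 4] output operand.
const out = builder.gemm(A, B, {aTranspose: true, alpha: 0.5, beta: 1.0, c: C});
```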
@@ -4042,7 +4049,7 @@ partial dictionary MLOpSupportLimits {
 : <dfn>initialHiddenState</dfn>
 ::
 The 3-D initial hidden state tensor of shape *[numDirections, batchSize, hiddenSize]*.
-When not specified, implementations SHOULD use a tensor filled with zero.
+When not specified, implementations must use a tensor filled with zero.

 : <dfn>resetAfter</dfn>
 ::
@@ -4169,7 +4176,7 @@ partial dictionary MLOpSupportLimits {
 1. If |hiddenSize| * 6 is not a [=valid dimension=], then [=exception/throw=] a {{TypeError}}.
 <details class=note>
 <summary>Why |hiddenSize| * 6 ?</summary>
-Some underlying platforms operate on a single bias tensor which is a concatenation of {{MLGruOptions/bias}} and {{MLGruOptions/recurrentBias}}. Therefore, 3 * |hiddenSize| + 3 * |hiddenSize| must also be a [=valid dimension=].
+Some underlying platforms operate on a single bias tensor which is a concatenation of {{MLGruOptions/bias}} and {{MLGruOptions/recurrentBias}}. Therefore, 3 * |hiddenSize| + 3 * |hiddenSize| also needs to be a [=valid dimension=].
 </details>
 1. If |options|.{{MLGruOptions/bias}} [=map/exists=]:
 1. If its [=MLOperand/dataType=] is not one of its [=/allowed data types=] (according to [this table](#constraints-gru)), then [=exception/throw=] a {{TypeError}}.
@@ -4462,7 +4469,7 @@ partial dictionary MLOpSupportLimits {
 1. If |hiddenSize| * 6 is not a [=valid dimension=], then [=exception/throw=] a {{TypeError}}.
 <details class=note>
 <summary>Why |hiddenSize| * 6 ?</summary>
-Some underlying platforms operate on a single bias tensor which is a concatenation of {{MLGruCellOptions/bias}} and {{MLGruCellOptions/recurrentBias}}. Therefore, 3 * |hiddenSize| + 3 * |hiddenSize| must also be a [=valid dimension=].
+Some underlying platforms operate on a single bias tensor which is a concatenation of {{MLGruCellOptions/bias}} and {{MLGruCellOptions/recurrentBias}}. Therefore, 3 * |hiddenSize| + 3 * |hiddenSize| also needs to be a [=valid dimension=].
 </details>
 1. If |options|.{{MLGruCellOptions/bias}} [=map/exists=]:
 1. If its [=MLOperand/dataType=] is not one of its [=/allowed data types=] (according to [this table](#constraints-gruCell)), then [=exception/throw=] a {{TypeError}}.
@@ -5343,11 +5350,11 @@ partial dictionary MLOpSupportLimits {

 : <dfn>initialHiddenState</dfn>
 ::
-The 3-D initial hidden state tensor of shape *[numDirections, batchSize, hiddenSize]*. When not specified, implementations SHOULD use a tensor filled with zero.
+The 3-D initial hidden state tensor of shape *[numDirections, batchSize, hiddenSize]*. When not specified, implementations must use a tensor filled with zero.

 : <dfn>initialCellState</dfn>
 ::
-The 3-D initial hidden state tensor of shape *[numDirections, batchSize, hiddenSize]*. When not specified, implementations SHOULD use a tensor filled with zero.
+The 3-D initial hidden state tensor of shape *[numDirections, batchSize, hiddenSize]*. When not specified, implementations must use a tensor filled with zero.

 : <dfn>returnSequence</dfn>
 ::
@@ -5489,7 +5496,7 @@ partial dictionary MLOpSupportLimits {
 1. If |hiddenSize| * 8 is not a [=valid dimension=], then [=exception/throw=] a {{TypeError}}.
 <details class=note>
 <summary>Why |hiddenSize| * 8 ?</summary>
-Some underlying platforms operate on a single bias tensor which is a concatenation of {{MLLstmOptions/bias}} and {{MLLstmOptions/recurrentBias}}. Therefore, 4 * |hiddenSize| + 4 * |hiddenSize| must also be a [=valid dimension=].
+Some underlying platforms operate on a single bias tensor which is a concatenation of {{MLLstmOptions/bias}} and {{MLLstmOptions/recurrentBias}}. Therefore, 4 * |hiddenSize| + 4 * |hiddenSize| also needs to be a [=valid dimension=].
 </details>
 1. If |options|.{{MLLstmOptions/bias}} [=map/exists=]:
 1. If its [=MLOperand/dataType=] is not one of its [=/allowed data types=] (according to [this table](#constraints-lstm)), then [=exception/throw=] a {{TypeError}}.
@@ -5844,7 +5851,7 @@ partial dictionary MLOpSupportLimits {
 1. If |hiddenSize| * 8 is not a [=valid dimension=], then [=exception/throw=] a {{TypeError}}.
 <details class=note>
 <summary>Why |hiddenSize| * 8 ?</summary>
-Some underlying platforms operate on a single bias tensor which is a concatenation of {{MLLstmCellOptions/bias}} and {{MLLstmCellOptions/recurrentBias}}. Therefore, 4 * |hiddenSize| + 4 * |hiddenSize| must also be a [=valid dimension=].
+Some underlying platforms operate on a single bias tensor which is a concatenation of {{MLLstmCellOptions/bias}} and {{MLLstmCellOptions/recurrentBias}}. Therefore, 4 * |hiddenSize| + 4 * |hiddenSize| also needs to be a [=valid dimension=].
 </details>
 1. If |options|.{{MLLstmCellOptions/bias}} [=map/exists=]:
 1. If its [=MLOperand/dataType=] is not one of its [=/allowed data types=] (according to [this table](#constraints-lstmCell)), then [=exception/throw=] a {{TypeError}}.
