
Commit 60b8e0a

Conventions: Ensure all dict members have definitions (#621)
* Conventions: Ensure all dict members have definitions

  - Documents the convention.
  - Dedupes `MLEluOptions`'s `alpha` definitions
  - Dedupes `MLClampOptions`'s `minValue` and `maxValue`, and references #396
  - Moves `MLOperandDescriptor` member documentation out of IDL comments
  - Introduces simple definitions for `MLComputeResult`'s `inputs` and `outputs`
  - Converts "device type" and "power preference" into definitions for `MLContextOptions`'s `deviceType` and `powerPreference`.

  Also includes these adjacent changes to improve the document flow:

  - Moves the "context type" definition into the `MLContext` section.
  - Moves the Permissions Policy Integration section from API into Programming Model, which seems like a slightly better home for it.

  Fixes #483

* Add subsection for MLContextOptions

* Feedback from @huningxin
1 parent edc9e52 commit 60b8e0a

2 files changed: +63 −38 lines changed

docs/SpecCodingConventions.md

+1 −2
@@ -138,5 +138,4 @@ Example:
 
 * Dictionary members are referenced using dotted property syntax. e.g. _options.padding_
 * Note that this is contrary to Web IDL + Infra; formally, a JavaScript object has been mapped to a Web IDL [dictionary](https://webidl.spec.whatwg.org/#idl-dictionaries) and then processed into an Infra [map](ordered) by the time a spec is using it. So formally the syntax _options["padding"]_ should be used.
-
-
+* Dictionary members should be given definitions somewhere in the text. This is usually done with a `<dl dfn-type=dict-member dfn-for=...>` for the dictionary as a whole, containing a `<dfn>` for each member.

index.bs

+62 −36
@@ -616,9 +616,9 @@ Unlike WebGPU, this API does not intrinsically support custom shader authoring;
 
 The WebGPU API identifies <a href="https://gpuweb.github.io/gpuweb/#privacy-machine-artifacts">machine-specific artifacts</a> as a privacy consideration. Similarly, the WebNN API's compute unit scheduling may under certain circumstances introduce a fingerprint. However, similarly to WebGPU, such fingerprints are identical across most or all of the devices of each vendor, mitigating the concern. Furthermore, software implementations can be used to further eliminate such artifacts.
 
-The WebNN API defines two developer-settable preferences to help inform [[#programming-model-device-selection]] and allow the implementation to better select the most appropriate underlying execution device for the workload. [=Device type=] normatively indicates the kind of device and is either {{MLDeviceType/"cpu"}} or {{MLDeviceType/"gpu"}}. If this type cannot be satisfied, an "{{OperationError}}" {{DOMException}} is thrown, thus this type can in some cases add two bits of entropy to the fingerprint. [=Power preference=] indicates preference as related to the power consumption and is considered a hint only and as such does not increase entropy of the fingerprint.
+The WebNN API defines two developer-settable preferences to help inform [[#programming-model-device-selection]] and allow the implementation to better select the most appropriate underlying execution device for the workload. An {{MLDeviceType}} normatively indicates the kind of device and is either {{MLDeviceType/"cpu"}} or {{MLDeviceType/"gpu"}}. If this type cannot be satisfied, an "{{OperationError}}" {{DOMException}} is thrown, thus this type can in some cases add two bits of entropy to the fingerprint. An {{MLPowerPreference}} indicates preference as related to the power consumption and is considered a hint only and as such does not increase entropy of the fingerprint.
 
-If a future version of this specification introduces support for new a [=device type=] that can only support a subset of {{MLOperandDataType}}s, that may introduce a new fingerprint.
+If a future version of this specification introduces support for a new {{MLDeviceType}} that can only support a subset of {{MLOperandDataType}}s, that may introduce a new fingerprint.
 
 In general, implementers of this API are expected to apply <a href="https://gpuweb.github.io/gpuweb/#privacy-considerations">WebGPU Privacy Considerations</a> to their implementations where applicable.
 

@@ -668,7 +668,7 @@ An {{MLContext}} interface represents a global state of neural network execution
 
 In a situation when a GPU context executes a graph with a constant or an input in the system memory as an {{ArrayBufferView}}, the input content is automatically uploaded from the system memory to the GPU memory, and downloaded back to the system memory of an {{ArrayBufferView}} output buffer at the end of the graph execution. This data upload and download cycles will only occur whenever the execution device requires the data to be copied out of and back into the system memory, such as in the case of the GPU. It doesn't occur when the device is a CPU device. Additionally, the result of the graph execution is in a known layout format. While the execution may be optimized for a native memory access pattern in an intermediate result within the graph, the output of the last operation of the graph must convert the content back to a known layout format at the end of the graph in order to maintain the expected behavior from the caller's perspective.
 
-When an {{MLContext}} is created with {{MLContextOptions}}, the user agent selects and creates the underlying execution device by taking into account the application's [=power preference=] and [=device type=] specified in the {{MLPowerPreference}} and {{MLDeviceType}} options.
+When an {{MLContext}} is created with {{MLContextOptions}}, the user agent selects and creates the underlying execution device by taking into account the application's {{MLPowerPreference}} and {{MLDeviceType}} options.
 
 ## Task Source ## {#programming-model-task-source}
 

@@ -678,6 +678,12 @@ The <dfn>ML task source</dfn> is a [=task source=] to be used for all [=tasks=]
 <p>To <dfn>queue an ML task</dfn> given a [=global object=] |global| and a series of steps |steps|, [=queue a global task=] on the [=ML task source=] with |global| and |steps|.
 </div>
 
+## Permissions Policy Integration ## {#permissions-policy-integration}
+
+This specification defines a [=policy-controlled feature=] identified by the
+string "<code><dfn data-lt="webnn-feature">webnn</dfn></code>".
+Its [=policy-controlled feature/default allowlist=] is <code>'self'</code>.
+
 
 API {#api}
 =====================
@@ -720,31 +726,17 @@ interface ML {
 };
 </script>
 
-### Permissions Policy Integration ### {#permissions-policy-integration}
-
-This specification defines a [=policy-controlled feature=] identified by the
-string "<code><dfn data-lt="webnn-feature">webnn</dfn></code>".
-Its [=policy-controlled feature/default allowlist=] is <code>'self'</code>.
+### {{MLContextOptions}} ### {#api-mlcontextoptions}
 
-### {{ML/createContext()}} ### {#api-ml-createcontext}
-
-The <dfn>context type</dfn> is the type of the execution context that manages the resources and facilitates the compilation and execution of the neural network graph:
-<dl dfn-for="context type">
-<dt>"<dfn>default</dfn>"</dt>
-<dd>Context created per user preference options.</dd>
-<dt>"<dfn>webgpu</dfn>"</dt>
-<dd>Context created from WebGPU device.</dd>
-</dl>
-
-The <dfn>device type</dfn> indicates the kind of device used for the context. It is one of the following:
+The <dfn dfn-for=MLContextOptions dfn-type=dict-member>deviceType</dfn> option is an <dfn dfn-type=enum>MLDeviceType</dfn> and indicates the application's preference for the kind of device used for the context. It is one of the following:
 <dl dfn-for="MLDeviceType">
 <dt>"<dfn enum-value>cpu</dfn>"</dt>
 <dd>Provides the broadest compatibility and usability across all client devices with varying degrees of performance.</dd>
 <dt>"<dfn enum-value>gpu</dfn>"</dt>
 <dd>Provides the broadest range of achievable performance across graphics hardware platforms from consumer devices to professional workstations.</dd>
 </dl>
 
-The <dfn>power preference</dfn> indicates preference as related to power consumption. It is one of the following:
+The <dfn dfn-for=MLContextOptions dfn-type=dict-member>powerPreference</dfn> option is an <dfn dfn-type=enum>MLPowerPreference</dfn> and indicates the application's preference as related to power consumption. It is one of the following:
 <dl dfn-for="MLPowerPreference">
 <dt>"<dfn enum-value>default</dfn>"</dt>
 <dd>Let the user agent select the most suitable behavior.</dd>
@@ -754,6 +746,8 @@ The <dfn>power preference</dfn> indicates preference as related to power consump
 <dd>Prioritizes power consumption over other considerations such as execution speed.</dd>
 </dl>
 
+### {{ML/createContext()}} ### {#api-ml-createcontext}
+
 <details open algorithm>
 <summary>
 To <dfn>create a context</dfn> given [=realm=] |realm| and |options| (a {{GPUDevice}} or {{MLContextOptions}}), run these steps:
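For orientation (not part of the commit): a minimal sketch of how a page might pass the `MLContextOptions` members defined above to `createContext()`. The enum values follow the spec text in this diff; actual behavior depends on the implementation.

```js
// Hypothetical usage sketch of MLContextOptions with ML.createContext().
const context = await navigator.ml.createContext({
  deviceType: "gpu",            // "cpu" or "gpu"; an unsatisfiable type rejects with an OperationError
  powerPreference: "low-power"  // hint only; "default" when omitted
});
```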
@@ -800,7 +794,7 @@ The <dfn>power preference</dfn> indicates preference as related to power consump
 </details>
 
 ## {{MLContext}} interface ## {#api-mlcontext}
-The {{MLContext}} interface represents a global state of neural network compute workload and execution processes. Each {{MLContext}} object has associated [=context type=], [=device type=] and [=power preference=].
+The {{MLContext}} interface represents a global state of neural network compute workload and execution processes. Each {{MLContext}} object has associated [=context type=], {{MLDeviceType}} and {{MLPowerPreference}}.
 
 <script type=idl>
 typedef record<DOMString, ArrayBufferView> MLNamedArrayBufferViews;
@@ -820,22 +814,38 @@ interface MLContext {
 <div class=internal-slots>
 {{MLContext}} has the following internal slots:
 <dl dfn-type=attribute dfn-for="MLContext">
-: <dfn>\[[contextType]]</dfn> of type [=context type=]
+: <dfn>\[[contextType]]</dfn> of type [=context type=].
 ::
 The {{MLContext}}'s [=context type=].
-: <dfn>\[[deviceType]]</dfn> of type [=device type=]
+: <dfn>\[[deviceType]]</dfn> of type {{MLDeviceType}}.
 ::
-The {{MLContext}}'s [=device type=].
-: <dfn>\[[powerPreference]]</dfn> of type [=power preference=]
+The {{MLContext}}'s {{MLDeviceType}}.
+: <dfn>\[[powerPreference]]</dfn> of type {{MLPowerPreference}}.
 ::
-The {{MLContext}}'s [=power preference=].
+The {{MLContext}}'s {{MLPowerPreference}}.
 </dl>
 </div>
 
+The <dfn>context type</dfn> is the type of the execution context that manages the resources and facilitates the compilation and execution of the neural network graph:
+<dl dfn-for="context type">
+<dt>"<dfn>default</dfn>"</dt>
+<dd>Context created per user preference options.</dd>
+<dt>"<dfn>webgpu</dfn>"</dt>
+<dd>Context created from WebGPU device.</dd>
+</dl>
+
 <div class="note">
 When the {{MLContext/[[contextType]]}} is set to [=context type/default=] with the {{MLContextOptions}}.{{MLContextOptions/deviceType}} set to {{MLDeviceType/"gpu"}}, the user agent is responsible for creating an internal GPU device that operates within the context and is capable of ML workload submission on behalf of the calling application. In this setting however, only {{ArrayBufferView}} inputs and outputs are allowed in and out of the graph execution since the application has no way to know what type of internal GPU device is being created on their behalf. In this case, the user agent is responsible for automatic uploads and downloads of the inputs and outputs to and from the GPU memory using this said internal device.
 </div>
 
+<dl dfn-type=dict-member dfn-for=MLComputeResult>
+: <dfn>inputs</dfn>
+:: An object where the keys are the graph input names, and the values are the transferred {{ArrayBufferView}}s for the supplied input tensor values.
+
+: <dfn>outputs</dfn>
+:: An object where the keys are the graph output names, and the values are the transferred {{ArrayBufferView}}s for the computed output tensor values.
+</dl>
+
 <details open algorithm>
 <summary>
 To <dfn>validate graph resources</dfn>, given {{MLNamedArrayBufferViews}} |resources| and [=ordered map=] |descriptors|, run the following steps:
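As a usage sketch (outside the diff) of the `MLComputeResult` members defined above: the result of a compute call surfaces the transferred buffers back to the caller. This assumes the `MLContext.compute(graph, inputs, outputs)` shape from this revision of the spec and a graph with an input named "x" and an output named "y".

```js
// Hypothetical sketch of reading an MLComputeResult.
const inputs = { x: new Float32Array([1, 2, 3, 4]) };
const outputs = { y: new Float32Array(4) };
const result = await context.compute(graph, inputs, outputs);
// The views passed in are detached by the transfer; read the returned ones.
console.log(result.outputs.y);  // computed output tensor values
```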
@@ -1015,15 +1025,19 @@ enum MLOperandDataType {
 };
 
 dictionary MLOperandDescriptor {
-  // The operand type.
   required MLOperandDataType dataType;
-
-  // The dimensions field is empty for scalar operands,
-  // and non-empty for tensor operands.
   sequence<[EnforceRange] unsigned long> dimensions = [];
 };
 </script>
 
+<dl dfn-type=dict-member dfn-for=MLOperandDescriptor>
+: <dfn>dataType</dfn>
+:: The operand data type.
+
+: <dfn>dimensions</dfn>
+:: The shape of the operand. It is empty for scalar operands, and non-empty for tensor operands.
+</dl>
+
 <details open algorithm>
 <summary>
 The <dfn for="MLOperandDescriptor">byte length</dfn> of an {{MLOperandDescriptor}} |desc| is the value returned by the following steps:
@@ -1625,6 +1639,14 @@ partial interface MLGraphBuilder {
 };
 </script>
 
+<dl dfn-type=dict-member dfn-for=MLClampOptions>
+: <dfn>minValue</dfn>
+:: The minimum value of the range. When it is not specified, the clamping is not performed on the lower limit of the range.
+
+: <dfn>maxValue</dfn>
+:: The maximum value of the range. When it is not specified, the clamping is not performed on the upper limit of the range.
+</dl>
+
 <div class="note">
 <details open>
 <summary>
@@ -1658,6 +1680,9 @@ partial interface MLGraphBuilder {
 To <dfn>check clamp options</dfn> given {{MLClampOptions}} |options|, run the following steps:
 </summary>
 1. If |options|.{{MLClampOptions/minValue}} is greater than |options|.{{MLClampOptions/maxValue}}, then return false.
+
+    Issue(396): Not all implementations support {{MLClampOptions/minValue}} equal to {{MLClampOptions/maxValue}}.
+
 1. Return true.
 </details>
 
@@ -1666,8 +1691,6 @@ partial interface MLGraphBuilder {
 **Arguments:**
 - *input*: an {{MLOperand}}. The input tensor.
 - *options*: an optional {{MLClampOptions}}. The optional parameters of the operation.
-    - *minValue*: a {{float}} scalar. Specifies the minimum value of the range. When it is not specified, the clamping is not performed on the lower limit of the range.
-    - *maxValue*: a {{float}} scalar. Specifies the maximum value of the range. When it is not specified, the clamping is not performed on the upper limit of the range.
 **Returns:**
 - an {{MLOperand}}. The output tensor of the same shape as *operand*.
 </div>
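An illustrative sketch (not part of the commit) of passing `MLClampOptions` to `clamp()`; omitting a bound leaves that side of the range unclamped, as the definitions above state. `x` is assumed to be an `MLOperand` from the same `MLGraphBuilder`.

```js
// Hypothetical usage of MLClampOptions with MLGraphBuilder.clamp().
const relu6 = builder.clamp(x, { minValue: 0, maxValue: 6 });
const lowerOnly = builder.clamp(x, { minValue: 0 });  // no upper limit applied
```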
@@ -1691,8 +1714,6 @@ partial interface MLGraphBuilder {
 <div>
 **Arguments:**
 - *options*: an optional {{MLClampOptions}}. The optional parameters of the operation.
-    - *minValue*: a {{float}} scalar. Specifies the minimum value of the range. When it is not specified, the clamping is not performed on the lower limit of the range.
-    - *maxValue*: a {{float}} scalar. Specifies the maximum value of the range. When it is not specified, the clamping is not performed on the upper limit of the range.
 **Returns:**
 - an {{MLActivation}}. The operator representing the clamp operation.
 </div>
@@ -2568,6 +2589,13 @@ partial interface MLGraphBuilder {
 };
 </script>
 
+{{MLEluOptions}} has the following members:
+<dl dfn-type=dict-member dfn-for=MLEluOptions>
+: <dfn>alpha</dfn>
+:: A scalar multiplier.
+</dl>
+
+
 <div class="note">
 <details open>
 <summary>
@@ -2593,7 +2621,6 @@ partial interface MLGraphBuilder {
 **Arguments:**
 - *input*: an {{MLOperand}}. The input tensor.
 - *options*: an optional {{MLEluOptions}}. The optional parameters of the operation.
-    - *alpha*: a {{float}} scalar multiplier, default to 1.
 
 **Returns:**
 - an {{MLOperand}}. The output tensor of the same shape as *input*.
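A brief sketch (outside the diff) of `MLEluOptions` in use; per the argument description being removed above, `alpha` defaults to 1 when the option is omitted. `builder` and `x` are assumed as in the earlier sketches.

```js
// Hypothetical usage of MLEluOptions with MLGraphBuilder.elu().
const y = builder.elu(x, { alpha: 0.5 });
const yDefault = builder.elu(x);  // alpha defaults to 1
```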
@@ -2617,7 +2644,6 @@ partial interface MLGraphBuilder {
 <div>
 **Arguments:**
 - *options*: an optional {{MLEluOptions}}. The optional parameters of the operation.
-    - *alpha*: a {{float}} scalar multiplier, default to 1.
 
 **Returns:**
 - an {{MLActivation}}. The activation function representing the elu operation.
