Commit 1e70c23

zolkis, inexorabletash, and anssiko committed
Device selection - Remove MLDeviceType (#809)
- remove MLDeviceType and related prose
- update "create a context" algorithm
- be explicit MLContextOptions is a hint

Closes #302, closes #350, closes #749

Signed-off-by: Zoltan Kis <[email protected]>
Co-authored-by: Joshua Bell <[email protected]>
Co-authored-by: Anssi Kostiainen <[email protected]>
1 parent 8bff8a8 commit 1e70c23

File tree

1 file changed: +12 -33 lines changed

index.bs (+12 -33)
@@ -669,11 +669,11 @@ Unlike WebGPU, this API does not intrinsically support custom shader authoring;
 
 The WebGPU API identifies <a href="https://gpuweb.github.io/gpuweb/#privacy-machine-artifacts">machine-specific artifacts</a> as a privacy consideration. Similarly, the WebNN API's compute unit scheduling may under certain circumstances introduce a fingerprint. However, similarly to WebGPU, such fingerprints are identical across most or all of the devices of each vendor, mitigating the concern. Furthermore, software implementations can be used to further eliminate such artifacts.
 
-The WebNN API defines two developer-settable preferences to help inform [[#programming-model-device-selection]] and allow the implementation to better select the most appropriate underlying execution device for the workload. An {{MLDeviceType}} normatively indicates the kind of device and is one of: {{MLDeviceType/"cpu"}}, {{MLDeviceType/"gpu"}}, {{MLDeviceType/"npu"}}. If this type cannot be satisfied, an "{{OperationError}}" {{DOMException}} is thrown, thus this type can in some cases add two bits of entropy to the fingerprint. An {{MLPowerPreference}} indicates preference as related to the power consumption and is considered a hint only and as such does not increase entropy of the fingerprint.
+The WebNN API defines developer-settable preferences to help inform [[#programming-model-device-selection]] and allow the implementation to better select the underlying execution device for the workload. An {{MLPowerPreference}} indicates a preference for low power consumption or high performance; it is considered a hint only and as such does not increase the entropy of the fingerprint.
 
 Issue(623): {{MLContextOptions}} is under active development, and the design is expected to change, informed by further implementation experience and new use cases from the wider web community.
 
-If a future version of this specification introduces support for a new {{MLDeviceType}} that can only support a subset of {{MLOperandDataType}}s, that could introduce a new fingerprint.
+If a future version of this specification introduces a new {{MLContextOptions}} member that supports only a subset of {{MLOperandDataType}}s, that could introduce a new fingerprint.
 
 In general, implementers of this API are expected to apply <a href="https://gpuweb.github.io/gpuweb/#privacy-considerations">WebGPU Privacy Considerations</a> to their implementations where applicable.

@@ -729,7 +729,13 @@ An {{MLContext}} interface represents a global state of neural network execution
 
 In a situation when a GPU context executes a graph with a constant or an input in the system memory as an {{ArrayBufferView}}, the input content is automatically uploaded from the system memory to the GPU memory, and downloaded back to the system memory of an {{ArrayBufferView}} output buffer at the end of the graph execution. These data upload and download cycles only occur when the execution device requires the data to be copied out of and back into the system memory, such as in the case of the GPU; they don't occur when the device is a CPU device. Additionally, the result of the graph execution is in a known layout format. While the execution may be optimized for a native memory access pattern in an intermediate result within the graph, the output of the last operation of the graph must convert the content back to a known layout format at the end of the graph in order to maintain the expected behavior from the caller's perspective.
 
-When an {{MLContext}} is created with {{MLContextOptions}}, the user agent selects and creates the underlying execution device by taking into account the application's {{MLPowerPreference}} and {{MLDeviceType}} options.
+<div class="note">
+When an {{MLContext}} is created with {{MLContextOptions}}, the user agent selects and creates the underlying execution device by taking these options into account, currently only the {{MLPowerPreference}} option.
+
+Depending on the underlying platform, the user agent <span class=allow-2119>may</span> select different combinations of CPU, NPU and GPU devices.
+</div>
+
+For a history and rationale of this design, please see the <a href="https://github.com/webmachinelearning/webnn/blob/master/device-selection-explainer.md">device selection explainer</a>.
 
 ## Task Source ## {#programming-model-task-source}
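To make the new note concrete, here is a non-normative sketch of what context creation looks like after this change: the application passes only the remaining `powerPreference` hint, and the user agent picks the device. The function name and its null-return fallback are illustrative assumptions, not part of the spec.

```javascript
// Illustrative sketch: request an MLContext using only the remaining
// MLContextOptions member, powerPreference. The user agent, not the
// application, now chooses the CPU/GPU/NPU combination.
async function createLowPowerContext(ml = globalThis.navigator?.ml) {
  if (!ml) {
    // WebNN is not exposed in this environment (e.g. Node.js).
    return null;
  }
  // powerPreference is only a hint; the user agent may ignore it.
  return ml.createContext({ powerPreference: 'low-power' });
}
```

Because the options are hints, this call no longer throws for an unsatisfiable device type; the worst case is simply a context backed by a different device than the hint suggested.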

@@ -764,20 +770,13 @@ WorkerNavigator includes NavigatorML;
 
 ## {{ML}} interface ## {#api-ml}
 <script type=idl>
-enum MLDeviceType {
-  "cpu",
-  "gpu",
-  "npu"
-};
-
 enum MLPowerPreference {
   "default",
   "high-performance",
   "low-power"
 };
 
 dictionary MLContextOptions {
-  MLDeviceType deviceType = "cpu";
   MLPowerPreference powerPreference = "default";
 };

@@ -792,16 +791,6 @@ interface ML {
 
 Issue(623): {{MLContextOptions}} is under active development, and the design is expected to change, informed by further implementation experience and new use cases from the wider web community. The Working Group is considering additional API controls to allow the definition of a fallback device, multiple devices in a preferred order, or an exclusion of a specific device. Other considerations under discussion include error handling, ultimate fallback, and quantized operators. Feedback is welcome on any of these design considerations from web developers, library authors, OS and hardware vendors, and other stakeholders via GitHub:
 
-The <dfn dfn-for=MLContextOptions dfn-type=dict-member>deviceType</dfn> option is an <dfn dfn-type=enum>MLDeviceType</dfn> and indicates the application's preference for the kind of device used for the context. It is one of the following:
-<dl dfn-for="MLDeviceType">
-  <dt>"<dfn enum-value>cpu</dfn>"</dt>
-  <dd>Provides the broadest compatibility and usability across all client devices with varying degrees of performance.</dd>
-  <dt>"<dfn enum-value>gpu</dfn>"</dt>
-  <dd>Provides the broadest range of achievable performance across graphics hardware platforms from consumer devices to professional workstations. The underlying platform implementation may fall back to other devices for certain operators and parts of the graph.</dd>
-  <dt>"<dfn enum-value>npu</dfn>"</dt>
-  <dd>Provides power efficiency for sustained workloads across hardware platforms with purpose-built accelerators. The underlying platform implementation may fall back to other devices for certain operators and parts of the graph.</dd>
-</dl>
-
 The <dfn dfn-for=MLContextOptions dfn-type=dict-member>powerPreference</dfn> option is an <dfn dfn-type=enum>MLPowerPreference</dfn> and indicates the application's preference as related to power consumption. It is one of the following:
 <dl dfn-for="MLPowerPreference">
 <dt>"<dfn enum-value>default</dfn>"</dt>
@@ -828,16 +817,13 @@ The <dfn dfn-for=MLContextOptions dfn-type=dict-member>powerPreference</dfn> opt
 1. Let |context| be a new {{MLContext}} in |realm|.
 1. If |options| is a {{GPUDevice}} object:
     1. Set |context|.{{MLContext/[[contextType]]}} to "[=context type/webgpu=]".
-    1. Set |context|.{{MLContext/[[deviceType]]}} to {{MLDeviceType/"gpu"}}.
     1. Set |context|.{{MLContext/[[powerPreference]]}} to {{MLPowerPreference/"default"}}.
 1. Otherwise:
     1. Set |context|.{{MLContext/[[contextType]]}} to "[=context type/default=]".
 1. Set |context|.{{MLContext/[[lost]]}} to [=a new promise=] in |realm|.
-1. If |options|["{{MLContextOptions/deviceType}}"] [=map/exists=], then set |context|.{{MLContext/[[deviceType]]}} to |options|["{{MLContextOptions/deviceType}}"].
-1. Otherwise, set |context|.{{MLContext/[[deviceType]]}} to {{MLDeviceType/"cpu"}}.
 1. If |options|["{{MLContextOptions/powerPreference}}"] [=map/exists=], then set |context|.{{MLContext/[[powerPreference]]}} to |options|["{{MLContextOptions/powerPreference}}"].
 1. Otherwise, set |context|.{{MLContext/[[powerPreference]]}} to {{MLPowerPreference/"default"}}.
-1. If the user agent cannot support |context|.{{MLContext/[[contextType]]}}, |context|.{{MLContext/[[deviceType]]}} and |context|.{{MLContext/[[powerPreference]]}}, return failure.
+1. If the user agent cannot support |context|.{{MLContext/[[contextType]]}}, return failure.
 1. Return |context|.
 </details>
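For orientation, the pruned "create a context" steps above can be sketched as plain script. This is a non-normative illustration: `isGPUDevice` and `userAgentSupports` are hypothetical stand-ins for the spec's type check and the user agent's capability check, and the `[[lost]]` promise bookkeeping is reduced to a comment.

```javascript
// Hypothetical stand-ins for checks the spec leaves to the user agent.
const isGPUDevice = (options) => Boolean(options && options.isGPUDevice);
const userAgentSupports = (contextType) =>
  contextType === 'default' || contextType === 'webgpu';

// Non-normative sketch of the simplified steps: with [[deviceType]]
// removed, only [[contextType]] and [[powerPreference]] are
// initialized, and only [[contextType]] is validated.
function createContextSteps(options = {}) {
  const context = {};
  if (isGPUDevice(options)) {
    context.contextType = 'webgpu';
    context.powerPreference = 'default';
  } else {
    context.contextType = 'default';
    // The spec also initializes the [[lost]] promise here.
    context.powerPreference = options.powerPreference ?? 'default';
  }
  if (!userAgentSupports(context.contextType)) {
    return null; // "return failure" in the spec's terms
  }
  return context;
}
```

The key simplification is the final check: with device type gone, only the context type can cause failure, so a context request can no longer fail merely because a preferred device kind is absent.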

@@ -870,7 +856,7 @@ The <dfn dfn-for=MLContextOptions dfn-type=dict-member>powerPreference</dfn> opt
 </details>
 
 ## {{MLContext}} interface ## {#api-mlcontext}
-The {{MLContext}} interface represents a global state of neural network compute workload and execution processes. Each {{MLContext}} object has associated [=context type=], {{MLDeviceType}} and {{MLPowerPreference}}.
+The {{MLContext}} interface represents a global state of neural network compute workload and execution processes. Each {{MLContext}} object has associated [=context type=] and {{MLPowerPreference}}.
 
 <script type=idl>
 typedef record<USVString, MLTensor> MLNamedTensors;
@@ -904,9 +890,6 @@ interface MLContext {
 : <dfn>\[[contextType]]</dfn> of type [=context type=].
 ::
     The {{MLContext}}'s [=context type=].
-: <dfn>\[[deviceType]]</dfn> of type {{MLDeviceType}}.
-::
-    The {{MLContext}}'s {{MLDeviceType}}.
 : <dfn>\[[powerPreference]]</dfn> of type {{MLPowerPreference}}.
 ::
     The {{MLContext}}'s {{MLPowerPreference}}.
@@ -929,10 +912,6 @@ The <dfn>context type</dfn> is the type of the execution context that manages th
 <dd>Context created from WebGPU device.</dd>
 </dl>
 
-<div class="note">
-When the {{MLContext/[[contextType]]}} is set to [=context type/default=] with the {{MLContextOptions}}.{{MLContextOptions/deviceType}} set to {{MLDeviceType/"gpu"}}, the user agent is responsible for creating an internal GPU device that operates within the context and is capable of ML workload submission on behalf of the calling application.
-</div>
-
 <details open algorithm>
 <summary>
 To <dfn>validate buffer with descriptor</dfn> given {{AllowSharedBufferSource}} |bufferSource| and {{MLOperandDescriptor}} |descriptor|, run the following steps:
@@ -1044,7 +1023,7 @@ Note: `dispatch()` itself provides no signal that graph execution has completed.
   'C': outputTensorC
 };
 context.dispatch(graph, inputs, outputs);
-
+
 // 6. Read back the computed result.
 const result = await context.readTensor(outputTensorC);
 console.log('Output value:', new Float32Array(result)); // [1, 1, 1, 1]

0 commit comments

Comments
 (0)