
Commit 11314e8

remove MLCommandEncoder
1 parent 479ce17 commit 11314e8

File tree

1 file changed: +2 -151 lines

index.bs

+2 -151
@@ -672,7 +672,7 @@ Note: The group is <a href="https://github.com/webmachinelearning/webnn/issues/8

Unlike WebGPU, this API does not intrinsically support custom shader authoring; and as a result is not prone to timing attacks that rely on shader caches, or other persistent data. The API builds upon pre-existing shaders and lower level primitives of the browser or the underlying OS. Web developers who interface with {{GPUDevice}} are expected to be aware of <a href="https://gpuweb.github.io/gpuweb/#privacy-user-agent-state">WebGPU compilation cache considerations</a>.

-The WebGPU API identifies <a href="https://gpuweb.github.io/gpuweb/#privacy-machine-artifacts">machine-specific artifacts</a> as a privacy consideration. Given the WebNN API defines means to record an ML workload onto a WebGPU-compatible {{GPUCommandBuffer}}, compute unit scheduling may under certain circumstances introduce a fingerprint. However, similarly to WebGPU, such fingerprints are identical across most or all of the devices of each vendor, mitigating the concern. Furthermore, software implementations can be used to further eliminate such artifacts.
+The WebGPU API identifies <a href="https://gpuweb.github.io/gpuweb/#privacy-machine-artifacts">machine-specific artifacts</a> as a privacy consideration. Similarly, the WebNN API's compute unit scheduling may under certain circumstances introduce a fingerprint. However, similarly to WebGPU, such fingerprints are identical across most or all of the devices of each vendor, mitigating the concern. Furthermore, software implementations can be used to further eliminate such artifacts.

The WebNN API defines two developer-settable preferences to help inform [[#programming-model-device-selection]] and allow the implementation to better select the most appropriate underlying execution device for the workload. [=Device type=] normatively indicates the kind of device and is either {{MLDeviceType/"cpu"}} or {{MLDeviceType/"gpu"}}. If this type cannot be satisfied, an "{{OperationError}}" {{DOMException}} is thrown, thus this type can in some cases add two bits of entropy to the fingerprint. [=Power preference=] indicates preference as related to the power consumption and is considered a hint only and as such does not increase entropy of the fingerprint.

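As a point of reference for the device-selection paragraph in the hunk above, a context honoring those two preferences might be requested as in the following sketch. It assumes the navigator.ml.createContext() entry point and the MLContextOptions members (deviceType, powerPreference) from this draft; the `any` casts only paper over missing TypeScript DOM typings.

```ts
// Sketch: picking an execution device via MLContextOptions.
// "navigator.ml" is the WebNN entry point assumed from this draft.
async function createPreferredContext(): Promise<unknown> {
  const ml = (navigator as any).ml;

  try {
    // deviceType is normative ("cpu" or "gpu"); powerPreference is a hint only.
    // Awaiting also works if createContext() is synchronous in this draft.
    return await ml.createContext({ deviceType: "gpu", powerPreference: "low-power" });
  } catch (e) {
    // Per the text above, an unsatisfiable device type surfaces as "OperationError".
    if (e instanceof DOMException && e.name === "OperationError") {
      return await ml.createContext({ deviceType: "cpu" });
    }
    throw e;
  }
}
```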
@@ -744,13 +744,6 @@ In both the {{MLContext}}.{{MLContext/compute()}} and {{MLContext}}.{{MLContext/
the input values using {{MLNamedArrayBufferViews}}, binding the input {{MLOperand}}s to their values. The caller
then supplies pre-allocated buffers for output {{MLOperand}}s using {{MLNamedArrayBufferViews}}.

-The {{MLCommandEncoder}} interface created by the {{MLContext}}.{{MLContext/createCommandEncoder()}} method supports
-a graph execution method that provides the maximum flexibility to callers that also utilize WebGPU in their
-application. It does this by placing the workload required to initialize and compute the results of the
-operations in the graph onto a {{GPUCommandBuffer}}. The callers are responsible for the eventual submission
-of this workload on the {{GPUQueue}} through the WebGPU queue submission mechanism. Once the submitted workload
-is completely executed, the result is available in the bound output buffers.
-
## Device Selection ## {#programming-model-device-selection}

An {{MLContext}} interface represents a global state of neural network execution. One of the important context states is the underlying execution device that manages the resources and facilitates the compilation and the eventual execution of the neural network graph. In addition to the default method of creation with {{MLContextOptions}}, an {{MLContext}} could also be created from a specific {{GPUDevice}} that is already in use by the application, in which case the corresponding {{GPUBuffer}} resources used as graph constants, as well as the {{GPUTexture}} as graph inputs must also be created from the same device. In a multi-adapter configuration, the device used for {{MLContext}} must be created from the same adapter as the device used to allocate the resources referenced in the graph.
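The retained lines above describe the compute() path that survives this commit: inputs bound through MLNamedArrayBufferViews and pre-allocated output buffers. A minimal sketch of that calling pattern follows; the operand names and shapes are invented, and because the exact return value of compute() has shifted between revisions, the result handling is indicative only.

```ts
// Sketch: binding named ArrayBufferViews for compute(); "a", "b" and "out" are
// hypothetical operand names, and the 4-element shapes are made up.
async function runGraph(context: any, graph: any): Promise<Float32Array> {
  const inputs = {
    a: new Float32Array([1, 2, 3, 4]),
    b: new Float32Array([5, 6, 7, 8]),
  };
  // The caller supplies pre-allocated buffers for the outputs.
  const outputs = { out: new Float32Array(4) };

  // compute() binds the named views to the graph's input and output operands.
  // Depending on the spec revision, results are written into the supplied views
  // or handed back as transferred views on the resolved value.
  const result = await context.compute(graph, inputs, outputs);
  return result?.outputs?.out ?? outputs.out;
}
```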
@@ -948,135 +941,6 @@ The {{MLActivation}} objects (including the ones passed as input to methods) are
</div>
</details>

-## {{MLCommandEncoder}} interface ## {#api-mlcommandencoder}
-The {{MLCommandEncoder}} interface represents a method of execution that synchronously records the computational workload of a compiled {{MLGraph}} to a {{GPUCommandBuffer}} on the calling thread. Since the workload is not immediately executed, just recorded, this method allows more flexibility for the caller to determine how and when the recorded commands will be submitted for execution on the GPU relative to other GPU workload on the same or different queue.
-
-<script type=idl>
-typedef (GPUBuffer or GPUTexture) MLGPUResource;
-
-typedef record<DOMString, MLGPUResource> MLNamedGPUResources;
-
-[SecureContext, Exposed=(Window, DedicatedWorker)]
-interface MLCommandEncoder {};
-</script>
-
-<div class=internal-slots>
-{{MLCommandEncoder}} has the following internal slots:
-<dl dfn-type=attribute dfn-for="MLCommandEncoder">
-: <dfn>\[[context]]</dfn> of type {{MLContext}}
-::
-The context of type {{MLContext}} associated with this {{MLCommandEncoder}}.
-
-: <dfn>\[[implementation]]</dfn>
-::
-The underlying implementation provided by the User Agent.
-</dl>
-</div>
-
-### Graph Initialization ### {#api-mlcommandencoder-graph-initialization}
-Record the initialization of the {{MLGraph}}. This is a necessary step for optimal performance during graph execution as it gives the platform an opportunity to prepare and optimize constant input data for the subsequent execution of the graph. This method should only be called once per graph.
-
-<script type=idl>
-partial interface MLCommandEncoder {
-  undefined initializeGraph(MLGraph graph);
-};
-</script>
-
-<div>
-**Arguments:**
-- *graph*: an {{MLGraph}}. The compiled graph to be initialized with graph constant inputs.
-
-**Returns:** {{undefined}}.
-</div>
-
-<details open algorithm>
-<summary>
-The <dfn method for=MLCommandEncoder>initializeGraph(<var ignore>graph</var>)</dfn> method steps are:
-</summary>
-<div>
-<div class="note">
-The graph initialization stage typically involves a process known as "weight preprocessing" where all the constant inputs to the graph are preprocessed and cached at the operating system level for subsequent graph execution calls. The initializing inputs are typically the constant weight data specified through the {{MLGraphBuilder/constant(descriptor, bufferView)|MLGraphBuilder/constant(value, type)}} method as constant operands during graph construction time.
-</div>
-</div>
-</details>
-
-### Dispatch Execution Commands ### {#api-mlcommandencoder-dispatch-commands}
-Record the {{MLGraph}} execution with the inputs {{MLNamedGPUResources}} and outputs {{MLNamedGPUResources}}.
-
-<script type=idl>
-partial interface MLCommandEncoder {
-  undefined dispatch(MLGraph graph, MLNamedGPUResources inputs, MLNamedGPUResources outputs);
-};
-</script>
-
-<div>
-**Arguments:**
-- *graph*: an {{MLGraph}}. The compiled graph to be executed.
-- *inputs*: an {{MLNamedGPUResources}}. The resources of inputs.
-- *outputs*: an {{MLNamedGPUResources}}. The pre-allocated resources of required outputs.
-
-**Returns:** {{undefined}}.
-</div>
-
-<details open algorithm>
-<summary>
-The <dfn method for=MLCommandEncoder>dispatch(|graph|, |inputs|, |outputs|)</dfn> method steps are:
-</summary>
-<div class=algorithm-steps>
-1. If any of the following requirements are unmet, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
-    <div class=validusage>
-    1. [=map/For each=] |name| &rarr; |input| of |inputs|:
-        1. |graph|.{{MLGraph/[[inputDescriptors]]}}[|name|] must [=map/exist=].
-        1. Let |inputDesc| be |graph|.{{MLGraph/[[inputDescriptors]]}}[|name|].
-        1. If |input| is a {{GPUBuffer}}, then:
-            1. |input|.{{GPUBuffer/size}} must be equal to the [=byte length=] of |inputDesc|.
-    1. [=map/For each=] |name| &rarr; |output| of |outputs|:
-        1. |graph|.{{MLGraph/[[outputDescriptors]]}}[|name|] must [=map/exist=].
-        1. Let |outputDesc| be |graph|.{{MLGraph/[[outputDescriptors]]}}[|name|].
-        1. If |output| is a {{GPUBuffer}}, then:
-            1. |output|.{{GPUBuffer/size}} must be equal to the [=byte length=] of |outputDesc|.
-    </div>
-1. [=map/For each=] |name| &rarr; |input| of |inputs|:
-    1. Set the input of |graph|.{{MLGraph/[[implementation]]}} that is associated with |name| to |input|.
-1. [=map/For each=] |name| &rarr; |output| of |outputs|:
-    1. Set the output of |graph|.{{MLGraph/[[implementation]]}} that is associated with |name| to |output|.
-1. Issue a compute request of |graph|.{{MLGraph/[[implementation]]}}.
-1. If there is an error returned by |graph|.{{MLGraph/[[implementation]]}}, then:
-    1. Throw an "{{OperationError}}" {{DOMException}}.
-1. Return {{undefined}}.
-</div>
-</details>
-
-### Generate GPU Command Buffer ### {#api-mlcommandencoder-generate-gpu-command-buffer}
-Complete the recording of the ML workload and return a WebGPU-compatible {{GPUCommandBuffer}} containing the recorded workload.
-
-<script type=idl>
-partial interface MLCommandEncoder {
-  GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
-};
-</script>
-
-<div>
-**Arguments:**
-- *descriptor*: an optional {{GPUCommandBufferDescriptor}}. Descriptor of the command buffer.
-
-**Returns:** {{GPUCommandBuffer}}.
-</div>
-
-<details open algorithm>
-<summary>
-The <dfn method for=MLCommandEncoder>finish(|descriptor|)</dfn> method steps are:
-</summary>
-<div class=algorithm-steps>
-1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}.
-    1. Make a request to the underlying platform to complete the recording of the ML workload, given |descriptor|.
-    <div class="note">
-    See the related <a href="https://www.w3.org/TR/webgpu/#dom-gpucommandencoder-finish">WebGPU steps</a>.
-    </div>
-1. Return a {{GPUCommandBuffer}} containing the recorded workload.
-</div>
-</details>
-
## {{MLContext}} interface ## {#api-mlcontext}
The {{MLContext}} interface represents a global state of neural network compute workload and execution processes. Each {{MLContext}} object has associated [=context type=], [=device type=] and [=power preference=].

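The sections deleted above defined a three-step recording flow: initializeGraph(), dispatch() over MLNamedGPUResources, and finish() producing a GPUCommandBuffer for WebGPU queue submission. Purely as an illustration of the removed API, the sketch below strings those calls together; the context, graph, and GPU buffers are assumed to exist already, and none of this remains after this commit.

```ts
// Sketch of the removed flow: record the ML workload, then submit through WebGPU.
// "x" and "y" are hypothetical input/output names matching the graph's descriptors.
async function recordAndSubmit(
  gpuDevice: any,     // the GPUDevice the MLContext was created from
  context: any,       // MLContext created from that GPUDevice
  graph: any,         // a compiled MLGraph
  inputBuffer: any,   // GPUBuffer sized to the input descriptor's byte length
  outputBuffer: any,  // GPUBuffer sized to the output descriptor's byte length
): Promise<void> {
  const encoder = context.createCommandEncoder();

  // One-time step per graph: lets the platform preprocess ("weight preprocess")
  // the constant inputs before execution.
  encoder.initializeGraph(graph);

  // Record execution; GPUBuffer sizes must equal the descriptors' byte lengths,
  // otherwise the dispatch steps above throw a "DataError".
  encoder.dispatch(graph, { x: inputBuffer }, { y: outputBuffer });

  // Close the recording into a WebGPU-compatible command buffer and hand it to
  // WebGPU; results land in outputBuffer once the submitted work completes.
  const commandBuffer = encoder.finish();
  gpuDevice.queue.submit([commandBuffer]);
  await gpuDevice.queue.onSubmittedWorkDone();
}
```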
@@ -1353,19 +1217,6 @@ partial interface MLContext {
</details>
</div>

-### WebGPU Interoperability ### {#api-mlcontext-webgpu-interop}
-Create the {{MLCommandEncoder}} interface used to record the ML workload onto a WebGPU-compatible {{GPUCommandBuffer}} to allow mixing of the ML workload with other GPU workloads in an application that leverages WebGPU. This method only succeeds on an {{MLContext}} created with a {{GPUDevice}}. Otherwise, it [=exception/throws=] an "{{OperationError}}" {{DOMException}}.
-
-<script type=idl>
-partial interface MLContext {
-  MLCommandEncoder createCommandEncoder();
-};
-</script>
-
-<div algorithm=mlcontext.createcommandencoder>
-**Returns:** {{MLCommandEncoder}}. The command encoder used to record the ML workload on the GPU.
-</div>
-
## {{MLGraph}} interface ## {#api-mlgraph}
The {{MLGraph}} interface represents a compiled computational graph. A compiled graph once constructed is immutable and cannot be subsequently changed.

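The removed createCommandEncoder() above only succeeds on an MLContext created from a GPUDevice. A brief sketch of that precondition follows, assuming the createContext(gpuDevice) overload described earlier in the draft and the standard WebGPU adapter/device requests:

```ts
// Sketch: the WebGPU-interop precondition for the removed createCommandEncoder().
async function createGpuBackedEncoder(): Promise<unknown> {
  const gpu = (navigator as any).gpu; // WebGPU entry point
  const ml = (navigator as any).ml;   // WebNN entry point

  const adapter = await gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU adapter unavailable");
  const gpuDevice = await adapter.requestDevice();

  // Per the draft, an MLContext may be created from a specific GPUDevice; only
  // such a context can create a command encoder. On a default context the
  // removed text has this call throw an "OperationError" DOMException.
  const gpuContext = await ml.createContext(gpuDevice);
  return gpuContext.createCommandEncoder();
}
```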
@@ -1434,7 +1285,7 @@ interface MLGraphBuilder {
</script>

<div class="note">
-Both {{MLGraphBuilder}}.{{MLGraphBuilder/build()}} and {{MLGraphBuilder}}.{{MLGraphBuilder/buildSync()}} methods compile the graph builder state up to the specified output operands into a compiled graph according to the type of {{MLContext}} that creates it. Since this operation can be costly in some machine configurations, the calling thread of the {{MLGraphBuilder}}.{{MLGraphBuilder/buildSync()}} method must only be a worker thread to avoid potential disruption of the user experience. When the {{[[contextType]]}} of the {{MLContext}} is set to "[=context type/default=]", the compiled graph is initialized right before the {{MLGraph}} is returned. This graph initialization stage is important for optimal performance of the subsequent graph executions. See [[#api-mlcommandencoder-graph-initialization]] for more detail.
+Both {{MLGraphBuilder}}.{{MLGraphBuilder/build()}} and {{MLGraphBuilder}}.{{MLGraphBuilder/buildSync()}} methods compile the graph builder state up to the specified output operands into a compiled graph according to the type of {{MLContext}} that creates it. Since this operation can be costly in some machine configurations, the calling thread of the {{MLGraphBuilder}}.{{MLGraphBuilder/buildSync()}} method must only be a worker thread to avoid potential disruption of the user experience. When the {{[[contextType]]}} of the {{MLContext}} is set to "[=context type/default=]", the compiled graph is initialized right before the {{MLGraph}} is returned. This graph initialization stage is important for optimal performance of the subsequent graph executions.
</div>

{{MLBufferResourceView}} has the following members:

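For the build()/buildSync() note touched by the last hunk, a minimal async build might look like the sketch below. The MLGraphBuilder constructor, the add() operation, and the {type, dimensions} operand descriptor fields are taken from contemporaneous drafts and should be read as assumptions rather than the normative API.

```ts
// Sketch: compiling a tiny graph with the async build() path; buildSync() stays
// worker-only per the note above.
async function buildSmallGraph(context: any): Promise<unknown> {
  const MLGraphBuilderCtor = (globalThis as any).MLGraphBuilder; // assumed global
  const builder = new MLGraphBuilderCtor(context);

  // Hypothetical element-wise add over two 2x2 inputs named "a" and "b".
  const a = builder.input("a", { type: "float32", dimensions: [2, 2] });
  const b = builder.input("b", { type: "float32", dimensions: [2, 2] });
  const c = builder.add(a, b);

  // build() compiles the builder state up to the named outputs; on a "default"
  // context the compiled graph is also initialized before it is returned.
  return await builder.build({ c });
}
```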