Commit 1aa0bbc

Drop the support of synchronous execution
Remove the definition and algorithm steps for:
- ML.createContextSync()
- MLGraphBuilder.buildSync()
- MLContext.computeSync()

Fix #531
1 parent eb06ccf commit 1aa0bbc

1 file changed: 27 additions, 167 deletions

index.bs (+27, -167)
@@ -723,24 +723,9 @@ The implementation may use views, as above, for intermediate values.

Before the execution, the computation graph that is used to compute one or more specified outputs needs to be compiled and optimized. The key purpose of the compilation step is to enable optimizations that span two or more operations, such as operation or loop fusion.

-There are multiple ways by which the graph may be compiled. The {{MLGraphBuilder}}.{{MLGraphBuilder/build()}} method compiles the graph in the background without blocking the calling thread, and returns a {{Promise}} that resolves to an {{MLGraph}}. The {{MLGraphBuilder}}.{{MLGraphBuilder/buildSync()}} method compiles the graph immediately on the calling thread, which must be a worker thread running on CPU or GPU device, and returns an {{MLGraph}}. Both compilation methods produce an {{MLGraph}} that represents a compiled graph for optimal execution.
+The {{MLGraphBuilder}}.{{MLGraphBuilder/build()}} method compiles the graph in the background without blocking the calling thread, and returns a {{Promise}} that resolves to an {{MLGraph}}. The compilation step produces an {{MLGraph}} that represents a compiled graph for optimal execution.

-Once the {{MLGraph}} is constructed, there are multiple ways by which the graph may be executed. The
-{{MLContext}}.{{MLContext/computeSync()}} method represents a way the execution of the graph is carried out immediately
-on the calling thread, which must also be a worker thread, either on a CPU or GPU device. The execution
-produces the results of the computation from all the inputs bound to the graph.
-
-The {{MLContext}}.{{MLContext/compute()}} method represents a way the execution of the graph is performed asynchronously
-either on a parallel timeline in a separate worker thread for the CPU execution or on a GPU timeline in a GPU
-command queue. This method returns immediately without blocking the calling thread while the actual execution is
-offloaded to a different timeline. This type of execution is appropriate when the responsiveness of the calling
-thread is critical to good user experience. The computation results will be placed at the bound outputs at the
-time the operation is successfully completed on the offloaded timeline at which time the calling thread is
-signaled. This type of execution supports both the CPU and GPU device.
-
-In both the {{MLContext}}.{{MLContext/compute()}} and {{MLContext}}.{{MLContext/computeSync()}} execution methods, the caller supplies
-the input values using {{MLNamedArrayBufferViews}}, binding the input {{MLOperand}}s to their values. The caller
-then supplies pre-allocated buffers for output {{MLOperand}}s using {{MLNamedArrayBufferViews}}.
+Once the {{MLGraph}} is constructed, the {{MLContext}}.{{MLContext/compute()}} method performs the execution of the graph asynchronously either on a parallel timeline in a separate worker thread for the CPU execution or on a GPU timeline in a GPU command queue. This method returns immediately without blocking the calling thread while the actual execution is offloaded to a different timeline. The caller supplies the input values using {{MLNamedArrayBufferViews}}, binding the input {{MLOperand}}s to their values. The caller then supplies pre-allocated buffers for output {{MLOperand}}s using {{MLNamedArrayBufferViews}}. The execution produces the results of the computation from all the inputs bound to the graph. The computation results will be placed at the bound outputs at the time the operation is successfully completed on the offloaded timeline at which time the calling thread is signaled. This type of execution supports both the CPU and GPU device.
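For orientation, here is a minimal sketch of the asynchronous flow the added paragraphs describe, written in the style of the spec's own examples. It is editorial illustration, not part of the commit: the graph, descriptor values, and device type are made up, and the results are assumed to be read from the value that {{MLContext/compute()}} resolves with, per the {{MLNamedArrayBufferViews}} transfer algorithm referenced later in this diff.

<pre highlight="js">
// Illustrative sketch only: compile with build() and execute with compute(),
// the asynchronous methods this commit keeps.
const context = await navigator.ml.createContext({deviceType: 'cpu'});

// c = a * 2, where 'a' is a 2x2 float32 input (hypothetical graph).
const builder = new MLGraphBuilder(context);
const desc = {dataType: 'float32', dimensions: [2, 2]};
const a = builder.input('a', desc);
const two = builder.constant(desc, new Float32Array(sizeOfShape(desc.dimensions)).fill(2));
const c = builder.mul(a, two);

// build() compiles in the background and resolves to an MLGraph.
const graph = await builder.build({'c': c});

// The caller binds inputs and pre-allocated outputs as MLNamedArrayBufferViews.
const inputs = {'a': new Float32Array([1, 2, 3, 4])};
const outputs = {'c': new Float32Array(sizeOfShape(desc.dimensions))};

// compute() offloads execution; the results are read from the resolved value
// (assumed here, since the bound views are transferred during execution).
const results = await context.compute(graph, inputs, outputs);
console.log(results.outputs.c);  // Float32Array(4) [2, 4, 6, 8]
</pre>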

## Device Selection ## {#programming-model-device-selection}

@@ -798,11 +783,6 @@ dictionary MLContextOptions {
interface ML {
  Promise<MLContext> createContext(optional MLContextOptions options = {});
  Promise<MLContext> createContext(GPUDevice gpuDevice);
-
-  [Exposed=(DedicatedWorker)]
-  MLContext createContextSync(optional MLContextOptions options = {});
-  [Exposed=(DedicatedWorker)]
-  MLContext createContextSync(GPUDevice gpuDevice);
};
</script>
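A short usage sketch of the two overloads that remain after this change. The WebGPU calls are assumptions about how a caller would obtain a {{GPUDevice}}; they are not part of the commit.

<pre highlight="js">
// Sketch: the two remaining, asynchronous ways to obtain an MLContext.
// Option 1: let the user agent pick a device from MLContextOptions.
const cpuContext = await navigator.ml.createContext({deviceType: 'cpu'});

// Option 2: reuse an existing WebGPU device (assumes WebGPU is available).
const adapter = await navigator.gpu.requestAdapter();
const gpuDevice = await adapter.requestDevice();
const gpuContext = await navigator.ml.createContext(gpuDevice);
</pre>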

@@ -859,30 +839,6 @@ Its <a>default allowlist</a> is <code>'self'</code>.
</div>
</details>

-### {{ML/createContextSync}} ### {#api-ml-createcontextsync}
-
-<details open algorithm>
-<summary>
-The <dfn method for=ML>createContextSync(|options|)</dfn> method steps are:
-</summary>
-<div class=algorithm-steps>
-1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=allowed to use=] the [=webnn-feature|webnn=] feature, then [=exception/throw=] a "{{SecurityError}}" {{DOMException}}.
-1. Let |context| be the result of [=creating a context=] with |options|. If that returns failure, then [=exception/throw=] a "{{NotSupportedError}}" {{DOMException}}.
-1. Return |context|.
-</div>
-</details>
-
-<details open algorithm>
-<summary>
-The <dfn method for=ML>createContextSync(|gpuDevice|)</dfn> method steps are:
-</summary>
-<div class=algorithm-steps>
-1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=allowed to use=] the [=webnn-feature|webnn=] feature, then [=exception/throw=] a "{{SecurityError}}" {{DOMException}}.
-1. Let |context| be the result of [=creating a context=] with |gpuDevice|. If that returns failure, then [=exception/throw=] a "{{NotSupportedError}}" {{DOMException}}.
-1. Return |context|.
-</div>
-</details>
-
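With these synchronous entry points gone, the failure modes spelled out in the removed steps are only observable through the asynchronous {{ML/createContext()}} path. A hedged sketch of what calling code might do; the exception names follow the removed steps, and whether the asynchronous algorithm reports exactly these is defined elsewhere in the spec:

<pre highlight="js">
// Sketch: handling context-creation failures on the asynchronous path.
try {
  const context = await navigator.ml.createContext({deviceType: 'gpu'});
  // ... build and compute with the context ...
} catch (e) {
  // Assumed mapping from the removed synchronous steps:
  // "SecurityError"     - the webnn feature is not allowed for this document.
  // "NotSupportedError" - no suitable context could be created.
  console.error(`${e.name}: ${e.message}`);
}
</pre>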
## {{MLActivation}} interface ## {#api-mlactivation}

Objects implementing the {{MLActivation}} interface represent activation function types.
@@ -994,40 +950,6 @@ interface MLContext {};
When the {{MLContext/[[contextType]]}} is set to [=context type/default=] with the {{MLContextOptions}}.{{deviceType}} set to {{MLDeviceType/"gpu"}}, the user agent is responsible for creating an internal GPU device that operates within the context and is capable of ML workload submission on behalf of the calling application. In this setting however, only {{ArrayBufferView}} inputs and outputs are allowed in and out of the graph execution since the application has no way to know what type of internal GPU device is being created on their behalf. In this case, the user agent is responsible for automatic uploads and downloads of the inputs and outputs to and from the GPU memory using this said internal device.
</div>

-### Synchronous Execution ### {#api-mlcontext-sync-execution}
-Synchronously carries out the computational workload of a compiled graph {{MLGraph}} on the calling thread, which must be a worker thread, to produce results as defined by the operations in the graph. This method of execution requires an {{MLContext}} created with {{MLContextOptions}}. Otherwise, it [=exception/throws=] an "{{OperationError}}" {{DOMException}}.
-
-<script type=idl>
-partial interface MLContext {
-  [Exposed=(DedicatedWorker)]
-  undefined computeSync(
-      MLGraph graph, MLNamedArrayBufferViews inputs, MLNamedArrayBufferViews outputs);
-};
-</script>
-
-<div>
-**Arguments:**
-- *graph*: an {{MLGraph}}. The compiled graph to be executed.
-- *inputs*: an {{MLNamedArrayBufferViews}}. The resources of inputs.
-- *outputs*: an {{MLNamedArrayBufferViews}}. The pre-allocated resources of required outputs.
-
-**Returns:** {{undefined}}.
-</div>
-
-<details open algorithm>
-<summary>
-The <dfn method for=MLContext>computeSync(|graph|, |inputs|, |outputs|)</dfn> method steps are:
-</summary>
-<div class=algorithm-steps>
-1. If |graph|.{{MLGraph/[[context]]}}.{{MLContext/[[contextType]]}} is not "[=context type/default=]", [=exception/throw=] an "{{OperationError}}" {{DOMException}}.
-1. If [=validating graph resources=] given |inputs| and |graph|.{{MLGraph/[[inputDescriptors]]}} returns false, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
-1. If [=validating graph resources=] given |outputs| and |graph|.{{MLGraph/[[outputDescriptors]]}} returns false, then [=exception/throw=] a "{{DataError}}" {{DOMException}}.
-1. Invoke [=execute graph=] given |graph|, |inputs| and |outputs|.
-1. If that [=exception/throws=] an error, re-[=exception/throw=] the error.
-1. Return {{undefined}}.
-</div>
-</details>
-
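The removed steps made argument validation observable as synchronous exceptions; the same [=validating graph resources=] checks remain on the asynchronous {{MLContext/compute()}} path. A sketch of the mismatched-buffer case, assuming the failure is reported through the returned promise and reusing the `context` and `graph` from the earlier sketch (whose output 'c' is a 2x2 float32 tensor, 4 elements):

<pre highlight="js">
// Sketch: resource validation rejects buffers that do not match the graph's
// output descriptors.
const freshInputs = {'a': new Float32Array([1, 2, 3, 4])};
const tooSmall = new Float32Array(2);  // wrong element count for a [2, 2] output
try {
  await context.compute(graph, freshInputs, {'c': tooSmall});
} catch (e) {
  console.error(e.name);  // expected: "DataError"
}
</pre>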
<details open algorithm>
<summary>
To <dfn>validate graph resources</dfn>, given {{MLNamedArrayBufferViews}} |resources| and [=ordered map=] |descriptors|, run the following steps:
@@ -1075,46 +997,6 @@ partial interface MLContext {
</div>
</details>

-#### Examples #### {#api-mlcontext-sync-execution-examples}
-
-<div class="example">
-<details open>
-<summary>
-The following code showcases the synchronous computation with optional outputs in a worker.
-</summary>
-<pre highlight="js">
-const context = navigator.ml.createContextSync();
-
-// Build a graph with two outputs.
-const builder = new MLGraphBuilder(context);
-const descA = {dataType: 'float32', dimensions: [3, 4]};
-const a = builder.input('a', descA);
-const descB = {dataType: 'float32', dimensions: [4, 3]};
-const bufferB = new Float32Array(sizeOfShape(descB.dimensions)).fill(0.5);
-const b = builder.constant(descB, bufferB);
-const descC = {dataType: 'float32', dimensions: [3, 3]};
-const bufferC = new Float32Array(sizeOfShape(descC.dimensions)).fill(1);
-const c = builder.constant(descC, bufferC);
-const d = builder.matmul(a, b);
-const e = builder.add(d, c);
-const graph = builder.buildSync({'d': d, 'e': e});
-
-const bufferA = new Float32Array(sizeOfShape(descA.dimensions)).fill(0.5);
-const inputs = {'a': bufferA};
-
-// Compute d.
-const bufferD = new Float32Array(sizeOfShape([3, 3]));
-context.computeSync(graph, inputs, {'d': bufferD});
-console.log(&#96;values: ${bufferD}&#96;);
-
-// Compute e.
-const bufferE = new Float32Array(sizeOfShape([3, 3]));
-context.computeSync(graph, inputs, {'e': bufferE});
-console.log(&#96;values: ${bufferE}&#96;);
-</pre>
-</details>
-</div>
-
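For comparison, the same two-output graph expressed with the asynchronous API that survives this commit. This rewrite is illustrative and not part of the change; it assumes the computed values are read back from the views returned by {{MLContext/compute()}}, and that fresh input views are supplied per call because the bound views are transferred during execution.

<pre highlight="js">
// Illustrative asynchronous rewrite of the removed example above.
const context = await navigator.ml.createContext();

// Build a graph with two outputs.
const builder = new MLGraphBuilder(context);
const descA = {dataType: 'float32', dimensions: [3, 4]};
const a = builder.input('a', descA);
const descB = {dataType: 'float32', dimensions: [4, 3]};
const bufferB = new Float32Array(sizeOfShape(descB.dimensions)).fill(0.5);
const b = builder.constant(descB, bufferB);
const descC = {dataType: 'float32', dimensions: [3, 3]};
const bufferC = new Float32Array(sizeOfShape(descC.dimensions)).fill(1);
const c = builder.constant(descC, bufferC);
const d = builder.matmul(a, b);
const e = builder.add(d, c);
const graph = await builder.build({'d': d, 'e': e});

// Compute d.
const inputsForD = {'a': new Float32Array(sizeOfShape(descA.dimensions)).fill(0.5)};
let results = await context.compute(graph, inputsForD, {'d': new Float32Array(sizeOfShape([3, 3]))});
console.log(`values: ${results.outputs.d}`);

// Compute e, with freshly allocated views because the previous call
// transferred the buffers it was given.
const inputsForE = {'a': new Float32Array(sizeOfShape(descA.dimensions)).fill(0.5)};
results = await context.compute(graph, inputsForE, {'e': new Float32Array(sizeOfShape([3, 3]))});
console.log(`values: ${results.outputs.e}`);
</pre>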
### {{MLNamedArrayBufferViews}} transfer algorithm ### {#mlnamedarraybufferviews-transfer-alg}

<details open algorithm>
@@ -1275,15 +1157,11 @@ interface MLGraphBuilder {

  // Compile the graph up to the specified output operands asynchronously.
  Promise<MLGraph> build(MLNamedOperands outputs);
-
-  // Compile the graph up to the specified output operands synchronously.
-  [Exposed=(DedicatedWorker)]
-  MLGraph buildSync(MLNamedOperands outputs);
};
</script>

<div class="note">
-Both {{MLGraphBuilder}}.{{MLGraphBuilder/build()}} and {{MLGraphBuilder}}.{{MLGraphBuilder/buildSync()}} methods compile the graph builder state up to the specified output operands into a compiled graph according to the type of {{MLContext}} that creates it. Since this operation can be costly in some machine configurations, the calling thread of the {{MLGraphBuilder}}.{{MLGraphBuilder/buildSync()}} method must only be a worker thread to avoid potential disruption of the user experience. When the {{MLContext/[[contextType]]}} of the {{MLContext}} is set to "[=context type/default=]", the compiled graph is initialized right before the {{MLGraph}} is returned. This graph initialization stage is important for optimal performance of the subsequent graph executions. It typically involves a process known as "weight preprocessing" where all the constant inputs to the graph are preprocessed and cached at the operating system level for subsequent graph execution calls. The initializing inputs are typically the constant weight data specified through the {{MLGraphBuilder/constant(descriptor, bufferView)|MLGraphBuilder/constant(value, type)}} method as constant operands during graph construction time.
+The {{MLGraphBuilder}}.{{MLGraphBuilder/build()}} method compiles the graph builder state up to the specified output operands into a compiled graph according to the type of {{MLContext}} that creates it. When the {{MLContext/[[contextType]]}} of the {{MLContext}} is set to "[=context type/default=]", the compiled graph is initialized right before the {{MLGraph}} is returned. This graph initialization stage is important for optimal performance of the subsequent graph executions. It typically involves a process known as "weight preprocessing" where all the constant inputs to the graph are preprocessed and cached at the operating system level for subsequent graph execution calls. The initializing inputs are typically the constant weight data specified through the {{MLGraphBuilder/constant(descriptor, bufferView)|MLGraphBuilder/constant(value, type)}} method as constant operands during graph construction time.

Issue(552): Decide how to specify graph initialization.
</div>
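A small sketch of the pattern the note describes: constant weight data is bound at graph-construction time, so a "[=context type/default=]" context can preprocess it while {{MLGraphBuilder/build()}} is still pending. The shapes, values, and the assumption that an existing `context` is available are illustrative, not part of the commit.

<pre highlight="js">
// Sketch: constants are bound during graph construction, so the
// implementation can "weight preprocess" them as part of build().
const builder = new MLGraphBuilder(context);
const weightDesc = {dataType: 'float32', dimensions: [4, 4]};
const weights = builder.constant(weightDesc, new Float32Array(16).fill(0.1));
const input = builder.input('x', {dataType: 'float32', dimensions: [1, 4]});
const output = builder.matmul(input, weights);

// For a "default" context, graph initialization happens before this resolves.
const graph = await builder.build({'output': output});
</pre>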
@@ -1504,7 +1382,7 @@ partial interface MLGraphBuilder {
</div>

### build ### {#api-mlgraphbuilder-build}
-Build a composed graph up to a given output operand into a computational graph, asynchronously or synchronously.
+Build a composed graph up to a given output operand into a computational graph asynchronously.

#### {{MLGraphBuilder/build(outputs)}} #### {#api-mlgraphbuilder-build-outputs}

@@ -1513,48 +1391,30 @@ Build a composed graph up to a given output operand into a computational graph,
The <dfn method for=MLGraphBuilder>build(|outputs|)</dfn> method steps are:
</summary>
<div class=algorithm-steps>
-<div class="note">
-The permissions and context validity have been checked by [[#api-mlgraphbuilder-constructor]] steps.
-</div>
1. Let |promise| be [=a new promise=].
-1. Return |promise| and run the following steps [=in parallel=].
-    1. Return the result of invoking {{MLGraphBuilder/buildSync(outputs)}} given |outputs|.
-    1. If that [=exception/throws=], re-[=exception/throw=] the error.
-</div>
-</details>
-
-#### {{MLGraphBuilder/buildSync(outputs)}} #### {#api-mlgraphbuilder-buildsync-outputs}
-
-<details open algorithm>
-<summary>
-The <dfn method for=MLGraphBuilder>buildSync(|outputs|)</dfn> method steps are:
-</summary>
-<div class=algorithm-steps>
-<div class="note">
-The permissions and context validity have been checked by [[#api-mlgraphbuilder-constructor]] steps.
-</div>
-1. If |outputs| is empty, then [=exception/throw=] a {{TypeError}}.
-1. [=map/For each=] |name| &rarr; |operand| of |outputs|:
-    1. If |name| is empty, then [=exception/throw=] a {{TypeError}}.
-1. If any of the following sub-steps fail, [=exception/throw=] an "{{OperationError}}" {{DOMException}}.
-    1. Let |graph| be a new {{MLGraph}}:
-        1. Set |graph|.{{MLGraph/[[context]]}} to [=this=].{{MLGraphBuilder/[[context]]}}.
-    1. Make a request to the underlying platform to:
-        1. Connect |graph| to a new [=implementation-defined=] graph implementation |graphImpl| given |graph|.
-        1. Set |graph|.{{MLGraph/[[implementation]]}} to |graphImpl|.
-    1. Make a request to the underlying platform to initialize the graph:
-        1. [=map/For each=] |name| &rarr; |operand| of |outputs|:
-            1. If [=validating MLOperand=] given |operand| and [=this=] returns false, then [=exception/throw=] a {{TypeError}}.
-            1. If |operand| was created as an input by the underlying platform:
-                1. If |operand|.{{MLOperand/[[name]]}}] is not unique for |graphImpl|, then [=exception/throw=] a {{TypeError}}.
-                1. Add |operand|.{{MLOperand/[[descriptor]]}} to |graph|.{{MLGraph/[[inputDescriptors]]}}[|operand|.{{MLOperand/[[name]]}}].
-            1. If |operand| was created as a constant by the underlying platform:
-                1. Implementations MAY preprocess and optimize the tensor data of |operand| for the underlying platform.
-            1. Register |operand|.{{MLOperand/[[operand]]}} in |graphImpl| as graph output.
-            1. Register |operand|.{{MLOperand/[[operator]]}} to |graphImpl|.
-
-    Issue(552): Decide how to specify graph initialization.
-1. Return |graph|.
+1. Return |promise| and run the following steps [=in parallel=]:
+    1. If |outputs| is empty, then [=reject=] |promise| with a "{{TypeError}}" {{DOMException}}.
+    1. [=map/For each=] |name| &rarr; |operand| of |outputs|:
+        1. If |name| is empty, then [=reject=] |promise| with a "{{TypeError}}" {{DOMException}}.
+    1. If any of the following sub-steps fail, then [=reject=] |promise| with an "{{OperationError}}" {{DOMException}}.
+        1. Let |graph| be a new {{MLGraph}}:
+            1. Set |graph|.{{MLGraph/[[context]]}} to [=this=].{{MLGraphBuilder/[[context]]}}.
+        1. Make a request to the underlying platform to:
+            1. Connect |graph| to a new [=implementation-defined=] graph implementation |graphImpl| given |graph|.
+            1. Set |graph|.{{MLGraph/[[implementation]]}} to |graphImpl|.
+        1. Make a request to the underlying platform to initialize the graph:
+            1. [=map/For each=] |name| &rarr; |operand| of |outputs|:
+                1. If [=validating MLOperand=] given |operand| and [=this=] returns false, then [=reject=] |promise| with a "{{TypeError}}" {{DOMException}}.
+                1. If |operand| was created as an input by the underlying platform:
+                    1. If |operand|.{{MLOperand/[[name]]}} is not unique for |graphImpl|, then [=reject=] |promise| with a "{{TypeError}}" {{DOMException}}.
+                    1. Add |operand|.{{MLOperand/[[descriptor]]}} to |graph|.{{MLGraph/[[inputDescriptors]]}}[|operand|.{{MLOperand/[[name]]}}].
+                1. If |operand| was created as a constant by the underlying platform:
+                    1. Implementations MAY preprocess and optimize the tensor data of |operand| for the underlying platform.
+                1. Register |operand|.{{MLOperand/[[operand]]}} in |graphImpl| as graph output.
+                1. Register |operand|.{{MLOperand/[[operator]]}} to |graphImpl|.
+
+    Issue(552): Decide how to specify graph initialization.
+    1. [=Resolve=] |promise| with |graph|.
</div>
</details>
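Seen from script, the promise-based steps above give {{MLGraphBuilder/build()}} the following observable behavior. A brief hedged sketch, reusing the hypothetical `builder` and `c` operand from the earlier sketches:

<pre highlight="js">
// Sketch: build() now reports failures by rejecting its promise.
try {
  await builder.build({});  // empty outputs: expected to reject with a TypeError
} catch (e) {
  console.error(e.name);
}

// A non-empty, valid set of named outputs resolves with a compiled MLGraph.
const graph = await builder.build({'c': c});
</pre>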
