index.bs: 21 additions & 4 deletions
@@ -748,7 +748,7 @@ An {{MLContext}} interface represents a global state of neural network execution
 In a situation when a GPU context executes a graph with a constant or an input in the system memory as an {{ArrayBufferView}}, the input content is automatically uploaded from the system memory to the GPU memory, and downloaded back to the system memory of an {{ArrayBufferView}} output buffer at the end of the graph execution. These data upload and download cycles only occur when the execution device requires the data to be copied out of and back into the system memory, as in the case of the GPU; they do not occur when the device is a CPU device. Additionally, the result of the graph execution is in a known layout format. While the execution may be optimized for a native memory access pattern in an intermediate result within the graph, the output of the last operation of the graph must convert the content back to a known layout format at the end of the graph in order to maintain the expected behavior from the caller's perspective.

 <div class="note">
-When an {{MLContext}} is created with {{MLContextOptions}}, the user agent selects and creates the underlying execution device by taking into account these options, currently only the {{MLPowerPreference}} option.
+When an {{MLContext}} is created with {{MLContextOptions}}, the user agent selects and creates the underlying execution device by taking into account these options.

 Depending on the underlying platform, the user agent <span class=allow-2119>may</span> select different combinations of CPU, NPU and GPU devices.
 </div>
@@ -978,6 +978,7 @@ enum MLPowerPreference {

 dictionary MLContextOptions {
   MLPowerPreference powerPreference = "default";
+  boolean accelerated = true;
 };

 [SecureContext, Exposed=(Window, Worker)]
@@ -1001,6 +1002,8 @@ The <dfn dfn-for=MLContextOptions dfn-type=dict-member>powerPreference</dfn> opt
   <dd>Prioritizes power consumption over other considerations such as execution speed.</dd>
 </dl>
+
+The <dfn dfn-for=MLContextOptions dfn-type=dict-member>accelerated</dfn> option indicates the application's preference regarding massively parallel acceleration. When set to `true` (the default), the underlying platform will attempt to use the available massively parallel accelerators, such as a GPU or NPU, also depending on the {{MLContextOptions/powerPreference}} option. When set to `false`, the application hints that CPU inference is preferred.
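The dictionary defaulting described above can be modeled in plain JavaScript. This is a hypothetical sketch for illustration, not the normative WebIDL dictionary conversion; `normalizeContextOptions` is an illustrative name that does not exist in the API:

```javascript
// Hypothetical model of MLContextOptions defaulting (illustrative only):
// members absent from the dictionary take the declared default values.
const ML_CONTEXT_OPTION_DEFAULTS = {
  powerPreference: "default",
  accelerated: true,
};

function normalizeContextOptions(options = {}) {
  // Spread order lets caller-supplied members override the defaults.
  return { ...ML_CONTEXT_OPTION_DEFAULTS, ...options };
}

// An application hinting that CPU inference is preferred:
const opts = normalizeContextOptions({ accelerated: false });
console.log(opts.powerPreference, opts.accelerated); // "default" false
```

In the real API these options would be passed to `navigator.ml.createContext()`, which performs the equivalent defaulting internally.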
@@ -1018,11 +1021,16 @@ The <dfn dfn-for=MLContextOptions dfn-type=dict-member>powerPreference</dfn> opt
 1. If |options| is a {{GPUDevice}} object, then:
     1. Set |context|.{{MLContext/[[contextType]]}} to "[=context type/webgpu=]".
     1. Set |context|.{{MLContext/[[powerPreference]]}} to {{MLPowerPreference/"default"}}.
+    1. Set |context|.{{MLContext/[[accelerated]]}} to `true`.
+    1. Set |context|.{{MLContext/[[cpuFallbackActive]]}} to `false`.
 1. Otherwise:
     1. Set |context|.{{MLContext/[[contextType]]}} to "[=context type/default=]".
     1. Set |context|.{{MLContext/[[lost]]}} to [=a new promise=] in |realm|.
     1. If |options|["{{MLContextOptions/powerPreference}}"] [=map/exists=], then set |context|.{{MLContext/[[powerPreference]]}} to |options|["{{MLContextOptions/powerPreference}}"].
     1. Otherwise, set |context|.{{MLContext/[[powerPreference]]}} to {{MLPowerPreference/"default"}}.
+    1. If |options|["{{MLContextOptions/accelerated}}"] [=map/exists=], then set |context|.{{MLContext/[[accelerated]]}} to |options|["{{MLContextOptions/accelerated}}"].
+    1. Otherwise, set |context|.{{MLContext/[[accelerated]]}} to `true`.
+    1. Set |context|.{{MLContext/[[cpuFallbackActive]]}} to `false`.
 1. If the user agent cannot support |context|.{{MLContext/[[contextType]]}}, then return failure.
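The slot-initialization steps above can be sketched as a plain-JS function. This is a hypothetical model for illustration only: `GPUDevice` is stubbed because the real interface only exists in a browser, internal slots become ordinary properties, and the `[[lost]]` promise and failure path are omitted:

```javascript
// Stub standing in for the WebGPU GPUDevice interface (assumption:
// real environments would use the actual GPUDevice object).
class GPUDevice {}

// Hypothetical sketch of the "create a context" slot initialization.
function initializeContext(options = {}) {
  const context = {};
  if (options instanceof GPUDevice) {
    // WebGPU interop contexts are always accelerated.
    context.contextType = "webgpu";
    context.powerPreference = "default";
    context.accelerated = true;
    context.cpuFallbackActive = false;
  } else {
    context.contextType = "default";
    // Dictionary members fall back to their defaults when absent.
    context.powerPreference =
      "powerPreference" in options ? options.powerPreference : "default";
    context.accelerated =
      "accelerated" in options ? options.accelerated : true;
    context.cpuFallbackActive = false;
  }
  return context;
}

const ctx = initializeContext({ accelerated: false });
console.log(ctx.contextType, ctx.accelerated); // "default" false
```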
 : <dfn>\[[powerPreference]]</dfn> of type {{MLPowerPreference}}.
 ::
     The {{MLContext}}'s {{MLPowerPreference}}.
+: <dfn>\[[accelerated]]</dfn> of type {{boolean}}.
+::
+    Whether the {{MLContext}} prefers massively parallel processing over CPU processing.
+: <dfn>\[[cpuFallbackActive]]</dfn> of type {{boolean}}.
+::
+    Whether the {{MLContext}} is currently falling back to CPU processing.
 : <dfn>\[[lost]]</dfn> of type {{Promise}}<{{MLContextLostInfo}}>.
 ::
     A {{Promise}} that is resolved when the {{MLContext}}'s underlying execution device is no longer available.
@@ -1178,7 +1194,8 @@ Note: `dispatch()` itself provides no signal that graph execution has completed.
 1. If [=validating tensors with descriptors=] given |outputs| and |graph|.{{MLGraph/[[outputDescriptors]]}} returns false, then [=exception/throw=] a {{TypeError}}.
 1. Enqueue the following steps to |graph|.{{MLGraph/[[context]]}}.{{MLContext/[[timeline]]}}:
     1. Run these steps, but [=/abort when=] [=this=] [=MLContext/is lost=]:
-        1. Issue a compute request to |graph|.{{MLGraph/[[implementation]]}} given |inputs| and |outputs|.
+        1. Issue a compute request to |graph|.{{MLGraph/[[implementation]]}} given |inputs| and |outputs|, as well as |graph|.{{MLGraph/[[context]]}}.{{MLContext/[[powerPreference]]}} and |graph|.{{MLGraph/[[context]]}}.{{MLContext/[[accelerated]]}}.
+        1. If |graph|.{{MLGraph/[[context]]}}.{{MLContext/[[accelerated]]}} is `true` and the underlying platform can currently only perform CPU inference, then set |graph|.{{MLGraph/[[context]]}}.{{MLContext/[[cpuFallbackActive]]}} to `true`; otherwise, set it to `false`.

 Issue(778): Add a mechanism for reporting errors during graph execution.
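The fallback-tracking step added to `dispatch()` can be sketched as a small helper. This is a hypothetical model for illustration: `updateCpuFallback` and `platformCanOnlyDoCpuInference` are invented names standing in for internal implementation state, not real API surface:

```javascript
// Hypothetical sketch: after a compute request is issued,
// cpuFallbackActive records whether an accelerated context is
// currently limited to CPU inference by the underlying platform.
function updateCpuFallback(context, platformCanOnlyDoCpuInference) {
  // Only an accelerated context can be "falling back"; a context
  // created with accelerated=false asked for CPU inference.
  context.cpuFallbackActive =
    context.accelerated === true && platformCanOnlyDoCpuInference === true;
}

const ctx = { accelerated: true, cpuFallbackActive: false };
updateCpuFallback(ctx, true);
console.log(ctx.cpuFallbackActive); // true
```

Note the flag is recomputed on every dispatch, so it reflects the platform's state at the most recent compute request rather than a one-time decision at context creation.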
@@ -1730,7 +1747,7 @@ typedef (bigint or unrestricted double) MLNumber;
 : <dfn>\[[operator]]</dfn> of type [=operator=]
 ::
     Reference to {{MLOperand}}'s corresponding [=operator=].
-
+
 : <dfn>\[[constantTensor]]</dfn> of type {{MLTensor}}
 ::
     The {{MLOperand}}'s tensor (only for constant operands).
@@ -2151,7 +2168,7 @@ Build a composed graph up to a given output operand into a computational graph a
 1. If |name| is empty, then return [=a new promise=] in |realm| [=rejected=] with a {{TypeError}}.
 1. If [=MLGraphBuilder/validating operand=] given [=this=] and |operand| returns false, then return [=a new promise=] in |realm| [=rejected=] with a {{TypeError}}.
 1. If |operand| is in [=this=]'s [=MLGraphBuilder/graph=]'s [=computational graph/inputs=] or [=computational graph/constants=], then return [=a new promise=] in |realm| [=rejected=] with a {{TypeError}}.
-1. If |operand|.{{MLOperand/[[constantTensor]]}} exists and |operand|.{{MLOperand/[[constantTensor]]}}.{{MLTensor/[[isDestroyed]]}} is true, then return [=a new promise=] in |realm| [=rejected=] with a {{TypeError}}.
+1. If |operand|.{{MLOperand/[[constantTensor]]}} exists and |operand|.{{MLOperand/[[constantTensor]]}}.{{MLTensor/[[isDestroyed]]}} is true, then return [=a new promise=] in |realm| [=rejected=] with a {{TypeError}}.