Editorial: Make "generically emulated" text a macro, update wording (#638)
Many complex operations in the spec are given decompositions, showing
how the operation could be expressed in terms of more primitive
operations. This text is repeated in many places. Improve this in two
ways:
* Use Bikeshed's Text Macro[1] to reduce the repetition.
* Streamline the text, and make it explicit that if the underlying
platform doesn't support an operation, WebNN API implementations can
use the decomposition as a guide to emulate it.
The macro [EMULATED] is used at the end of an intro sentence since
there are variations - some of the decompositions are grouped, and
some make assumptions (e.g. activation functions, layouts, etc.).
If we assume that implementations must implement all operations in the
spec, either via the underlying platform or emulation, this fixes #187.
[1] https://speced.github.io/bikeshed/#metadata-text-macro
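To make the "decomposition as emulation guide" idea concrete, here is a minimal, hypothetical sketch of emulating one operation from more primitive ones. The relu-as-max decomposition, the descriptor fields, and the scalar constant are assumptions for illustration, not the spec's exact decomposition text.

// Hypothetical sketch: emulating relu(x) as max(x, 0) when the underlying
// platform lacks a native relu. Descriptor fields and the scalar constant
// are assumptions for illustration.
const context = await navigator.ml.createContext();
const builder = new MLGraphBuilder(context);

const x = builder.input('x', {dataType: 'float32', dimensions: [2, 3]});
const zero = builder.constant({dataType: 'float32', dimensions: []},
                              new Float32Array([0]));
// relu(x) == max(x, 0): the decomposition an implementation could fall
// back to if the platform has no native relu.
const y = builder.max(x, zero);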
index.bs (+26 -75)
@@ -36,7 +36,7 @@ Status Text: <p>
     <li>it has at least two independent, interoperable implementations of every feature defined in the specification, where interoperability can be verified by passing open test suites, and two or more implementations interoperating with each other;</li>
     <li>it has an open test suite of every feature defined in the specification.</li>
   </ul>
-
+Text Macro: EMULATED generically emulated from the usage of other operations as follows, although user agents typically have a more efficient implementation. In cases where the underlying platform does not directly support an operation, this decomposition can be used as a template to guide the implementation.
-The behavior of this operation when the input tensor is 4-D of the {{MLInputOperandLayout/"nchw"}} layout and the activation is {{MLGraphBuilder/relu()}} can be generically emulated from the usage of other operations as follow. However, user agents typically have a more efficient implementation for it, therefore its usage is encouraged from the performance standpoint.
+The behavior of this operation when the input tensor is 4-D of the {{MLInputOperandLayout/"nchw"}} layout and the activation is {{MLGraphBuilder/relu()}} can be [EMULATED]
-The behavior of this operation can be generically emulated from the usage of other operations as follow. However, user agents typically have a more efficient implementation for it, therefore its usage is encouraged from the performance standpoint.
-The behavior of this operation can be generically emulated from the usage of other operations as follows. However, user agents typically have a more efficient implementation for it, therefore its usage is encouraged from the performance standpoint.
-The behavior of this operation can be generically emulated via other operations as shown below, when the weight layout is the default {{MLGruWeightLayout/"zrn"}} layout, and the activation functions of the update/reset gate and new gate are {{MLGraphBuilder/sigmoid()}} and {{MLGraphBuilder/tanh()}} respectively.
+The behavior of this operation when the weight layout is the default {{MLGruWeightLayout/"zrn"}} layout, and the activation functions of the update/reset gate and new gate are {{MLGraphBuilder/sigmoid()}} and {{MLGraphBuilder/tanh()}} respectively can be [EMULATED]
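For orientation, a small sketch of the activation assumption stated in this gru() hunk: the update/reset gates use sigmoid() and the new gate uses tanh(). The pre-activation operands zPre/rPre/nPre and their shape are placeholders for illustration; `builder` is as in the first sketch.

// Hypothetical sketch: where the assumed GRU activations apply.
const zPre = builder.input('zPre', {dataType: 'float32', dimensions: [1, 4]});
const rPre = builder.input('rPre', {dataType: 'float32', dimensions: [1, 4]});
const nPre = builder.input('nPre', {dataType: 'float32', dimensions: [1, 4]});
const z = builder.sigmoid(zPre); // update gate
const r = builder.sigmoid(rPre); // reset gate
const n = builder.tanh(nPre);    // new gate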
-The behavior of this operation can be generically emulated from the usage of other operations as follow. However, user agents typically have a more efficient implementation for it, therefore its usage is encouraged from the performance standpoint.
-The behavior of this operation can be generically emulated via other operations as shown below, when the weight layout is the default {{MLLstmWeightLayout/"iofg"}} layout, and the activation functions of the input/forget/output gate and the cell gate/the cell state's filter for the output hidden state are {{MLGraphBuilder/sigmoid()}} and {{MLGraphBuilder/tanh()}} respectively.
+The behavior of this operation when the weight layout is the default {{MLLstmWeightLayout/"iofg"}} layout, and the activation functions of the input/forget/output gate and the cell gate/the cell state's filter for the output hidden state are {{MLGraphBuilder/sigmoid()}} and {{MLGraphBuilder/tanh()}} respectively can be [EMULATED]
-A *global* pooling operation such as one for the max pooling operation is a variant of pooling where the window dimensions is the spatial dimensions (last two dimensions) of the input shape, as follow.
+A *global* pooling operation such as one for the max pooling operation is a variant of pooling where the window dimensions is the spatial dimensions (last two dimensions) of the input shape, as follows.
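A hedged illustration of the global pooling variant this hunk describes: maxPool2d() with the window set to the input's spatial (last two) dimensions. The 4-D "nchw" shape [1, 3, 8, 8] is an arbitrary example; `builder` is as in the first sketch.

// Hypothetical sketch: global max pooling as maxPool2d() whose window
// equals the input's spatial dimensions.
const image = builder.input('image',
    {dataType: 'float32', dimensions: [1, 3, 8, 8]});
const globalMax = builder.maxPool2d(image, {windowDimensions: [8, 8]});
// With default strides and no padding, globalMax has shape [1, 3, 1, 1].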
-The behavior of this operation can be generically emulated from the usage of other operations as follow. However, user agents typically have a more efficient implementation for it, therefore its usage is encouraged from the performance standpoint.
+The behavior of this operation can be [EMULATED]
 </summary>
 <pre highlight="js">
   const c = builder.clamp(condition, {'minValue': 0, 'maxValue': 1});
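The trailing context above looks like part of a selection-style decomposition that clamps a condition to 0/1. A hedged sketch of how such a decomposition can continue, as output = c * trueValue + (1 - c) * falseValue; operand names and shapes are assumptions for illustration, not the spec's exact steps, and `builder` is as in the first sketch.

// Hypothetical continuation of the clamp-based selection: blend two
// operands using the 0/1 condition.
const condition = builder.input('condition',
    {dataType: 'float32', dimensions: [2, 2]});
const trueValue = builder.input('trueValue',
    {dataType: 'float32', dimensions: [2, 2]});
const falseValue = builder.input('falseValue',
    {dataType: 'float32', dimensions: [2, 2]});
const c = builder.clamp(condition, {'minValue': 0, 'maxValue': 1});
const one = builder.constant({dataType: 'float32', dimensions: []},
                             new Float32Array([1]));
// output = c * trueValue + (1 - c) * falseValue
const output = builder.add(
    builder.mul(c, trueValue),
    builder.mul(builder.sub(one, c), falseValue));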