
Commit 4b7abcb

Bug fix: Link operator names to the MLGraphBuilder methods
For a "reshape" reference a malformed IDL link was present. The other references are linked as appropriately to the builder references rather than just being styled text or links to document sections.
1 parent d1a02f2 commit 4b7abcb

2 files changed: +11 -9 lines changed

SpecCodingConventions.md (+2)
@@ -70,6 +70,8 @@ Example:
 1. If |shape| is a [=circle=], draw it at |shape|'s [=circle/origin=].
 ```
 
+* When referencing an operator in text (e.g. sigmoid, tanh, etc.), link the operator name to the `MLGraphBuilder` methods for creating the corresponding `MLOperand` or `MLActivation`, e.g. `{{MLGraphBuilder/sigmoid()}}`. This provides consistent styling and a thorough overview of the operator, even if the method itself isn't being discussed.
+
 
 ### Formatting

index.bs (+9 -9)
@@ -920,12 +920,12 @@ interface MLActivation {
 </div>
 
 <div class="note">
-These activation function types are used to create other operations. One such use of this interface is when an activation function is fused into another operation such as [[#api-mlgraphbuilder-conv2d]] or [[#api-mlgraphbuilder-batchnorm]] during a graph construction session. Such fused activation functions can provide a significant performance improvement when supported natively by the underlying implementation. This is intended as an optimization opportunity for implementers.
+These activation function types are used to create other operations. One such use of this interface is when an activation function is fused into another operation such as {{MLGraphBuilder/conv2d()}} or {{MLGraphBuilder/batchNormalization()}} during a graph construction session. Such fused activation functions can provide a significant performance improvement when supported natively by the underlying implementation. This is intended as an optimization opportunity for implementers.
 </div>
 
 ### Creating {{MLActivation}} ### {#api-mlactivation-create}
 <div class="note">
-The {{MLActivation}} objects (including the ones passed as input to methods) are created by the methods of {{MLGraphBuilder}} and are identified by their name. The |options| dictionary is defined by those methods. The actual creation of the activation function, e.g. a [[#api-mlgraphbuilder-sigmoid-method]] or [[#api-mlgraphbuilder-relu-method]], can then be deferred until the rest of the graph is ready to connect with it, such as during the construction of [[#api-mlgraphbuilder-conv2d]] for example.
+The {{MLActivation}} objects (including the ones passed as input to methods) are created by the methods of {{MLGraphBuilder}} and are identified by their name. The |options| dictionary is defined by those methods. The actual creation of the activation function, e.g. a {{MLGraphBuilder/sigmoid()}} or {{MLGraphBuilder/relu()}}, can then be deferred until the rest of the graph is ready to connect with it, such as during the construction of {{MLGraphBuilder/conv2d()}} for example.
 </div>
 
 <details open algorithm>
@@ -1634,7 +1634,7 @@ partial interface MLGraphBuilder {
 <div class="note">
 <details open>
 <summary>
-The behavior of this operation, when the input tensor is 4-D with the {{MLInputOperandLayout/"nchw"}} layout and the activation is of operator type *relu*, can be generically emulated using other operations as follows. However, user agents typically have a more efficient implementation for it, so its usage is encouraged from a performance standpoint.
+The behavior of this operation, when the input tensor is 4-D with the {{MLInputOperandLayout/"nchw"}} layout and the activation is {{MLGraphBuilder/relu()}}, can be generically emulated using other operations as follows. However, user agents typically have a more efficient implementation for it, so its usage is encouraged from a performance standpoint.
 </summary>
 <pre highlight="js">
 const shape = [1,null,1,1];
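
For contrast with the fused form, a minimal sketch of the emulation this summary describes, assuming a `bias` option on conv2d and the "nchw" layout; `input`, `filter`, and `options` are hypothetical placeholders:

```js
// conv2d without a fused activation: add the bias explicitly, broadcast over
// the channel dimension via a [1, null, 1, 1] reshape, then apply relu as a
// separate operation.
const output = builder.relu(
    builder.add(
        builder.conv2d(input, filter),
        builder.reshape(options.bias, [1, null, 1, 1])));
```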
@@ -2505,7 +2505,7 @@ partial interface MLGraphBuilder {
 </div>
 
 <div class="note">
-Although the operations *greaterOrEqual* and *lesserOrEqual* can each be implemented in terms of the operations *not*, *lesser*, and *greater* (in other words, `greater-or-equal(a, b)` is `not(lesser(a, b))`), they are specifically defined to handle NaN cases and, for performance reasons, to avoid double comparisons.
+Although the operations {{MLGraphBuilder/greaterOrEqual()}} and {{MLGraphBuilder/lesserOrEqual()}} can each be implemented in terms of the operations {{MLGraphBuilder/not()}}, {{MLGraphBuilder/lesser()}}, and {{MLGraphBuilder/greater()}} (in other words, `builder.greaterOrEqual(a, b)` is `builder.not(builder.lesser(a, b))`), they are specifically defined to handle NaN cases and, for performance reasons, to avoid double comparisons.
 </div>
 
 <details open algorithm>
@@ -3365,7 +3365,7 @@ partial interface MLGraphBuilder {
 <div class="note">
 <details open>
 <summary>
-The behavior of this operation can be generically emulated via other operations as shown below, when the weight layout is the default {{MLGruWeightLayout/"zrn"}} layout, and the activation functions of the update/reset gate and new gate are of the operator types *sigmoid* and *tanh* respectively.
+The behavior of this operation can be generically emulated via other operations as shown below, when the weight layout is the default {{MLGruWeightLayout/"zrn"}} layout, and the activation functions of the update/reset gate and new gate are {{MLGraphBuilder/sigmoid()}} and {{MLGraphBuilder/tanh()}} respectively.
 </summary>
 <pre highlight="js">
 const one = builder.constant(1);
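
A minimal sketch of the gate arithmetic this summary refers to, for the update gate under the "zrn" layout; `x`, `hPrev`, `wZ`, `rZ`, and `bZ` are hypothetical placeholders for the input, previous hidden state, and update-gate weights and bias:

```js
// Update gate: sigmoid over the input and recurrent contributions.
const updateGate = builder.sigmoid(
    builder.add(
        builder.add(builder.matmul(x, wZ), builder.matmul(hPrev, rZ)),
        bZ));
// The reset gate is formed the same way; the new gate uses tanh instead.
```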
@@ -3671,7 +3671,7 @@ Create a named {{MLOperand}} based on a descriptor, that can be used as an input
 </details>
 
 ### instanceNormalization ### {#api-mlgraphbuilder-instancenorm}
-Normalize the input using [[Instance-Normalization]]. Unlike [[#api-mlgraphbuilder-batchnorm]], where the mean and variance values used in the normalization are computed across all the samples in the batch dimension while the model is trained, the mean and variance values used in instance normalization are computed on the fly for each input feature of each individual sample in the batch.
+Normalize the input using [[Instance-Normalization]]. Unlike {{MLGraphBuilder/batchNormalization()}}, where the mean and variance values used in the normalization are computed across all the samples in the batch dimension while the model is trained, the mean and variance values used in instance normalization are computed on the fly for each input feature of each individual sample in the batch.
 
 <script type=idl>
 dictionary MLInstanceNormalizationOptions {
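
A minimal sketch of the distinction this paragraph draws, for a 4-D "nchw" input; `input` is a hypothetical `MLOperand`, and `1e-5` stands in for the epsilon option:

```js
// Per-sample, per-channel statistics: reduce over the spatial axes only,
// never over the batch axis 0.
const mean = builder.reduceMean(input, { axes: [2, 3], keepDimensions: true });
const centered = builder.sub(input, mean);
const variance = builder.reduceMean(
    builder.pow(centered, builder.constant(2)),
    { axes: [2, 3], keepDimensions: true });
// Normalize; pow(x, 0.5) stands in for a square root.
const normalized = builder.div(
    centered,
    builder.pow(builder.add(variance, builder.constant(1e-5)),
                builder.constant(0.5)));
```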
@@ -3773,7 +3773,7 @@ partial interface MLGraphBuilder {
 </div>
 
 ### layerNormalization ### {#api-mlgraphbuilder-layernorm}
-Normalize the input using [[Layer-Normalization]]. Unlike [[#api-mlgraphbuilder-batchnorm]], where the mean and variance values are computed across all the samples in the batch dimension while the model is trained, and [[#api-mlgraphbuilder-instancenorm]], where the mean and variance values are computed on the fly for each input feature of each individual sample in the batch, the mean and variance values of layer normalization are computed on the fly across all the input features of each individual sample in the batch.
+Normalize the input using [[Layer-Normalization]]. Unlike {{MLGraphBuilder/batchNormalization()}}, where the mean and variance values are computed across all the samples in the batch dimension while the model is trained, and {{MLGraphBuilder/instanceNormalization()}}, where the mean and variance values are computed on the fly for each input feature of each individual sample in the batch, the mean and variance values of layer normalization are computed on the fly across all the input features of each individual sample in the batch.
 
 <script type=idl>
 dictionary MLLayerNormalizationOptions {
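
The same arithmetic as the instance-normalization sketch above illustrates this paragraph as well; only the reduction axes change. For a 4-D "nchw" input (hypothetical `input` operand):

```js
// Layer normalization: statistics across all input features of each sample,
// i.e. every axis except the batch axis 0.
const mean = builder.reduceMean(input, { axes: [1, 2, 3], keepDimensions: true });
// The variance and normalization steps then proceed exactly as in the
// instance-normalization sketch, using the same [1, 2, 3] axes.
```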
@@ -4369,7 +4369,7 @@ partial interface MLGraphBuilder {
 <div class="note">
 <details open>
 <summary>
-The behavior of this operation can be generically emulated via other operations as shown below, when the weight layout is the default {{MLLstmWeightLayout/"iofg"}} layout, and the activation functions of the input/forget/output gate and the cell gate/the cell state's filter for the output hidden state are of the operator types *sigmoid* and *tanh* respectively.
+The behavior of this operation can be generically emulated via other operations as shown below, when the weight layout is the default {{MLLstmWeightLayout/"iofg"}} layout, and the activation functions of the input/forget/output gate and the cell gate/the cell state's filter for the output hidden state are {{MLGraphBuilder/sigmoid()}} and {{MLGraphBuilder/tanh()}} respectively.
 </summary>
 <pre highlight="js">
 const zero = builder.constant(0);
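
A minimal sketch of where the two activations land in the step this summary describes; `i`, `f`, `o`, and `g` are hypothetical gate operands (the first three produced by sigmoid, the last by tanh) and `cPrev` is the previous cell state:

```js
// Cell-state update: forget-gated previous state plus input-gated cell gate.
const cell = builder.add(builder.mul(f, cPrev), builder.mul(i, g));
// tanh filters the cell state before the output gate yields the hidden state.
const hidden = builder.mul(o, builder.tanh(cell));
```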
@@ -5287,7 +5287,7 @@ partial interface MLGraphBuilder {
 <div class="note">
 <details open>
 <summary>
-Many shape-related operations such as [squeeze](https://pytorch.org/docs/stable/generated/torch.squeeze.html), [unsqueeze](https://pytorch.org/docs/stable/generated/torch.unsqueeze.html), and [flatten](https://pytorch.org/docs/stable/generated/torch.flatten.html) can be generically implemented using the *reshape*}} operation as follows:
+Many shape-related operations such as [squeeze](https://pytorch.org/docs/stable/generated/torch.squeeze.html), [unsqueeze](https://pytorch.org/docs/stable/generated/torch.unsqueeze.html), and [flatten](https://pytorch.org/docs/stable/generated/torch.flatten.html) can be generically implemented using the {{MLGraphBuilder/reshape()}} operation as follows:
 </summary>
 <pre highlight="js">
 // Returns a tensor with all specified dimensions of input of size 1 removed.
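
As one concrete instance of the pattern in this note, a minimal sketch of unsqueeze in terms of reshape; `unsqueeze` and `inputShape` (a plain array holding the operand's known dimensions) are hypothetical:

```js
// Insert a size-1 dimension at `axis` by reshaping to an expanded shape.
function unsqueeze(builder, input, inputShape, axis) {
  const newShape = inputShape.slice();
  newShape.splice(axis, 0, 1);
  return builder.reshape(input, newShape);
}
```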
