
Commit dbcd0e5

auto-generating sphinx docs
1 parent 83af444 commit dbcd0e5

737 files changed, +5410 −1196 lines changed


docs/master/__config__.html (+1 −1)

@@ -159,7 +159,7 @@
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+8a83851 &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+be34aa1 &#x25BC</a>
 </div>

docs/master/_modules/index.html (+1 −1)

@@ -158,7 +158,7 @@
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+8a83851 &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+be34aa1 &#x25BC</a>
 </div>

docs/master/_modules/torch.html (+66 −9)

@@ -158,7 +158,7 @@
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+8a83851 &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+be34aa1 &#x25BC</a>
 </div>

@@ -642,10 +642,67 @@ Source code for torch
     _C._set_default_dtype(d)
 
 def set_deterministic(d):
-    r"""Sets a global flag to force all operations to use a deterministic
-    implementation if available. If an operation that does not have a
-    deterministic implementation is called while this setting is True, the
-    operation will throw a RuntimeError.
+    r"""Sets whether native PyTorch operations must use deterministic
+    algorithms. When True, operations without deterministic algorithms
+    will throw a :class:`RuntimeError` when called.
+
+    .. warning::
+        This feature is a beta feature, so it does not affect every
+        nondeterministic operation yet. The following operations are
+        affected by this flag.
+
+    The following normally-nondeterministic operations will act
+    deterministically when `d=True`:
+
+        * :class:`torch.nn.Conv1d` when called on a CUDA tensor
+        * :class:`torch.nn.Conv2d` when called on a CUDA tensor
+        * :class:`torch.nn.Conv3d` when called on a CUDA tensor
+        * :class:`torch.nn.ConvTranspose1d` when called on a CUDA tensor
+        * :class:`torch.nn.ConvTranspose2d` when called on a CUDA tensor
+        * :class:`torch.nn.ConvTranspose3d` when called on a CUDA tensor
+        * :func:`torch.bmm` when called on sparse-dense CUDA tensors
+
+    The following normally-nondeterministic operations will throw a
+    :class:`RuntimeError` when `d=True`:
+
+        * :class:`torch.nn.AvgPool3d` when called on a CUDA tensor that requires grad
+        * :class:`torch.nn.AdaptiveAvgPool2d` when called on a CUDA tensor that requires grad
+        * :class:`torch.nn.AdaptiveAvgPool3d` when called on a CUDA tensor that requires grad
+        * :class:`torch.nn.MaxPool3d` when called on a CUDA tensor that requires grad
+        * :class:`torch.nn.AdaptiveMaxPool2d` when called on a CUDA tensor that requires grad
+        * :class:`torch.nn.FractionalMaxPool2d` when called on a CUDA tensor that requires grad
+        * :class:`torch.nn.FractionalMaxPool3d` when called on a CUDA tensor that requires grad
+        * :func:`torch.nn.functional.interpolate` when called on a CUDA tensor that requires grad
+          and one of the following modes is used:
+            - `linear`
+            - `bilinear`
+            - `bicubic`
+            - `trilinear`
+        * :class:`torch.nn.ReflectionPad1d` when called on a CUDA tensor that requires grad
+        * :class:`torch.nn.ReflectionPad2d` when called on a CUDA tensor that requires grad
+        * :class:`torch.nn.ReplicationPad1d` when called on a CUDA tensor that requires grad
+        * :class:`torch.nn.ReplicationPad2d` when called on a CUDA tensor that requires grad
+        * :class:`torch.nn.ReplicationPad3d` when called on a CUDA tensor that requires grad
+        * :class:`torch.nn.NLLLoss` when called on a CUDA tensor that requires grad
+        * :class:`torch.nn.CTCLoss` when called on a CUDA tensor that requires grad
+        * :class:`torch.nn.EmbeddingBag` when called on a CUDA tensor that requires grad
+        * :func:`torch.scatter_add_` when called on a CUDA tensor
+        * :func:`torch.index_add_` when called on a CUDA tensor
+        * :func:`torch.index_select` when called on a CUDA tensor that requires grad
+        * :func:`torch.repeat_interleave` when called on a CUDA tensor that requires grad
+        * :func:`torch.histc` when called on a CUDA tensor
+        * :func:`torch.bincount` when called on a CUDA tensor
+
+    A handful of CUDA operations are nondeterministic if the CUDA version is
+    10.2 or greater, unless the environment variable `CUBLAS_WORKSPACE_CONFIG=:4096:8`
+    or `CUBLAS_WORKSPACE_CONFIG=:16:8` is set. See the CUDA documentation for more
+    details: `<https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility>`_
+    If one of these environment variable configurations is not set, a :class:`RuntimeError`
+    will be raised from these operations when called with CUDA tensors:
+
+        * :func:`torch.mm`
+        * :func:`torch.mv`
+        * :func:`torch.bmm`
 
     Note that deterministic operations tend to have worse performance than
     non-deterministic operations.

@@ -656,11 +713,11 @@
     """
     _C._set_deterministic(d)
 
-def is_deterministic():
-    r"""Returns True if the global deterministic flag is turned on and
-    operations are being forced to use a deterministic implementation.
+[docs]def is_deterministic():
+    r"""Returns True if the global deterministic flag is turned on. Refer to
+    :func:`torch.set_deterministic` documentation for more details.
     """
-    return _C._get_deterministic()
+    return _C._get_deterministic()
 
 ################################################################################
 # Define Storage and Tensor classes

docs/master/_modules/torch/__config__.html (+1 −1)

@@ -158,7 +158,7 @@
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+8a83851 &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+be34aa1 &#x25BC</a>
 </div>

docs/master/_modules/torch/_jit_internal.html (+1 −1)

@@ -158,7 +158,7 @@
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+8a83851 &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+be34aa1 &#x25BC</a>
 </div>

docs/master/_modules/torch/_lobpcg.html (+1 −1)

@@ -158,7 +158,7 @@
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+8a83851 &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+be34aa1 &#x25BC</a>
 </div>

docs/master/_modules/torch/_lowrank.html (+1 −1)

@@ -158,7 +158,7 @@
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+8a83851 &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+be34aa1 &#x25BC</a>
 </div>

docs/master/_modules/torch/_tensor_str.html (+1 −1)

@@ -158,7 +158,7 @@
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+8a83851 &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+be34aa1 &#x25BC</a>
 </div>

docs/master/_modules/torch/_utils.html (+1 −1)

@@ -158,7 +158,7 @@
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+8a83851 &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+be34aa1 &#x25BC</a>
 </div>

docs/master/_modules/torch/autograd.html (+1 −1)

@@ -158,7 +158,7 @@
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+8a83851 &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+be34aa1 &#x25BC</a>
 </div>

docs/master/_modules/torch/autograd/anomaly_mode.html (+1 −1)

@@ -158,7 +158,7 @@
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+8a83851 &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+be34aa1 &#x25BC</a>
 </div>

docs/master/_modules/torch/autograd/function.html (+1 −1)

@@ -158,7 +158,7 @@
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+8a83851 &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+be34aa1 &#x25BC</a>
 </div>

docs/master/_modules/torch/autograd/functional.html (+1 −1)

@@ -158,7 +158,7 @@
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+8a83851 &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+be34aa1 &#x25BC</a>
 </div>
