
Commit c9da806 (parent: 70508a4)

Generate Python docs from pytorch/pytorch@af95408

File tree: 1,735 files changed (+2067 -1911 lines)


docs/master/__config__.html (+1 -1)

@@ -194,7 +194,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


docs/master/_images/RReLU.png (98 Bytes)

docs/master/_modules/index.html (+1 -1)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


docs/master/_modules/torch.html (+42 -39)

Large diffs are not rendered by default.

docs/master/_modules/torch/__config__.html (+1 -1)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


docs/master/_modules/torch/_jit_internal.html (+1 -1)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


docs/master/_modules/torch/_lobpcg.html (+1 -1)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


docs/master/_modules/torch/_lowrank.html (+1 -1)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


docs/master/_modules/torch/_tensor.html (+9 -9)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


@@ -640,7 +640,7 @@ <h1>Source code for torch._tensor</h1><div class="highlight"><pre>
 <span class="c1"># All strings are unicode in Python 3.</span>
 <span class="k">return</span> <span class="n">torch</span><span class="o">.</span><span class="n">_tensor_str</span><span class="o">.</span><span class="n">_str</span><span class="p">(</span><span class="bp">self</span><span class="p">)</span>

-<span class="k">def</span> <span class="nf">backward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">gradient</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">retain_graph</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">create_graph</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">inputs</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
+<div class="viewcode-block" id="Tensor.backward"><a class="viewcode-back" href="../../generated/torch.Tensor.backward.html#torch.Tensor.backward">[docs]</a> <span class="k">def</span> <span class="nf">backward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">gradient</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">retain_graph</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">create_graph</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">inputs</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
 <span class="sa">r</span><span class="sd">&quot;&quot;&quot;Computes the gradient of current tensor w.r.t. graph leaves.</span>

 <span class="sd"> The graph is differentiated using the chain rule. If the tensor is</span>
@@ -696,7 +696,7 @@ <h1>Source code for torch._tensor</h1><div class="highlight"><pre>
 <span class="n">retain_graph</span><span class="o">=</span><span class="n">retain_graph</span><span class="p">,</span>
 <span class="n">create_graph</span><span class="o">=</span><span class="n">create_graph</span><span class="p">,</span>
 <span class="n">inputs</span><span class="o">=</span><span class="n">inputs</span><span class="p">)</span>
-<span class="n">torch</span><span class="o">.</span><span class="n">autograd</span><span class="o">.</span><span class="n">backward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">gradient</span><span class="p">,</span> <span class="n">retain_graph</span><span class="p">,</span> <span class="n">create_graph</span><span class="p">,</span> <span class="n">inputs</span><span class="o">=</span><span class="n">inputs</span><span class="p">)</span>
+<span class="n">torch</span><span class="o">.</span><span class="n">autograd</span><span class="o">.</span><span class="n">backward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">gradient</span><span class="p">,</span> <span class="n">retain_graph</span><span class="p">,</span> <span class="n">create_graph</span><span class="p">,</span> <span class="n">inputs</span><span class="o">=</span><span class="n">inputs</span><span class="p">)</span></div>

 <span class="k">def</span> <span class="nf">register_hook</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">hook</span><span class="p">):</span>
 <span class="sa">r</span><span class="sd">&quot;&quot;&quot;Registers a backward hook.</span>
@@ -807,7 +807,7 @@ <h1>Source code for torch._tensor</h1><div class="highlight"><pre>
 <span class="k">return</span> <span class="n">handle_torch_function</span><span class="p">(</span><span class="n">Tensor</span><span class="o">.</span><span class="n">is_shared</span><span class="p">,</span> <span class="p">(</span><span class="bp">self</span><span class="p">,),</span> <span class="bp">self</span><span class="p">)</span>
 <span class="k">return</span> <span class="bp">self</span><span class="o">.</span><span class="n">storage</span><span class="p">()</span><span class="o">.</span><span class="n">is_shared</span><span class="p">()</span></div>

-<div class="viewcode-block" id="Tensor.share_memory_"><a class="viewcode-back" href="../../generated/torch.Tensor.share_memory_.html#torch.Tensor.share_memory_">[docs]</a> <span class="k">def</span> <span class="nf">share_memory_</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
+<span class="k">def</span> <span class="nf">share_memory_</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
 <span class="sa">r</span><span class="sd">&quot;&quot;&quot;Moves the underlying storage to shared memory.</span>

 <span class="sd"> This is a no-op if the underlying storage is already in shared memory</span>
@@ -816,7 +816,7 @@ <h1>Source code for torch._tensor</h1><div class="highlight"><pre>
 <span class="k">if</span> <span class="n">has_torch_function_unary</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
 <span class="k">return</span> <span class="n">handle_torch_function</span><span class="p">(</span><span class="n">Tensor</span><span class="o">.</span><span class="n">share_memory_</span><span class="p">,</span> <span class="p">(</span><span class="bp">self</span><span class="p">,),</span> <span class="bp">self</span><span class="p">)</span>
 <span class="bp">self</span><span class="o">.</span><span class="n">storage</span><span class="p">()</span><span class="o">.</span><span class="n">share_memory_</span><span class="p">()</span>
-<span class="k">return</span> <span class="bp">self</span></div>
+<span class="k">return</span> <span class="bp">self</span>

 <span class="k">def</span> <span class="fm">__reversed__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
 <span class="sa">r</span><span class="sd">&quot;&quot;&quot;Reverses the tensor along dimension 0.&quot;&quot;&quot;</span>
@@ -845,7 +845,7 @@ <h1>Source code for torch._tensor</h1><div class="highlight"><pre>
 <span class="k">else</span><span class="p">:</span>
 <span class="k">return</span> <span class="n">LU</span><span class="p">,</span> <span class="n">pivots</span>

-<div class="viewcode-block" id="Tensor.stft"><a class="viewcode-back" href="../../generated/torch.Tensor.stft.html#torch.Tensor.stft">[docs]</a> <span class="k">def</span> <span class="nf">stft</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">n_fft</span><span class="p">:</span> <span class="nb">int</span><span class="p">,</span> <span class="n">hop_length</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">int</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
+<span class="k">def</span> <span class="nf">stft</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">n_fft</span><span class="p">:</span> <span class="nb">int</span><span class="p">,</span> <span class="n">hop_length</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">int</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
 <span class="n">win_length</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">int</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span> <span class="n">window</span><span class="p">:</span> <span class="s1">&#39;Optional[Tensor]&#39;</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
 <span class="n">center</span><span class="p">:</span> <span class="nb">bool</span> <span class="o">=</span> <span class="kc">True</span><span class="p">,</span> <span class="n">pad_mode</span><span class="p">:</span> <span class="nb">str</span> <span class="o">=</span> <span class="s1">&#39;reflect&#39;</span><span class="p">,</span> <span class="n">normalized</span><span class="p">:</span> <span class="nb">bool</span> <span class="o">=</span> <span class="kc">False</span><span class="p">,</span>
 <span class="n">onesided</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">bool</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span> <span class="n">return_complex</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">bool</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">):</span>
@@ -862,7 +862,7 @@ <h1>Source code for torch._tensor</h1><div class="highlight"><pre>
 <span class="n">onesided</span><span class="o">=</span><span class="n">onesided</span><span class="p">,</span> <span class="n">return_complex</span><span class="o">=</span><span class="n">return_complex</span>
 <span class="p">)</span>
 <span class="k">return</span> <span class="n">torch</span><span class="o">.</span><span class="n">stft</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">n_fft</span><span class="p">,</span> <span class="n">hop_length</span><span class="p">,</span> <span class="n">win_length</span><span class="p">,</span> <span class="n">window</span><span class="p">,</span> <span class="n">center</span><span class="p">,</span>
-<span class="n">pad_mode</span><span class="p">,</span> <span class="n">normalized</span><span class="p">,</span> <span class="n">onesided</span><span class="p">,</span> <span class="n">return_complex</span><span class="o">=</span><span class="n">return_complex</span><span class="p">)</span></div>
+<span class="n">pad_mode</span><span class="p">,</span> <span class="n">normalized</span><span class="p">,</span> <span class="n">onesided</span><span class="p">,</span> <span class="n">return_complex</span><span class="o">=</span><span class="n">return_complex</span><span class="p">)</span>

 <div class="viewcode-block" id="Tensor.istft"><a class="viewcode-back" href="../../generated/torch.Tensor.istft.html#torch.Tensor.istft">[docs]</a> <span class="k">def</span> <span class="nf">istft</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">n_fft</span><span class="p">:</span> <span class="nb">int</span><span class="p">,</span> <span class="n">hop_length</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">int</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
 <span class="n">win_length</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">int</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span> <span class="n">window</span><span class="p">:</span> <span class="s1">&#39;Optional[Tensor]&#39;</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
@@ -893,7 +893,7 @@ <h1>Source code for torch._tensor</h1><div class="highlight"><pre>
 <span class="kn">from</span> <span class="nn">torch.autograd._functions</span> <span class="kn">import</span> <span class="n">Resize</span>
 <span class="k">return</span> <span class="n">Resize</span><span class="o">.</span><span class="n">apply</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">tensor</span><span class="o">.</span><span class="n">size</span><span class="p">())</span>

-<div class="viewcode-block" id="Tensor.split"><a class="viewcode-back" href="../../generated/torch.Tensor.split.html#torch.Tensor.split">[docs]</a> <span class="k">def</span> <span class="nf">split</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">split_size</span><span class="p">,</span> <span class="n">dim</span><span class="o">=</span><span class="mi">0</span><span class="p">):</span>
+<span class="k">def</span> <span class="nf">split</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">split_size</span><span class="p">,</span> <span class="n">dim</span><span class="o">=</span><span class="mi">0</span><span class="p">):</span>
 <span class="sa">r</span><span class="sd">&quot;&quot;&quot;See :func:`torch.split`</span>
 <span class="sd"> &quot;&quot;&quot;</span>
 <span class="k">if</span> <span class="n">has_torch_function_unary</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
@@ -907,7 +907,7 @@ <h1>Source code for torch._tensor</h1><div class="highlight"><pre>
 <span class="k">except</span> <span class="ne">ValueError</span><span class="p">:</span>
 <span class="k">return</span> <span class="nb">super</span><span class="p">(</span><span class="n">Tensor</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="n">split_with_sizes</span><span class="p">(</span><span class="n">split_size</span><span class="p">,</span> <span class="n">dim</span><span class="p">)</span>
 <span class="k">else</span><span class="p">:</span>
-<span class="k">return</span> <span class="nb">super</span><span class="p">(</span><span class="n">Tensor</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="n">split_with_sizes</span><span class="p">(</span><span class="n">split_size</span><span class="p">,</span> <span class="n">dim</span><span class="p">)</span></div>
+<span class="k">return</span> <span class="nb">super</span><span class="p">(</span><span class="n">Tensor</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="n">split_with_sizes</span><span class="p">(</span><span class="n">split_size</span><span class="p">,</span> <span class="n">dim</span><span class="p">)</span>

 <span class="k">def</span> <span class="nf">unique</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="nb">sorted</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">return_inverse</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">return_counts</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">dim</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
 <span class="sa">r</span><span class="sd">&quot;&quot;&quot;Returns the unique elements of the input tensor.</span>

docs/master/_modules/torch/_tensor_str.html (+1 -1)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


docs/master/_modules/torch/_utils.html (+1 -1)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


docs/master/_modules/torch/_vmap_internals.html (+1 -1)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


docs/master/_modules/torch/ao/quantization/fake_quantize.html (+1 -1)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


docs/master/_modules/torch/ao/quantization/fuse_modules.html (+1 -1)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


docs/master/_modules/torch/ao/quantization/observer.html (+1 -1)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


docs/master/_modules/torch/ao/quantization/qconfig.html (+1 -1)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


docs/master/_modules/torch/ao/quantization/quantize.html (+1 -1)

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+git42cf661 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.11.0a0+gitaf95408 ) &#x25BC</a>
 </div>


0 commit comments