Commit 19901e7

update docs

1 parent 390742b

2 files changed: +4 −4 lines changed

docs/source/en/api/utilities.md

Lines changed: 4 additions & 0 deletions

```diff
@@ -41,3 +41,7 @@ Utility and helper functions for working with 🤗 Diffusers.
 ## randn_tensor
 
 [[autodoc]] utils.torch_utils.randn_tensor
+
+## apply_layerwise_upcasting
+
+[[autodoc]] hooks.layerwise_upcasting.apply_layerwise_upcasting
```
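The newly documented `apply_layerwise_upcasting` hook stores a layer's weights in a cheap low-precision dtype and upcasts them only for the duration of that layer's forward pass. A minimal conceptual sketch of that idea, using NumPy dtypes in place of torch ones (the `Linear` class here is a hypothetical toy layer, not the Diffusers implementation):

```python
# Conceptual sketch of layerwise upcasting (NOT the Diffusers implementation):
# weights persist in a low-precision storage dtype to save memory, and each
# layer upcasts them to the compute dtype just in time for its forward pass.
import numpy as np

class Linear:
    def __init__(self, weight, storage_dtype=np.float16, compute_dtype=np.float32):
        # The persistent copy lives in the cheap storage dtype.
        self.weight = np.asarray(weight, dtype=storage_dtype)
        self.compute_dtype = compute_dtype

    def forward(self, x):
        # Upcast just-in-time for the matmul; the fp32 copy is temporary.
        w = self.weight.astype(self.compute_dtype)
        return np.asarray(x, dtype=self.compute_dtype) @ w

layer = Linear([[1.0, 2.0], [3.0, 4.0]])
out = layer.forward([[1.0, 0.0]])
print(layer.weight.dtype, out.dtype)  # float16 float32
```

The memory win comes from the persistent `float16` storage; only one layer's weights exist in `float32` at a time.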

docs/source/en/optimization/memory.md

Lines changed: 0 additions & 4 deletions

```diff
@@ -189,10 +189,6 @@ In the above example, layerwise upcasting is enabled on the transformer componen
 
 However, you gain more control and flexibility by directly utilizing the [`~hooks.layerwise_upcasting.apply_layerwise_upcasting`] function instead of [`~ModelMixin.enable_layerwise_upcasting`].
 
-[[autodoc]] ModelMixin.enable_layerwise_upcasting
-
-[[autodoc]] hooks.layerwise_upcasting.apply_layerwise_upcasting
-
 ## Channels-last memory format
 
 The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worst performance but you should still try and see if it works for your model.
```
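The channels-last paragraph retained in this hunk can be illustrated at the memory level with NumPy strides standing in for PyTorch's `torch.channels_last` memory format (this sketch shows the layout only, not the PyTorch API):

```python
# "Channels last" keeps the logical NCHW shape but stores data NHWC, so the
# channel axis gets the smallest stride (pixels are stored channel-interleaved).
import numpy as np

nchw = np.zeros((1, 3, 4, 4), dtype=np.float32)  # contiguous NCHW
# In plain NCHW, width is the densest axis: strides shrink left to right.
print(nchw.strides)  # (192, 64, 16, 4)

# Rebuild the same data with NHWC storage, then view it back as NCHW.
nhwc_storage = np.ascontiguousarray(nchw.transpose(0, 2, 3, 1))
channels_last_view = nhwc_storage.transpose(0, 3, 1, 2)
print(channels_last_view.shape)    # still (1, 3, 4, 4)
print(channels_last_view.strides)  # (192, 4, 48, 12) -> channels densest
```

In PyTorch the equivalent conversion is `x.to(memory_format=torch.channels_last)`, which likewise changes only the strides, not the tensor's logical shape.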
