
Commit 7764a37: Sparsity api_ref
1 parent 32d9b0b

File tree: 2 files changed (+6, -5 lines)


docs/source/api_ref_sparsity.rst (+3, -3)

--- a/docs/source/api_ref_sparsity.rst
+++ b/docs/source/api_ref_sparsity.rst
@@ -12,7 +12,7 @@ torchao.sparsity
 
     WandaSparsifier
     PerChannelNormObserver
-    apply_sparse_semi_structured
     apply_fake_sparsity
-
-
+    sparsify_
+    semi_sparse_weight
+    int8_dynamic_activation_int8_semi_sparse_weight
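For context, these entries are items of a Sphinx autosummary directive in the API reference page. A sketch of what the updated block plausibly looks like (the directive options and indentation are assumptions, not shown in the diff):

```rst
.. currentmodule:: torchao.sparsity

.. autosummary::
    :toctree: generated/

    WandaSparsifier
    PerChannelNormObserver
    apply_fake_sparsity
    sparsify_
    semi_sparse_weight
    int8_dynamic_activation_int8_semi_sparse_weight
```

Each listed name gets its own generated stub page, which is why the commit only has to edit this list to publish the new `sparsify_` API.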

torchao/sparsity/sparse_api.py (+3, -2)

--- a/torchao/sparsity/sparse_api.py
+++ b/torchao/sparsity/sparse_api.py
@@ -43,7 +43,7 @@ def sparsify_(
     apply_tensor_subclass: Callable[[torch.Tensor], torch.Tensor],
     filter_fn: Optional[Callable[[torch.nn.Module, str], bool]] = None,
 ) -> torch.nn.Module:
-    """Convert the weight of linear modules in the model with `apply_tensor_subclass`
+    """Convert the weight of linear modules in the model with `apply_tensor_subclass`.
     This function is essentially the same as quantize, put for sparsity subclasses.
 
     Currently, we support three options for sparsity:
@@ -57,7 +57,8 @@ def sparsify_(
         filter_fn (Optional[Callable[[torch.nn.Module, str], bool]]): function that takes a nn.Module instance and fully qualified name of the module, returns True if we want to run `apply_tensor_subclass` on
             the weight of the module
 
-    Example::
+    Example:
+    .. code-block:: python
         import torch
         import torch.nn as nn
         from torchao.sparsity import sparsify_
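The `sparsify_` call pattern documented above — walk the model's modules, and wherever `filter_fn` accepts a module, replace its weight via `apply_tensor_subclass` — can be sketched without installing torchao. Everything below is a toy stand-in (minimal `Module`/`Linear` classes and a fake sparsifier), not the real torchao implementation:

```python
from typing import Callable, Optional

# Toy stand-ins for torch.nn.Module / torch.nn.Linear -- illustration only.
class Module:
    def __init__(self, **children):
        self.children = dict(children)

    def named_modules(self, prefix=""):
        # Yield (fully-qualified-name, module) pairs, depth-first.
        for name, child in self.children.items():
            fqn = f"{prefix}.{name}" if prefix else name
            yield fqn, child
            yield from child.named_modules(fqn)

class Linear(Module):
    def __init__(self, weight):
        super().__init__()
        self.weight = weight

def sparsify_(model, apply_tensor_subclass: Callable,
              filter_fn: Optional[Callable] = None):
    # Default filter: select Linear modules, mirroring the docstring's
    # "linear modules in the model" behaviour.
    if filter_fn is None:
        filter_fn = lambda mod, fqn: isinstance(mod, Linear)
    for fqn, mod in model.named_modules():
        if filter_fn(mod, fqn):
            # In torchao this would swap in a sparse tensor subclass in place.
            mod.weight = apply_tensor_subclass(mod.weight)
    return model

def fake_semi_sparse(weight):
    # Zero every other element: a crude stand-in for semi-structured sparsity.
    return [w if i % 2 == 0 else 0 for i, w in enumerate(weight)]

model = Module(lin1=Linear([1, 2, 3, 4]), lin2=Linear([5, 6, 7, 8]))
sparsify_(model, fake_semi_sparse)
print(model.children["lin1"].weight)  # → [1, 0, 3, 0]
```

In the real API the second argument would be one of the new entries from this commit, e.g. `sparsify_(model, semi_sparse_weight())`, and `filter_fn` lets you restrict sparsification to specific layers by name.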
