"""Convert the weight of linear modules in the model with `apply_tensor_subclass`
46
+
"""Convert the weight of linear modules in the model with `apply_tensor_subclass`.
47
47
This function is essentially the same as quantize, put for sparsity subclasses.
48
48
49
49
Currently, we support three options for sparsity:
@@ -54,26 +54,26 @@ def sparsify_(
     Args:
         model (torch.nn.Module): input model
         apply_tensor_subclass (Callable[[torch.Tensor], torch.Tensor]): function that converts a floating point Tensor to a (sparsified) tensor subclass instance (e.g. an affine quantized tensor instance)
-        filter_fn (Optional[Callable[[torch.nn.Module, str], bool]]): function that takes an nn.Module instance and the fully qualified name of the module, and returns True if we want to run `apply_tensor_subclass` on
-            the weight of the module
+        filter_fn (Optional[Callable[[torch.nn.Module, str], bool]]): function that takes an nn.Module instance and the fully qualified name of the module, and returns True if we want to run `apply_tensor_subclass` on the weight of the module
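For context, a minimal usage sketch of the API this docstring describes. The `sparsify_(model, apply_tensor_subclass, filter_fn)` signature comes from the Args section above; the `torchao.sparsity` import path and the `semi_sparse_weight()` constructor for the 2:4 sparsity option are assumptions, not part of this diff.

# Sketch only: the `sparsify_` signature matches the docstring above; the import
# path and `semi_sparse_weight()` (assumed helper for 2:4 semi-structured
# sparsity) may differ from the actual repo.
import torch
from torch import nn
from torchao.sparsity import sparsify_, semi_sparse_weight  # assumed import path

# 2:4 semi-structured kernels typically expect fp16/bf16 weights on CUDA.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)).half().cuda()

# Run `apply_tensor_subclass` only on the weights of nn.Linear modules.
def filter_fn(module: nn.Module, fqn: str) -> bool:
    return isinstance(module, nn.Linear)

# Replace each matching weight with a sparsified tensor subclass instance.
sparsify_(model, semi_sparse_weight(), filter_fn)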