prototype_source/openvino_quantizer.rst

Introduction
--------------

.. note::

   This is an experimental feature, the quantization API is subject to change.

This tutorial demonstrates how to use `OpenVINOQuantizer` from the `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf/tree/develop>`_ in the PyTorch 2 Export Quantization flow to generate a quantized model customized for the `OpenVINO torch.compile backend <https://docs.openvino.ai/2024/openvino-workflow/torch-compile.html>`_, and explains how to lower the quantized model into the `OpenVINO <https://docs.openvino.ai/2024/index.html>`_ representation.
`OpenVINOQuantizer` unlocks the full potential of low-precision OpenVINO kernels thanks to quantizer placement designed specifically for OpenVINO.

The PyTorch 2 export quantization flow uses ``torch.export`` to capture the model into a graph and performs quantization transformations on top of the ATen graph.
This approach is expected to have significantly higher model coverage, improved flexibility, and a simplified UX.
The OpenVINO backend compiles the FX Graph generated by TorchDynamo into an optimized OpenVINO model.

The quantization flow mainly includes four steps:
Below is the list of essential parameters and their description:

* ``model_type`` - used to specify the quantization scheme required for a specific type of model. ``Transformer`` is the only supported special quantization scheme, which preserves accuracy after quantization of Transformer models (BERT, Llama, etc.). ``None`` is the default, i.e. no specific scheme is defined.
.. code-block:: python
   # e.g. apply the Transformer-specific scheme
   quantizer = OpenVINOQuantizer(model_type=nncf.ModelType.TRANSFORMER)

For further details on `OpenVINOQuantizer` please see the `documentation <https://openvinotoolkit.github.io/nncf/autoapi/nncf/experimental/torch/fx/index.html#nncf.experimental.torch.fx.OpenVINOQuantizer>`_.
After we import the backend-specific Quantizer, we will prepare the model for post-training quantization.
``prepare_pt2e`` folds BatchNorm operators into preceding Conv2d operators, and inserts observers in appropriate places in the model.
This should significantly speed up inference time in comparison with the eager mode.

4. Optional: Improve quantized model metrics
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

NNCF implements advanced quantization algorithms like `SmoothQuant <https://arxiv.org/abs/2211.10438>`_ and `BiasCorrection <https://arxiv.org/abs/1906.04721>`_, which help
to improve the quantized model metrics while minimizing the output discrepancies between the original and compressed models.
These advanced NNCF algorithms can be accessed via the NNCF `quantize_pt2e` API: