DOCS Update optimization docs with NNCF PTQ changes and deprecation of POT #17398

Merged May 19, 2023 (99 commits)

Commits
- bdecc94 Update model_optimization_guide.md (MaximProshin, May 6, 2023)
- 16c2815 Update model_optimization_guide.md (MaximProshin, May 6, 2023)
- f22f6c7 Update model_optimization_guide.md (MaximProshin, May 6, 2023)
- f516b36 Update model_optimization_guide.md (MaximProshin, May 6, 2023)
- c47cd01 Update model_optimization_guide.md (MaximProshin, May 6, 2023)
- cb4dd95 Update model_optimization_guide.md (MaximProshin, May 6, 2023)
- a7ced37 Update model_optimization_guide.md (MaximProshin, May 6, 2023)
- d993aa1 Update home.rst (MaximProshin, May 6, 2023)
- 23388ee Update ptq_introduction.md (MaximProshin, May 6, 2023)
- 40ecdb2 Update Introduction.md (MaximProshin, May 6, 2023)
- cf8fb1f Update Introduction.md (MaximProshin, May 6, 2023)
- 01cf75e Update Introduction.md (MaximProshin, May 6, 2023)
- 4f94f65 Update ptq_introduction.md (MaximProshin, May 6, 2023)
- 1fd7027 Update ptq_introduction.md (MaximProshin, May 6, 2023)
- 50c4d0b Update basic_quantization_flow.md (MaximProshin, May 6, 2023)
- 7aaac92 Update basic_quantization_flow.md (MaximProshin, May 6, 2023)
- 4a7e9ea Update basic_quantization_flow.md (MaximProshin, May 7, 2023)
- 4c396d9 Update quantization_w_accuracy_control.md (MaximProshin, May 7, 2023)
- a11b566 Update quantization_w_accuracy_control.md (MaximProshin, May 7, 2023)
- f50bff7 Update quantization_w_accuracy_control.md (MaximProshin, May 7, 2023)
- 2886ed4 Update quantization_w_accuracy_control.md (MaximProshin, May 7, 2023)
- 3956dd0 Update quantization_w_accuracy_control.md (MaximProshin, May 7, 2023)
- 180e0f5 Update quantization_w_accuracy_control.md (MaximProshin, May 7, 2023)
- c9dee08 Update quantization_w_accuracy_control.md (MaximProshin, May 7, 2023)
- d19b2dd Update quantization_w_accuracy_control.md (MaximProshin, May 7, 2023)
- ab5cc02 Update quantization_w_accuracy_control.md (MaximProshin, May 7, 2023)
- 227f294 Update basic_quantization_flow.md (MaximProshin, May 7, 2023)
- a709d4e Update basic_quantization_flow.md (MaximProshin, May 7, 2023)
- 83ba861 Update quantization_w_accuracy_control.md (MaximProshin, May 7, 2023)
- 178ce95 Update basic_quantization_flow.md (MaximProshin, May 7, 2023)
- 46d8a6d Update basic_quantization_flow.md (MaximProshin, May 7, 2023)
- 65d096a Update model_optimization_guide.md (MaximProshin, May 8, 2023)
- 94410aa Update ptq_introduction.md (MaximProshin, May 8, 2023)
- a3e2d93 Update quantization_w_accuracy_control.md (MaximProshin, May 8, 2023)
- 73e5415 Update model_optimization_guide.md (MaximProshin, May 8, 2023)
- 31e3260 Update quantization_w_accuracy_control.md (MaximProshin, May 8, 2023)
- 1bdc5d5 Update model_optimization_guide.md (MaximProshin, May 8, 2023)
- 3438758 Update quantization_w_accuracy_control.md (MaximProshin, May 8, 2023)
- 426a3f7 Update model_optimization_guide.md (MaximProshin, May 8, 2023)
- 80bb362 Update Introduction.md (MaximProshin, May 8, 2023)
- e053339 Update basic_quantization_flow.md (MaximProshin, May 8, 2023)
- c0263a3 Update basic_quantization_flow.md (MaximProshin, May 8, 2023)
- f44e9fa Update quantization_w_accuracy_control.md (MaximProshin, May 8, 2023)
- d4ee04d Update ptq_introduction.md (MaximProshin, May 9, 2023)
- 15975ba Update Introduction.md (MaximProshin, May 9, 2023)
- a8e23c5 Update model_optimization_guide.md (MaximProshin, May 11, 2023)
- adc2fb6 Update basic_quantization_flow.md (MaximProshin, May 11, 2023)
- 4931ac2 Update quantization_w_accuracy_control.md (MaximProshin, May 11, 2023)
- 20ad788 Update quantization_w_accuracy_control.md (MaximProshin, May 11, 2023)
- 8e4a1da Update quantization_w_accuracy_control.md (MaximProshin, May 11, 2023)
- 75c9cff Update Introduction.md (MaximProshin, May 11, 2023)
- 3484315 Update FrequentlyAskedQuestions.md (MaximProshin, May 11, 2023)
- abfff0b Update model_optimization_guide.md (MaximProshin, May 11, 2023)
- 3d7e028 Update Introduction.md (MaximProshin, May 11, 2023)
- d4a330b Update model_optimization_guide.md (MaximProshin, May 11, 2023)
- 86ee0cb Update model_optimization_guide.md (MaximProshin, May 11, 2023)
- 8811267 Update model_optimization_guide.md (MaximProshin, May 12, 2023)
- 365dba9 Update model_optimization_guide.md (MaximProshin, May 12, 2023)
- c2bd494 Update model_optimization_guide.md (MaximProshin, May 12, 2023)
- 992532e Update ptq_introduction.md (MaximProshin, May 12, 2023)
- aeede28 Update ptq_introduction.md (MaximProshin, May 12, 2023)
- 9510506 added code snippet (#1) (alexsu52, May 15, 2023)
- f86efa5 Update basic_quantization_flow.md (MaximProshin, May 15, 2023)
- c6f0627 Update basic_quantization_flow.md (MaximProshin, May 15, 2023)
- cf3eb93 Update quantization_w_accuracy_control.md (MaximProshin, May 15, 2023)
- f1eb2cc Update basic_quantization_flow.md (MaximProshin, May 15, 2023)
- db29bff Update basic_quantization_flow.md (MaximProshin, May 15, 2023)
- 2984914 Update ptq_introduction.md (MaximProshin, May 15, 2023)
- d633886 Update model_optimization_guide.md (MaximProshin, May 15, 2023)
- d9a29f2 Update basic_quantization_flow.md (MaximProshin, May 15, 2023)
- 82b0b7b Update ptq_introduction.md (MaximProshin, May 15, 2023)
- 0602ebc Update quantization_w_accuracy_control.md (MaximProshin, May 15, 2023)
- fa73991 Update basic_quantization_flow.md (MaximProshin, May 15, 2023)
- 0f1d08d Update basic_quantization_flow.md (MaximProshin, May 15, 2023)
- a0b31eb Update basic_quantization_flow.md (MaximProshin, May 16, 2023)
- e7041d6 Update ptq_introduction.md (MaximProshin, May 16, 2023)
- 2fdff8d Update ptq_introduction.md (MaximProshin, May 16, 2023)
- 822db41 Delete ptq_introduction.md (MaximProshin, May 16, 2023)
- 276698d Update FrequentlyAskedQuestions.md (MaximProshin, May 16, 2023)
- 1254af1 Update Introduction.md (MaximProshin, May 16, 2023)
- b27a6aa Update quantization_w_accuracy_control.md (MaximProshin, May 16, 2023)
- 66c13a3 Update introduction.md (MaximProshin, May 16, 2023)
- 0c97070 Update basic_quantization_flow.md code blocks (tsavina, May 17, 2023)
- ca1bc8c Update quantization_w_accuracy_control.md code snippets (tsavina, May 17, 2023)
- 005a759 Update docs/optimization_guide/nncf/ptq/code/ptq_torch.py (MaximProshin, May 18, 2023)
- 6ed85ab Update model_optimization_guide.md (MaximProshin, May 19, 2023)
- e093d73 Optimization docs proofreading (#2) (tsavina, May 19, 2023)
- 68ac01a Update basic_quantization_flow.md (MaximProshin, May 19, 2023)
- 578e148 Update quantization_w_accuracy_control.md (MaximProshin, May 19, 2023)
- 6462797 Update images (#3) (tsavina, May 19, 2023)
- 22d2a98 Update model_optimization_guide.md (MaximProshin, May 19, 2023)
- 6da75da Update docs/optimization_guide/nncf/ptq/code/ptq_tensorflow.py (MaximProshin, May 19, 2023)
- ade83fd Update docs/optimization_guide/nncf/ptq/code/ptq_torch.py (MaximProshin, May 19, 2023)
- 50cad15 Update docs/optimization_guide/nncf/ptq/code/ptq_onnx.py (MaximProshin, May 19, 2023)
- 0c2a803 Update docs/optimization_guide/nncf/ptq/code/ptq_aa_openvino.py (MaximProshin, May 19, 2023)
- ad5a3f7 Update docs/optimization_guide/nncf/ptq/code/ptq_openvino.py (MaximProshin, May 19, 2023)
- ffcf095 table format fix (tsavina, May 19, 2023)
- d7c9170 Update headers (tsavina, May 19, 2023)
- 068c2a6 Update qat.md code blocks (tsavina, May 19, 2023)
4 changes: 2 additions & 2 deletions docs/_static/images/DEVELOPMENT_FLOW_V3_crunch.svg
4 changes: 2 additions & 2 deletions docs/_static/images/WHAT_TO_USE.svg
4 changes: 2 additions & 2 deletions docs/_static/images/workflow_simple.svg
2 changes: 1 addition & 1 deletion docs/home.rst
@@ -69,7 +69,7 @@ You can integrate and offload to accelerators additional operations for pre- and
Model Quantization and Compression
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Boost your model’s speed even further with quantization and other state-of-the-art compression techniques available in OpenVINO’s Post-Training Optimization Tool and Neural Network Compression Framework. These techniques also reduce your model size and memory requirements, allowing it to be deployed on resource-constrained edge hardware.
Boost your model’s speed even further with quantization and other state-of-the-art compression techniques available in OpenVINO’s Neural Network Compression Framework. These techniques also reduce your model size and memory requirements, allowing it to be deployed on resource-constrained edge hardware.

.. panels::
:card: homepage-panels
28 changes: 9 additions & 19 deletions docs/optimization_guide/model_optimization_guide.md
@@ -8,40 +8,30 @@

ptq_introduction
tmo_introduction
(Experimental) Protecting Model <pot_ranger_README>


Model optimization is an optional offline step of improving final model performance by applying special optimization methods, such as quantization, pruning, preprocessing optimization, etc. OpenVINO provides several tools to optimize models at different steps of model development:
Model optimization is an optional offline step of improving the final model performance and reducing the model size by applying special optimization methods, such as 8-bit quantization, pruning, etc. OpenVINO offers two optimization paths implemented in `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf>`__:

- :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` implements most of the optimization parameters to a model by default. Yet, you are free to configure mean/scale values, batch size, RGB vs BGR input channels, and other parameters to speed up preprocess of a model (:doc:`Embedding Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`).
- :doc:`Post-training Quantization <ptq_introduction>` is designed to optimize the inference of deep learning models by applying the post-training 8-bit integer quantization that does not require model retraining or fine-tuning.

- :doc:`Post-training Quantization <pot_introduction>` is designed to optimize inference of deep learning models by applying post-training methods that do not require model retraining or fine-tuning, for example, post-training 8-bit integer quantization.
- :doc:`Training-time Optimization <tmo_introduction>`, a suite of advanced methods for training-time model optimization within the DL framework, such as PyTorch and TensorFlow 2.x. It supports methods like Quantization-aware Training, Structured and Unstructured Pruning, etc.

- :doc:`Training-time Optimization <nncf_ptq_introduction>`, a suite of advanced methods for training-time model optimization within the DL framework, such as PyTorch and TensorFlow 2.x. It supports methods, like Quantization-aware Training and Filter Pruning. NNCF-optimized models can be inferred with OpenVINO using all the available workflows.
.. note:: OpenVINO also supports optimized models (for example, quantized) from source frameworks such as PyTorch, TensorFlow, and ONNX (in Q/DQ format). No special steps are required in this case and optimized models can be converted to the OpenVINO Intermediate Representation format (IR) right away.

Post-training Quantization is the fastest way to optimize a model and should be applied first, but it is limited in terms of achievable accuracy-performance trade-off. In case of poor accuracy or performance after Post-training Quantization, Training-time Optimization can be used as an option.
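For illustration, below is a minimal sketch of the basic NNCF post-training quantization flow on an OpenVINO model, as described above; the model path, the dummy calibration data, and the identity `transform_fn` are placeholders rather than part of this PR.

```python
import numpy as np
import nncf
import openvino.runtime as ov

# Read the original FP32 model (placeholder path).
model = ov.Core().read_model("model.xml")

# Dummy calibration data for illustration; in practice, use ~300
# representative samples from the validation pipeline.
data_source = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(300)]

def transform_fn(data_item):
    # Convert a dataset item into model input; identity here because the
    # dummy samples already match the model input shape.
    return data_item

calibration_dataset = nncf.Dataset(data_source, transform_fn)
quantized_model = nncf.quantize(model, calibration_dataset)

ov.serialize(quantized_model, "quantized_model.xml")
```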

Detailed workflow:
##################

To understand which development optimization tool you need, refer to the diagram:
Once the model is optimized using the aforementioned methods, it can be used for inference using the regular OpenVINO inference workflow. No changes to the inference code are required.

.. image:: _static/images/DEVELOPMENT_FLOW_V3_crunch.svg

Post-training methods are limited in terms of achievable accuracy-performance trade-off for optimizing models. In this case, training-time optimization with NNCF is an option.

Once the model is optimized using the aforementioned tools it can be used for inference using the regular OpenVINO inference workflow. No changes to the inference code are required.

.. image:: _static/images/WHAT_TO_USE.svg
Contributor comment on this image:

I think this picture is confusing in the document context. There are two pictures one after another, which looks a bit like a draft, and it is not easy to distill the message from the picture. If I look at it, I would think that training-time quantization has the best performance, since pruning and sparsity are shown higher. Secondly, the methods are shown separately, and the perception people may get is that it is either one or the other.

I wonder if we should change the picture completely to show a performance vs. accuracy chart or something similar? We can probably discuss verbally.

Contributor Author reply:

Pictures were updated in #17421, but I believe it doesn't address your comment. Let's discuss how to change these pictures.

Post-training methods are limited in terms of achievable accuracy, which may degrade for certain scenarios. In such cases, training-time optimization with NNCF may give better results.

Once the model has been optimized using the aforementioned tools, it can be used for inference using the regular OpenVINO inference workflow. No changes to the code are required.

If you are not familiar with model optimization methods, refer to :doc:`post-training methods <pot_introduction>`.

Additional Resources
####################

- :doc:`Post-training Quantization <ptq_introduction>`
- :doc:`Training-time Optimization <tmo_introduction>`
- :doc:`Deployment optimization <openvino_docs_deployment_optimization_guide_dldt_optimization_guide>`
- `HuggingFace Optimum Intel <https://huggingface.co/docs/optimum/intel/optimization_ov>`__

@endsphinxdirective
220 changes: 122 additions & 98 deletions docs/optimization_guide/nncf/filter_pruning.md
@@ -5,15 +5,15 @@
Introduction
####################

Filter pruning is an advanced optimization method which allows reducing computational complexity of the model by removing
redundant or unimportant filters from convolutional operations of the model. This removal is done in two steps:
Filter pruning is an advanced optimization method that allows reducing the computational complexity of the model by removing
redundant or unimportant filters from the convolutional operations of the model. This removal is done in two steps:

1. Unimportant filters are zeroed out by the NNCF optimization with fine-tuning.

2. Zero filters are removed from the model during the export to OpenVINO Intermediate Representation (IR).


Filter Pruning method from the NNCF can be used stand-alone but we usually recommend to stack it with 8-bit quantization for
Filter Pruning method from the NNCF can be used stand-alone but we usually recommend stacking it with 8-bit quantization for
two reasons. First, 8-bit quantization is the best method in terms of achieving the highest accuracy-performance trade-offs so
stacking it with filter pruning can give even better performance results. Second, applying quantization along with filter
pruning does not hurt accuracy a lot since filter pruning removes noisy filters from the model which narrows down values
@@ -37,44 +37,52 @@ Here, we show the basic steps to modify the training script for the model and use

In this step, NNCF-related imports are added at the beginning of the training script:

.. tab:: PyTorch
.. tab-set::

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [imports]
.. tab-item:: PyTorch
:sync: pytorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [imports]

.. tab-item:: TensorFlow 2
:sync: tensorflow

.. tab:: TensorFlow 2

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [imports]
.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [imports]

2. Create NNCF configuration
++++++++++++++++++++++++++++

Here, you should define the NNCF configuration, which consists of model-related parameters (`"input_info"` section) and the parameters
of optimization methods (`"compression"` section).

.. tab:: PyTorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [nncf_congig]

.. tab:: TensorFlow 2

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [nncf_congig]

Here is a brief description of the required parameters of the Filter Pruning method. For full description refer to the
.. tab-set::

.. tab-item:: PyTorch
:sync: pytorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [nncf_congig]

.. tab-item:: TensorFlow 2
:sync: tensorflow

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [nncf_congig]

Here is a brief description of the required parameters of the Filter Pruning method; an illustrative configuration is sketched after this list. For a full description, refer to the
`GitHub <https://github.com/openvinotoolkit/nncf/blob/develop/docs/compression_algorithms/Pruning.md>`__ page.

* ``pruning_init`` - initial pruning rate target. For example, value ``0.1`` means that at the beginning of training, convolutions that can be pruned will have 10% of their filters set to zero.

* ``pruning_target`` - pruning rate target at the end of the schedule. For example, the value ``0.5`` means that at the epoch with the number of ``num_init_steps + pruning_steps``, convolutions that can be pruned will have 50% of their filters set to zero.

* ``pruning_steps`` - the number of epochs during which the pruning rate target is increased from ``pruning_init`` to the ``pruning_target`` value. We recommend to keep the highest learning rate during this period.
* ``pruning_steps`` - the number of epochs during which the pruning rate target is increased from ``pruning_init`` to the ``pruning_target`` value. We recommend keeping the highest learning rate during this period.
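To make the structure concrete, here is an illustrative configuration assembled from the parameters above; the input shape, rates, and step count are placeholder values rather than values from this PR, and the optional quantization entry shows the stacking recommended in the introduction.

```python
from nncf import NNCFConfig

nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},  # model-related parameters
    "compression": [
        {
            "algorithm": "filter_pruning",
            "pruning_init": 0.1,        # start with 10% of prunable filters zeroed
            "params": {
                "pruning_target": 0.5,  # end with 50% of prunable filters zeroed
                "pruning_steps": 15,    # epochs over which the rate is increased
            },
        },
        {"algorithm": "quantization"},  # optional: stack 8-bit quantization
    ],
})
```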


3. Apply optimization methods
@@ -86,39 +94,44 @@ that can be used the same way as the original model. It is worth noting that opt
so that the model undergoes a set of corresponding transformations and can contain additional operations required for the
optimization.

.. tab-set::

.. tab:: PyTorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [wrap_model]

.. tab:: TensorFlow 2

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [wrap_model]
.. tab-item:: PyTorch
:sync: pytorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [wrap_model]

.. tab-item:: TensorFlow 2
:sync: tensorflow

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [wrap_model]
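For reference, here is a hedged sketch of this step for the PyTorch path, assuming `nncf_config` from step 2 and the `model` and `train_loader` objects of the original training script:

```python
from nncf.torch import create_compressed_model, register_default_init_args

# Give NNCF data used to initialize the compression algorithms.
nncf_config = register_default_init_args(nncf_config, train_loader)

# Wrap the original model; the returned model contains the pruning operations,
# and the controller drives the pruning schedule during fine-tuning.
compression_ctrl, model = create_compressed_model(model, nncf_config)
```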

4. Fine-tune the model
++++++++++++++++++++++

This step assumes that you will apply fine-tuning to the model the same way as it is done for the baseline model. In the case
of the Filter Pruning method, we recommend using a training schedule and learning rate similar to what was used for the training
of original model.

of the original model.

.. tab:: PyTorch
.. tab-set::

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [tune_model]
.. tab-item:: PyTorch
:sync: pytorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [tune_model]

.. tab-item:: TensorFlow 2
:sync: tensorflow

.. tab:: TensorFlow 2

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [tune_model]
.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [tune_model]
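As a sketch of what the fine-tuning loop looks like under NNCF for the PyTorch path, with `train_loader`, `criterion`, `optimizer`, and the epoch count assumed to come from the original script:

```python
for epoch in range(15):  # placeholder epoch count, e.g. matching pruning_steps
    compression_ctrl.scheduler.epoch_step()  # advance the pruning schedule
    for images, targets in train_loader:
        compression_ctrl.scheduler.step()
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss += compression_ctrl.loss()  # extra loss term from the compression algorithm
        loss.backward()
        optimizer.step()
```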


5. Multi-GPU distributed training
@@ -127,38 +140,43 @@
In the case of distributed multi-GPU training (not DataParallel), you should call ``compression_ctrl.distributed()`` before the
fine-tuning; this will inform the optimization methods to make the adjustments needed to function in the distributed mode.


.. tab:: PyTorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [distributed]

.. tab:: TensorFlow 2

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [distributed]


.. tab-set::

.. tab-item:: PyTorch
:sync: pytorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [distributed]

.. tab-item:: TensorFlow 2
:sync: tensorflow

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [distributed]
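A minimal sketch of the order of calls for the PyTorch path, assuming an already initialized process group:

```python
import torch

compression_ctrl.distributed()  # let NNCF adjust to the distributed mode
model = torch.nn.parallel.DistributedDataParallel(model)
```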

6. Export quantized model
+++++++++++++++++++++++++

When fine-tuning finishes, the quantized model can be exported to the corresponding format for further inference: ONNX in
the case of PyTorch, and frozen graph in the case of TensorFlow 2.

.. tab-set::

.. tab:: PyTorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [export]

.. tab:: TensorFlow 2
.. tab-item:: PyTorch
:sync: pytorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [export]

.. tab-item:: TensorFlow 2
:sync: tensorflow

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [export]
.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [export]
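For the PyTorch path, this boils down to one controller call; the output file name is a placeholder:

```python
# Export to ONNX; zero filters are still present here and are removed
# later, at the conversion to OpenVINO IR (see "Deploying pruned model").
compression_ctrl.export_model("pruned_model.onnx")
```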


These were the basic steps for applying the Filter Pruning method from the NNCF. However, it is required in some cases to save/load model
@@ -170,57 +188,63 @@ checkpoints during the training. Since NNCF wraps the original model with its own

To save a model checkpoint, use the following API:

.. tab-set::

.. tab:: PyTorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [save_checkpoint]

.. tab:: TensorFlow 2

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [save_checkpoint]
.. tab-item:: PyTorch
:sync: pytorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [save_checkpoint]

.. tab-item:: TensorFlow 2
:sync: tensorflow

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [save_checkpoint]
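A hedged sketch of such a checkpoint for the PyTorch path; the `epoch` field and file name are illustrative:

```python
import torch

checkpoint = {
    "state_dict": model.state_dict(),  # weights of the NNCF-wrapped model
    "compression_state": compression_ctrl.get_compression_state(),
    "epoch": epoch,  # hypothetical training-loop variable
}
torch.save(checkpoint, "pruning_checkpoint.pth")
```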


8. (Optional) Restore from checkpoint
+++++++++++++++++++++++++++++++++++++

To restore the model from a checkpoint, use the following API:

.. tab:: PyTorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [load_checkpoint]

.. tab:: TensorFlow 2
.. tab-set::

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [load_checkpoint]
.. tab-item:: PyTorch
:sync: pytorch

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_torch.py
:language: python
:fragment: [load_checkpoint]

.. tab-item:: TensorFlow 2
:sync: tensorflow

.. doxygensnippet:: docs/optimization_guide/nncf/code/pruning_tf.py
:language: python
:fragment: [load_checkpoint]
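Assuming the checkpoint layout from the saving sketch above, restoring on the PyTorch path could look like this:

```python
import torch
from nncf.torch import create_compressed_model

checkpoint = torch.load("pruning_checkpoint.pth", map_location="cpu")
# Recreate the wrapped model from the saved compression state, then load weights.
compression_ctrl, model = create_compressed_model(
    model, nncf_config, compression_state=checkpoint["compression_state"]
)
model.load_state_dict(checkpoint["state_dict"])  # keys match the wrapped model
```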

For more details on saving/loading checkpoints in the NNCF, see the following
`documentation <https://github.com/openvinotoolkit/nncf/blob/develop/docs/Usage.md#saving-and-loading-compressed-models>`__.

Deploying pruned model
######################

The pruned model requres an extra step that should be done to get performance improvement. This step involves removal of the
zero filters from the model. This is done at the model conversion step using :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` tool when model is converted from the framework representation (ONNX, TensorFlow, etc.) to OpenVINO Intermediate Representation.
The pruned model requires an extra step that should be done to get a performance improvement. This step involves the removal of the
zero filters from the model. This is done at the model conversion step using :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` tool when the model is converted from the framework representation (ONNX, TensorFlow, etc.) to OpenVINO Intermediate Representation.

* To remove zero filters from the pruned model add the following parameter to the model convertion command: ``--transform=Pruning``
* To remove zero filters from the pruned model add the following parameter to the model conversion command: ``--transform=Pruning``
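A hedged sketch of this conversion, assuming the Model Optimizer Python API mirrors the CLI's ``--transform=Pruning`` flag via a `transform` argument (verify against your OpenVINO release); file names are placeholders:

```python
from openvino.runtime import serialize
from openvino.tools.mo import convert_model

# Convert the exported ONNX model to IR, removing zero filters along the way.
ov_model = convert_model("pruned_model.onnx", transform="Pruning")
serialize(ov_model, "pruned_model.xml")
```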

After that the model can be deployed with OpenVINO in the same way as the baseline model.
After that, the model can be deployed with OpenVINO in the same way as the baseline model.
For more details about model deployment with OpenVINO, see the corresponding :doc:`documentation <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.


Examples
####################

* `PyTorch Image Classiication example <https://github.com/openvinotoolkit/nncf/blob/develop/examples/torch/classification>`__
* `PyTorch Image Classification example <https://github.com/openvinotoolkit/nncf/blob/develop/examples/torch/classification>`__

* `TensorFlow Image Classification example <https://github.com/openvinotoolkit/nncf/tree/develop/examples/tensorflow/classification>`__
