[DOCS] Running inference comments (#25285)
### Details:
 - *item1*
 - *...*

### Tickets:
 - 145078

---------

Co-authored-by: Karol Blaszczak <[email protected]>
tsavina and kblaszczak-intel authored Jul 4, 2024
1 parent bbb32b3 commit bf84cec
Showing 5 changed files with 62 additions and 2 deletions.
12 changes: 12 additions & 0 deletions docs/articles_en/assets/snippets/compile_model_npu.cpp
@@ -0,0 +1,12 @@
#include <openvino/runtime/core.hpp>

int main() {
    {
        //! [compile_model_default_npu]
        ov::Core core;
        auto model = core.read_model("model.xml");
        auto compiled_model = core.compile_model(model, "NPU");
        //! [compile_model_default_npu]
    }
    return 0;
}
18 changes: 18 additions & 0 deletions docs/articles_en/assets/snippets/compile_model_npu.py
@@ -0,0 +1,18 @@
# Copyright (C) 2018-2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import openvino as ov
from snippets import get_model


def main():
    model = get_model()

    core = ov.Core()
    if "NPU" not in core.available_devices:
        return 0

    #! [compile_model_default_npu]
    core = ov.Core()
    compiled_model = core.compile_model(model, "NPU")
    #! [compile_model_default_npu]
@@ -36,7 +36,7 @@ different conditions:
| :doc:`Heterogeneous Execution (HETERO) <inference-devices-and-modes/hetero-execution>`
| :doc:`Automatic Batching Execution (Auto-batching) <inference-devices-and-modes/automatic-batching>`

To learn how to change the device configuration, read the :doc:`Query device properties <inference-devices-and-modes/query-device-properties>` article.

Enumerating Available Devices
#######################################
@@ -83,3 +83,10 @@ Accordingly, the code that loops over all available devices of the "GPU" type on
   :language: cpp
   :fragment: [part3]
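
For reference, a minimal Python sketch of device enumeration (the printed names are illustrative; the actual list depends on the machine):

.. code-block:: py

   import openvino as ov

   core = ov.Core()
   # List every device OpenVINO Runtime can use on this machine,
   # e.g. ['CPU', 'GPU.0', 'GPU.1', 'NPU'].
   print(core.available_devices)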

Additional Resources
####################

* `OpenVINO™ Runtime API Tutorial <./../../notebooks/openvino-api-with-output.html>`__
* `AUTO Device Tutorial <./../../notebooks/auto-device-with-output.html>`__
* `GPU Device Tutorial <./../../notebooks/gpu-device-with-output.html>`__
* `NPU Device Tutorial <./../../notebooks/hello-npu-with-output.html>`__
@@ -30,6 +30,25 @@ of the model into a proprietary format. The compiler included in the user mode driver performs
platform specific optimizations in order to efficiently schedule the execution of network layers and
memory transactions on various NPU hardware submodules.

To use NPU for inference, pass the device name to the ``ov::Core::compile_model()`` method:

.. tab-set::

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/articles_en/assets/snippets/compile_model_npu.py
         :language: py
         :fragment: [compile_model_default_npu]

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/articles_en/assets/snippets/compile_model_npu.cpp
         :language: cpp
         :fragment: [compile_model_default_npu]


Model Caching
#############################
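
The section body is collapsed in this diff. For context, a minimal Python sketch of enabling the model cache (the ``CACHE_DIR`` value and device choice are illustrative assumptions, not part of this commit):

.. code-block:: py

   import openvino as ov

   core = ov.Core()
   # Store compiled blobs on disk; later compile_model() calls for the
   # same model and device load the cached blob instead of recompiling.
   core.set_property({"CACHE_DIR": "model_cache"})  # directory name is illustrative
   compiled_model = core.compile_model("model.xml", "NPU")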

@@ -226,9 +226,12 @@ Compile the model for a specific device using ``ov::Core::compile_model()``:
The ``ov::Model`` object represents any model inside the OpenVINO™ Runtime.
For more details, see the :doc:`OpenVINO™ Model representation <integrate-openvino-with-your-application/model-representation>` article.

OpenVINO includes experimental support for NPU. Learn more in the
:doc:`NPU Device section <./inference-devices-and-modes/npu-device>`.

The code above creates a compiled model associated with a single hardware device from the model object.
It is possible to create as many compiled models as needed and use them simultaneously, within the limits of the hardware.
To learn how to change the device configuration, read the :doc:`Query device properties <inference-devices-and-modes/query-device-properties>` article.
To learn more about supported devices and inference modes, read the :doc:`Inference Devices and Modes <./inference-devices-and-modes>` article.
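
As an illustration of creating several compiled models from one model object, a minimal Python sketch (device availability is an assumption):

.. code-block:: py

   import openvino as ov

   core = ov.Core()
   model = core.read_model("model.xml")
   # Each compiled model is bound to its own device and can be
   # used simultaneously with the others.
   compiled_cpu = core.compile_model(model, "CPU")
   compiled_npu = core.compile_model(model, "NPU")  # assumes an NPU is present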

Step 3. Create an Inference Request
###################################
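
The body of this step is collapsed in the diff. For context, a minimal sketch of creating a request from the compiled model of the previous step:

.. code-block:: py

   # The request owns the input/output tensors used for one inference call.
   infer_request = compiled_model.create_infer_request()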
@@ -432,6 +435,7 @@ To build your project using CMake with the default build tools currently available
Additional Resources
####################

* `OpenVINO™ Runtime API Tutorial <./../../notebooks/openvino-api-with-output.html>`__
* See the :doc:`OpenVINO Samples <../../learn-openvino/openvino-samples>` page for specific examples of how OpenVINO pipelines are implemented for applications like image classification, text prediction, and many others.
* Models in the OpenVINO IR format on `Hugging Face <https://huggingface.co/models>`__.
* :doc:`OpenVINO™ Runtime Preprocessing <optimize-inference/optimize-preprocessing>`
