diff --git a/notebooks/yolo-detection-classification/README.md b/notebooks/yolo-detection-classification/README.md
new file mode 100644
index 00000000000..6b748fc679d
--- /dev/null
+++ b/notebooks/yolo-detection-classification/README.md
@@ -0,0 +1,60 @@
+# Object Detection and Classification Pipeline with YOLO and OpenVINO™
+
+This notebook demonstrates how to build a complete object detection and classification pipeline using YOLO models with OpenVINO™. The pipeline detects objects with YOLOv11, crops each detection, classifies the crops with a YOLOv11 classification model, and compares inference performance across devices (CPU, GPU, NPU); a minimal code sketch of this flow appears after the section list below.
+
+## Notebook Contents
+
+The notebook is organized into the following sections:
+
+1. **Prerequisites**
+ Install required packages for the notebook.
+
+2. **Imports**
+ Import necessary Python libraries and initialize OpenVINO.
+
+3. **Download Models**
+   Download YOLOv11n for object detection and YOLOv11n-cls for classification.
+
+4. **Basic Inference without OpenVINO**
+ Run detection using the PyTorch model to establish a baseline.
+
+5. **Convert to OpenVINO Format**
+ - Convert Detection Model to OpenVINO IR format
+ - Convert Classification Model to OpenVINO IR format
+
+6. **Select Inference Device**
+ Choose the hardware device (CPU, GPU, NPU) for inference.
+
+7. **Run Object Detection**
+ Perform object detection using OpenVINO on the selected device.
+
+8. **Extract Detected Objects**
+ Crop detected objects from the original image.
+
+9. **Classify Detected Objects**
+ Run classification on each cropped object using the YOLO classification model.
+
+10. **Complete Pipeline**
+ Combine detection, cropping, and classification into a single optimized pipeline.
+
+11. **Performance Comparison**
+ Compare pipeline performance across different device configurations.
+
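+At its core, the pipeline boils down to a few lines. The sketch below is a minimal illustration using the Ultralytics `YOLO` API directly; it omits the OpenVINO conversion and device-selection steps that the notebook covers:
+
+```python
+from ultralytics import YOLO
+
+det_model = YOLO("yolo11n.pt")      # detector (downloaded automatically if missing)
+cls_model = YOLO("yolo11n-cls.pt")  # classifier
+
+results = det_model("grocery.jpeg")  # 1) detect objects
+image = results[0].orig_img          # original image as a BGR array
+for box in results[0].boxes:
+    x1, y1, x2, y2 = map(int, box.xyxy[0])
+    crop = image[y1:y2, x1:x2]       # 2) crop each detection
+    cls = cls_model.predict(crop, imgsz=224, verbose=False)[0]  # 3) classify
+    print(cls.names[cls.probs.top1], cls.probs.top1conf.item())
+```
+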
+## Key Features
+
+- **Device Flexibility**: Run inference on CPU, GPU, or NPU
+- **Model Conversion**: Convert YOLO models to OpenVINO IR format for optimized performance
+- **Complete Pipeline**: Demonstrates end-to-end object detection and classification workflow
+- **Performance Analysis**: Measure and compare inference times across different hardware accelerators
+- **Interactive Widgets**: Use dropdown menus to easily select inference devices
+
+## Installation Instructions
+
+This is a self-contained example that relies solely on its own code.
+
+We recommend running the notebook in a virtual environment. You only need a Jupyter server to start.
+For details, please refer to [Installation Guide](../../README.md).
+
+
diff --git a/notebooks/yolo-detection-classification/grocery.jpeg b/notebooks/yolo-detection-classification/grocery.jpeg
new file mode 100644
index 00000000000..713a52cabc0
Binary files /dev/null and b/notebooks/yolo-detection-classification/grocery.jpeg differ
diff --git a/notebooks/yolo-detection-classification/grocery_detect.jpg b/notebooks/yolo-detection-classification/grocery_detect.jpg
new file mode 100644
index 00000000000..4b3fffba79d
Binary files /dev/null and b/notebooks/yolo-detection-classification/grocery_detect.jpg differ
diff --git a/notebooks/yolo-detection-classification/yolo-detection-classification.ipynb b/notebooks/yolo-detection-classification/yolo-detection-classification.ipynb
new file mode 100644
index 00000000000..aa8c3084e20
--- /dev/null
+++ b/notebooks/yolo-detection-classification/yolo-detection-classification.ipynb
@@ -0,0 +1,670 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Object Detection and Classification Pipeline with YOLO and OpenVINO™\n",
+ "\n",
+ "This notebook demonstrates how to build a complete object detection and classification pipeline using YOLO models with OpenVINO™. The pipeline includes:\n",
+ "\n",
+ "1. Object detection using YOLOv11\n",
+ "2. Cropping detected objects\n",
+ "3. Classification of cropped objects using YOLO classification model\n",
+ "4. Performance comparison across different devices (CPU, GPU, NPU)\n",
+ "\n",
+ "The notebook showcases device selection capabilities and demonstrates how to optimize inference performance by running different parts of the pipeline on different hardware accelerators.\n",
+ "\n",
+ "#### Table of contents:\n",
+ "\n",
+ "- [Prerequisites](#Prerequisites)\n",
+ "- [Imports](#Imports)\n",
+ "- [Download Models](#Download-Models)\n",
+ "- [Basic Inference without OpenVINO](#Basic-Inference-without-OpenVINO)\n",
+ "- [Convert to OpenVINO Format](#Convert-to-OpenVINO-Format)\n",
+ " - [Convert Detection Model](#Convert-Detection-Model)\n",
+ " - [Convert Classification Model](#Convert-Classification-Model)\n",
+ "- [Select Inference Device](#Select-Inference-Device)\n",
+ "- [Run Object Detection](#Run-Object-Detection)\n",
+ "- [Extract Detected Objects](#Extract-Detected-Objects)\n",
+ "- [Classify Detected Objects](#Classify-Detected-Objects)\n",
+ "- [Complete Pipeline](#Complete-Pipeline)\n",
+ "- [Performance Comparison](#Performance-Comparison)\n",
+ "\n",
+ "### Installation Instructions\n",
+ "\n",
+ "This is a self-contained example that relies solely on its own code.\n",
+ "\n",
+ "We recommend running the notebook in a virtual environment. You only need a Jupyter server to start.\n",
+ "For details, please refer to [Installation Guide](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/README.md#-installation-guide).\n",
+ "\n",
+ "
\n",
+ "### Installation Instructions\n",
+ "\n",
+ "This is a self-contained example that relies solely on its own code.\n",
+ "\n",
+ "We recommend running the notebook in a virtual environment. You only need a Jupyter server to start.\n",
+ "For details, please refer to [Installation Guide](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/README.md#-installation-guide)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Prerequisites\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)\n",
+ "\n",
+ "Install required packages."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install -q \"openvino>=2023.1.0\" \"ultralytics==8.3.0\" opencv-python matplotlib Pillow ipywidgets --extra-index-url https://download.pytorch.org/whl/cpu"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Imports\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import time\n",
+ "from pathlib import Path\n",
+ "\n",
+ "import cv2\n",
+ "import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "import openvino as ov\n",
+ "import torch\n",
+ "from PIL import Image\n",
+ "from ultralytics import YOLO\n",
+ "from IPython.display import display, HTML\n",
+ "\n",
+ "# Fetch `notebook_utils` module\n",
+ "import requests\n",
+ "\n",
+ "if not Path(\"notebook_utils.py\").exists():\n",
+ " r = requests.get(\n",
+ " url=\"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/notebook_utils.py\",\n",
+ " )\n",
+ " open(\"notebook_utils.py\", \"w\").write(r.text)\n",
+ "\n",
+ "from notebook_utils import device_widget\n",
+ "\n",
+ "# Read more about telemetry collection at https://github.com/openvinotoolkit/openvino_notebooks?tab=readme-ov-file#-telemetry\n",
+ "from notebook_utils import collect_telemetry\n",
+ "\n",
+ "collect_telemetry(\"yolo-detection-classification.ipynb\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Download Models\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)\n",
+ "\n",
+ "This tutorial uses YOLOv11n for object detection and YOLOv11n-cls for classification. The models will be automatically downloaded from Ultralytics."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Create models directory\n",
+ "models_dir = Path(\"./models\")\n",
+ "models_dir.mkdir(exist_ok=True)\n",
+ "\n",
+ "# Define model names\n",
+ "DET_MODEL_NAME = \"yolo11n\"\n",
+ "CLASS_MODEL_NAME = \"yolo11n-cls\"\n",
+ "\n",
+ "# Download detection model\n",
+ "det_model = YOLO(models_dir / f\"{DET_MODEL_NAME}.pt\")\n",
+ "label_map = det_model.model.names\n",
+ "\n",
+ "print(f\"Detection model downloaded: {DET_MODEL_NAME}\")\n",
+ "print(f\"Number of classes: {len(label_map)}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Basic Inference without OpenVINO\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)\n",
+ "\n",
+ "First, let's run object detection using the PyTorch model to establish a baseline."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define test image path\n",
+ "IMAGE_PATH = Path(\"grocery.jpeg\")\n",
+ "\n",
+ "# Run inference with PyTorch model\n",
+ "res = det_model(IMAGE_PATH)\n",
+ "\n",
+ "# Display results\n",
+ "Image.fromarray(res[0].plot()[:, :, ::-1])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Convert to OpenVINO Format\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)\n",
+ "\n",
+ "To benefit from OpenVINO optimizations and device flexibility, we need to convert both models to OpenVINO IR format."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Convert Detection Model\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Convert detection model to OpenVINO format\n",
+ "det_model_path = models_dir / f\"{DET_MODEL_NAME}_openvino_model/{DET_MODEL_NAME}.xml\"\n",
+ "\n",
+ "if not det_model_path.exists():\n",
+ " # Export the model to OpenVINO format\n",
+ " det_model.export(format=\"openvino\", dynamic=True, half=True)\n",
+ " print(f\"Detection model exported to {det_model_path}\")\n",
+ "else:\n",
+ " print(f\"Detection model already exists at {det_model_path}\")\n",
+ "\n",
+ "# Initialize OpenVINO Core\n",
+ "core = ov.Core()\n",
+ "\n",
+ "# Read the detection model\n",
+ "det_ov_model = core.read_model(det_model_path)\n",
+ "print(f\"\\nDetection model input shape: {det_ov_model.input().partial_shape}\")\n",
+ "print(f\"Detection model output shape: {det_ov_model.output().partial_shape}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Convert Classification Model\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Download classification model\n",
+ "class_model = YOLO(models_dir / f\"{CLASS_MODEL_NAME}.pt\")\n",
+ "\n",
+ "# Convert classification model to OpenVINO format\n",
+ "class_model_path = models_dir / f\"{CLASS_MODEL_NAME}_openvino_model/\"\n",
+ "\n",
+ "if not class_model_path.exists():\n",
+ " class_model.export(format=\"openvino\")\n",
+ " print(f\"Classification model exported to {class_model_path}\")\n",
+ "else:\n",
+ " print(f\"Classification model already exists at {class_model_path}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Select Inference Device\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)\n",
+ "\n",
+ "Select the device for running object detection inference. Available options include CPU, GPU, and NPU (if available on your system)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "detect_device = device_widget()\n",
+ "\n",
+ "detect_device"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Run Object Detection\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)\n",
+ "\n",
+ "Perform object detection using the OpenVINO model on the selected device."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Compile the detection model for the selected device\n",
+ "ov_config = {}\n",
+ "if detect_device.value != \"CPU\":\n",
+ " det_ov_model.reshape({0: [1, 3, 640, 640]})\n",
+ " if \"GPU\" in detect_device.value:\n",
+ " ov_config = {\"GPU_DISABLE_WINOGRAD_CONVOLUTION\": \"YES\"}\n",
+ "\n",
+ "det_compiled_model = core.compile_model(det_ov_model, detect_device.value, ov_config)\n",
+ "\n",
+ "\n",
+ "# Set up the YOLO model to use OpenVINO for inference\n",
+ "def infer(*args):\n",
+ " result = det_compiled_model(args)\n",
+ " return torch.from_numpy(result[0])\n",
+ "\n",
+ "\n",
+ "det_model.predictor.inference = infer\n",
+ "det_model.predictor.model.pt = False\n",
+ "\n",
+ "# Perform detection\n",
+ "detect_res = det_model(IMAGE_PATH)\n",
+ "\n",
+ "# Print inference time\n",
+ "r = detect_res[0]\n",
+ "if hasattr(r, \"speed\") and r.speed is not None:\n",
+ " inference_time_ms = r.speed.get(\"inference\", float(\"nan\"))\n",
+ " print(f\"Detection Inference Time: {inference_time_ms:.2f} ms\")\n",
+ "\n",
+ "# Display detection results\n",
+ "Image.fromarray(detect_res[0].plot()[:, :, ::-1])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Extract Detected Objects\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)\n",
+ "\n",
+ "Extract and save cropped images of detected objects for classification."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the original image\n",
+ "image = cv2.imread(str(IMAGE_PATH))\n",
+ "\n",
+ "# Extract cropped images from detections\n",
+ "cropped_images = []\n",
+ "confidence_threshold = 0.5\n",
+ "\n",
+ "for result in detect_res:\n",
+ " boxes = result.boxes\n",
+ " if len(boxes) > 0:\n",
+ " for box in boxes:\n",
+ " confidence = box.conf.item()\n",
+ " if confidence > confidence_threshold:\n",
+ " # Get bounding box coordinates\n",
+ " x1, y1, x2, y2 = box.xyxy[0]\n",
+ " # Crop the image\n",
+ " cropped_image = image[int(y1) : int(y2), int(x1) : int(x2)]\n",
+ " cropped_images.append(cropped_image)\n",
+ "\n",
+ "print(f\"Extracted {len(cropped_images)} objects from the image\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Visualize Cropped Objects\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Display cropped images\n",
+ "for i, cropped_image in enumerate(cropped_images[:5]): # Show first 5 for brevity\n",
+ " # Convert BGR to RGB\n",
+ " rgb_image = cv2.cvtColor(cropped_image, cv2.COLOR_BGR2RGB)\n",
+ "\n",
+ " # Create a new figure for each image\n",
+ " plt.figure(figsize=(4, 4))\n",
+ " plt.imshow(rgb_image)\n",
+ " plt.title(f\"Detected Object {i+1}\")\n",
+ " plt.axis(\"off\")\n",
+ " plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Classify Detected Objects\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)\n",
+ "\n",
+ "Now, let's classify each detected object using the YOLO classification model."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Select Classification Device\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "classify_device = device_widget()\n",
+ "\n",
+ "classify_device"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Run Classification\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the classification model\n",
+ "class_ov_model = YOLO(str(class_model_path))\n",
+ "\n",
+ "# Determine device string for Ultralytics\n",
+ "if classify_device.value == \"GPU\":\n",
+ " class_device_to_use = \"intel:gpu\"\n",
+ "else:\n",
+ " class_device_to_use = \"CPU\"\n",
+ "\n",
+ "# Classify each cropped object\n",
+ "if not cropped_images:\n",
+ " print(\"No cropped images to classify.\")\n",
+ "else:\n",
+ " print(f\"Classifying {len(cropped_images)} objects...\\n\")\n",
+ "\n",
+ " for i, img_data in enumerate(cropped_images):\n",
+ " try:\n",
+ " # Run classification\n",
+ " class_results = class_ov_model.predict(source=img_data, conf=0.5, imgsz=224, device=class_device_to_use, verbose=False)\n",
+ "\n",
+ " if class_results:\n",
+ " r = class_results[0]\n",
+ "\n",
+ " if hasattr(r, \"probs\") and r.probs is not None:\n",
+ " top1_index = r.probs.top1\n",
+ " top1_confidence = r.probs.top1conf.item()\n",
+ " class_names = r.names\n",
+ " top_class_name = class_names[top1_index]\n",
+ "\n",
+ " print(f\"Object {i+1}: {top_class_name} (Confidence: {top1_confidence:.4f})\")\n",
+ "\n",
+ " # Print inference time if available\n",
+ " if hasattr(r, \"speed\") and r.speed is not None:\n",
+ " inference_time_ms = r.speed.get(\"inference\", float(\"nan\"))\n",
+ " print(f\" Inference Time: {inference_time_ms:.2f} ms\\n\")\n",
+ " else:\n",
+ " print(f\"Object {i+1}: No classification results\\n\")\n",
+ "\n",
+ " except Exception as e:\n",
+ " print(f\"Error classifying object {i+1}: {e}\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Complete Pipeline\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)\n",
+ "\n",
+ "Now let's combine detection and classification into a complete pipeline and measure its performance."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Select Devices for Pipeline\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "pipeline_detect_device = device_widget()\n",
+ "pipeline_classify_device = device_widget()\n",
+ "\n",
+ "display(pipeline_detect_device, pipeline_classify_device)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Run Complete Pipeline with Performance Measurement\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Prepare Detection Model\n",
+ "ov_config = {}\n",
+ "if pipeline_detect_device.value != \"CPU\":\n",
+ " det_ov_model.reshape({0: [1, 3, 640, 640]})\n",
+ " if \"GPU\" in pipeline_detect_device.value:\n",
+ " ov_config = {\"GPU_DISABLE_WINOGRAD_CONVOLUTION\": \"YES\"}\n",
+ "\n",
+ "det_compiled_model = core.compile_model(det_ov_model, pipeline_detect_device.value, ov_config)\n",
+ "\n",
+ "\n",
+ "# Update inference function\n",
+ "def infer(*args):\n",
+ " result = det_compiled_model(args)\n",
+ " return torch.from_numpy(result[0])\n",
+ "\n",
+ "\n",
+ "det_model.predictor.inference = infer\n",
+ "det_model.predictor.model.pt = False\n",
+ "\n",
+ "# Prepare Classification Model\n",
+ "class_ov_model = YOLO(str(class_model_path))\n",
+ "\n",
+ "if pipeline_classify_device.value == \"GPU\":\n",
+ " pipeline_class_device_to_use = \"intel:gpu\"\n",
+ " # Warm up GPU by running a dummy prediction\n",
+ " dummy_img = Image.new(\"RGB\", (224, 224), color=\"red\")\n",
+ " dummy_img_np = np.array(dummy_img)\n",
+ " class_ov_model.predict(source=dummy_img_np, conf=0.5, imgsz=224, device=pipeline_class_device_to_use, verbose=False)\n",
+ "else:\n",
+ " pipeline_class_device_to_use = \"CPU\"\n",
+ "\n",
+ "# Start Timer\n",
+ "start_time = time.perf_counter()\n",
+ "\n",
+ "# Step 1: Object Detection\n",
+ "detect_res = det_model(IMAGE_PATH)\n",
+ "\n",
+ "# Step 2: Extract Cropped Images\n",
+ "image = cv2.imread(str(IMAGE_PATH))\n",
+ "cropped_images = []\n",
+ "confidence_threshold = 0.5\n",
+ "\n",
+ "for result in detect_res:\n",
+ " boxes = result.boxes\n",
+ " if len(boxes) > 0:\n",
+ " for box in boxes:\n",
+ " confidence = box.conf.item()\n",
+ " if confidence > confidence_threshold:\n",
+ " x1, y1, x2, y2 = box.xyxy[0]\n",
+ " cropped_image = image[int(y1) : int(y2), int(x1) : int(x2)]\n",
+ " cropped_images.append(cropped_image)\n",
+ "\n",
+ "# Step 3: Classification\n",
+ "if cropped_images:\n",
+ " for i, img_data in enumerate(cropped_images):\n",
+ " try:\n",
+ " class_results = class_ov_model.predict(source=img_data, conf=0.5, imgsz=224, device=pipeline_class_device_to_use, verbose=False)\n",
+ " except Exception as e:\n",
+ " print(f\"Error during classification of object {i+1}: {e}\")\n",
+ "\n",
+ "# End Timer\n",
+ "end_time = time.perf_counter()\n",
+ "elapsed_time = end_time - start_time\n",
+ "\n",
+ "# Print Results\n",
+ "print(f\"\\nNumber of objects detected: {len(cropped_images)}\")\n",
+ "print(f\"Total Pipeline Time: {elapsed_time:.4f} seconds ({elapsed_time*1000:.2f} ms)\")\n",
+ "print(f\"Detection Device: {pipeline_detect_device.value}\")\n",
+ "print(f\"Classification Device: {pipeline_classify_device.value}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Performance Comparison\n",
+ "\n",
+ "[back to top ⬆️](#Table-of-contents:)\n",
+ "\n",
+ "Display the pipeline performance in a formatted way."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Display formatted results\n",
+ "display(HTML(\"
Detection Device: {pipeline_detect_device.value}
\"))\n", + "display(HTML(f\"Classification Device: {pipeline_classify_device.value}
\"))\n", + "display(HTML(f\"Objects Detected: {len(cropped_images)}
\"))\n", + "display(HTML(f\"Total Pipeline Time: {elapsed_time*1000:.2f} ms
\"))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Conclusion\n", + "\n", + "[back to top ⬆️](#Table-of-contents:)\n", + "\n", + "This notebook demonstrated how to:\n", + "1. Convert YOLO models to OpenVINO format\n", + "2. Run object detection and classification on different devices\n", + "3. Build a complete detection and classification pipeline\n", + "4. Measure and compare performance across devices\n", + "\n", + "By leveraging OpenVINO, you can achieve better performance and flexibility in deploying your computer vision models across various hardware platforms." + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.0" + }, + "openvino_notebooks": { + "imageUrl": "https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/yolo-detection-classification/grocery_detect.jpg?raw=true", + "tags": { + "categories": [ + "Model Demos", + "Live Demos" + ], + "libraries": [], + "other": [], + "tasks": [ + "Object Detection", + "Image Classification" + ] + } + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/notebooks/yolov10-optimization/yolov10-optimization.ipynb b/notebooks/yolov10-optimization/yolov10-optimization.ipynb index 9484eecf086..515633a6c1d 100644 --- a/notebooks/yolov10-optimization/yolov10-optimization.ipynb +++ b/notebooks/yolov10-optimization/yolov10-optimization.ipynb @@ -1494,7 +1494,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.4" + "version": "3.13.6" }, "openvino_notebooks": { "imageUrl": "https://github.com/openvinotoolkit/openvino_notebooks/assets/29454499/81ff3233-9c8d-4fe8-ab21-baf9ce530cff",