OpenVINO Runtime Exception. Unexpected: CPU plug-in doesn't support If operation with dynamic rank. Operation name: input.15 #23757
Labels
ep:OpenVINO (issues related to OpenVINO execution provider)
performance (issues related to performance regressions)
Describe the issue
I built ONNX Runtime from source with OpenVINO 2024.6. My model was converted from PyTorch to ONNX and runs correctly with the default CPU execution provider. I am trying to use the OpenVINO execution provider to increase inference speed, but when I run inference I get the following error:
[OpenVINO-EP] Exception while Loading Network for graph: OpenVINOExecutionProvider_OpenVINO-EP-subgraph_1_0Exception from src/inference/src/cpp/core.cpp:104: Exception from src/inference/src/dev/plugin.cpp:53: Exception from src/plugins/intel_cpu/src/node.cpp:87: Unexpected: CPU plug-in doesn't support If operation with dynamic rank. Operation name: input.15
I am wondering whether it is possible to have OpenVINO fall back to the CPU execution provider for unsupported operations, or whether there is another solution to this error.
Thanks.
To reproduce
ONNX Runtime was built with the following command:
./build.sh --config Release --use_openvino CPU --build_shared_lib --compile_no_warning_as_error --parallel --skip_submodule_sync
I am using the following OpenVINO provider options to run on one thread and one core:
std::unordered_map<std::string, std::string> options;
options["device_type"] = "CPU";
options["precision"] = "FP32";
options["num_of_threads"] = "1";
options["num_streams"] = "1";
options["enable_opencl_throttling"] = "false";
session_options_.AppendExecutionProvider("OpenVINO", options);
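For context on the fallback question: ONNX Runtime partitions the graph and assigns only the nodes an execution provider claims, so anything the OpenVINO EP does not take is already handled by the default CPU EP. Here the failure happens after the EP has claimed the If subgraph, at network-load time. One thing that may be worth trying, if the OpenVINO EP build supports it, is the disable_dynamic_shapes provider option, which tells the EP to work with static shapes; whether it helps for this dynamic-rank If case is an assumption, not something confirmed in this report. A minimal sketch of the options block with that flag added:

```cpp
std::unordered_map<std::string, std::string> options;
options["device_type"] = "CPU";
options["precision"] = "FP32";
options["num_of_threads"] = "1";
options["num_streams"] = "1";
options["enable_opencl_throttling"] = "false";
// Assumption: forcing static shapes may avoid the
// "If operation with dynamic rank" failure in the CPU plug-in.
options["disable_dynamic_shapes"] = "true";
session_options_.AppendExecutionProvider("OpenVINO", options);
```

If the If subgraph still fails to compile, another avenue is checking whether a newer ONNX Runtime / OpenVINO EP release rejects the unsupported node during partitioning (letting it fall back to the CPU EP) instead of failing at load time.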
Urgency
No response
Platform
Linux
OS Version
CentOS 7
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.21.0
ONNX Runtime API
C++
Architecture
X64
Execution Provider
OpenVINO
Execution Provider Library Version
2024.6
Model File
No response
Is this a quantized model?
No