openvino used by onnxruntime, build successfully, run core dump #851
There is another question. For my own model, I converted it torch -> onnx -> openvino, and I get *.bin and *.xml files. How can I do model inference using the C++ API with openvino only? Are there any examples for user-defined models with arbitrary structure? For example, my model has conv and lstm layers, with input [batch_size, input_len, feat_dim] and output [batch_size, 1, output_num]. My input is not Layout::NCHW or any other type in Layout.
Is it possible to use one of the predefined samples from https://github.com/openvinotoolkit/openvino/tree/master/inference-engine/samples? E.g. we have a generic app like
@Liujingxiu23 do you have any more information about the segmentation fault? A core dump or stack trace, perhaps? You can re-run under gdb to obtain it. There are plenty of possible reasons for this segfault, but without a stack trace it's going to be virtually impossible to help. The differences between the code on GitHub and the tgz packages:
Let me know if this helps.
I used the master branches of openvino and onnxruntime from git. Which branches are more suitable? Here is the gdb backtrace:
I built onnxruntime again, and found that not all tests passed: 4/5 passed, 1 failed. The related log is: 1 1818 1: [----------] 1 test from ParallelExecutor
Can you check whether your model works through the OpenVINO API?
I want to use ORT since the API is really clear and simple, like torch and tensorflow. As for the benchmark_app example, its usage seems quite complicated to me.
Perhaps the issue should be addressed to onnxruntime then?
@Liujingxiu23 thanks for the backtrace. The segfault happens when the ONNX model is imported from ONNXRuntime to the OV provider. The backtrace indicates that it crashes when one of the inputs gets translated to ngraph::Parameter, which segfaults in the constructor. The whole segfault looks very model-specific to me. Can you also share which model you've used for this test and whether it's publicly available?
@tomdol I'll try to install glibc-2.17-222.el7.x86_64.
Just a suggestion: can you edit the model and, instead of -1, use a dimension variable? Just replace the -1 with a named dimension. This is how we (in the onnx importer) expect dynamic dimensions to be defined in ONNX models, and it adheres to this specification: https://github.com/onnx/onnx/blob/master/docs/IR.md#static-tensor-shapes
@tomdol Thank you for your reply! I will try to reset the batch size, and I will try a more common model, for example ResNet-18, to verify whether the current problem only occurs on my own model.
I rebuilt onnxruntime rel-1.3.0 using l_openvino_toolkit_p_2020.2.120.tgz, and model prediction now succeeds. Maybe the core dump happened because I built openvino the wrong way or used the wrong release version of onnxruntime. Thank you very much for your help! @tomdol
No problem, I'm glad it worked for you :) BTW, can we close this ticket?
I want to use onnxruntime with openvino. I downloaded "l_openvino_toolkit_p_***.tgz" from https://docs.openvinotoolkit.org/, but the installer requires sudo privileges, which I do not have.
Then I downloaded openvino from GitHub and compiled it from source; the build finished successfully.
Then I built onnxruntime with openvino from source; that build also succeeded.
Then I tested C++ model inference. When I did not use openvino as the EP, everything was fine. When using openvino as the EP, the model load seemed OK and the input dims could be printed successfully, but a core dump happened during Run.
[WARN] 2020-06-08T03:09:14z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft.nchwc' not recognized by nGraph
.......
[WARN] 2020-06-08T03:09:14z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'ai.onnx.ml' not recognized by nGraph
Segmentation fault (core dumped)
My questions are: