TensorRT EP failed to create engine from network. #21415
Comments
Could you please help me?
Hi @xlg-go
A demo will be provided later. Thank you very much!
Hi @yf711, here is the source demo, thanks a lot! The *.so file is very large; would it be convenient to use yours? Here is the Python code:

```python
import onnxruntime as ort
import numpy as np


def main():
    # ort.set_default_logger_severity(0)
    trt_ep_options = {
        "device_id": 0,
        "trt_engine_cache_enable": True,
        "trt_engine_cache_path": "./trt_catch_dir",
    }
    sess = ort.InferenceSession(
        "./mlmodel/red_hw_score.onnx",
        providers=[
            ("TensorrtExecutionProvider", trt_ep_options),
            "CUDAExecutionProvider",
        ],
    )
    batch_size, channel, height, width = 1, 3, 128, 256
    inputs = {
        "x": np.zeros(
            (batch_size, channel, height, width), dtype=np.float32
        ),
    }
    # while True:
    outputs = sess.run(None, inputs)
    print(outputs)


if __name__ == "__main__":
    main()
```
Quick question: are you using a Pascal GPU, such as a GTX 1080? The Pascal GPU architecture is deprecated as of TRT 8.6; NVIDIA/TensorRT#3826 matches your error code.
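The architecture cutoff described above can be expressed as a small check. A minimal sketch (the helper name is mine; the thresholds reflect NVIDIA's deprecation notes, where Pascal, sm_6x, was deprecated in TRT 8.6 and TRT 10 requires Volta, sm_70, or newer):

```python
def trt_builds_for_gpu(compute_capability: tuple, trt_major: int) -> bool:
    """Rough support check: TRT 10 dropped Pascal (sm_6x), so engine
    building needs Volta (sm_70) or newer; TRT 8.x still built engines
    for Pascal, although it was deprecated starting with 8.6."""
    floor = (7, 0) if trt_major >= 10 else (6, 0)
    return tuple(compute_capability) >= floor


# Quadro P6000 is Pascal, compute capability 6.1:
print(trt_builds_for_gpu((6, 1), 10))  # False: TRT 10 cannot build an engine
print(trt_builds_for_gpu((6, 1), 8))   # True: TRT 8.x still worked
```

This is consistent with the report in this thread: the same model worked under TensorRT 8 and fails engine creation under TensorRT 10.2 on a P6000.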
Thank you for taking the time to address my query. |
Describe the issue
I have a YOLOv8 detection model deployed via .NET with TensorRT as the execution provider. When I run it, I encounter the error below. Previously, with TensorRT version 8, there were no issues.
[ErrorCode:ShapeInferenceNotRegistered] Non-zero status code returned while running TRTKernel_graph_torch_jit_4528351051880633562_0 node. Name:'TensorrtExecutionProvider_TRTKernel_graph_torch_jit_4528351051880633562_0_0' Status Message: TensorRT EP failed to create engine from network.
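When engine creation fails like this, one mitigation is to leave the TensorRT EP out of the provider list on GPUs the installed TRT release no longer supports, so ONNX Runtime falls back to CUDA instead of erroring at session creation. A hedged sketch (the helper name and the sm_70 floor are my assumptions, based on the Pascal deprecation discussed later in this thread):

```python
def select_providers(compute_capability, trt_major, trt_options=None):
    """Build an ONNX Runtime provider list, skipping the TensorRT EP
    when the GPU is older than what this TRT major release supports
    (assumption: TRT 10 needs compute capability 7.0 or newer)."""
    providers = []
    if trt_major < 10 or tuple(compute_capability) >= (7, 0):
        providers.append(("TensorrtExecutionProvider", trt_options or {}))
    providers.extend(["CUDAExecutionProvider", "CPUExecutionProvider"])
    return providers


# Pascal card (sm_61) with TRT 10: the TensorRT EP is skipped entirely.
print(select_providers((6, 1), 10))
# ['CUDAExecutionProvider', 'CPUExecutionProvider']
```

The returned list can be passed directly as the `providers` argument of `ort.InferenceSession`.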
To reproduce
Urgency
I am very anxious.
Platform
Linux
OS Version
Ubuntu 20.04
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
Microsoft.ML.OnnxRuntime.TensorRT.1.19.0-dev-20240719-0951-9140d9b1ff.nupkg
ONNX Runtime API
C#
Architecture
X64
Execution Provider
TensorRT
Execution Provider Library Version
CUDA 12.5.1, cuDNN 9.2.1.18, TensorRT 10.2.0.19
GPU
NVIDIA Quadro P6000 (Pascal)