cannot convert yoloV8 model to int8 #545

Open

fdarvas opened this issue Jun 6, 2024 · 2 comments

Comments

fdarvas commented Jun 6, 2024

I am trying to follow the instructions for INT8 quantization with the YOLOv8 model. After reading the 1000 calibration images, TensorRT errors out with:

ERROR: [TRT]: 10: Could not find any implementation for node /0/model.22/Range.
ERROR: [TRT]: 10: [optimizer.cpp::computeCosts::3869] Error Code 10: Internal Error (Could not find any implementation for node /0/model.22/Range.)

Did anyone run across this issue? Are there any solutions for it?

@marcoslucianops
Owner

I don't have this issue here.

@marcoslucianops
Owner

Try exporting the ONNX model without the --dynamic flag.
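
As a rough sketch of what a static-shape export looks like (using the Ultralytics export API here for illustration; the repo's own export script is the supported path, and the equivalent change there is simply dropping --dynamic):

```python
# Hypothetical sketch: export YOLOv8 to ONNX with static input shapes
# via the Ultralytics API. Weights file, image size, and opset below
# are assumptions; adjust them to your setup.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")  # assumed weights file

model.export(
    format="onnx",
    imgsz=640,        # fixed input size instead of dynamic axes
    dynamic=False,    # static shapes, so TensorRT can plan INT8 kernels
    simplify=True,
    opset=12,         # assumed opset; match your TensorRT version
)
```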
