version: v22.02
env: android
platform: armv8.2 cpu
model: onnx2armnn
input: the same data
diff: the Arm NN inference output has a large error, while the onnxruntime output is correct.

The Arm NN inference output has errors compared with onnxruntime running the same ONNX model. How can I debug every layer's output to find the source of the error?
There are a pair of Optimizer options that print intermediate outputs to file: OptimizerOptionsOpaque.SetDebugEnabled(bool) and OptimizerOptionsOpaque.SetDebugToFileEnabled(bool). How you set these depends on how you are executing inferences. If you are using ExecuteNetwork, the -p parameter will do it.
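If you are driving Arm NN from your own C++ code rather than ExecuteNetwork, a minimal sketch of wiring these options into the optimize step might look like the following. The model path and backend list are placeholders, and this assumes a recent release that exposes OptimizerOptionsOpaque; on older releases such as v22.02 the equivalent settings live on the older OptimizerOptions struct instead.

```cpp
// Sketch: enable per-layer debug output when optimizing an ONNX model.
// Assumes a recent Arm NN release with OptimizerOptionsOpaque; the model
// path and backend list below are illustrative placeholders.
#include <armnn/ArmNN.hpp>
#include <armnnOnnxParser/IOnnxParser.hpp>

#include <vector>

int main()
{
    using namespace armnn;

    // Parse the ONNX model into an Arm NN network.
    auto parser = armnnOnnxParser::IOnnxParser::Create();
    INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.onnx");

    IRuntime::CreationOptions runtimeOptions;
    IRuntimePtr runtime = IRuntime::Create(runtimeOptions);

    // The two debug switches from above: SetDebugEnabled makes the optimizer
    // insert Debug layers that expose each layer's output, and
    // SetDebugToFileEnabled redirects those outputs to files so the
    // intermediate tensors can be diffed against onnxruntime layer by layer.
    OptimizerOptionsOpaque optimizerOptions;
    optimizerOptions.SetDebugEnabled(true);
    optimizerOptions.SetDebugToFileEnabled(true);

    std::vector<BackendId> backends = { Compute::CpuAcc, Compute::CpuRef };
    IOptimizedNetworkPtr optNet = Optimize(*network, backends,
                                           runtime->GetDeviceSpec(),
                                           optimizerOptions);

    NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));

    // ... create input/output tensor bindings and call
    // runtime->EnqueueWorkload(networkId, inputTensors, outputTensors)
    // as usual; the per-layer dumps are produced during execution.
    return 0;
}
```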
Please note that our ONNX parser hasn't had much attention in many years, and there are many constructs it does not support. The choice of backend may also affect the results; for example, using CpuRef with int8 data often leads to differences in inference results.