Replies: 10 comments
-
Hello, the usage of TensorRT is described in https://github.com/apache/incubator-mxnet/pull/12548/files. The tutorial will be available on our website soon; we are currently encountering some issues with the website and have therefore paused publishing of new tutorials.
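For reference, here is a minimal sketch of the contrib API that PR describes, assuming an MXNet 1.3 build compiled with TensorRT support; the `resnet-18` checkpoint name and the input shape are placeholders:

```python
import os
import mxnet as mx

# The integration is opt-in: enable it before binding.
os.environ['MXNET_USE_TENSORRT'] = '1'

# Load a pretrained checkpoint ('resnet-18' is a placeholder name).
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-18', 0)

batch_shape = (1, 3, 224, 224)
all_params = {k: v.as_in_context(mx.gpu(0)) for k, v in arg_params.items()}
all_params.update({k: v.as_in_context(mx.gpu(0)) for k, v in aux_params.items()})

# tensorrt_bind takes the place of simple_bind and returns an executor
# whose supported subgraphs are executed by TensorRT.
executor = mx.contrib.tensorrt.tensorrt_bind(sym, ctx=mx.gpu(0),
                                             all_params=all_params,
                                             data=batch_shape,
                                             grad_req='null',
                                             force_rebind=True)

# Inference only: feed a batch and read the first output.
y = executor.forward(is_train=False,
                     data=mx.nd.zeros(batch_shape, ctx=mx.gpu(0)))[0]
```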
-
Thanks for opening this issue @xuzhenqi
-
There is a TensorRT tutorial on the website. It wasn't cherry-picked onto the v1.3.0 release branch, so it doesn't show up there, only in master.
-
Could you show where you found that, ideally with a direct link?
-
The TensorRT compilation process is slow; is that common?
-
@marcoabreu It wasn't added to the tutorials index. #12587 fixed that.
-
@lizhen2017 it can take some time to compile as it does some auto-tuning during engine creation. Would you like to see a feature that would allow you to cache the engine creation? If so I can add it to our backlog.
-
@KellenSunderland |
-
By the way, how can I install mxnet-tensorrt-cu90 from source? I find that the pip-installed mxnet-tensorrt package only supports forwarding well; functions such as dataset processing cannot be accessed.
-
Is there any example of how I can use TensorRT with MXNet from C++?
-
MXNet 1.3.0 supports TensorRT for inference, but I cannot find any tutorials or examples showing how to run inference on a model in FP16 or INT8 mode. Is there a way to do it?
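For anyone landing here later: FP16 appears in releases after 1.3 via the contrib TensorRT module. A rough sketch, under the assumption that `mx.contrib.tensorrt.set_use_fp16`, `get_backend_symbol('TensorRT')` and `init_tensorrt_params` are available in your build (`resnet-18` and the input shape are placeholders):

```python
import mxnet as mx

# Assumption: set_use_fp16 is available in this build (it is not part
# of the 1.3.0 API). It asks TensorRT to build FP16 engines.
mx.contrib.tensorrt.set_use_fp16(True)

# 'resnet-18' and the input shape are placeholders.
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-18', 0)
batch_shape = (1, 3, 224, 224)

# Partition the graph so that supported subgraphs are handed to TensorRT.
trt_sym = sym.get_backend_symbol('TensorRT')
arg_params, aux_params = mx.contrib.tensorrt.init_tensorrt_params(
    trt_sym, arg_params, aux_params)

executor = trt_sym.simple_bind(ctx=mx.gpu(0), data=batch_shape,
                               grad_req='null', force_rebind=True)
executor.copy_params_from(arg_params, aux_params)

out = executor.forward(is_train=False,
                       data=mx.nd.zeros(batch_shape, ctx=mx.gpu(0)))[0]
```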