Added TensorRT v8.0.1 #4347

Merged 3 commits from stemann/tensorrt into JuliaPackaging:master on Mar 16, 2022

Conversation

@stemann (Contributor) commented on Jan 31, 2022

Added NVIDIA TensorRT, adapted from the CUDNN build script.

@stemann stemann marked this pull request as ready for review January 31, 2022 21:42
@stemann (Contributor, Author) commented on Jan 31, 2022

@maleadt Will a cuda label on this PR allow the download from Nvidia's secured servers?

@maleadt (Contributor) commented on Feb 8, 2022

This needs separate entries for CUDA 11.0+: https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/8.2.3.0/tars/tensorrt-8.2.3.0.linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz

Also, why are you using TensorRT 8.0 while there's 8.2 Update 2 already?

I can put the TensorRT tarballs on the Yggdrasil server, if you want.

@stemann (Contributor, Author) commented on Feb 9, 2022

> This needs separate entries for CUDA 11.0+: https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/8.2.3.0/tars/tensorrt-8.2.3.0.linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz

What do you mean here?

To do multiple builds for each CUDA version, like in the CUDNN recipe?

> Also, why are you using TensorRT 8.0 while there's 8.2 Update 2 already?

My intent is to add recipes for all GA versions of TensorRT, from 8.0 to 8.2 Update 2, due to:

  1. Additional platforms (powerpc64le) and/or CUDA versions supported by the earlier versions.
  2. A parallel effort to get ONNXRuntime (Added ONNXRuntime (cpu) #4369) available with the CUDA ([ONNXRuntime] Added builds with CUDA and TensorRT Execution Providers #4386) and TensorRT execution providers, on CUDA 10.2 on aarch64-linux-gnu ([CUDA_full] Added CUDA 10.2 for aarch64-linux-gnu #4349), CUDA 10.2 being the last version officially supported on the Jetson Nano.

> I can put the TensorRT tarballs on the Yggdrasil server, if you want.

That would be great. Feel free to add all GA versions if you think the aim above makes sense :-)

@maleadt (Contributor) commented on Feb 9, 2022

> To do multiple builds for each CUDA version, like in the CUDNN recipe?

Yes; if there are separate builds for different CUDA versions, we need to take those into account.
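
For reference, a minimal sketch of what per-CUDA-version entries could look like in a build_tarballs.jl recipe, following the pattern of the CUDNN recipe. The URLs, SHA256 hashes, library names, and install script below are illustrative placeholders, not the contents of this PR's recipe:

```julia
using BinaryBuilder

name = "TensorRT"
version = v"8.0.1"

# One source archive per CUDA version; tagging the platform with `cuda` lets the
# package resolver pick the build matching the installed CUDA runtime.
# NOTE: URLs and SHA256 hashes are placeholders.
builds = [
    (Platform("x86_64", "linux"; cuda = "10.2"),
     ArchiveSource("https://example.com/TensorRT-8.0.1.6.Linux.x86_64-gnu.cuda-10.2.cudnn8.2.tar.gz",
                   "1111111111111111111111111111111111111111111111111111111111111111")),
    (Platform("x86_64", "linux"; cuda = "11.3"),
     ArchiveSource("https://example.com/TensorRT-8.0.1.6.Linux.x86_64-gnu.cuda-11.3.cudnn8.2.tar.gz",
                   "2222222222222222222222222222222222222222222222222222222222222222")),
]

# TensorRT ships pre-built binaries, so "building" is just moving files into place.
script = raw"""
cd ${WORKSPACE}/srcdir/TensorRT-*
mkdir -p ${libdir}
mv lib/lib*.so* ${libdir}/
install_license doc/*.txt  # placeholder license path
"""

products = [
    LibraryProduct("libnvinfer", :libnvinfer),
    LibraryProduct("libnvinfer_plugin", :libnvinfer_plugin),
]

dependencies = [Dependency("CUDNN_jll")]

# Build each CUDA-specific variant separately, as the CUDNN recipe does.
for (platform, source) in builds
    build_tarballs(ARGS, name, version, [source], script, [platform],
                   products, dependencies)
end
```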

@maleadt (Contributor) commented on Feb 14, 2022

I've uploaded the following:

003cd632d978205de8b3140da743a9d39647ccb9959a1c219d34201d75a0a49e-TensorRT-8.0.1.6.Windows10.x86_64.cuda-10.2.cudnn8.2.zip
110bbfd69fe27e298e1ad1bc35300569069ffeb8b691f48bcaf34703e1bafb96-TensorRT-8.0.1.6.Linux.x86_64-gnu.cuda-10.2.cudnn8.2.tar.gz
def6a5ee50bed25a68a9c9e22ec671a8f29ee5414bde47c5767bd279e5596f88-TensorRT-8.0.1.6.Linux.x86_64-gnu.cuda-11.3.cudnn8.2.tar.gz
e51b382e931ae9032e431fff218cd2cf2d2b7a7c66c7a6bdf453557612466ae1-TensorRT-8.0.1.6.Windows10.x86_64.cuda-11.3.cudnn8.2.zip
ea322da72b1b1ca6b8d0715ab14668c54f7d00ad22695d41a85a7055df9f63e1-TensorRT-8.0.1.6.Ubuntu-20.04.aarch64-gnu.cuda-11.3.cudnn8.2.tar.gz
fd33a32085c468f638505e2603936fa4e3f2a3fa46989570fa0b9e31a9e6914a-TensorRT-8.0.1.6.CentOS-8.3.ppc64le-gnu.cuda-11.3.cudnn8.2.tar.gz

The job currently fails because of incorrect capitalization of these filenames in the recipe.

@stemann stemann marked this pull request as draft February 14, 2022 11:22
@stemann (Contributor, Author) commented on Mar 9, 2022

Awaiting #4483 - will add support for CUDA 10.2 on aarch64-linux-gnu once #4483 is merged.

@stemann stemann force-pushed the stemann/tensorrt branch 3 times, most recently from 6856157 to 06f6b9d Compare March 11, 2022 09:02
@stemann stemann marked this pull request as ready for review March 12, 2022 10:40
@stemann (Contributor, Author) commented on Mar 12, 2022

@maleadt If you can upload archives for #4602, #4603, #4604, and #4605 as well, the GA versions of TensorRT v8 will be complete.

@maleadt (Contributor) commented on Mar 14, 2022

TensorRT-8.0.3.4.Linux.x86_64-gnu.cuda-10.2.cudnn8.2.tar.gz: 2f17178307b538245fc03b04b0d2c891e36c39cc772ae1794a3fa0d9d63a583d
TensorRT-8.0.3.4.Linux.x86_64-gnu.cuda-11.3.cudnn8.2.tar.gz: 3177435024ff4aa5a6dba8c1ed06ab11cc0e1bf3bb712dfa63a43422f41313f3
TensorRT-8.0.3.4.Windows10.x86_64.cuda-10.2.cudnn8.2.zip: 315c2bd6a2257f4fef8662d0cc4c73ae41e6641f6a3ef6227eb43b0f89abf68a
TensorRT-8.0.3.4.Windows10.x86_64.cuda-11.3.cudnn8.2.zip: a347d6e7981d0497ba60c5de78716101d73105946e1ff745f0f426f51ea691b0
TensorRT-8.2.1.8.Linux.x86_64-gnu.cuda-10.2.cudnn8.2.tar.gz: 96160493b88526f4eb136b29399c3c8bb2ef5e2dd4f8325a44104add10edd35b
TensorRT-8.2.1.8.Linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz: 3e9a9cc4ad0e5ae637317d924dcddf66381f4db04e2571f0f2e6ed5a2a51f247
TensorRT-8.2.1.8.Ubuntu-20.04.aarch64-gnu.cuda-11.4.cudnn8.2.tar.gz: 7c21312bf552904339d5f9270dc40c39321558e5993d93e4f94a0ed47d9a8a79
TensorRT-8.2.1.8.Windows10.x86_64.cuda-10.2.cudnn8.2.zip: d00f9d6f0d75d572f4b5a0041408650138f4f3aac76902cbfd1580448f75ee47
TensorRT-8.2.1.8.Windows10.x86_64.cuda-11.4.cudnn8.2.zip: a900840f3839ae14fbd9dc837eb6335d3cb4f217f1f29604ef72fa88e8994bcd
TensorRT-8.2.2.1.Linux.x86_64-gnu.cuda-10.2.cudnn8.2.tar.gz: 3be2461e5ad89af6ea5aae9d431fc4671b955fce639d028d250c7a24869b3324
TensorRT-8.2.2.1.Linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz: da130296ac6636437ff8465812eb55dbab0621747d82dc4fe9b9376f00d214af
TensorRT-8.2.2.1.Ubuntu-20.04.aarch64-gnu.cuda-11.4.cudnn8.2.tar.gz: ed3bea21f44da7b43e93803a1e0ce0f4d68678afe5c7c0393b3e41d5c099555c
TensorRT-8.2.2.1.Windows10.x86_64.cuda-10.2.cudnn8.2.zip: 1a3aaeea1db86937bfb2b299c90ed4aae7cd9f7544ba34947cd9ba0d4200a8cf
TensorRT-8.2.2.1.Windows10.x86_64.cuda-11.4.cudnn8.2.zip: 9efd246b1f518314f8912c6997fe8064ee9d557ae01665c2c4cb4f1a11ed8865
TensorRT-8.2.3.0.Linux.x86_64-gnu.cuda-10.2.cudnn8.2.tar.gz: 394dcfa39c8f4cfbcab069e81c5d4ae8c10d64ac3ec70ddc2468a67c930e222b
TensorRT-8.2.3.0.Linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz: 207c0c4820e5acf471925b7da4c59d48c58c265a27d88287c4263038c389e106
TensorRT-8.2.3.0.Ubuntu-20.04.aarch64-gnu.cuda-11.4.cudnn8.2.tar.gz: 6f18651b153d2ce97ccc4556b8dd11847bde177336767487e1a22095e3c16c08
TensorRT-8.2.3.0.Windows10.x86_64.cuda-10.2.cudnn8.2.zip: dc0e70414a11fdc459d338d78b222104198cc1c10789ebefc8aac9de15d9cc3f
TensorRT-8.2.3.0.Windows10.x86_64.cuda-11.4.cudnn8.2.zip: f3aa6ebe5d554b10e5b7bb4db9357b25746a600b95e17f3cf49686cfeeddb0ff

@stemann (Contributor, Author) commented on Mar 14, 2022

Looks ready :-)

@giordano (Member) commented

@maleadt good to go? 🙂

@stemann (Contributor, Author) commented on Mar 15, 2022

> @maleadt good to go? 🙂

From my side: yes, if using mv to install is OK :-)

@maleadt maleadt merged commit 3471774 into JuliaPackaging:master Mar 16, 2022
@stemann stemann deleted the stemann/tensorrt branch March 16, 2022 08:00
@stemann (Contributor, Author) commented on Mar 16, 2022

@maleadt Hmm... the CI (on Julia 1: 1.7.2 on x86_64-linux) for the General registry failed: JuliaRegistries/General#56696
It looks like it gets a CUDNN_jll as intended, but it fails to find a libcudnn.so.8 to load. Any idea what went wrong?

@stemann (Contributor, Author) commented on Mar 16, 2022

> @maleadt Hmm... the CI (on Julia 1: 1.7.2 on x86_64-linux) for the General registry failed: JuliaRegistries/General#56696
> It looks like it gets a CUDNN_jll as intended, but it fails to find a libcudnn.so.8 to load. Any idea what went wrong?

@giordano @maleadt: Any ideas? Importing the TensorRT JLL fails on the first import and succeeds on the second:

$ docker run -it --rm julia:1.7.2
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.7.2 (2022-02-06)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

(@v1.7) pkg> add https://github.com/JuliaBinaryWrappers/TensorRT_jll.jl
  Installing known registries into `~/.julia`
     Cloning git-repo `https://github.com/JuliaBinaryWrappers/TensorRT_jll.jl`
    Updating git-repo `https://github.com/JuliaBinaryWrappers/TensorRT_jll.jl`
    Updating registry at `~/.julia/registries/General.toml`
   Resolving package versions...
   Installed Preferences ───── v1.2.5
   Installed CUDNN_jll ─────── v8.3.2+0
   Installed CUDA_loader_jll ─ v0.2.1+4
   Installed JLLWrappers ───── v1.4.1
    Updating `~/.julia/environments/v1.7/Project.toml`
  [2eaff018] + TensorRT_jll v8.0.1+0 `https://github.com/JuliaBinaryWrappers/TensorRT_jll.jl#main`
    Updating `~/.julia/environments/v1.7/Manifest.toml`
...
Precompiling project...
  8 dependencies successfully precompiled in 114 seconds
(@v1.7) pkg>
julia> import TensorRT_jll
  Downloaded artifact: TensorRT
 Downloading artifact: TensorRT
...
  Downloaded artifact: TensorRT
ERROR: InitError: could not load library "/root/.julia/artifacts/609a96687dab79b062bb0c63657f8e5da60a82b9/lib/libnvinfer_plugin.so"
libcudnn.so.8: cannot open shared object file: No such file or directory
Stacktrace:
 [1] dlopen(s::String, flags::UInt32; throw_error::Bool)
   @ Base.Libc.Libdl ./libdl.jl:117
 [2] dlopen(s::String, flags::UInt32)
   @ Base.Libc.Libdl ./libdl.jl:117
 [3] macro expansion
   @ ~/.julia/packages/JLLWrappers/QpMQW/src/products/library_generators.jl:54 [inlined]
 [4] __init__()
   @ TensorRT_jll ~/.julia/packages/TensorRT_jll/ebuib/src/wrappers/x86_64-linux-gnu-cuda+11.3.jl:19
 [5] _include_from_serialized(path::String, depmods::Vector{Any})
   @ Base ./loading.jl:768
 [6] _require_search_from_serialized(pkg::Base.PkgId, sourcepath::String)
   @ Base ./loading.jl:854
 [7] _require(pkg::Base.PkgId)
   @ Base ./loading.jl:1097
 [8] require(uuidkey::Base.PkgId)
   @ Base ./loading.jl:1013
 [9] require(into::Module, mod::Symbol)
   @ Base ./loading.jl:997
during initialization of module TensorRT_jll

julia> import TensorRT_jll

julia>

Importing/loading CUDNN_jll prior to TensorRT_jll does not seem to have an effect.
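
For what it's worth, a small check one could run here (assuming CUDNN_jll uses the standard JLLWrappers naming and exports libcudnn_path; this is not from the CI log above) to see whether CUDNN's library is resolvable at all:

```julia
using Libdl
import CUDNN_jll

# Where CUDNN_jll thinks its library lives (standard JLLWrappers naming assumed).
println(CUDNN_jll.libcudnn_path)

# Try to resolve libcudnn by its SONAME; if this throws, the dynamic loader
# cannot find it either, which would explain the libnvinfer_plugin.so failure.
Libdl.dlopen("libcudnn.so.8")
```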

@giordano (Member) commented

The library product should probably not be dlopened automatically? https://docs.binarybuilder.org/stable/reference/#BinaryBuilderBase.LibraryProduct
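
A minimal sketch of that suggestion, assuming product names like those used above (LibraryProduct accepts a dont_dlopen keyword, which makes the generated JLL wrapper skip the automatic dlopen in its __init__):

```julia
# Illustrative only; the actual product names are those in the TensorRT recipe.
products = [
    LibraryProduct("libnvinfer", :libnvinfer; dont_dlopen = true),
    LibraryProduct("libnvinfer_plugin", :libnvinfer_plugin; dont_dlopen = true),
]
```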

Labels
cuda 🕹️ Builders related to Nvidia CUDA