
[ONNXRuntime] Added builds with CUDA and TensorRT Execution Providers #4386

Merged

Conversation

@stemann (Contributor) commented on Feb 6, 2022:

Likely dependent on #4369

@stemann force-pushed the stemann/onnxruntime_providers_cuda branch 9 times, most recently from 93cf3f8 to 82a33f6, on February 7, 2022 15:23
@stemann mentioned this pull request on Feb 9, 2022
@stemann force-pushed the stemann/onnxruntime_providers_cuda branch 5 times, most recently from e1db9ad to c2cdf0c, on February 14, 2022 01:03
@stemann changed the title from "WIP: Added onnxruntime_providers_cuda" to "Added ONNXRuntimeProvidersCUDA" on Feb 14, 2022
@stemann marked this pull request as ready for review on February 14, 2022 01:06
@stemann marked this pull request as draft on February 15, 2022 06:48
@stemann force-pushed the stemann/onnxruntime_providers_cuda branch 3 times, most recently from b245017 to 6b2963b, on March 6, 2022 08:45
@stemann force-pushed the stemann/onnxruntime_providers_cuda branch from 6b2963b to c2cdf0c on March 6, 2022 14:55
@stemann force-pushed the stemann/onnxruntime_providers_cuda branch 4 times, most recently from 7857a03 to f84461d, on April 30, 2022 11:14
@giordano requested a review from maleadt on May 3, 2022 18:39
@stemann marked this pull request as draft on August 3, 2022 07:28
@stemann force-pushed the stemann/onnxruntime_providers_cuda branch 2 times, most recently from f3dbc70 to 0e641b2, on August 24, 2022 11:29
@stemann force-pushed the stemann/onnxruntime_providers_cuda branch from 0e641b2 to 17c64ca on August 24, 2022 11:33
@stemann marked this pull request as ready for review on August 24, 2022 11:48
@stemann marked this pull request as draft on August 24, 2022 11:51
@stemann marked this pull request as ready for review on August 24, 2022 11:52
@stemann (Contributor, Author) commented on Sep 1, 2022:

@maleadt When you have time, could you take a look?

@maleadt (Contributor) left a comment:

LGTM. I mean, this isn't the kind of generic recipe we'd want once everything is figured out (for that it would need to support multiple versions of CUDA), but we haven't figured any of that out yet, so in the meantime this seems OK if you can use the generated artifacts.

Review comments on O/ONNXRuntime/build_tarballs.jl (resolved)
@stemann marked this pull request as draft on September 10, 2022 11:59
@stemann force-pushed the stemann/onnxruntime_providers_cuda branch from da3e36d to be022d6 on September 10, 2022 12:20
@stemann marked this pull request as ready for review on September 10, 2022 12:20
@stemann marked this pull request as draft on September 11, 2022 15:13
@stemann force-pushed the stemann/onnxruntime_providers_cuda branch from 0b45c10 to ce62257 on September 11, 2022 15:14
@stemann force-pushed the stemann/onnxruntime_providers_cuda branch from ce62257 to cfadc5c on September 12, 2022 13:46
@stemann marked this pull request as ready for review on September 12, 2022 13:46
@giordano (Member) commented:

I'll take Tim's comment as approval. Thanks @stemann for the hard work!

4 participants