Enable CUDA provider option configuration for C# #10188
Merged
Conversation
hariharans29 commented Jan 4, 2022
pranavsharma (Contributor) reviewed Jan 5, 2022 and left a comment:
LGTM 👍
include/onnxruntime/core/providers/cuda/cuda_provider_options.h
skottmckay reviewed Jan 5, 2022
yuslepukhin reviewed Jan 5, 2022
pranavsharma previously approved these changes Jan 5, 2022
pranavsharma approved these changes Jan 6, 2022
Description:
This is the CUDA EP equivalent of PR #7808 (all of the design choices have already been discussed there).

This change makes the OrtCUDAProviderOptions struct opaque and supports setting any option whose value can be expressed as a string. As with the TensorRT PR, an API to set the options that cannot be expressed as strings (user_compute_stream and default_memory_arena_cfg) is not yet supported and will be added incrementally. Since these options are not commonly used, it is acceptable to leave them unsupported for now; users who still need them can continue to use the legacy struct and its corresponding ORT APIs.
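As a rough illustration, here is a minimal C# sketch of the string-based configuration flow this change enables. The type and method names used below (OrtCUDAProviderOptions, UpdateOptions, SessionOptions.MakeSessionOptionWithCudaProvider) and the specific option keys are assumptions based on the description in this PR, not a definitive reflection of the final C# surface.

```csharp
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;

class CudaProviderOptionsExample
{
    static void Main()
    {
        // Create the opaque CUDA provider options object and populate it with
        // string key/value pairs. The keys below are representative examples;
        // consult the CUDA EP documentation for the authoritative list.
        using var cudaOptions = new OrtCUDAProviderOptions();   // assumed C# wrapper type
        cudaOptions.UpdateOptions(new Dictionary<string, string>
        {
            { "device_id", "0" },
            { "gpu_mem_limit", "2147483648" },            // 2 GB
            { "cudnn_conv_algo_search", "DEFAULT" },
            { "cudnn_conv_use_max_workspace", "1" },      // new option surfaced by this PR
        });

        // Attach the CUDA EP to the session options and create the session.
        using var sessionOptions =
            SessionOptions.MakeSessionOptionWithCudaProvider(cudaOptions);   // assumed factory method
        using var session = new InferenceSession("model.onnx", sessionOptions);
    }
}
```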
Motivation and Context:
- Enables basic configuration of the CUDA EP via C#. Resolves #9730 (C# adding option for cudnn_conv_algo_search : DEFAULT) and #8995 (How to apply "gpu_mem_limit" to CUDA Execution Provider in C#?).
- Introduces a new C API CUDA EP configuration option, cudnn_conv_use_max_workspace. This option was already available via Python but not exposed to C API users, and it is particularly important for fp16 Conv-heavy models; see #9162 (Resnet - converted Onnx model is 2.9X slower than pyTorch model in V100 gpu) and #9916 ([Doc] Performance tuning of convolution heavy models with the CUDA EP) for details on why. By the same reasoning, this PR should also resolve the perf complaint in #9420 (Why Float16 model takes twice as long as Float under GPU).
- Addresses a backlog item to make OrtCUDAProviderOptions opaque.