A simple, easy-to-use standalone class that selects the OpenVINO Execution Provider for ONNX models whenever the target inference device is from Intel. If the inference device is a non-Intel GPU, the DirectML (DML) Execution Provider is selected instead. The CPU Execution Provider is the fallback.
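The selection logic described above can be sketched in Python against the ONNX Runtime API. The function name `select_providers` and the `vendor` parameter are assumptions for illustration and not the repository's actual API; the provider identifier strings are the standard ONNX Runtime names.

```python
def select_providers(vendor: str, available: list[str]) -> list[str]:
    """Return an ordered ONNX Runtime execution-provider list.

    vendor: lowercase device vendor string, e.g. "intel" or "nvidia"
            (hypothetical parameter, not from the repository).
    available: providers reported by onnxruntime.get_available_providers().
    """
    providers = []
    # Intel device: prefer the OpenVINO execution provider.
    if vendor == "intel" and "OpenVINOExecutionProvider" in available:
        providers.append("OpenVINOExecutionProvider")
    # Non-Intel GPU: fall through to the DirectML execution provider.
    elif "DmlExecutionProvider" in available:
        providers.append("DmlExecutionProvider")
    # CPU execution provider is always the final fallback.
    providers.append("CPUExecutionProvider")
    return providers
```

The resulting list can be passed to `onnxruntime.InferenceSession(model_path, providers=...)`, which tries providers in order and falls back down the list.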
fredrickomondi/ExecutionProviderManager