# Update information for ROCm 5.4 for MIGraphX and ROCm Builds #13813
```diff
@@ -1,54 +1,90 @@
 ---
 title: AMD - MIGraphX
 description: Instructions to execute ONNX Runtime with the AMD MIGraphX execution provider
-grand_parent: Execution Providers
-parent: Community-maintained
-nav_order: 4
+parent: Execution Providers
+nav_order: 10
 redirect_from: /docs/reference/execution-providers/MIGraphX-ExecutionProvider
 ---

 # MIGraphX Execution Provider
 {: .no_toc }

 The [MIGraphX](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/) execution provider uses AMD's Deep Learning graph optimization engine to accelerate ONNX models on AMD GPUs.

 ## Contents
 {: .no_toc }

 * TOC placeholder
 {:toc}

 ## Install

 **NOTE** Please make sure to install the proper version of PyTorch specified here: [PyTorch Version](../install/#training-install-table-for-all-languages).

 For nightly PyTorch builds, please see [PyTorch home](https://pytorch.org/) and select ROCm as the Compute Platform.

 Pre-built binaries of ONNX Runtime with the MIGraphX EP are published for most language bindings. Please reference [Install ORT](../install).

 ## Requirements

 |ONNX Runtime|MIGraphX|
 |---|---|
 |main|5.4|
 |1.13|5.4|
 |1.13|5.3.2|
 |1.12|5.2.3|
 |1.12|5.2|
```
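Since the MIGraphX EP is only usable when it was compiled into the installed wheel, a quick runtime check can save time before debugging version mismatches against the table above. A minimal sketch, assuming only that an `onnxruntime` package is installed:

```python
import onnxruntime as ort

# Lists the execution providers compiled into the installed wheel;
# a MIGraphX-enabled build should report 'MIGraphXExecutionProvider'.
print(ort.get_available_providers())
```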
````diff
 ## Build

-For build instructions, please see the [BUILD page](../../build/eps.md#amd-migraphx).
+For build instructions, please see the [BUILD page](../build/eps.md#amd-migraphx).

 ## Usage

 ### C/C++

 ```c++
 Ort::Env env = Ort::Env{ORT_LOGGING_LEVEL_ERROR, "Default"};
-Ort::SessionOptions sf;
+Ort::SessionOptions so;
 int device_id = 0;
-Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_MiGraphX(sf, device_id));
+Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_MIGraphX(so, device_id));
 ```

 You can check [here](https://github.com/scxiao/ort_test/tree/master/char_rnn) for a sample C/C++ program.

-The C API details are [here](../../get-started/with-c.md).
+The C API details are [here](../get-started/with-c.md).

 ### Python

 When using the Python wheel from the ONNX Runtime build with the MIGraphX execution provider, it will be automatically
 prioritized over the default GPU or CPU execution providers. There is no need to separately register the execution
-provider. Python API details are [here](../../api/python/api_summary.html).
+provider.
+
+Python API details are [here](https://onnxruntime.ai/docs/api/python/api_summary.html).

 *Note that the next release (ORT 1.10) will require explicitly setting the providers parameter if you want to use an execution provider other than the default CPU provider when instantiating InferenceSession.*

 You can check [here](https://github.com/scxiao/ort_test/tree/master/python/run_onnx) for a Python script that runs a
 model on either the CPU or the MIGraphX execution provider.

 ## Configuration Options

 MIGraphX provides an environment variable, ORT_MIGRAPHX_FP16_ENABLE, to enable FP16 mode.
````
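One way to exercise this option from Python is to set the variable before the session is created; a minimal sketch, assuming a MIGraphX-enabled build (the model path is a placeholder, and the assumption is that the EP reads the variable when it compiles the model):

```python
import os

# Assumption: the EP reads this when the session compiles the model,
# so it must be set before InferenceSession is created. "0" disables it.
os.environ["ORT_MIGRAPHX_FP16_ENABLE"] = "1"

import onnxruntime as ort

session = ort.InferenceSession(
    "<path to model>",  # placeholder path
    providers=["MIGraphXExecutionProvider", "CPUExecutionProvider"],
)
```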
````diff
 ## Performance Tuning

-For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../../performance/tune-performance.md)
+For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../performance/tune-performance.md)

 ## Samples

 ### Python

 When using [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/main/onnxruntime/test/perftest#onnxruntime-performance-test), use the flag `-e migraphx`.

 ```python
 import onnxruntime as ort

 model_path = '<path to model>'

 providers = [
     'MIGraphXExecutionProvider',
     'CPUExecutionProvider',
 ]

 session = ort.InferenceSession(model_path, providers=providers)
 ```
````
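The session created above can then be driven like any other ONNX Runtime session; a hedged sketch continuing that sample, assuming a single float32 input (symbolic dimensions are filled in with 1):

```python
import numpy as np

# Build a dummy feed from the model's declared input shape;
# non-integer (symbolic) dimensions are replaced with 1.
input_meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy_input = np.random.rand(*shape).astype(np.float32)

outputs = session.run(None, {input_meta.name: dummy_input})  # None = all outputs
print(session.get_providers())  # shows which EPs the session actually selected
```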
```diff
@@ -31,11 +31,11 @@ ONNX Runtime supports many different execution providers today. Some of the EPs
 |Default CPU|[NVIDIA CUDA](../execution-providers/CUDA-ExecutionProvider.md)|[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[Rockchip NPU](../execution-providers/community-maintained/RKNPU-ExecutionProvider.md) (*preview*)|
 |[Intel DNNL](../execution-providers/oneDNN-ExecutionProvider.md)|[NVIDIA TensorRT](../execution-providers/TensorRT-ExecutionProvider.md)|[ARM Compute Library](../execution-providers/community-maintained/ACL-ExecutionProvider.md) (*preview*)|[Xilinx Vitis-AI](../execution-providers/community-maintained/Vitis-AI-ExecutionProvider.md) (*preview*)|
 |[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|[DirectML](../execution-providers/DirectML-ExecutionProvider.md)|[Android Neural Networks API](../execution-providers/NNAPI-ExecutionProvider.md)|[Huawei CANN](../execution-providers/community-maintained/CANN-ExecutionProvider.md) (*preview*)|
-|[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[AMD MIGraphX](../execution-providers/community-maintained/MIGraphX-ExecutionProvider.md) (*preview*)|[ARM-NN](../execution-providers/community-maintained/ArmNN-ExecutionProvider.md) (*preview*)|[Azure](../execution-providers/Azure-ExecutionProvider.md) (*preview*)|
-||[AMD ROCm](../execution-providers/ROCm-ExecutionProvider.md) (*preview*)|[CoreML](../execution-providers/CoreML-ExecutionProvider.md) (*preview*)|
-||[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|
-||[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[Qualcomm SNPE](../execution-providers/SNPE-ExecutionProvider.md)|
-|[XNNPACK](../execution-providers/Xnnpack-ExecutionProvider.md)||[XNNPACK](../execution-providers/Xnnpack-ExecutionProvider.md)|
+|[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[AMD MIGraphX](../execution-providers/MIGraphX-ExecutionProvider.md)|[ARM-NN](../execution-providers/community-maintained/ArmNN-ExecutionProvider.md) (*preview*)|
+|[XNNPACK](../execution-providers/Xnnpack-ExecutionProvider.md)|[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[CoreML](../execution-providers/CoreML-ExecutionProvider.md) (*preview*)|
+||[AMD ROCm](../execution-providers/ROCm-ExecutionProvider.md)|[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|
+||[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|[Qualcomm SNPE](../execution-providers/SNPE-ExecutionProvider.md)|
+|||[XNNPACK](../execution-providers/Xnnpack-ExecutionProvider.md)|

 ### Add an Execution Provider
```

**Contributor:** Same comment about "preview" as on the main matrix page.

**Contributor (Author):** Sure, let me add this in.

**Contributor (Author):** Done, let me know if it looks good.

**Contributor:** It looks like the XNNPACK originally listed in the IoT/Edge/Mobile column got shifted accidentally. Can you fix?

**Contributor (Author):** Updated.
```diff
@@ -245,12 +245,8 @@ The _location_ needs to be specified for any specific version other than the default
 ||Official build (location)|Nightly build (location)|
 |---|---|---|
-|PyTorch 1.8.1 (CUDA 10.2)|[**onnxruntime_stable_torch181.cu102**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch181.cu102.html)|[onnxruntime_nightly_torch181.cu102](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch181.cu102.html)|
-|PyTorch 1.8.1 (CUDA 11.1)|[**onnxruntime_stable_torch181.cu111**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch181.cu111.html)|[onnxruntime_nightly_torch181.cu111](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch181.cu111.html)|
-|PyTorch 1.9 (CUDA 10.2) **Default**|[**onnxruntime-training**](https://pypi.org/project/onnxruntime-training/)|[onnxruntime_nightly_torch190.cu102](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch190.cu102.html)|
-|PyTorch 1.9 (CUDA 11.1)|[**onnxruntime_stable_torch190.cu111**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch190.cu111.html)|[onnxruntime_nightly_torch190.cu111](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch190.cu111.html)|
-|[*Preview*] PyTorch 1.8.1 (ROCm 4.2)|[**onnxruntime_stable_torch181.rocm42**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch181.rocm42.html)|[onnxruntime_nightly_torch181.rocm42](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch181.rocm42.html)|
-|[*Preview*] PyTorch 1.9 (ROCm 4.2)|[**onnxruntime_stable_torch190.rocm42**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch190.rocm42.html)|[onnxruntime_nightly_torch190.rocm42](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch190.rocm42.html)|
-|[*Preview*] PyTorch 1.11 (ROCm 5.1.1)|[**onnxruntime_stable_torch1110.rocm511**](https://download.onnxruntime.ai/onnxruntime_stable_rocm511.html)|[onnxruntime_nightly_torch1110.rocm511](https://download.onnxruntime.ai/onnxruntime_nightly_rocm511.html)|
-|[*Preview*] PyTorch 1.11 (ROCm 5.2)||[onnxruntime_nightly_torch1110.rocm52](https://download.onnxruntime.ai/onnxruntime_nightly_rocm511.html)|
-|[*Preview*] PyTorch 1.12.1 (ROCm 5.2.3)||[onnxruntime_nightly_torch1121.rocm523](https://download.onnxruntime.ai/onnxruntime_nightly_rocm523.html)|
+|PyTorch 1.11 (ROCm 5.2)||[onnxruntime_nightly_torch1110.rocm52](https://download.onnxruntime.ai/onnxruntime_stable_rocm52.html)|
+|PyTorch 1.12.1 (ROCm 5.2.3)||[onnxruntime_nightly_torch1121.rocm523](https://download.onnxruntime.ai/onnxruntime_nightly_rocm523.html)|
+|PyTorch 1.13 (ROCm 5.2.3)||[onnxruntime_nightly_torch1130.rocm523](https://download.onnxruntime.ai/onnxruntime_nightly_rocm523.html)|
+|PyTorch 1.12.1 (ROCm 5.3.2)||[onnxruntime_nightly_torch1121.rocm532](https://download.onnxruntime.ai/onnxruntime_nightly_rocm532.html)|
+|PyTorch 1.13.1 (ROCm 5.4)||[onnxruntime_nightly_torch1131.rocm54](https://download.onnxruntime.ai/onnxruntime_nightly_rocm54.html)|
```

**Contributor:** Is this intended to remove the CUDA builds? @ytaous / @PeixuanZuo
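For orientation: the listed locations are pip "find-links" pages, so a build from this table is typically installed by pointing pip at the corresponding page. A hedged sketch for the new ROCm 5.4 nightly row; the `onnxruntime-training` package name and the `--pre` flag are assumptions, so check the linked page for the wheel names it actually hosts:

```python
# Hypothetical install helper: shells out to pip with the find-links page
# from the table. Package name and --pre are assumptions, not confirmed here.
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install", "--pre",
    "onnxruntime-training",
    "-f", "https://download.onnxruntime.ai/onnxruntime_nightly_rocm54.html",
])
```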
**Contributor:** There is already a page for this EP here: https://onnxruntime.ai/docs/execution-providers/community-maintained/MIGraphX-ExecutionProvider.html. Can/should it be consolidated?

**Contributor (Author):** I added this to mirror what was done for the ROCm EP documentation. Should we be removing the other one, then, is what you're saying?

**Contributor (Author):** Consolidated the info into this document page instead and removed the one under community-maintained.