diff --git a/docs/build/eps.md b/docs/build/eps.md
index 083123415fcde..01aac4a075e56 100644
--- a/docs/build/eps.md
+++ b/docs/build/eps.md
@@ -629,13 +629,13 @@ The Vitis-AI execution provider is only supported on Linux.
 
 ## AMD MIGraphX
 
-See more information on the MIGraphX Execution Provider [here](../execution-providers/community-maintained/MIGraphX-ExecutionProvider.md).
+See more information on the MIGraphX Execution Provider [here](../execution-providers/MIGraphX-ExecutionProvider.md).
 
 ### Prerequisites
 {: .no_toc }
 
-* Install [ROCM](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html)
-  * The MIGraphX execution provider for ONNX Runtime is built and tested with ROCM3.3
+* Install [ROCm](https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.4/page/How_to_Install_ROCm.html#_How_to_Install)
+  * The MIGraphX execution provider for ONNX Runtime is built and tested with ROCm5.4
 * Install [MIGraphX](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX)
   * The path to the MIGraphX installation must be provided via the `--migraphx_home` parameter.
@@ -657,8 +657,8 @@ See more information on the ROCm Execution Provider [here](../execution-provider
 
 ### Prerequisites
 {: .no_toc }
 
-* Install [ROCm](https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.2.3/page/How_to_Install_ROCm.html#_How_to_Install)
  -  * The ROCm execution provider for ONNX Runtime is built and tested with ROCm5.2.3
+* Install [ROCm](https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.4/page/How_to_Install_ROCm.html#_How_to_Install)
+  * The ROCm execution provider for ONNX Runtime is built and tested with ROCm5.4
 
 ### Build Instructions
 {: .no_toc }
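Either build can be sanity-checked from Python by listing the providers compiled into the resulting wheel; a minimal sketch (the provider names below are the ones ONNX Runtime reports for these two EPs):

```python
import onnxruntime as ort

# A MIGraphX build should list 'MIGraphXExecutionProvider' and a ROCm build
# 'ROCMExecutionProvider'; 'CPUExecutionProvider' is always available.
print(ort.get_available_providers())
```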
diff --git a/docs/execution-providers/community-maintained/MIGraphX-ExecutionProvider.md b/docs/execution-providers/MIGraphX-ExecutionProvider.md
similarity index 51%
rename from docs/execution-providers/community-maintained/MIGraphX-ExecutionProvider.md
rename to docs/execution-providers/MIGraphX-ExecutionProvider.md
index 8c9badc57d65f..16cd253259e4f 100644
--- a/docs/execution-providers/community-maintained/MIGraphX-ExecutionProvider.md
+++ b/docs/execution-providers/MIGraphX-ExecutionProvider.md
@@ -1,25 +1,43 @@
 ---
 title: AMD - MIGraphX
 description: Instructions to execute ONNX Runtime with the AMD MIGraphX execution provider
-grand_parent: Execution Providers
-parent: Community-maintained
-nav_order: 4
+parent: Execution Providers
+nav_order: 10
 redirect_from: /docs/reference/execution-providers/MIGraphX-ExecutionProvider
 ---
 
 # MIGraphX Execution Provider
 {: .no_toc }
 
-The [MIGraphX](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/) execution provider uses AMD's Deep Learning graph optimization engine to accelerate ONNX model on AMD GPUs.
+The [MIGraphX](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/) execution provider uses AMD's Deep Learning graph optimization engine to accelerate ONNX models on AMD GPUs.
 
 ## Contents
 {: .no_toc }
 
 * TOC placeholder
 {:toc}
 
+## Install
+
+**NOTE:** Please make sure to install the proper version of PyTorch, as specified in the [training install table](../install/#training-install-table-for-all-languages).
+
+For nightly PyTorch builds, please see the [PyTorch home page](https://pytorch.org/) and select ROCm as the Compute Platform.
+
+Pre-built binaries of ONNX Runtime with the MIGraphX EP are published for most language bindings. Please reference [Install ORT](../install).
+
+## Requirements
+
+|ONNX Runtime|MIGraphX|
+|---|---|
+|main|5.4|
+|1.13|5.4|
+|1.13|5.3.2|
+|1.12|5.2.3|
+|1.12|5.2|
+
 ## Build
 
-For build instructions, please see the [BUILD page](../../build/eps.md#amd-migraphx).
+For build instructions, please see the [BUILD page](../build/eps.md#amd-migraphx).
 
 ## Usage
 
@@ -27,28 +45,46 @@ For build instructions, please see the [BUILD page](../../build/eps.md#amd-migra
 ### C/C++
 
 ```c++
 Ort::Env env = Ort::Env{ORT_LOGGING_LEVEL_ERROR, "Default"};
-Ort::SessionOptions sf;
+Ort::SessionOptions so;
 int device_id = 0;
-Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_MiGraphX(sf, device_id));
+Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_MIGraphX(so, device_id));
 ```
 
-You can check [here](https://github.com/scxiao/ort_test/tree/master/char_rnn) for a specific c/c++ program.
-
-The C API details are [here](../../get-started/with-c.md).
+The C API details are [here](../get-started/with-c.md).
 
 ### Python
+
 When using the Python wheel from an ONNX Runtime build with the MIGraphX execution provider, it will be automatically
 prioritized over the default GPU or CPU execution providers. There is no need to separately register the execution
-provider. Python APIs details are [here](../../api/python/api_summary.html).
+provider.
+
+Python API details are [here](https://onnxruntime.ai/docs/api/python/api_summary.html).
+
 *Note that the next release (ORT 1.10) will require explicitly setting the providers parameter if you want to use an execution provider other than the default CPU provider when instantiating InferenceSession.*
 
 You can check [here](https://github.com/scxiao/ort_test/tree/master/python/run_onnx) for a Python script to run a model on either the CPU or the MIGraphX execution provider.
+
 ## Configuration Options
 
 MIGraphX provides an environment variable, ORT_MIGRAPHX_FP16_ENABLE, to enable FP16 mode.
 
 ## Performance Tuning
 
-For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../../performance/tune-performance.md)
+For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../performance/tune-performance.md)
+
+## Samples
+
+### Python
 
-When/if using [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/main/onnxruntime/test/perftest#onnxruntime-performance-test), use the flag `-e migraphx`
+```python
+import onnxruntime as ort
+
+model_path = '<path to model>'
+
+providers = [
+    'MIGraphXExecutionProvider',
+    'CPUExecutionProvider',
+]
+
+session = ort.InferenceSession(model_path, providers=providers)
+```
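The Configuration Options section added above names ORT_MIGRAPHX_FP16_ENABLE without showing its use; a minimal sketch, assuming the variable is read when the provider is initialized (so it must be set before the session is created) and that `'1'` enables the mode. `'<path to model>'` is the same placeholder used in the sample above:

```python
import os

import onnxruntime as ort

# Assumption: MIGraphX reads ORT_MIGRAPHX_FP16_ENABLE at provider
# initialization, so set it before constructing the InferenceSession.
os.environ['ORT_MIGRAPHX_FP16_ENABLE'] = '1'

session = ort.InferenceSession(
    '<path to model>',
    providers=['MIGraphXExecutionProvider', 'CPUExecutionProvider'],
)
```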
diff --git a/docs/execution-providers/ROCm-ExecutionProvider.md b/docs/execution-providers/ROCm-ExecutionProvider.md
index aac4f7ba7aec4..57f35e6da953c 100644
--- a/docs/execution-providers/ROCm-ExecutionProvider.md
+++ b/docs/execution-providers/ROCm-ExecutionProvider.md
@@ -19,6 +19,10 @@ The ROCm Execution Provider enables hardware accelerated computation on AMD ROCm
 
 ## Install
 
+**NOTE:** Please make sure to install the proper version of PyTorch, as specified in the [training install table](../install/#training-install-table-for-all-languages).
+
+For nightly PyTorch builds, please see the [PyTorch home page](https://pytorch.org/) and select ROCm as the Compute Platform.
+
 Pre-built binaries of ONNX Runtime with ROCm EP are published for most language bindings. Please reference [Install ORT](../install).
 
 ## Requirements
@@ -26,7 +30,9 @@ Pre-built binaries of ONNX Runtime with ROCm EP are published for most language
 
 |ONNX Runtime|ROCm|
 |---|---|
-|main|5.2.3|
+|main|5.4|
+|1.13|5.4|
+|1.13|5.3.2|
 |1.12|5.2.3|
 |1.12|5.2|
diff --git a/docs/execution-providers/index.md b/docs/execution-providers/index.md
index 66d21e35607f7..d1c1d8857e597 100644
--- a/docs/execution-providers/index.md
+++ b/docs/execution-providers/index.md
@@ -31,11 +31,11 @@ ONNX Runtime supports many different execution providers today. Some of the EPs
 |Default CPU|[NVIDIA CUDA](../execution-providers/CUDA-ExecutionProvider.md)|[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[Rockchip NPU](../execution-providers/community-maintained/RKNPU-ExecutionProvider.md) (*preview*)|
 |[Intel DNNL](../execution-providers/oneDNN-ExecutionProvider.md)|[NVIDIA TensorRT](../execution-providers/TensorRT-ExecutionProvider.md)|[ARM Compute Library](../execution-providers/community-maintained/ACL-ExecutionProvider.md) (*preview*)|[Xilinx Vitis-AI](../execution-providers/community-maintained/Vitis-AI-ExecutionProvider.md) (*preview*)|
 |[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|[DirectML](../execution-providers/DirectML-ExecutionProvider.md)|[Android Neural Networks API](../execution-providers/NNAPI-ExecutionProvider.md)|[Huawei CANN](../execution-providers/community-maintained/CANN-ExecutionProvider.md) (*preview*)|
-|[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[AMD MIGraphX](../execution-providers/community-maintained/MIGraphX-ExecutionProvider.md) (*preview*)|[ARM-NN](../execution-providers/community-maintained/ArmNN-ExecutionProvider.md) (*preview*)|[Azure](../execution-providers/Azure-ExecutionProvider.md) (*preview*)|
-||[AMD ROCm](../execution-providers/ROCm-ExecutionProvider.md) (*preview*)|[CoreML](../execution-providers/CoreML-ExecutionProvider.md) (*preview*)|
-||[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|
-||[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[Qualcomm SNPE](../execution-providers/SNPE-ExecutionProvider.md)|
-|[XNNPACK](../execution-providers/Xnnpack-ExecutionProvider.md)||[XNNPACK](../execution-providers/Xnnpack-ExecutionProvider.md)|
+|[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[AMD MIGraphX](../execution-providers/MIGraphX-ExecutionProvider.md)|[ARM-NN](../execution-providers/community-maintained/ArmNN-ExecutionProvider.md) (*preview*)|[Azure](../execution-providers/Azure-ExecutionProvider.md) (*preview*)|
+|[XNNPACK](../execution-providers/Xnnpack-ExecutionProvider.md)|[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[CoreML](../execution-providers/CoreML-ExecutionProvider.md) (*preview*)|
+||[AMD ROCm](../execution-providers/ROCm-ExecutionProvider.md)|[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|
+||[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|[Qualcomm SNPE](../execution-providers/SNPE-ExecutionProvider.md)|
+|||[XNNPACK](../execution-providers/Xnnpack-ExecutionProvider.md)|
 
 ### Add an Execution Provider
diff --git a/docs/install/index.md b/docs/install/index.md
index 30cac4b0f4820..4bbd51cb52bf0 100644
--- a/docs/install/index.md
+++ b/docs/install/index.md
@@ -245,12 +245,8 @@ The _location_ needs to be specified for any specific version other than the def
 
 ||Official build (location)|Nightly build (location)|
 |---|---|---|
-|PyTorch 1.8.1 (CUDA 10.2)|[**onnxruntime_stable_torch181.cu102**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch181.cu102.html)|[onnxruntime_nightly_torch181.cu102](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch181.cu102.html)|
-|PyTorch 1.8.1 (CUDA 11.1)|[**onnxruntime_stable_torch181.cu111**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch181.cu111.html)|[onnxruntime_nightly_torch181.cu111](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch181.cu111.html)|
-|PyTorch 1.9 (CUDA 10.2) **Default**|[**onnxruntime-training**](https://pypi.org/project/onnxruntime-training/)|[onnxruntime_nightly_torch190.cu102](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch190.cu102.html)|
-|PyTorch 1.9 (CUDA 11.1)|[**onnxruntime_stable_torch190.cu111**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch190.cu111.html)|[onnxruntime_nightly_torch190.cu111](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch190.cu111.html)|
-|[*Preview*] PyTorch 1.8.1 (ROCm 4.2)|[**onnxruntime_stable_torch181.rocm42**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch181.rocm42.html)|[onnxruntime_nightly_torch181.rocm42](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch181.rocm42.html)|
-|[*Preview*] PyTorch 1.9 (ROCm 4.2)|[**onnxruntime_stable_torch190.rocm42**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch190.rocm42.html)|[onnxruntime_nightly_torch190.rocm42](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch190.rocm42.html)|
-|[*Preview*] PyTorch 1.11 (ROCm 5.1.1)|[**onnxruntime_stable_torch1110.rocm511**](https://download.onnxruntime.ai/onnxruntime_stable_rocm511.html)|[onnxruntime_nightly_torch1110.rocm511](https://download.onnxruntime.ai/onnxruntime_nightly_rocm511.html)|
-|[*Preview*] PyTorch 1.11 (ROCm 5.2)||[onnxruntime_nightly_torch1110.rocm52](https://download.onnxruntime.ai/onnxruntime_nightly_rocm511.html)|
-|[*Preview*] PyTorch 1.12.1 (ROCm 5.2.3)||[onnxruntime_nightly_torch1121.rocm523](https://download.onnxruntime.ai/onnxruntime_nightly_rocm523.html)|
+|PyTorch 1.11 (ROCm 5.2)||[onnxruntime_nightly_torch1110.rocm52](https://download.onnxruntime.ai/onnxruntime_nightly_rocm52.html)|
+|PyTorch 1.12.1 (ROCm 5.2.3)||[onnxruntime_nightly_torch1121.rocm523](https://download.onnxruntime.ai/onnxruntime_nightly_rocm523.html)|
+|PyTorch 1.13 (ROCm 5.2.3)||[onnxruntime_nightly_torch1130.rocm523](https://download.onnxruntime.ai/onnxruntime_nightly_rocm523.html)|
+|PyTorch 1.12.1 (ROCm 5.3.2)||[onnxruntime_nightly_torch1121.rocm532](https://download.onnxruntime.ai/onnxruntime_nightly_rocm532.html)|
+|PyTorch 1.13.1 (ROCm 5.4)||[onnxruntime_nightly_torch1131.rocm54](https://download.onnxruntime.ai/onnxruntime_nightly_rocm54.html)|
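For context, the onnxruntime-training wheels in this table exist to pair ORTModule with the matching ROCm build of PyTorch; a minimal sketch of that pairing, assuming one of the ROCm wheels above is installed alongside the corresponding ROCm PyTorch build (`Net` is a placeholder model):

```python
import torch
from onnxruntime.training.ortmodule import ORTModule

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

# ROCm builds of PyTorch expose AMD GPUs through the usual 'cuda' device API.
model = ORTModule(Net().to('cuda'))
out = model(torch.randn(4, 10, device='cuda'))
```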

diff --git a/index.html b/index.html
index dd714fe6cb413..a8358ed2fa233 100644
--- a/index.html
+++ b/index.html
@@ -221,12 +221,16 @@
       Hardware Acceleration
       CUDA
       DirectML
+      MIGraphX
       NNAPI
       oneDNN
       OpenVINO
+      ROCm
       SNPE
@@ -239,10 +243,6 @@
       Hardware Acceleration
       Azure (Preview)
       CANN (Preview)
-      MIGraphX (Preview)
-      ROCm (Preview)
       Rockchip NPU (Preview)