10 changes: 5 additions & 5 deletions docs/build/eps.md
@@ -629,13 +629,13 @@ The Vitis-AI execution provider is only supported on Linux.

## AMD MIGraphX

-See more information on the MIGraphX Execution Provider [here](../execution-providers/community-maintained/MIGraphX-ExecutionProvider.md).
+See more information on the MIGraphX Execution Provider [here](../execution-providers/MIGraphX-ExecutionProvider.md).

### Prerequisites
{: .no_toc }

-* Install [ROCM](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html)
-* The MIGraphX execution provider for ONNX Runtime is built and tested with ROCM3.3
+* Install [ROCm](https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.4/page/How_to_Install_ROCm.html#_How_to_Install)
+* The MIGraphX execution provider for ONNX Runtime is built and tested with ROCm5.4
* Install [MIGraphX](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX)
* The path to the MIGraphX installation must be provided via the `--migraphx_home` parameter.

@@ -657,8 +657,8 @@ See more information on the ROCm Execution Provider [here](../execution-provider
### Prerequisites
{: .no_toc }

-* Install [ROCm](https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.2.3/page/How_to_Install_ROCm.html#_How_to_Install)
-* The ROCm execution provider for ONNX Runtime is built and tested with ROCm5.2.3
+* Install [ROCm](https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.4/page/How_to_Install_ROCm.html#_How_to_Install)
+* The ROCm execution provider for ONNX Runtime is built and tested with ROCm5.4

### Build Instructions
{: .no_toc }
docs/execution-providers/MIGraphX-ExecutionProvider.md
@@ -1,54 +1,90 @@
> **Contributor:** There is already a page for this EP here: https://onnxruntime.ai/docs/execution-providers/community-maintained/MIGraphX-ExecutionProvider.html
> Can/should it be consolidated?

> **Contributor Author:** I added this to mirror what was done for the ROCm EP documentation. Should we be removing the other one then, is that what you're saying?

> **Contributor Author:** Consolidated the info into this document page instead and removed the one under community-maintained.

---
title: AMD - MIGraphX
description: Instructions to execute ONNX Runtime with the AMD MIGraphX execution provider
-grand_parent: Execution Providers
-parent: Community-maintained
-nav_order: 4
+parent: Execution Providers
+nav_order: 10
redirect_from: /docs/reference/execution-providers/MIGraphX-ExecutionProvider
---

# MIGraphX Execution Provider
{: .no_toc }

The [MIGraphX](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/) execution provider uses AMD's deep learning graph optimization engine to accelerate ONNX models on AMD GPUs.

## Contents
{: .no_toc }

* TOC placeholder
{:toc}

## Install

**NOTE**: Please make sure to install the proper version of PyTorch, as specified here: [PyTorch Version](../install/#training-install-table-for-all-languages).

For nightly PyTorch builds, please see the [PyTorch home page](https://pytorch.org/) and select ROCm as the Compute Platform.

Pre-built binaries of ONNX Runtime with MIGraphX EP are published for most language bindings. Please reference [Install ORT](../install).
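
As a quick sanity check after installing, you can confirm that the build actually exposes the MIGraphX EP. A minimal sketch, assuming a MIGraphX-enabled build is installed:

```python
import onnxruntime as ort

# With a MIGraphX-enabled build installed correctly,
# 'MIGraphXExecutionProvider' should appear in this list.
print(ort.get_available_providers())
```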

## Requirements


|ONNX Runtime|MIGraphX|
|---|---|
|main|5.4|
|1.13|5.4|
|1.13|5.3.2|
|1.12|5.2.3|
|1.12|5.2|


## Build
-For build instructions, please see the [BUILD page](../../build/eps.md#amd-migraphx).
+For build instructions, please see the [BUILD page](../build/eps.md#amd-migraphx).

## Usage

### C/C++

```c++
Ort::Env env = Ort::Env{ORT_LOGGING_LEVEL_ERROR, "Default"};
-Ort::SessionOptions sf;
+Ort::SessionOptions so;
int device_id = 0;
-Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_MiGraphX(sf, device_id));
+Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_MIGraphX(so, device_id));
```

You can check [here](https://github.com/scxiao/ort_test/tree/master/char_rnn) for a specific C/C++ program.

-The C API details are [here](../../get-started/with-c.md).
+The C API details are [here](../get-started/with-c.md).

### Python

When using the Python wheel from the ONNX Runtime build with MIGraphX execution provider, it will be automatically
prioritized over the default GPU or CPU execution providers. There is no need to separately register the execution
-provider. Python APIs details are [here](../../api/python/api_summary.html).
+provider.
+
+Python API details are [here](https://onnxruntime.ai/docs/api/python/api_summary.html).

*Note that as of ORT 1.10, you must explicitly set the `providers` parameter when instantiating `InferenceSession` if you want to use an execution provider other than the default CPU provider.*

You can check [here](https://github.com/scxiao/ort_test/tree/master/python/run_onnx) for a Python script that runs a
model on either the CPU or MIGraphX execution provider.
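
For example, a minimal sketch (the model path is a placeholder) that requests the MIGraphX EP explicitly and verifies which providers the session actually registered:

```python
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",  # placeholder path to your model
    providers=["MIGraphXExecutionProvider", "CPUExecutionProvider"],
)

# Providers actually registered for this session, in priority order.
print(session.get_providers())
```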


## Configuration Options
MIGraphX provides an environment variable `ORT_MIGRAPHX_FP16_ENABLE` to enable FP16 mode.
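
For example, a minimal sketch of enabling it from Python before creating the session; treating `"1"` as the on value is an assumption based on the usual convention for such flags:

```python
import os

# Assumption: "1" enables FP16 mode; leave unset (or "0") to disable.
os.environ["ORT_MIGRAPHX_FP16_ENABLE"] = "1"

import onnxruntime as ort  # import after setting the variable, to be safe

session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["MIGraphXExecutionProvider"],
)
```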

## Performance Tuning
-For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../../performance/tune-performance.md)
+For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../performance/tune-performance.md)

## Samples

### Python

When using [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/main/onnxruntime/test/perftest#onnxruntime-performance-test), select the MIGraphX EP with the flag `-e migraphx`.

```python
import onnxruntime as ort

model_path = '<path to model>'

providers = [
    'MIGraphXExecutionProvider',
    'CPUExecutionProvider',
]

session = ort.InferenceSession(model_path, providers=providers)
```
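
To run inference with the session above, feed inputs keyed by the model's input names. A sketch, assuming a single float32 input (names and shapes depend entirely on your model):

```python
import numpy as np

# Query the model's first input; replace dynamic dimensions with 1.
meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in meta.shape]
dummy_input = np.zeros(shape, dtype=np.float32)

# None -> return all outputs.
outputs = session.run(None, {meta.name: dummy_input})
print([o.shape for o in outputs])
```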
8 changes: 7 additions & 1 deletion docs/execution-providers/ROCm-ExecutionProvider.md
@@ -19,14 +19,20 @@ The ROCm Execution Provider enables hardware accelerated computation on AMD ROCm

## Install

**NOTE**: Please make sure to install the proper version of PyTorch, as specified here: [PyTorch Version](../install/#training-install-table-for-all-languages).

For nightly PyTorch builds, please see the [PyTorch home page](https://pytorch.org/) and select ROCm as the Compute Platform.

Pre-built binaries of ONNX Runtime with ROCm EP are published for most language bindings. Please reference [Install ORT](../install).
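
As with the MIGraphX EP, a minimal sketch (placeholder model path) that checks the ROCm EP is available and requests it explicitly:

```python
import onnxruntime as ort

# 'ROCMExecutionProvider' should be present in a ROCm-enabled build.
assert "ROCMExecutionProvider" in ort.get_available_providers()

session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["ROCMExecutionProvider", "CPUExecutionProvider"],
)
```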

## Requirements


|ONNX Runtime|ROCm|
|---|---|
-|main|5.2.3|
+|main|5.4|
+|1.13|5.4|
+|1.13|5.3.2|
+|1.12|5.2.3|
+|1.12|5.2|

10 changes: 5 additions & 5 deletions docs/execution-providers/index.md
@@ -31,11 +31,11 @@ ONNX Runtime supports many different execution providers today. Some of the EPs
|Default CPU|[NVIDIA CUDA](../execution-providers/CUDA-ExecutionProvider.md)|[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[Rockchip NPU](../execution-providers/community-maintained/RKNPU-ExecutionProvider.md) (*preview*)|
|[Intel DNNL](../execution-providers/oneDNN-ExecutionProvider.md)|[NVIDIA TensorRT](../execution-providers/TensorRT-ExecutionProvider.md)|[ARM Compute Library](../execution-providers/community-maintained/ACL-ExecutionProvider.md) (*preview*)|[Xilinx Vitis-AI](../execution-providers/community-maintained/Vitis-AI-ExecutionProvider.md) (*preview*)|
|[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|[DirectML](../execution-providers/DirectML-ExecutionProvider.md)|[Android Neural Networks API](../execution-providers/NNAPI-ExecutionProvider.md)|[Huawei CANN](../execution-providers/community-maintained/CANN-ExecutionProvider.md) (*preview*)|
-|[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[AMD MIGraphX](../execution-providers/community-maintained/MIGraphX-ExecutionProvider.md) (*preview*)|[ARM-NN](../execution-providers/community-maintained/ArmNN-ExecutionProvider.md) (*preview*)|[Azure](../execution-providers/Azure-ExecutionProvider.md) (*preview*)|
-||[AMD ROCm](../execution-providers/ROCm-ExecutionProvider.md) (*preview*)|[CoreML](../execution-providers/CoreML-ExecutionProvider.md) (*preview*)|
-||[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|
-||[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[Qualcomm SNPE](../execution-providers/SNPE-ExecutionProvider.md)|
-|[XNNPACK](../execution-providers/Xnnpack-ExecutionProvider.md)||[XNNPACK](../execution-providers/Xnnpack-ExecutionProvider.md)|
+|[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[AMD MIGraphX](../execution-providers/MIGraphX-ExecutionProvider.md)|[ARM-NN](../execution-providers/community-maintained/ArmNN-ExecutionProvider.md) (*preview*)|

> **Contributor:** Same comment about "preview" as on the main matrix page. Also, could you help fix the position of "XNNPACK" in the CPU column? There's a 3-row gap. (Not due to your change, but just noticed it.)

> **Contributor Author:** Sure, let me add this in.

> **Contributor Author:** Done, let me know if it looks good.

> **Contributor:** It looks like the XNNPACK originally listed in the IoT/Edge/Mobile column got shifted accidentally. Can you fix?

> **Contributor Author:** Updated.

+|[XNNPACK](../execution-providers/Xnnpack-ExecutionProvider.md)|[Intel OpenVINO](../execution-providers/OpenVINO-ExecutionProvider.md)|[CoreML](../execution-providers/CoreML-ExecutionProvider.md) (*preview*)|
+||[AMD ROCm](../execution-providers/ROCm-ExecutionProvider.md)|[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|
+||[TVM](../execution-providers/community-maintained/TVM-ExecutionProvider.md) (*preview*)|[Qualcomm SNPE](../execution-providers/SNPE-ExecutionProvider.md)|
+|||[XNNPACK](../execution-providers/Xnnpack-ExecutionProvider.md)|

### Add an Execution Provider

14 changes: 5 additions & 9 deletions docs/install/index.md
@@ -245,12 +245,8 @@ The _location_ needs to be specified for any specific version other than the def

||Official build (location)|Nightly build (location)|
|---|---|---|
-|PyTorch 1.8.1 (CUDA 10.2)|[**onnxruntime_stable_torch181.cu102**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch181.cu102.html)|[onnxruntime_nightly_torch181.cu102](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch181.cu102.html)|

> **Contributor:** Is this intended to remove the CUDA builds? @ytaous / @PeixuanZuo

-|PyTorch 1.8.1 (CUDA 11.1)|[**onnxruntime_stable_torch181.cu111**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch181.cu111.html )|[onnxruntime_nightly_torch181.cu111](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch181.cu111.html)|
-|PyTorch 1.9 (CUDA 10.2) **Default**|[**onnxruntime-training**](https://pypi.org/project/onnxruntime-training/)|[onnxruntime_nightly_torch190.cu102](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch190.cu102.html)|
-|PyTorch 1.9 (CUDA 11.1)|[**onnxruntime_stable_torch190.cu111**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch190.cu111.html)|[onnxruntime_nightly_torch190.cu111](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch190.cu111.html)|
-|[*Preview*] PyTorch 1.8.1 (ROCm 4.2)|[**onnxruntime_stable_torch181.rocm42**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch181.rocm42.html)|[onnxruntime_nightly_torch181.rocm42](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch181.rocm42.html)|
-|[*Preview*] PyTorch 1.9 (ROCm 4.2)|[**onnxruntime_stable_torch190.rocm42**](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_torch190.rocm42.html)|[onnxruntime_nightly_torch190.rocm42](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_nightly_torch190.rocm42.html)|
-|[*Preview*] PyTorch 1.11 (ROCm 5.1.1)|[**onnxruntime_stable_torch1110.rocm511**](https://download.onnxruntime.ai/onnxruntime_stable_rocm511.html)|[onnxruntime_nightly_torch1110.rocm511](https://download.onnxruntime.ai/onnxruntime_nightly_rocm511.html)|
-|[*Preview*] PyTorch 1.11 (ROCm 5.2)||[onnxruntime_nightly_torch1110.rocm52](https://download.onnxruntime.ai/onnxruntime_nightly_rocm511.html)|
-|[*Preview*] PyTorch 1.12.1 (ROCm 5.2.3)||[onnxruntime_nightly_torch1121.rocm523](https://download.onnxruntime.ai/onnxruntime_nightly_rocm523.html)|
+|PyTorch 1.11 (ROCm 5.2)||[onnxruntime_nightly_torch1110.rocm52](https://download.onnxruntime.ai/onnxruntime_stable_rocm52.html)|
+|PyTorch 1.12.1 (ROCm 5.2.3)||[onnxruntime_nightly_torch1121.rocm523](https://download.onnxruntime.ai/onnxruntime_nightly_rocm523.html)|
+|PyTorch 1.13 (ROCm 5.2.3)||[onnxruntime_nightly_torch1130.rocm523](https://download.onnxruntime.ai/onnxruntime_nightly_rocm523.html)|
+|PyTorch 1.12.1 (ROCm 5.3.2)||[onnxruntime_nightly_torch1121.rocm532](https://download.onnxruntime.ai/onnxruntime_nightly_rocm532.html)|
+|PyTorch 1.13.1 (ROCm 5.4)||[onnxruntime_nightly_torch1131.rocm54](https://download.onnxruntime.ai/onnxruntime_nightly_rocm54.html)|
8 changes: 4 additions & 4 deletions index.html
@@ -221,12 +221,16 @@ <h3 id="selectHardwareAcceleration">Hardware Acceleration</h3>
<span><abbr>CUDA</abbr></span></div>
<div class="col-lg-3 col-md-3 r-option version" role="option" tabindex="-1" aria-selected="false" id="DirectML">
<span>Direct<abbr>ML</abbr></span></div>
+<div class="col-lg-3 col-md-3 r-option version" role="option" tabindex="-1" aria-selected="false" id="MIGraphX">
+<span>MIGraphX </span></div>
<div class="col-lg-3 col-md-3 r-option version" role="option" tabindex="-1" aria-selected="false" id="NNAPI">
<span>NNAPI </span></div>
<div class="col-lg-3 col-md-3 r-option version" role="option" tabindex="-1" aria-selected="false" id="DNNL">
<span><abbr>oneDNN</abbr></span></div>
<div class="col-lg-3 col-md-3 r-option version" role="option" tabindex="-1" aria-selected="false" id="OpenVINO">
<span>OpenVINO</span></div>
+<div class="col-lg-3 col-md-3 r-option version" role="option" tabindex="-1" aria-selected="false" id="ROCm">
+<span>ROCm </span></div>
<div class="col-lg-3 col-md-3 r-option version" role="option" tabindex="-1" aria-selected="false" id="SNPE">
<span>SNPE</span></div>
<div class="col-lg-3 col-md-3 r-option version" role="option" tabindex="-1" aria-selected="false" id="TensorRT">
@@ -239,10 +243,6 @@ <h3 id="selectHardwareAcceleration">Hardware Acceleration</h3>
<span>Azure (Preview)</span></div>
<div class="col-lg-3 col-md-3 r-option version" role="option" tabindex="-1" aria-selected="false" id="CANN">
<span>CANN (Preview)</span></div>
-<div class="col-lg-3 col-md-3 r-option version" role="option" tabindex="-1" aria-selected="false" id="MIGraphX">
-<span>MIGraphX (Preview)</span></div>
-<div class="col-lg-3 col-md-3 r-option version" role="option" tabindex="-1" aria-selected="false" id="ROCm">
-<span>ROCm (Preview)</span></div>
<div class="col-lg-3 col-md-3 r-option version" role="option" tabindex="-1" aria-selected="false" id="RockchipNPU">
<span>Rockchip NPU (Preview)</span></div>
<div class="col-lg-3 col-md-3 r-option version" role="option" tabindex="-1" aria-selected="false" id="TVM">