Merged
14 changes: 7 additions & 7 deletions docs/build/eps.md
Original file line number Diff line number Diff line change
@@ -454,7 +454,7 @@ The DirectML execution provider supports building for both x64 and x86 architect
---

## ARM Compute Library
See more information on the ACL Execution Provider [here](../execution-providers/ACL-ExecutionProvider.md).
See more information on the ACL Execution Provider [here](../execution-providers/community-maintained/ACL-ExecutionProvider.md).

### Prerequisites
{: .no_toc }
@@ -521,7 +521,7 @@ onnxruntime_test_all

## ArmNN

See more information on the ArmNN Execution Provider [here](../execution-providers/ArmNN-ExecutionProvider.md).
See more information on the ArmNN Execution Provider [here](../execution-providers/community-maintained/ArmNN-ExecutionProvider.md).

### Prerequisites
{: .no_toc }
@@ -569,7 +569,7 @@ The ARM Compute Library home directory and build directory must also be availabl
---

## RKNPU
See more information on the RKNPU Execution Provider [here](../execution-providers/RKNPU-ExecutionProvider.md).
See more information on the RKNPU Execution Provider [here](../execution-providers/community-maintained/RKNPU-ExecutionProvider.md).

### Prerequisites
{: .no_toc }
@@ -608,9 +608,9 @@ set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
---

## Vitis-AI
See more information on the Xilinx Vitis-AI execution provider [here](../execution-providers/Vitis-AI-ExecutionProvider.md).
See more information on the Xilinx Vitis-AI execution provider [here](../execution-providers/community-maintained/Vitis-AI-ExecutionProvider.md).

For instructions to setup the hardware environment: [Hardware setup](../execution-providers/Vitis-AI-ExecutionProvider.md#hardware-setup)
For instructions to setup the hardware environment: [Hardware setup](../execution-providers/community-maintained/Vitis-AI-ExecutionProvider.md#hardware-setup)

### Linux
{: .no_toc }
@@ -629,7 +629,7 @@ The Vitis-AI execution provider is only supported on Linux.

## AMD MIGraphX

See more information on the MIGraphX Execution Provider [here](../execution-providers/MIGraphX-ExecutionProvider.md).
See more information on the MIGraphX Execution Provider [here](../execution-providers/community-maintained/MIGraphX-ExecutionProvider.md).

### Prerequisites
{: .no_toc }
@@ -774,7 +774,7 @@ Linux example:

## CANN

See more information on the CANN Execution Provider [here](../execution-providers/CANN-ExecutionProvider.md).
See more information on the CANN Execution Provider [here](../execution-providers/community-maintained/CANN-ExecutionProvider.md).

### Prerequisites
{: .no_toc }
5 changes: 3 additions & 2 deletions docs/execution-providers/CUDA-ExecutionProvider.md
@@ -1,8 +1,8 @@
---
title: CUDA (NVIDIA)
title: NVIDIA - CUDA
description: Instructions to execute ONNX Runtime applications with CUDA
parent: Execution Providers
nav_order: 5
nav_order: 1
redirect_from: /docs/reference/execution-providers/CUDA-ExecutionProvider
---

@@ -31,6 +31,7 @@ Please reference [Nvidia CUDA Minor Version Compatibility](https://docs.nvidia.c

|ONNX Runtime|CUDA|cuDNN|Notes|
|---|---|---|---|
|1.13|11.6|8.2.4 (Linux)<br/>8.5.0.96 (Windows)|libcudart 11.4.43<br/>libcufft 10.5.2.100<br/>libcurand 10.2.5.120<br/>libcublasLt 11.6.5.2<br/>libcublas 11.6.5.2<br/>libcudnn 8.2.4|
|1.12<br/>1.11|11.4|8.2.4 (Linux)<br/>8.2.2.26 (Windows)|libcudart 11.4.43<br/>libcufft 10.5.2.100<br/>libcurand 10.2.5.120<br/>libcublasLt 11.6.5.2<br/>libcublas 11.6.5.2<br/>libcudnn 8.2.4|
|1.10|11.4|8.2.4 (Linux)<br/>8.2.2.26 (Windows)|libcudart 11.4.43<br/>libcufft 10.5.2.100<br/>libcurand 10.2.5.120<br/>libcublasLt 11.6.1.51<br/>libcublas 11.6.1.51<br/>libcudnn 8.2.4|
|1.9|11.4|8.2.4 (Linux)<br/>8.2.2.26 (Windows)|libcudart 11.4.43<br/>libcufft 10.5.2.100<br/>libcurand 10.2.5.120<br/>libcublasLt 11.6.1.51<br/>libcublas 11.6.1.51<br/>libcudnn 8.2.4|
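The version rows shown above can be read as a small lookup table. The sketch below transcribes them into Python for illustration only; `ORT_CUDA_MATRIX` and `required_cuda` are hypothetical names, not part of the onnxruntime API, and only the rows visible in this excerpt are included.

```python
# Illustrative lookup transcribed from the table rows above (excerpt only).
ORT_CUDA_MATRIX = {
    "1.13": {"cuda": "11.6", "cudnn_linux": "8.2.4", "cudnn_windows": "8.5.0.96"},
    "1.12": {"cuda": "11.4", "cudnn_linux": "8.2.4", "cudnn_windows": "8.2.2.26"},
    "1.11": {"cuda": "11.4", "cudnn_linux": "8.2.4", "cudnn_windows": "8.2.2.26"},
    "1.10": {"cuda": "11.4", "cudnn_linux": "8.2.4", "cudnn_windows": "8.2.2.26"},
    "1.9":  {"cuda": "11.4", "cudnn_linux": "8.2.4", "cudnn_windows": "8.2.2.26"},
}

def required_cuda(ort_version):
    """Return the CUDA version a given ONNX Runtime release was built against."""
    return ORT_CUDA_MATRIX[ort_version]["cuda"]

print(required_cuda("1.13"))  # prints 11.6
```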
4 changes: 2 additions & 2 deletions docs/execution-providers/CoreML-ExecutionProvider.md
@@ -1,8 +1,8 @@
---
title: CoreML (Apple)
title: Apple - CoreML
description: Instructions to execute ONNX Runtime with CoreML
parent: Execution Providers
nav_order: 4
nav_order: 8
redirect_from: /docs/reference/execution-providers/CoreML-ExecutionProvider
---
{::options toc_levels="2" /}
4 changes: 2 additions & 2 deletions docs/execution-providers/DirectML-ExecutionProvider.md
@@ -1,8 +1,8 @@
---
title: DirectML (Windows)
title: Windows - DirectML
description: Instructions to execute ONNX Runtime with the DirectML execution provider
parent: Execution Providers
nav_order: 6
nav_order: 5
redirect_from: /docs/reference/execution-providers/DirectML-ExecutionProvider
---

4 changes: 2 additions & 2 deletions docs/execution-providers/NNAPI-ExecutionProvider.md
@@ -1,8 +1,8 @@
---
title: NNAPI (Android)
title: Android - NNAPI
description: Instructions to execute ONNX Runtime with the NNAPI execution provider
parent: Execution Providers
nav_order: 8
nav_order: 7
redirect_from: /docs/reference/execution-providers/NNAPI-ExecutionProvider
---
{::options toc_levels="2" /}
4 changes: 2 additions & 2 deletions docs/execution-providers/OpenVINO-ExecutionProvider.md
@@ -1,8 +1,8 @@
---
title: OpenVINO™ (Intel)
title: Intel - OpenVINO™
description: Instructions to execute OpenVINO™ Execution Provider for ONNX Runtime.
parent: Execution Providers
nav_order: 10
nav_order: 3
redirect_from: /docs/reference/execution-providers/OpenVINO-ExecutionProvider
---

4 changes: 2 additions & 2 deletions docs/execution-providers/ROCm-ExecutionProvider.md
@@ -1,8 +1,8 @@
---
title: ROCm (AMD)
title: AMD - ROCm
description: Instructions to execute ONNX Runtime with the AMD ROCm execution provider
parent: Execution Providers
nav_order: 12
nav_order: 10
redirect_from: /docs/reference/execution-providers/ROCm-ExecutionProvider
---

4 changes: 2 additions & 2 deletions docs/execution-providers/SNPE-ExecutionProvider.md
@@ -1,8 +1,8 @@
---
title: SNPE (Qualcomm)
title: Qualcomm - SNPE
description: Execute ONNX models with SNPE Execution Provider
parent: Execution Providers
nav_order: 13
nav_order: 6
redirect_from: /docs/reference/execution-providers/SNPE-ExecutionProvider
---

4 changes: 2 additions & 2 deletions docs/execution-providers/TensorRT-ExecutionProvider.md
@@ -1,8 +1,8 @@
---
title: TensorRT (NVIDIA)
title: NVIDIA - TensorRT
description: Instructions to execute ONNX Runtime on NVIDIA GPUs with the TensorRT execution provider
parent: Execution Providers
nav_order: 14
nav_order: 2
redirect_from: /docs/reference/execution-providers/TensorRT-ExecutionProvider
---

2 changes: 1 addition & 1 deletion docs/execution-providers/Xnnpack-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: XNNPACK
**Contributor:** Would we want to say Google - XNNPACK here, to be consistent with the others?

**Contributor Author:** I thought about that, but the other prefixes don't necessarily represent the company that implemented the library; they indicate the hardware/device targets the library is compatible with, so putting "Google" for XNNPACK seemed a little out of place.

description: Instructions to execute ONNX Runtime with the XNNPACK execution provider
parent: Execution Providers
nav_order: 17
nav_order: 9
---
{::options toc_levels="2" /}

4 changes: 2 additions & 2 deletions docs/execution-providers/add-execution-provider.md
@@ -1,8 +1,8 @@
---
title: Add a new execution provider
title: Add a new provider
description: Instructions to add a new execution provider to ONNX Runtime
parent: Execution Providers
nav_order: 18
nav_order: 12
redirect_from: /docs/how-to/add-execution-provider
---

@@ -1,8 +1,9 @@
---
title: ARM Compute Library (ACL)
title: Arm - ACL
description: Instructions to execute ONNX Runtime with the ACL Execution Provider
parent: Execution Providers
nav_order: 1
grand_parent: Execution Providers
parent: Community-maintained
nav_order: 2
redirect_from: /docs/reference/execution-providers/ACL-ExecutionProvider
---

@@ -20,7 +21,7 @@ The integration of ACL as an execution provider (EP) into ONNX Runtime accelerat


## Build
For build instructions, please see the [build page](../build/eps.md#arm-compute-library).
For build instructions, please see the [build page](../../build/eps.md#arm-compute-library).

## Usage
### C/C++
@@ -32,9 +33,9 @@ Ort::SessionOptions sf;
bool enable_cpu_mem_arena = true;
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_ACL(sf, enable_cpu_mem_arena));
```
The C API details are [here](../get-started/with-c.html).
The C API details are [here](../../get-started/with-c.html).

## Performance Tuning
For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../performance/tune-performance.md)
For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../../performance/tune-performance.md)

When/if using [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/test/perftest){:target="_blank"}, use the flag -e acl
@@ -1,7 +1,8 @@
---
title: Arm NN
title: Arm - Arm NN
description: Instructions to execute ONNX Runtime with Arm NN on Armv8 cores
parent: Execution Providers
grand_parent: Execution Providers
parent: Community-maintained
nav_order: 2
redirect_from: /docs/reference/execution-providers/ArmNN-ExecutionProvider
---
@@ -18,7 +19,7 @@ redirect_from: /docs/reference/execution-providers/ArmNN-ExecutionProvider
Accelerate performance of ONNX model workloads across Armv8 cores with the ArmNN execution provider. [ArmNN](https://github.com/ARM-software/armnn) is an open source inference engine maintained by Arm and Linaro companies.

## Build
For build instructions, please see the [BUILD page](../build/eps.md#armnn).
For build instructions, please see the [BUILD page](../../build/eps.md#armnn).

## Usage
### C/C++
@@ -29,9 +30,9 @@ Ort::SessionOptions so;
bool enable_cpu_mem_arena = true;
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_ArmNN(so, enable_cpu_mem_arena));
```
The C API details are [here](../get-started/with-c.md).
The C API details are [here](../../get-started/with-c.md).

## Performance Tuning
For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../performance/tune-performance.md)
For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../../performance/tune-performance.md)

When/if using [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/test/perftest), use the flag -e armnn
@@ -1,8 +1,9 @@
---
title: CANN (Huawei)
title: Huawei - CANN
description: Instructions to execute ONNX Runtime with the Huawei CANN execution provider
parent: Execution Providers
nav_order: 3
grand_parent: Execution Providers
parent: Community-maintained
nav_order: 7
redirect_from: /docs/reference/execution-providers/CANN-ExecutionProvider
---

@@ -32,11 +33,11 @@ Please reference table below for official CANN packages dependencies for the ONN

## Build

For build instructions, please see the [BUILD page](../build/eps.md#cann).
For build instructions, please see the [BUILD page](../../build/eps.md#cann).

## Install

Pre-built binaries of ONNX Runtime with CANN EP are published for most language bindings. Please reference [Install ORT](../install).
Pre-built binaries of ONNX Runtime with CANN EP are published for most language bindings. Please reference [Install ORT](../../install).

## Samples

@@ -1,8 +1,9 @@
---
title: MIGraphX (AMD)
title: AMD - MIGraphX
description: Instructions to execute ONNX Runtime with the AMD MIGraphX execution provider
parent: Execution Providers
nav_order: 7
grand_parent: Execution Providers
parent: Community-maintained
nav_order: 4
redirect_from: /docs/reference/execution-providers/MIGraphX-ExecutionProvider
---

@@ -18,7 +19,7 @@ The [MIGraphX](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/) execution p
{:toc}

## Build
For build instructions, please see the [BUILD page](../build/eps.md#amd-migraphx).
For build instructions, please see the [BUILD page](../../build/eps.md#amd-migraphx).

## Usage

@@ -33,12 +34,12 @@ Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_MiGraphX(sf, device_i

You can check [here](https://github.com/scxiao/ort_test/tree/master/char_rnn) for a specific c/c++ program.

The C API details are [here](../get-started/with-c.md).
The C API details are [here](../../get-started/with-c.md).

### Python
When using the Python wheel from the ONNX Runtime build with MIGraphX execution provider, it will be automatically
prioritized over the default GPU or CPU execution providers. There is no need to separately register the execution
provider. Python APIs details are [here](https://onnxruntime.ai/docs/api/python/api_summary.html).
provider. Python APIs details are [here](../../api/python/api_summary.html).
*Note that the next release (ORT 1.10) will require explicitly setting the providers parameter if you want to use an execution provider other than the default CPU provider when instantiating InferenceSession.*
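The note above can be sketched as follows. This is a hedged illustration, not part of the documented API: `pick_providers` is a hypothetical helper, and only the two provider identifier strings are taken from ONNX Runtime's registered names.

```python
preferred = ["MIGraphXExecutionProvider", "CPUExecutionProvider"]

def pick_providers(available, preferred=preferred):
    """Keep the preferred order, dropping providers this build lacks."""
    return [p for p in preferred if p in available]

# With a CPU-only wheel, only the fallback survives:
print(pick_providers(["CPUExecutionProvider"]))  # prints ['CPUExecutionProvider']

# Typical use (requires an installed onnxruntime build and a real model path):
# import onnxruntime as ort
# session = ort.InferenceSession(
#     "model.onnx",
#     providers=pick_providers(ort.get_available_providers()),
# )
```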

You can check [here](https://github.com/scxiao/ort_test/tree/master/python/run_onnx) for a python script to run an
@@ -48,6 +49,6 @@ model on either the CPU or MIGraphX Execution Provider.
MIGraphX provides an environment variable, ORT_MIGRAPHX_FP16_ENABLE, to enable FP16 mode.
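A minimal sketch of enabling it from Python, assuming only that ONNX Runtime reads the variable from the process environment, so it should be set before the session is created:

```python
import os

# Assumption: ORT reads this from the process environment, so set it
# before constructing the InferenceSession.
os.environ["ORT_MIGRAPHX_FP16_ENABLE"] = "1"
```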

## Performance Tuning
For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../performance/tune-performance.md)
For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../../performance/tune-performance.md)

When/if using [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/test/perftest#onnxruntime-performance-test), use the flag `-e migraphx`
@@ -1,8 +1,9 @@
---
title: RKNPU
title: Rockchip - RKNPU
description: Instructions to execute ONNX Runtime on Rockchip NPUs with the RKNPU execution provider
parent: Execution Providers
nav_order: 11
grand_parent: Execution Providers
parent: Community-maintained
nav_order: 5
redirect_from: /docs/reference/execution-providers/RKNPU-ExecutionProvider
---

@@ -19,7 +20,7 @@ RKNPU DDK is an advanced interface to access Rockchip NPU. The RKNPU Execution P


## Build
For build instructions, please see the [BUILD page](../build/eps.md#rknpu).
For build instructions, please see the [BUILD page](../../build/eps.md#rknpu).

## Usage
**C/C++**
@@ -31,7 +32,7 @@ Ort::SessionOptions sf;
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_RKNPU(sf));
Ort::Session session(env, model_path, sf);
```
The C API details are [here](../get-started/with-c.md).
The C API details are [here](../../get-started/with-c.md).


## Support Coverage
@@ -1,8 +1,9 @@
---
title: TVM (Apache)
title: Apache - TVM
description: Instructions to execute ONNX Runtime with the Apache TVM execution provider
parent: Execution Providers
nav_order: 15
grand_parent: Execution Providers
parent: Community-maintained
nav_order: 3
---

# TVM Execution Provider
@@ -1,8 +1,9 @@
---
title: Vitis AI
title: Xilinx - Vitis AI
description: Instructions to execute ONNX Runtime on Xilinx devices with the Vitis AI execution provider
parent: Execution Providers
nav_order: 16
grand_parent: Execution Providers
parent: Community-maintained
nav_order: 6
redirect_from: /docs/reference/execution-providers/Vitis-AI-ExecutionProvider
---

@@ -39,7 +40,7 @@ The following table lists system requirements for running docker containers as w
| Docker Version | 19\.03\.1 |

## Build
See [Build instructions](../build/eps.md#vitis-ai).
See [Build instructions](../../build/eps.md#vitis-ai).

### Hardware setup

10 changes: 10 additions & 0 deletions docs/execution-providers/community-maintained/index.md
@@ -0,0 +1,10 @@
---
title: Community-maintained
parent: Execution Providers
has_children: true
nav_order: 11
---
# Community-maintained Providers
This list of providers for specialized hardware is contributed and maintained by ONNX Runtime community partners.

