Commits
32 commits
ccc7b57
PR Branch
sreekanth-yalachigere Apr 22, 2019
bdcfc9b
PR review changes
sreekanth-yalachigere Apr 22, 2019
0da8a24
bug fixes
sreekanth-yalachigere Apr 23, 2019
c9c51c8
PR review and bug fix
sreekanth-yalachigere Apr 23, 2019
77b1090
PR review and bug fix
sreekanth-yalachigere Apr 23, 2019
b2ec434
PR review and bug fix2
sreekanth-yalachigere Apr 23, 2019
0587283
mkldnncustomop.h renamed compute to submit
sreekanth-yalachigere Apr 23, 2019
0b232d2
mklkernel: renamed compute to Bind
sreekanth-yalachigere Apr 23, 2019
72936a3
shuffle net perf bug fix
sreekanth-yalachigere Apr 29, 2019
23a412f
PR Branch
sreekanth-yalachigere Apr 22, 2019
1054df9
PR review changes
sreekanth-yalachigere Apr 22, 2019
b83a507
bug fixes
sreekanth-yalachigere Apr 23, 2019
29c9dc2
PR review and bug fix
sreekanth-yalachigere Apr 23, 2019
fe8e6fa
PR review and bug fix
sreekanth-yalachigere Apr 23, 2019
23ed5d8
PR review and bug fix2
sreekanth-yalachigere Apr 23, 2019
5d65d7d
mkldnncustomop.h renamed compute to submit
sreekanth-yalachigere Apr 23, 2019
7cbccfd
mklkernel: renamed compute to Bind
sreekanth-yalachigere Apr 23, 2019
60d17e5
shuffle net perf bug fix
sreekanth-yalachigere Apr 29, 2019
9e656c0
Merge branch 'mkl13' of https://github.com/sreekanth-yalachigere/onnx…
sreekanth-yalachigere Apr 30, 2019
f11815d
Subgraph PR review changes 1
sreekanth-yalachigere May 8, 2019
8d57a30
subgraph pr review changes 2. ReadAttributes private
sreekanth-yalachigere May 8, 2019
be3aef8
subgraph pr review changes 3: set SubgraphPrimitive.CreateKernels to …
sreekanth-yalachigere May 8, 2019
4eb51c1
subgraph pr review changes 4: moved code from mkldnn_kernel.h to mkld…
sreekanth-yalachigere May 8, 2019
44f95b0
unused parameter warning
sreekanth-yalachigere May 9, 2019
80305ae
cpp to cc
sreekanth-yalachigere May 9, 2019
6e43c39
ComputeCapability refactored
sreekanth-yalachigere May 9, 2019
fa33483
moved Subgraph to mkl_dnn namespace
sreekanth-yalachigere May 9, 2019
9bffdd5
PR review. moved struct SubgraphParameters to Subgraph struct
sreekanth-yalachigere May 11, 2019
632a8be
moved mkldnn_func_kernel to .cc
sreekanth-yalachigere May 13, 2019
eeff8a2
Merge branch 'mkl13' of https://github.com/sreekanth-yalachigere/onnx…
sreekanth-yalachigere May 13, 2019
cd67ead
compiler error fix
sreekanth-yalachigere May 13, 2019
b6f416b
memcpy fix
sreekanth-yalachigere May 24, 2019
1 change: 1 addition & 0 deletions .gitignore
@@ -11,6 +11,7 @@ distribute/*
*.bin
cmake_build
.cmake_build
cmake-build-debug
gen
*~
.vs
16 changes: 13 additions & 3 deletions BUILD.md
@@ -36,14 +36,17 @@ ONNX Runtime python binding only supports Python 3.5, 3.6 and 3.7.
cd onnxruntime
```
2. Install cmake-3.13 or better from https://cmake.org/download/.
3. (optional) Install protobuf 3.6.1 from source code (cmake/external/protobuf). CMake flag protobuf\_BUILD\_SHARED\_LIBS must be turned off. After the installation, you should have the 'protoc' executable in your PATH.
4. (optional) Install onnx from source code (cmake/external/onnx)
3. (optional) Install protobuf 3.6.1 from source code (cmake/external/protobuf). CMake flag protobuf\_BUILD\_SHARED\_LIBS must be turned OFF on Windows and turned ON on Linux. After the installation, you should have the 'protoc' executable in your PATH. On Linux it is recommended to run `ldconfig` to make sure protobuf libraries are found.
4. If you installed protobuf in a non-standard location, it helps on Linux builds to set the following environment variable so the ONNX build can find it:
`export CMAKE_ARGS="-DONNX_CUSTOM_PROTOC_EXECUTABLE=<full path to protoc>"`
On Linux, also run `ldconfig <protobuf lib folder path>` so the linker can find the protobuf libraries. A worked example is sketched after this list.
5. (optional) Install onnx from source code (cmake/external/onnx)
```
export ONNX_ML=1
python3 setup.py bdist_wheel
pip3 install --upgrade dist/*.whl
```
5. Run `./build.sh --config RelWithDebInfo --build_wheel` for Linux (or `build.bat --config RelWithDebInfo --build_wheel` for Windows)
6. Run `./build.sh --config RelWithDebInfo --build_wheel` for Linux (or `build.bat --config RelWithDebInfo --build_wheel` for Windows). Upon a successful build, you should find the wheel under the `dist` folder.
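As a minimal sketch of the Linux flow in steps 3–6, assuming protobuf is built from cmake/external/protobuf and installed under the default /usr/local prefix (all paths here are illustrative assumptions, not part of the repository docs):
```
# Step 3 (optional): build and install protobuf 3.6.1 as a shared library (Linux).
cd cmake/external/protobuf/cmake
cmake . -Dprotobuf_BUILD_SHARED_LIBS=ON -DCMAKE_BUILD_TYPE=Release
make -j"$(nproc)"
sudo make install
sudo ldconfig /usr/local/lib        # assumed install prefix

# Step 4: point the ONNX build at the freshly installed protoc.
export CMAKE_ARGS="-DONNX_CUSTOM_PROTOC_EXECUTABLE=/usr/local/bin/protoc"

# Step 6: build ONNX Runtime and its Python wheel from the repo root.
cd ../../../..
./build.sh --config RelWithDebInfo --build_wheel
```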

The build script runs all unit tests by default for native builds, and skips tests by default for cross-compiled builds.

@@ -53,6 +56,10 @@ The complete list of build options can be found by running `./build.sh (or ./bui
1. For Windows, just add the --x86 argument when launching build.bat
2. For Linux, it must be built on an x86 OS, and the --x86 argument also needs to be specified to build.sh

## Build ONNX Runtime Server on Linux

1. In the ONNX Runtime root folder, run `./build.sh --config RelWithDebInfo --build_server --use_openmp --parallel`

## Build/Test Flavors for CI

### CI Build Environments
@@ -110,6 +117,9 @@ If you want to build with an earlier version, you must temporarily remove the 'C
To build ONNX Runtime with MKL-DNN support, build it with `./build.sh --use_mkldnn`
To build ONNX Runtime using MKL-DNN built with dependency on MKL small libraries, build it with `./build.sh --use_mkldnn --use_mklml`

### nGraph
ONNX Runtime with nGraph as an execution provider (released as preview) can be built on Linux as follows: `./build.sh --use_ngraph`. Similarly, on Windows use `.\build.bat --use_ngraph`.

### TensorRT
ONNX Runtime supports the TensorRT execution provider (released as preview). You will need to download and install [CUDA](https://developer.nvidia.com/cuda-toolkit), [CUDNN](https://developer.nvidia.com/cudnn) and [TensorRT](https://developer.nvidia.com/nvidia-tensorrt-download).

6 changes: 3 additions & 3 deletions README.md
@@ -11,7 +11,7 @@
ONNX is an open format for machine learning (ML) models that is supported by various ML and DNN frameworks and tools. This format makes it easier to interoperate between frameworks and to maximize the reach of your hardware optimization investments. Learn more about ONNX on [https://onnx.ai](https://onnx.ai) or view the [Github Repo](https://github.com/onnx/onnx).

# Why use ONNX Runtime
ONNX Runtime has an open architecture that is continually evolving to address the newest developments and challenges in AI and Deep Learning. ONNX Runtime stays up to date with the ONNX standard, supporting all ONNX releases with future compatibliity and maintaining backwards compatibility with prior releases.
ONNX Runtime has an open architecture that is continually evolving to address the newest developments and challenges in AI and Deep Learning. ONNX Runtime stays up to date with the ONNX standard, supporting all ONNX releases with future compatibility and maintaining backwards compatibility with prior releases.

ONNX Runtime continuously strives to provide top performance for a broad and growing number of usage scenarios in Machine Learning. Our investments focus on:
1. Run any ONNX model
@@ -74,8 +74,8 @@ system.
| API Documentation | CPU package | GPU package |
|-----|-------------|-------------|
| [Python](https://aka.ms/onnxruntime-python) | [Available on Pypi](https://pypi.org/project/onnxruntime)<br/><ul><li> Windows: x64</li><li>Linux: x64</li><li>Mac OS X: x64</li></ul><br/> | [Available on Pypi](https://pypi.org/project/onnxruntime-gpu) <br/><ul><li> Windows: x64</li><li>Linux: x64</li></ul><br/><br/> |
| [C#](docs/CSharp_API.md) | Available on Nuget : [MLAS+Eigen](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime/), [MKL-ML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.MKLML/)</br><ul><li>Windows: x64</li><li>Linux: x64</li><li>Mac OS X: x64 (MLAS+Eigen only)</li></ul>| [Available on Nuget](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu/)<br/><ul><li> Windows: x64</li><li>Linux: x64</li></ul><br/>|
| [C](docs/C_API.md) | Available on Nuget : [MLAS+Eigen](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime/), [MKL-ML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.MKLML/)</br><ul><li>Windows: x64</li><li>Linux: x64</li><li>Mac OS X: x64 (MLAS+Eigen only)</li></ul><br/>[Files (.zip, .tgz)](https://aka.ms/onnxruntime-release)<br/><ul><li>Windows: x64, x86</li><li>Linux: x64, x86</li><li>Mac OS X: x64 (MLAS+Eigen only)</li></ul> | [Available on Nuget](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu/)<br/><ul><li>Windows: x64</li><li>Linux: x64</li></ul><br/><br/>[Files (.zip, .tgz)](https://aka.ms/onnxruntime-release)<br/><ul><li>Windows: x64</li><li>Linux: x64</li></ul><br/> |
| [C#](docs/CSharp_API.md) | **Available on Nuget :**<br/>[MLAS+Eigen](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime/)<br/><ul><li>Windows: x64, x86</li><li>Linux: x64, x86</li><li>Mac OS X: x64</li></ul><br/>[MKL-ML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.MKLML/)<ul><li>Windows: x64</li><li>Linux: x64</li><li>Mac OS X: x64</li></ul>| [Available on Nuget](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu/)<br/><ul><li> Windows: x64</li><li>Linux: x64</li></ul><br/>|
| [C](docs/C_API.md) | **Available on Nuget :**<br/>[MLAS+Eigen](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime/)<br/><ul><li>Windows: x64, x86</li><li>Linux: x64, x86</li><li>Mac OS X: x64</li></ul><br/>[MKL-ML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.MKLML/)<br/><ul><li>Windows: x64</li><li>Linux: x64</li><li>Mac OS X: x64</li></ul><hr>[Binaries (.zip, .tgz)](https://aka.ms/onnxruntime-release)<br/><ul><li>Windows: x64, x86</li><li>Linux: x64, x86</li><li>Mac OS X: x64</li></ul> | [Available on Nuget](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu/)<br/><ul><li>Windows: x64</li><li>Linux: x64</li></ul><br/><br/>[Binaries (.zip, .tgz)](https://aka.ms/onnxruntime-release)<br/><ul><li>Windows: x64</li><li>Linux: x64</li></ul><br/> |
| [C++](onnxruntime/core/session/inference_session.h) | [Build from source](https://github.com/Microsoft/onnxruntime/blob/master/BUILD.md) | [Build from source](https://github.com/Microsoft/onnxruntime/blob/master/BUILD.md) |

For builds using other execution providers, see Build Details below.
24 changes: 0 additions & 24 deletions TensorRT-ExecutionProvider.md

This file was deleted.

2 changes: 1 addition & 1 deletion cgmanifest.json
@@ -49,7 +49,7 @@
"component":{
"type":"git",
"git":{
"commitHash":"c1c04af4e9fa0c96fbc1fda7b330bb994118f3c5",
"commitHash":"7d7bc83d29a328233d3e8affa4c4ea8b3e3599ef",
"repositoryUrl":"https://github.com/onnx/onnx.git"
}
}
38 changes: 17 additions & 21 deletions cmake/CMakeLists.txt
@@ -68,8 +68,10 @@ option(onnxruntime_ENABLE_MICROSOFT_INTERNAL "Use this option to enable/disable
option(onnxruntime_USE_NUPHAR "Build with Nuphar" OFF)
option(onnxruntime_USE_BRAINSLICE "Build with BrainSlice" OFF)
option(onnxruntime_USE_TENSORRT "Build with TensorRT support" OFF)
option(onnxruntime_ENABLE_LTO "Enable link time optimization, which is not stable on older GCCs" OFF)
option(onnxruntime_ENABLE_LTO "Enable link time optimization" ON)

option(onnxruntime_CROSS_COMPILING "Cross compiling onnx runtime" OFF)
option(onnxruntime_BUILD_SERVER "Build ONNX Runtime Server" OFF)
option(onnxruntime_USE_FULL_PROTOBUF "Use full protobuf" OFF)
option(onnxruntime_DISABLE_CONTRIB_OPS "Disable contrib ops" OFF)
option(onnxruntime_USE_EIGEN_THREADPOOL "Use eigen threadpool. Otherwise OpenMP or a homemade one will be used" OFF)
@@ -81,18 +83,16 @@ set(NSYNC_ENABLE_TESTS OFF CACHE BOOL "Build protobuf tests" FORCE)
set(ONNX_ML 1)

if(onnxruntime_ENABLE_LTO)
#LTO(or LTCG) is great, in our case it can reduce binary size by 1/3.
#cmake can only help us check if the compiler support LTO or not, it can't tell us if the feature works well
#Don't enable this option in Ubuntu 16.04, protoc will crash
include(CheckIPOSupported)
check_ipo_supported(RESULT ipo_enabled OUTPUT ipo_output)
#TODO: figure out why nsync doesn't work
if(NOT onnxruntime_USE_NSYNC)
if(ipo_enabled)
set(CMAKE_INTERPROCEDURAL_OPTIMIZATION_RELEASE ON)
set(CMAKE_INTERPROCEDURAL_OPTIMIZATION_RELWITHDEBINFO ON)
else()
message(WARNING "IPO is not supported: ${ipo_output}")
if(onnxruntime_USE_NSYNC)
message(WARNING "IPO is not supported when nsync is in use")
set(onnxruntime_ENABLE_LTO OFF)
else()
include(CheckIPOSupported)
check_ipo_supported(RESULT ipo_enabled OUTPUT ipo_output)
if(NOT ipo_enabled)
message(WARNING "IPO is not supported by this compiler")
set(onnxruntime_ENABLE_LTO OFF)
endif()
endif()
endif()
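With onnxruntime_ENABLE_LTO now defaulting to ON, the rewritten block above turns LTO off automatically when nsync is in use or when the compiler lacks IPO support. If LTO still causes trouble (e.g. on the older GCCs the previous option text warned about), it can be disabled explicitly at configure time; a minimal sketch, assuming CMake is driven by hand from a build directory rather than through build.sh:
```
# Illustrative manual configure; -D overrides of cache options are standard CMake,
# but invoking cmake directly (instead of ./build.sh) is an assumption here.
mkdir -p build && cd build
cmake ../cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -Donnxruntime_ENABLE_LTO=OFF
cmake --build . -- -j"$(nproc)"
```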
@@ -243,11 +243,6 @@ endif()
add_executable(protobuf::protoc ALIAS protoc)
include(protobuf_function.cmake)


if (onnxruntime_USE_FULL_PROTOBUF)
add_definitions(-DUSE_FULL_PROTOBUF)
endif()

if (onnxruntime_DISABLE_CONTRIB_OPS)
add_definitions(-DDISABLE_CONTRIB_OPS)
endif()
@@ -436,7 +431,6 @@ else()
string(APPEND CMAKE_CXX_FLAGS_DEBUG " -Wno-nonnull-compare")
string(APPEND CMAKE_C_FLAGS_DEBUG " -Wno-nonnull-compare")
endif()
string(APPEND CMAKE_CXX_FLAGS " -Wno-error=sign-compare")
if(HAS_PARENTHESES)
string(APPEND CMAKE_CXX_FLAGS " -Wno-parentheses")
endif()
@@ -477,9 +471,6 @@ if (onnxruntime_USE_MKLDNN)
endif()

if (onnxruntime_USE_NGRAPH)
if (Win32)
message(FATAL_ERROR "nGraph is not currently supported on Windows.")
endif()
#if (onnxruntime_USE_OPENMP)
# message(FATAL_ERROR "Please set onnxruntime_USE_OPENMP=OFF for nGraph execution provider.")
#endif()
@@ -607,6 +598,10 @@ if (onnxruntime_BUILD_SHARED_LIB)
include(onnxruntime.cmake)
endif()

if (onnxruntime_BUILD_SERVER)
include(onnxruntime_server.cmake)
endif()
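The server build is wired in the same way as the shared library above: the new onnxruntime_BUILD_SERVER option gates the onnxruntime_server.cmake include. A minimal sketch of enabling it by hand, assuming the documented `./build.sh --build_server` flag maps onto this cache variable:
```
# Assumed mapping: ./build.sh --build_server should correspond to this define.
mkdir -p build && cd build
cmake ../cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -Donnxruntime_BUILD_SERVER=ON
cmake --build . -- -j"$(nproc)"
```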

# some of the tests rely on the shared libs to be
# built; hence the ordering
if (onnxruntime_BUILD_UNIT_TESTS)
@@ -633,3 +628,4 @@ if (onnxruntime_BUILD_CSHARP)
# set_property(GLOBAL PROPERTY VS_DOTNET_TARGET_FRAMEWORK_VERSION "netstandard2.0")
include(onnxruntime_csharp.cmake)
endif()
