conda-forge/onednn-feedstock

About onednn-feedstock

Feedstock license: BSD-3-Clause

About onednn

Home: https://github.com/oneapi-src/oneDNN

Package license: Apache-2.0

Summary: oneAPI Deep Neural Network Library (oneDNN)

oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications.

oneDNN is intended for deep learning applications and framework developers interested in improving application performance. Deep learning practitioners should use one of the applications enabled with oneDNN.

In this package oneDNN is built with the Threadpool CPU runtime. oneDNN requires the user to implement a Threadpool interface to enable the library to perform computations using multiple threads.

For more information, please read the oneDNN developer guide: https://oneapi-src.github.io/oneDNN/

About onednn-cpu-omp

Home: https://github.com/oneapi-src/oneDNN

Package license: Apache-2.0

Summary: oneAPI Deep Neural Network Library (oneDNN)

oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications.

oneDNN is intended for deep learning applications and framework developers interested in improving application performance. Deep learning practitioners should use one of the applications enabled with oneDNN.

In this package oneDNN is built with the OpenMP CPU runtime.

For more information, please read the oneDNN developer guide: https://oneapi-src.github.io/oneDNN/

About onednn-cpu-tbb

Home: https://github.com/oneapi-src/oneDNN

Package license: Apache-2.0

Summary: oneAPI Deep Neural Network Library (oneDNN)

oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications.

oneDNN is intended for deep learning applications and framework developers interested in improving application performance. Deep learning practitioners should use one of the applications enabled with oneDNN.

In this package oneDNN is built with the TBB CPU runtime.

For more information, please read the oneDNN developer guide: https://oneapi-src.github.io/oneDNN/

About onednn-cpu-threadpool

Home: https://github.com/oneapi-src/oneDNN

Package license: Apache-2.0

Summary: oneAPI Deep Neural Network Library (oneDNN)

oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications.

oneDNN is intended for deep learning applications and framework developers interested in improving application performance. Deep learning practitioners should use one of the applications enabled with oneDNN.

In this package oneDNN is built with the Threadpool CPU runtime. oneDNN requires the user to implement a Threadpool interface to enable the library to perform computations using multiple threads.

For more information, please read the oneDNN developer guide: https://oneapi-src.github.io/oneDNN/

About onednn-dpcpp

Home: https://github.com/oneapi-src/oneDNN

Package license: Apache-2.0

Summary: oneAPI Deep Neural Network Library (oneDNN)

oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications.

oneDNN is intended for deep learning applications and framework developers interested in improving application performance. Deep learning practitioners should use one of the applications enabled with oneDNN.

In this package oneDNN is built with the DPC++ CPU and GPU runtimes.

For more information, please read the oneDNN developer guide: https://oneapi-src.github.io/oneDNN/

Current build status

Azure builds the following variants:

  • linux_64_dnnl_cpu_runtimedpcpp
  • linux_64_dnnl_cpu_runtimeomp
  • linux_64_dnnl_cpu_runtimetbb
  • linux_64_dnnl_cpu_runtimethreadpool
  • linux_aarch64
  • linux_ppc64le
  • osx_64_dnnl_cpu_runtimeomp
  • osx_64_dnnl_cpu_runtimetbb
  • osx_64_dnnl_cpu_runtimethreadpool
  • osx_arm64_dnnl_cpu_runtimeomp
  • osx_arm64_dnnl_cpu_runtimetbb
  • win_64_dnnl_cpu_runtimeomp
  • win_64_dnnl_cpu_runtimetbb
  • win_64_dnnl_cpu_runtimethreadpool

Current release info

Release information (downloads, version, supported platforms) for onednn, onednn-cpu-omp, onednn-cpu-tbb, onednn-cpu-threadpool, and onednn-dpcpp is published on the conda-forge channel at anaconda.org.

Installing onednn

Installing onednn from the conda-forge channel can be achieved by adding conda-forge to your channels with:

conda config --add channels conda-forge
conda config --set channel_priority strict

Once the conda-forge channel has been enabled, onednn, onednn-cpu-omp, onednn-cpu-tbb, onednn-cpu-threadpool, onednn-dpcpp can be installed with conda:

conda install onednn onednn-cpu-omp onednn-cpu-tbb onednn-cpu-threadpool onednn-dpcpp

or with mamba:

mamba install onednn onednn-cpu-omp onednn-cpu-tbb onednn-cpu-threadpool onednn-dpcpp

It is possible to list all of the versions of onednn available on your platform with conda:

conda search onednn --channel conda-forge

or with mamba:

mamba search onednn --channel conda-forge

Alternatively, mamba repoquery may provide more information:

# Search all versions available on your platform:
mamba repoquery search onednn --channel conda-forge

# List packages depending on `onednn`:
mamba repoquery whoneeds onednn --channel conda-forge

# List dependencies of `onednn`:
mamba repoquery depends onednn --channel conda-forge

About conda-forge

Powered by NumFOCUS

conda-forge is a community-led conda channel of installable packages. In order to provide high-quality builds, the process has been automated into the conda-forge GitHub organization. The conda-forge organization contains one repository for each of the installable packages. Such a repository is known as a feedstock.

A feedstock is made up of a conda recipe (the instructions on what and how to build the package) and the necessary configurations for automatic building using freely available continuous integration services. Thanks to the awesome service provided by Azure, GitHub, CircleCI, AppVeyor, Drone, and TravisCI, it is possible to build and upload installable packages to the conda-forge anaconda.org channel for Linux, Windows, and macOS.

To manage the continuous integration and simplify feedstock maintenance, conda-smithy has been developed. Using the conda-forge.yml within this repository, it is possible to re-render all of this feedstock's supporting files (e.g. the CI configuration files) with conda smithy rerender.

For more information please check the conda-forge documentation.

Terminology

feedstock - the conda recipe (raw material), supporting scripts and CI configuration.

conda-smithy - the tool which helps orchestrate the feedstock. Its primary use is in the construction of the CI .yml files and in simplifying the management of many feedstocks.

conda-forge - the place where the feedstock and smithy live and work to produce the finished article (built conda distributions).

Updating onednn-feedstock

If you would like to improve the onednn recipe or build a new package version, please fork this repository and submit a PR. Upon submission, your changes will be run on the appropriate platforms to give the reviewer an opportunity to confirm that the changes result in a successful build. Once merged, the recipe will be re-built and uploaded automatically to the conda-forge channel, whereupon the built conda packages will be available for everybody to install and use from the conda-forge channel. Note that all branches in the conda-forge/onednn-feedstock are immediately built and any created packages are uploaded, so PRs should be based on branches in forks and branches in the main repository should only be used to build distinct package versions.

In order to produce a uniquely identifiable distribution:

  • If the version of a package is not being increased, please add or increase the build/number.
  • If the version of a package is being increased, please remember to return the build/number back to 0.
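The two rules above can be illustrated with a hypothetical recipe/meta.yaml excerpt (the package version and numbers here are placeholders, not taken from the current recipe):

```yaml
# Hypothetical excerpt from recipe/meta.yaml.
# Rebuilding the same version (e.g. to pick up a dependency pin change):
package:
  name: onednn
  version: "3.3.0"   # unchanged

build:
  number: 1          # was 0; incremented so the new build is distinct

# Conversely, when bumping the version, reset the build number:
#   version: "3.4.0"
#   build:
#     number: 0
```

Together, version and build number uniquely identify each uploaded distribution.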

Feedstock Maintainers