THIS REPOSITORY IS OBSOLETE: the changes were merged into the official runtime in microsoft/onnxruntime#16050
This is an updated copy of the official onnxruntime-node with DirectML and CUDA support. A demo that runs Stable Diffusion on the GPU with this runtime is here: https://github.com/dakenf/stable-diffusion-nodejs
- Works out of the box with DirectML. If you want to experiment with CUDA on Windows, install CUDA and the ONNX Runtime Windows build with the CUDA provider.
- Install CUDA (tested only with 11.7, but 12 should be supported): https://docs.nvidia.com/cuda/cuda-installation-guide-linux/
- Install cuDNN https://developer.nvidia.com/rdp/cudnn-archive
- Install onnxruntime-linux-x64-gpu-1.14.1 https://github.com/microsoft/onnxruntime/releases/tag/v1.14.1
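After extracting the Linux GPU release, the dynamic loader needs to be able to find its shared libraries. A minimal sketch, assuming the tarball was unpacked under `/opt` (that path is an assumption; adjust it to wherever you extracted the release):

```shell
# Assumed extraction path for the release tarball; adjust to your setup.
ORT_HOME=/opt/onnxruntime-linux-x64-gpu-1.14.1
export LD_LIBRARY_PATH="$ORT_HOME/lib:${LD_LIBRARY_PATH:-}"
```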
It works the same way as onnxruntime-node:
```shell
npm i onnxruntime-node-gpu
```
```typescript
import { InferenceSession, Tensor } from 'onnxruntime-node-gpu'

// 'directml' can also be 'cuda' or 'cpu'
const sessionOption: InferenceSession.SessionOptions = { executionProviders: ['directml'] }
const model = await InferenceSession.create('model.onnx', sessionOption)

const input = new Tensor('float32', Float32Array.from([0, 1, 2]), [3])
// 'input_name' must match the input name defined in your model
const result = await model.run({ input_name: input })
```
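The `Tensor` constructor takes a flat typed array plus a `dims` array describing its shape. A small sketch of preparing that pair from nested data (the helper name is illustrative, not part of the package API):

```typescript
// Illustrative helper (not part of onnxruntime-node-gpu): flattens a
// batch of equal-length rows into the flat Float32Array + dims pair
// that `new Tensor('float32', data, dims)` expects.
function toTensorArgs(rows: number[][]): { data: Float32Array; dims: number[] } {
  const cols = rows[0].length
  const data = new Float32Array(rows.length * cols)
  rows.forEach((row, i) => data.set(row, i * cols))
  return { data, dims: [rows.length, cols] }
}

const { data, dims } = toTensorArgs([[0, 1, 2], [3, 4, 5]])
// data → Float32Array [0, 1, 2, 3, 4, 5], dims → [2, 3]
```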
- Currently, all results are returned as NAPI Node.js objects, so when you run inference multiple times (e.g. during sampling with the Stable Diffusion UNet) there are many unnecessary memory copies of inputs from JS to the GPU and back. The performance impact is not big, however. Maybe later I will make the output Tensorflow.js-compatible tensors.
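On the JS side you can at least avoid reallocating input buffers on every sampling step by reusing one preallocated array. A hedged sketch in plain TypeScript (the NAPI copy to and from the GPU described above still happens per run; only the JS-side allocation is saved):

```typescript
// Illustrative: reuse one preallocated buffer across sampling steps
// instead of allocating a new Float32Array per run.
const latent = new Float32Array(4 * 64 * 64) // reused every step

function writeStepInput(step: number, out: Float32Array): void {
  // Fill `out` in place for this step (dummy values here).
  out.fill(step)
}

writeStepInput(1, latent)
// latent is now ready to wrap: new Tensor('float32', latent, [1, 4, 64, 64])
```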
Just download the repo and run:

```shell
npx cmake-js compile
```
For some reason, the dynamically linked ONNX Runtime tries to load an outdated DirectML.dll from system32; see locaal-ai/obs-backgroundremoval#272
Special thanks to the authors of https://github.com/royshil/obs-backgroundremoval and https://github.com/umireon/onnxruntime-static-win for the CMake scripts that download a pre-built onnxruntime for static linking.
Also thanks to ChatGPT for helping me remember how to code in C++.
You can ask me questions on Twitter