# [js/web] WebGPU backend via JSEP (microsoft#14579)
### Description
This change introduces the following new components into ONNX Runtime
Web:
- JavaScript Execution Provider (JSEP)
  - Asynchronous inference execution powered by Emscripten's Asyncify
- WebGPU backend implemented in TypeScript
  - initial implementation of kernels:
    - elementwise operators (22)
    - binary operators (5)
    - tensor: Shape, Reshape, Transpose, Gemm
    - nn: Conv, {Global}MaxPool, {Global}AveragePool


The code still needs polishing; work on it is ongoing.

## Q&A
What is JSEP?
> JSEP, short for JavaScript Execution Provider, is a new ONNX Runtime
execution provider that specifically targets the web environment
(browsers). JSEP allows JavaScript code to hook in at various points
when ONNX Runtime runs inference on a model.

Why JSEP?
> JSEP is a hybrid-mode EP that contains both a C/C++ part and a
TypeScript/JavaScript part. There are two strong reasons why we
introduce JSEP:
> 1. The C/C++ part lets JSEP leverage as much of ONNX Runtime's
capability as possible, including graph transformers, optimizers, and the
ability to fall back to the CPU EP. The TypeScript/JavaScript part makes
the kernel implementations much easier to develop and debug in the
browser.
> 2. JavaScript APIs that require asynchronous execution (e.g.
`buffer.mapAsync()`) make it impossible to run `OrtRun()` in a
synchronous context (see the "async problem" section below). This is
resolved by using Emscripten's Asyncify.

What is WebGPU?
> WebGPU is the new GPU API available in browsers. It is one of only
two APIs currently available for accessing the GPU from a browser (the
other is WebGL).
> WebGPU is designed with more advanced and more powerful features than
WebGL, and it is potentially the solution that offers the best GPU
performance currently available for model inferencing in the browser.
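As a point of reference (not part of this change), a minimal TypeScript sketch of how a page acquires a WebGPU device through the browser API:

```ts
// Minimal sketch of acquiring a WebGPU device in the browser.
// Requires the `@webgpu/types` type definitions (or a browser with WebGPU enabled).
async function getWebGpuDevice(): Promise<GPUDevice> {
  if (!navigator.gpu) {
    throw new Error('WebGPU is not supported in this browser.');
  }
  // An adapter is a handle to a physical GPU; a device is the logical interface used for work.
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    throw new Error('Failed to acquire a WebGPU adapter.');
  }
  return adapter.requestDevice();
}
```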

What is the async problem and why do we have it?
> The "async problem" is that you cannot call an async function from a
synchronous context. Consider the following C-style code:
> ```c
> // C-style declarations (API)
> typedef void (*ON_COMPLETE)(PVOID state, DATA *data);
> void read_data_from_file(FILEHANDLE file, ON_COMPLETE on_complete);
> 
> // implementation
> DATA * my_impl_read_data_from_file_sync(FILEHANDLE file) {
>   // how to implement?
> }
> ```
> The answer is that it's impossible to implement this function. Usually we
would either look for a synchronous version of the API, or launch a thread
that calls the async function while the main thread blocks and waits for it.
Unfortunately, in the browser environment, neither is possible.
>
> WebGPU does not offer any synchronous API for downloading data (GPU
to CPU). This is the one operation that MUST be async. Since `OrtRun()`
will eventually call into DataTransfer to copy data from GPU to CPU,
and `OrtRun()` is a synchronous function, this cannot be done in the
normal way.

What is Emscripten? How does the Asyncify feature resolve the problem?
> Emscripten is the C/C++ compiler toolchain for WebAssembly. It's what we
use to compile ORT and generate the WebAssembly artifacts that run in
browsers.
>
> Asyncify is a [compiler
feature](https://emscripten.org/docs/porting/asyncify.html) that allows
calling async functions from a synchronous context. In short, it
generates code to unwind and rewind the call stack to emulate async
execution. With this feature, we are able to call async functions
inside the `OrtRun()` call.

## Design Overview

**Inter-op**

To ONNX Runtime, JSEP behaves much like any other EP. It exposes an
interface for interop with JavaScript, which is defined in
onnxruntime/wasm/js_internal_api.js:
```js
// init JSEP
Module["jsepInit"] = function (backend, alloc, free, copy, copyAsync, createKernel, releaseKernel, run) {
    Module.jsepBackend = backend;
    Module.jsepAlloc = alloc;
    Module.jsepFree = free;
    Module.jsepCopy = copy;
    Module.jsepCopyAsync = copyAsync;
    Module.jsepCreateKernel = createKernel;
    Module.jsepReleaseKernel = releaseKernel;
    Module.jsepRun = run;
};
```
This simple JavaScript snippet defines all of the language-boundary
functions that JSEP requires in order to implement kernels and data
transfers in JavaScript inside ONNX Runtime:
- `jsepBackend`: assigns the singleton backend object to the WebAssembly module
- `jsepAlloc` and `jsepFree`: implementations of the data transfer's `Alloc()`
and `Free()`
- `jsepCopy`: synchronous copy (GPU to GPU, CPU to GPU)
- `jsepCopyAsync`: asynchronous copy (GPU to CPU)
- `jsepCreateKernel` and `jsepReleaseKernel`: create and release a corresponding
object maintained in JavaScript to match the lifecycle of the kernel in ORT
- `jsepRun`: `OpKernel::Compute()` calls into this

The abstraction above keeps the connections and dependencies between
C/C++ and TypeScript/JavaScript as small as possible.
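For illustration, a minimal sketch of how the TypeScript side might wire a WebGPU backend into the WebAssembly module through `jsepInit` (the callback signatures follow `ort-wasm.d.ts` from this change; the `WebGpuBackend` interface, its method names, and the import path are hypothetical):

```ts
// Type of the Emscripten module; import path assumed relative to web/lib/wasm/.
import {OrtWasmModule} from './binding/ort-wasm';

// Hypothetical shape of the TypeScript backend used in this sketch.
interface WebGpuBackend {
  alloc(size: number): number;                                                   // returns a GPU data ID
  free(gpuDataId: number): number;
  upload(dataOffset: number, gpuDataId: number, size: number): void;             // CPU -> GPU, synchronous
  download(gpuDataId: number, dataOffset: number, size: number): Promise<void>;  // GPU -> CPU, async
  createKernel(name: string, kernelHandle: number, attributes: unknown): void;
  releaseKernel(kernelHandle: number): void;
  computeKernel(kernelHandle: number, contextDataOffset: number): number;        // returns an error code
}

// Wire the backend's callbacks into the Emscripten module via jsepInit.
const initJsep = (module: OrtWasmModule, backend: WebGpuBackend): void => {
  module.jsepInit!(
      backend,
      (size) => backend.alloc(size),
      (gpuDataId) => backend.free(gpuDataId),
      (dataOffset, gpuDataId, size) => backend.upload(dataOffset, gpuDataId, size),
      (gpuDataId, dataOffset, size) => backend.download(gpuDataId, dataOffset, size),
      (name, kernelHandle, attributes) => backend.createKernel(name, kernelHandle, attributes),
      (kernelHandle) => backend.releaseKernel(kernelHandle),
      (kernelHandle, contextDataOffset) => backend.computeKernel(kernelHandle, contextDataOffset));
};
```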

**Resource Management**

The lifecycles of tensor data and kernels are managed by ORT (C/C++), but the
implementations are left to JavaScript. The JavaScript code is responsible
for implementing the callbacks correctly.

For WebGPU, the GPU data is managed by JavaScript using a singleton map
(tensor_data_id => GPUBuffer). The GPU pipeline is managed as a singleton.
Shaders are managed using a singleton map (shader_key => gpu_program),
where shader_key is generated from the cache_key (op-specific, including
attributes) and the input shapes.
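A rough TypeScript sketch of what this singleton bookkeeping could look like (class and variable names here are illustrative, not the actual implementation added by this change):

```ts
// Sketch: singleton GPU data map and shader/program cache (illustrative names).
class GpuDataManager {
  private nextId = 1;
  // tensor_data_id => GPUBuffer
  private readonly buffers = new Map<number, GPUBuffer>();

  constructor(private readonly device: GPUDevice) {}

  // Allocate a storage buffer and return its data ID.
  // (In practice the size is typically rounded up for alignment.)
  alloc(size: number): number {
    const buffer = this.device.createBuffer({
      size,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST
    });
    const id = this.nextId++;
    this.buffers.set(id, buffer);
    return id;
  }

  get(id: number): GPUBuffer|undefined {
    return this.buffers.get(id);
  }

  free(id: number): void {
    this.buffers.get(id)?.destroy();
    this.buffers.delete(id);
  }
}

// shader_key => compiled compute pipeline. The key is derived from the op-specific
// cache key (including attributes) plus the input shapes.
const programCache = new Map<string, GPUComputePipeline>();
const buildShaderKey = (cacheKey: string, inputShapes: readonly number[][]): string =>
    `${cacheKey}|${inputShapes.map((shape) => shape.join(',')).join(';')}`;
```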

**About data transfer**

`js::DataTransfer::CopyTensor` is implemented to call either the synchronous
or the asynchronous copy callback, depending on whether the destination is
the GPU. Emscripten's `EM_ASYNC_JS` macro is used to wrap the async function
so that it can be called from a synchronous context.
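On the JavaScript side, the GPU-to-CPU copy is the part that cannot be made synchronous. A hedged TypeScript sketch of what the download callback might do (the `gpuBufferById` map and the direct `heapU8` access are assumptions for illustration):

```ts
// Sketch: async GPU -> CPU copy through a mappable staging buffer (illustrative).
async function downloadGpuData(
    device: GPUDevice, gpuBufferById: Map<number, GPUBuffer>, heapU8: Uint8Array,
    gpuDataId: number, dataOffset: number, size: number): Promise<void> {
  const source = gpuBufferById.get(gpuDataId)!;

  // Storage buffers cannot be mapped directly, so copy into a MAP_READ staging buffer first.
  // (In practice, copy sizes must be 4-byte aligned.)
  const staging = device.createBuffer({size, usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST});
  const encoder = device.createCommandEncoder();
  encoder.copyBufferToBuffer(source, 0, staging, 0, size);
  device.queue.submit([encoder.finish()]);

  // mapAsync() has no synchronous equivalent; this is why OrtRun() needs Asyncify on the C++ side.
  await staging.mapAsync(GPUMapMode.READ);
  heapU8.set(new Uint8Array(staging.getMappedRange(0, size)), dataOffset);
  staging.unmap();
  staging.destroy();
}
```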

**Run kernel in JS**

The kernel class constructor calls `jsepCreateKernel()` once, with an
optional kernel-specific serialization to pass attributes into
JavaScript.

`Compute()` is implemented so that metadata serialization is performed
in a base class, and the JavaScript code can access that data using the
Emscripten-specific built-in `EM_ASM_*` macros.
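A hedged TypeScript sketch of the JavaScript side of this lifecycle, i.e. a registry keyed by the kernel handle (the registry shape and `dispatchOperator` are illustrative; the callback signatures follow `ort-wasm.d.ts`):

```ts
// Sketch: JS-side registry that mirrors the lifecycle of ORT kernels (illustrative).
interface KernelEntry {
  name: string;
  attributes: unknown;
}

// Hypothetical dispatcher that routes to the TypeScript operator implementation.
declare function dispatchOperator(name: string, attributes: unknown, contextDataOffset: number): number;

const kernels = new Map<number, KernelEntry>();

// jsepCreateKernel: called once from the kernel's C++ constructor.
const createKernel = (name: string, kernelHandle: number, attributes: unknown): void => {
  kernels.set(kernelHandle, {name, attributes});
};

// jsepReleaseKernel: called when the C++ kernel is destroyed.
const releaseKernel = (kernelHandle: number): void => {
  kernels.delete(kernelHandle);
};

// jsepRun: called from OpKernel::Compute(); returns 0 on success, non-zero on error.
const runKernel = (kernelHandle: number, contextDataOffset: number): number => {
  const entry = kernels.get(kernelHandle);
  if (!entry) {
    return 1;  // unknown kernel handle
  }
  // The data at contextDataOffset describes the inputs/outputs for this Compute() call.
  return dispatchOperator(entry.name, entry.attributes, contextDataOffset);
};
```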

**Disabled features**

Memory pattern is force-disabled, because WebGPU data is not
represented by a general memory model (in which a buffer can be described
by an offset + size).
Concurrent run support is disabled. WebGPU is stateful and also involves
async function calls; supporting concurrent runs would significantly
increase complexity without providing any real benefit.

**Prefer channels-last**

JSEP prefers channels-last and returns `DataLayout::NHWC` from
`GetPreferredLayout()`. This lets the graph transformers preprocess the
graph into a channels-last form so that more optimized WebGPU shaders
can be used.

**Testing code**

It's impossible to test JSEP directly because JSEP itself does not
contain any kernel implementations. However, it does contain the kernel
registrations, which need to work together with the corresponding
JavaScript code. There are unit tests that run ONNX models through the
JavaScript API.
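For example, a test can exercise the new EP end to end through the public JavaScript API, roughly as sketched below (the model path is a placeholder; the `webgpu` EP name and the `env.webgpu.profilingMode` flag are introduced by this change, and the exact import path may differ by build):

```ts
// Sketch: running an ONNX model with the WebGPU EP through onnxruntime-web.
import * as ort from 'onnxruntime-web';

async function runWithWebGpu(): Promise<void> {
  // Optional: enable GPU profiling (flag added in this change).
  ort.env.webgpu.profilingMode = 'default';

  // 'model.onnx' is a placeholder path to a test model.
  const session = await ort.InferenceSession.create('model.onnx', {executionProviders: ['webgpu']});

  const input = new ort.Tensor('float32', new Float32Array(1 * 3 * 224 * 224), [1, 3, 224, 224]);
  const results = await session.run({[session.inputNames[0]]: input});
  console.log(results[session.outputNames[0]].dims);
}
```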

---------

Co-authored-by: Scott McKay <[email protected]>
fs-eire and skottmckay authored Apr 24, 2023
1 parent e12d44c commit 5c4f5bb
Showing 51 changed files with 6,177 additions and 367 deletions.
6 changes: 6 additions & 0 deletions .eslintrc.js
@@ -182,6 +182,12 @@ module.exports = {
'import/no-extraneous-dependencies': 'off',
'no-console': 'off'
}
}, {
files: ['web/lib/**/3rd-party/**/*.ts'], rules: {
'header/header': 'off',
'unicorn/filename-case': 'off',
'@typescript-eslint/explicit-module-boundary-types': 'off',
}
}],
extends: [
'eslint:recommended',
3 changes: 2 additions & 1 deletion common/lib/env-impl.ts
@@ -8,6 +8,7 @@ export class EnvImpl implements Env {
constructor() {
this.wasm = {};
this.webgl = {};
this.webgpu = {};
this.logLevelInternal = 'warning';
}

@@ -28,8 +29,8 @@ export class EnvImpl implements Env {
debug?: boolean;

wasm: Env.WebAssemblyFlags;

webgl: Env.WebGLFlags;
webgpu: Env.WebGpuFlags;

[name: string]: unknown;

9 changes: 9 additions & 0 deletions common/lib/env.ts
@@ -86,6 +86,10 @@ export declare namespace Env {
*/
async?: boolean;
}

export interface WebGpuFlags {
profilingMode?: 'off'|'default';
}
}

export interface Env {
@@ -112,6 +116,11 @@ export interface Env {
*/
webgl: Env.WebGLFlags;

/**
* Represent a set of flags for WebGPU
*/
webgpu: Env.WebGpuFlags;

[name: string]: unknown;
}

36 changes: 24 additions & 12 deletions web/karma.conf.js
@@ -6,6 +6,7 @@
const bundleMode = require('minimist')(process.argv)['bundle-mode'] || 'dev'; // 'dev'|'perf'|undefined;
const karmaPlugins = require('minimist')(process.argv)['karma-plugins'] || undefined;
const timeoutMocha = require('minimist')(process.argv)['timeout-mocha'] || 60000;
const forceLocalHost = !!require('minimist')(process.argv)['force-localhost'];
const commonFile = bundleMode === 'dev' ? '../common/dist/ort-common.js' : '../common/dist/ort-common.min.js'
const mainFile = bundleMode === 'dev' ? 'test/ort.dev.js' : 'test/ort.perf.js';

@@ -16,25 +17,32 @@ const mainFile = bundleMode === 'dev' ? 'test/ort.dev.js' : 'test/ort.perf.js';
// https://stackoverflow.com/a/8440736
//
function getMachineIpAddress() {
var os = require('os');
var ifaces = os.networkInterfaces();
if (!forceLocalHost) {
var os = require('os');
var ifaces = os.networkInterfaces();

for (const ifname in ifaces) {
for (const iface of ifaces[ifname]) {
if ('IPv4' !== iface.family || iface.internal !== false) {
// skip over internal (i.e. 127.0.0.1) and non-ipv4 addresses
continue;
}
for (const ifname in ifaces) {
for (const iface of ifaces[ifname]) {
if ('IPv4' !== iface.family || iface.internal !== false) {
// skip over internal (i.e. 127.0.0.1) and non-ipv4 addresses
continue;
}

// returns the first available IP address
return iface.address;
// returns the first available IP address
return iface.address;
}
}
}

// if no available IP address, fallback to "localhost".
return 'localhost';
}

const hostname = getMachineIpAddress();
// In Node.js v16 and below, 'localhost' is using IPv4, so need to listen to '0.0.0.0'
// In Node.js v17+, 'localhost' is using IPv6, so need to listen to '::'
const listenAddress = Number.parseInt(process.versions.node.split('.')[0]) >= 17 ? '::' : '0.0.0.0';

module.exports = function (config) {
config.set({
// global config of your BrowserStack account
@@ -75,12 +83,16 @@ module.exports = function (config) {
browserNoActivityTimeout: 300000,
browserDisconnectTolerance: 0,
browserSocketTimeout: 60000,
hostname: getMachineIpAddress(),
hostname,
listenAddress,
customLaunchers: {
ChromeTest: { base: 'ChromeHeadless', flags: ['--enable-features=SharedArrayBuffer'] },
ChromePerf: { base: 'Chrome', flags: ['--window-size=1,1', '--enable-features=SharedArrayBuffer'] },
ChromeDebug: { debug: true, base: 'Chrome', flags: ['--remote-debugging-port=9333', '--enable-features=SharedArrayBuffer'] },

ChromeCanaryTest: { base: 'ChromeCanary', flags: ['--window-size=1,1', '--enable-features=SharedArrayBuffer', '--enable-unsafe-webgpu'] },
ChromeCanaryProfileTest: { base: 'ChromeCanary', flags: ['--window-size=1,1', '--enable-features=SharedArrayBuffer', '--enable-unsafe-webgpu', '--disable-dawn-features=disallow_unsafe_apis'] },
ChromeCanaryDebug: { debug: true, base: 'ChromeCanary', flags: ['--remote-debugging-port=9333', '--enable-features=SharedArrayBuffer', '--enable-unsafe-webgpu'] },
ChromeCanaryProfileDebug: { debug: true, base: 'ChromeCanary', flags: ['--remote-debugging-port=9333', '--enable-features=SharedArrayBuffer', '--enable-unsafe-webgpu', '--disable-dawn-features=disallow_unsafe_apis'] },
//
// ==== BrowserStack browsers ====
//
4 changes: 4 additions & 0 deletions web/lib/build-def.d.ts
@@ -14,6 +14,10 @@ interface BuildDefinitions {
* defines whether to disable the whole WebGL backend in the build.
*/
DISABLE_WEBGL: boolean;
/**
* defines whether to disable the whole WebGpu backend in the build.
*/
DISABLE_WEBGPU: boolean;
/**
* defines whether to disable the whole WebAssembly backend in the build.
*/
4 changes: 4 additions & 0 deletions web/lib/index.ts
@@ -13,8 +13,12 @@ if (!BUILD_DEFS.DISABLE_WEBGL) {
const onnxjsBackend = require('./backend-onnxjs').onnxjsBackend;
registerBackend('webgl', onnxjsBackend, -10);
}

if (!BUILD_DEFS.DISABLE_WASM) {
const wasmBackend = require('./backend-wasm').wasmBackend;
if (!BUILD_DEFS.DISABLE_WEBGPU) {
registerBackend('webgpu', wasmBackend, 5);
}
registerBackend('cpu', wasmBackend, 10);
registerBackend('wasm', wasmBackend, 10);
registerBackend('xnnpack', wasmBackend, 9);
2 changes: 1 addition & 1 deletion web/lib/onnxjs/backend.ts
@@ -78,7 +78,7 @@ export interface Backend {
const backendsCache: Map<string, Backend> = new Map();

export const backend: {[name: string]: Backend} = {
webgl: new WebGLBackend(),
webgl: new WebGLBackend()
};

/**
3 changes: 2 additions & 1 deletion web/lib/onnxjs/backends/webgl/ops/reduce.ts
@@ -98,6 +98,7 @@ const createReduceProgramInfo =
};

const validateInputs = (inputs: Tensor[]): void => {
// TODO: support Reduce* operators with 2 inputs.
if (!inputs || inputs.length !== 1) {
throw new Error('Reduce op requires 1 input.');
}
@@ -174,4 +175,4 @@ export const reduceLogSumSquare: OperatorImplementation<ReduceAttributes> =
(inferenceHandler: WebGLInferenceHandler, inputs: Tensor[], attributes: ReduceAttributes): Tensor[] => {
const reduceOp: ReduceOp = (): string[] => ['float t; value = 0.0;', 't = _A(inputIdx); value += t * t;', ''];
return reduce(inferenceHandler, inputs, attributes, 'ReduceLogSumSquare', reduceOp);
};
};
2 changes: 0 additions & 2 deletions web/lib/onnxjs/opset.ts
@@ -8,13 +8,11 @@ export interface OpSet {
domain: string;
version: number;
}

export declare namespace OpSet {
/**
* Domain of an opset, it can be an empty string(default value, represent for ai.onnx), or 'ai.onnx.ml'
*/
type Domain = ''|'ai.onnx.ml'|'com.microsoft';

/**
* A resolve rule consists of 4 or 5 items: opType, opSetDomain, versionSelector, operatorImplementation and
* operatorInitialization (optional)
22 changes: 22 additions & 0 deletions web/lib/wasm/binding/ort-wasm.d.ts
@@ -1,6 +1,17 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

declare namespace JSEP {
type BackendType = unknown;
type AllocFunction = (size: number) => number;
type FreeFunction = (size: number) => number;
type UploadFunction = (dataOffset: number, gpuDataId: number, size: number) => void;
type DownloadFunction = (gpuDataId: number, dataOffset: number, size: number) => Promise<void>;
type CreateKernelFunction = (name: string, kernel: number, attribute: unknown) => void;
type ReleaseKernelFunction = (kernel: number) => void;
type RunFunction = (kernel: number, contextDataOffset: number) => number;
}

export interface OrtWasmModule extends EmscriptenModule {
// #region emscripten functions
stackSave(): number;
@@ -51,6 +62,17 @@ export interface OrtWasmModule extends EmscriptenModule {
// #region config
mainScriptUrlOrBlob?: string|Blob;
// #endregion

// #region JSEP
jsepInit?
(backend: JSEP.BackendType, alloc: JSEP.AllocFunction, free: JSEP.FreeFunction, upload: JSEP.UploadFunction,
download: JSEP.DownloadFunction, createKernel: JSEP.CreateKernelFunction,
releaseKernel: JSEP.ReleaseKernelFunction, run: JSEP.RunFunction): void;

_JsepOutput(context: number, index: number, data: number): number;

jsepRunPromise?: Promise<number>;
// #endregion
}

declare const moduleFactory: EmscriptenModuleFactory<OrtWasmModule>;
