[js/webgpu] Support capture and replay for jsep #18989
Conversation
Force-pushed from 08ab978 to 0ce4a6a.
This reverts commit 80b53bc.
Force-pushed from 02091c3 to c4cfde0.
@skottmckay @fs-eire @guschmue Please take a look, thanks!
@fs-eire Please take another look, thanks.
```
captureBegin(): void {
  LOG_DEBUG('info', 'captureBegin');
  let sessionCommandList = this.capturedCommandList.get(this.currentSessionId!);
  let sessionPendingKernels = this.capturedPendingKernels.get(this.currentSessionId!);
```
Code scanning / CodeQL warning: Useless assignment to local variable.
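A minimal sketch of one way such a warning can be addressed (hypothetical; this is not necessarily the fix merged in this PR, and it assumes `capturedCommandList` / `capturedPendingKernels` are `Map`s keyed by session id, as the snippet above suggests): create the per-session lists only when they are missing, so the result of the initial `get()` is actually used instead of being assigned to a local and discarded.

```
// Sketch of the method body only, inside the class that owns the fields above.
captureBegin(): void {
  LOG_DEBUG('info', 'captureBegin');
  const sessionId = this.currentSessionId!;
  // Reuse an existing per-session list if one was already created; otherwise create it.
  // This avoids declaring locals that are assigned but never read.
  if (!this.capturedCommandList.has(sessionId)) {
    this.capturedCommandList.set(sessionId, []);
  }
  if (!this.capturedPendingKernels.has(sessionId)) {
    this.capturedPendingKernels.set(sessionId, []);
  }
}
```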
/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline
/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-python-checks-ci-pipeline,onnxruntime-binary-size-checks-ci-pipeline,Android CI Pipeline
/azp run iOS CI Pipeline,ONNX Runtime React Native CI Pipeline
Azure Pipelines successfully started running 2 pipeline(s).
Azure Pipelines successfully started running 9 pipeline(s).
Azure Pipelines successfully started running 10 pipeline(s).
Referenced by the ort-web cherry-pick preview PR for `rel-1.17.3` (based on `rel-1.17.2`), which includes commit 85cef0a of this change and notes that it required conflict resolution, because the ROCm EP graph-capture work it builds on was not cherry-picked into rel-1.17.2.
Description
This PR expands the graph capture capability to the JS EP, similar to #16081. Unlike the CUDA EP, the JS EP does not use CUDA Graph; instead, it records all GPU commands and replays them, which removes most of the CPU overhead and avoids the GPU waiting on the CPU.
mobilenetv2-12 improves from 6 ms to 3.7 ms on an NVIDIA RTX 3090, and from 4.58 ms to 3.38 ms on an Intel Arc A770.
The limitations are similar to the CUDA EP's:
1. Models with control-flow ops (i.e. If, Loop and Scan ops) are not supported.
2. Graph capture is limited to models in which all ops can be partitioned to the JS EP or CPU EP with no memory copy between them.
3. Shapes of inputs/outputs cannot change across inference calls.
4. I/O binding is required.
Usage is as follows.
Method 1: specify output buffers explicitly, as in the example below.
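```
const sessionOptions = {
  executionProviders: [{ name: 'webgpu' }],
  enableGraphCapture: true,
};

const session = await ort.InferenceSession.create('./models/mobilenetv2-12.onnx', sessionOptions);

// prepare the inputBuffer/outputBuffer
// ...

const feeds = { 'input': ort.Tensor.fromGpuBuffer(inputBuffer, { dataType: 'float32', dims }) };
const fetches = { 'output': ort.Tensor.fromGpuBuffer(outputBuffer, { dataType: 'float32', dims: [1, 1000] }) };

// The first run captures the graph.
let results = await session.run(feeds, fetches);

// update inputBuffer content
// ...

// The second and subsequent runs replay the captured graph.
results = await session.run(feeds, fetches);

// ...
session.release();
```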
Method 2: don't specify output buffers explicitly. Internally, when graph capture is enabled, all output locations are set to 'gpu-buffer'. See the example below.
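```
const sessionOptions = {
  executionProviders: [{ name: 'webgpu' }],
  enableGraphCapture: true,
};

const session = await ort.InferenceSession.create('./models/mobilenetv2-12.onnx', sessionOptions);

// prepare the inputBuffer
// ...

const feeds = { 'input': ort.Tensor.fromGpuBuffer(inputBuffer, { dataType: 'float32', dims }) };

// The first run captures the graph.
let results = await session.run(feeds);

// update inputBuffer content
// ...

// The second and subsequent runs replay the captured graph.
results = await session.run(feeds);

// ...
session.release();
```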