
[js/webgpu] Fix NAN caused by un-initialized buffer in instance-norm #19387

Merged · 4 commits into microsoft:main · Mar 19, 2024

Conversation

axinging (Contributor) commented Feb 2, 2024

The added case produces NaN because of the uninitialized buffer.

@axinging axinging marked this pull request as draft February 2, 2024 07:02
@axinging axinging changed the title [NotForReview] Fix gpu memory [NotForReview] Fix gpu buffer reuse issue Feb 2, 2024
@axinging axinging changed the title [NotForReview] Fix gpu buffer reuse issue [js/webgpu] Fix NAN caused by buffer reuse in instance-norm Feb 29, 2024
@axinging axinging changed the title [js/webgpu] Fix NAN caused by buffer reuse in instance-norm [js/webgpu] Fix NAN caused by un-initialized buffer in instance-norm Feb 29, 2024
@axinging axinging marked this pull request as ready for review February 29, 2024 06:28
axinging (Contributor, Author)

@qjia7 @xhcao @hujiajie @jzm-intel @gyagp PTAL

gyagp (Contributor) commented Mar 4, 2024

@axinging Please hold this a bit and I will discuss further with you.

@axinging axinging marked this pull request as draft March 5, 2024 00:37
fs-eire (Contributor) commented Mar 13, 2024

I think this fix is OK and we should merge it. Let me explain my understanding a little bit.

The current implementation of InstanceNormalization is a multi-step implementation. It uses 3 shaders:

  • InstanceNormComputeMean
    This program takes the original input (shape=[N, H, W, C]) with scale and bias, and generates a temporary intermediate output (shape=[N, C, 64, 2]).
  • InstanceNormComputeChannelScaleShift
    This program takes the output of the previous step as input with scale and bias, and generates a temporary intermediate output (shape=[N, C, 2]).
  • InstanceNormalizationNHWC
    This program takes the output of the previous step as input with scale and bias, and generates the final output.

The problem is that the shader of step 1 does not always write to all of the memory it claims for its output, while step 2 reads values that step 1 never wrote. This means arbitrary garbage data can be loaded in step 2, producing wrong results.

This PR fixes the issue. Eventually, though, we may want a better implementation that either avoids the 3 shaders entirely (maybe using workgroup barriers) or generates shaders that are guaranteed to write all of the memory covering the output data. For now, this is better than a broken kernel.

@guschmue @satyajandhyala
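The failure mode described above can be simulated on the host side in a few lines of TypeScript. This is an illustrative sketch, not the actual ort-web code: the recycled GPU buffer is modeled as stale NaN contents, step 1 writes only some of the 64 slots per channel, and step 2's reduction over all slots picks up the garbage.

```typescript
// Hypothetical model of the bug (names and sizes are illustrative).
// The intermediate buffer is shaped [N, C, 64, 2], but step 1 only
// writes `used` of the 64 slots per (n, c) pair.
const N = 1, C = 2, SLOTS = 64;

// A recycled buffer is not zeroed; model the stale contents as NaN.
const intermediate = new Float32Array(N * C * SLOTS * 2).fill(NaN);

// Step 1: each (n, c) pair fills only the slots that had work to do.
function step1(used: number): void {
  for (let nc = 0; nc < N * C; nc++) {
    for (let s = 0; s < used; s++) {
      intermediate[(nc * SLOTS + s) * 2] = 1;     // partial sum
      intermediate[(nc * SLOTS + s) * 2 + 1] = 2; // partial sum of squares
    }
  }
}

// Step 2: reduces over all 64 slots, including the never-written ones.
function step2(): number {
  let total = 0;
  for (let i = 0; i < intermediate.length; i++) total += intermediate[i];
  return total;
}

step1(3);             // only 3 of 64 slots written per channel
console.log(step2()); // NaN: the stale slots poison the reduction
```

Either zero-filling the buffer up front (this PR's approach) or restricting step 2's read range to the written slots would make the reduction well-defined.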

@axinging axinging marked this pull request as ready for review March 13, 2024 12:57
@guschmue guschmue added the ep:WebGPU ort-web webgpu provider label Mar 13, 2024
guschmue (Contributor)

/azp run ONNX Runtime Web CI Pipeline

guschmue (Contributor)

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline


Azure Pipelines successfully started running 1 pipeline(s).

guschmue (Contributor)

/azp run Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline


Azure Pipelines could not run because the pipeline triggers exclude this branch/path.


Azure Pipelines successfully started running 1 pipeline(s).

fs-eire (Contributor) commented Mar 13, 2024

/azp run Big Models


Azure Pipelines could not run because the pipeline triggers exclude this branch/path.

gyagp (Contributor) commented Mar 14, 2024

> (quoting fs-eire's explanation of the 3-shader implementation above)

My intuition is we should avoid reading out-of-scope data in step2, instead of filling them with all 0s in step1. Any concerns we can't do this?

fs-eire (Contributor) commented Mar 14, 2024

> (quoting fs-eire's explanation of the 3-shader implementation above)

> My intuition is we should avoid reading out-of-scope data in step 2, instead of filling it with all 0s in step 1. Any concerns we can't do this?

We absolutely should do this. Usually I agree with "avoid reading out-of-scope data". But in this specific implementation, step 1 creates its output in shape [N, C, 64, 2] regardless of the W and H of the original input. dim[2] is fixed to 64 for whatever reason (maybe to simplify the implementation).

Since step 1 writes to [N, C, 64, 2] and step 2 reads from the same-sized buffer as input, there is another point of view in which the whole NxCx128 data is "in scope". These are internal steps of the operator's implementation anyway, so it is up to whoever implements the operator, as long as the read and write ranges match. From this point of view, fixing step 1 to make sure it writes all the required memory is also a valid fix.

But you reminded me that I didn't check whether uniforms.H equals N*C*128/sizeof(input_elem). This is important; otherwise step 1 writes out-of-scope data.
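The boundary check being discussed follows a common pattern in generated WGSL: an invocation returns early when its global index is past the real output size, because a dispatch may launch more invocations than there are output elements. A minimal host-side sketch of the idea (illustrative names, not the PR's actual shader code):

```typescript
// Mirrors the WGSL idiom `if (global_idx >= uniforms.output_size) { return; }`
// so that extra invocations never write out of bounds.
function guardedWrite(output: Float32Array, outputSize: number,
                      globalIdx: number, value: number): void {
  if (globalIdx >= outputSize) return; // out-of-range invocation: no-op
  output[globalIdx] = value;
}

const out = new Float32Array(4);
// 8 invocations dispatched, but only 4 valid output elements.
for (let i = 0; i < 8; i++) guardedWrite(out, out.length, i, i * 10);
// out is now [0, 10, 20, 30]; invocations 4..7 were discarded.
```

Without the guard, the extra invocations would write past the claimed output region, which is exactly the "writes to out-of-scope data" concern raised above.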

axinging (Contributor, Author)

@fs-eire, @gyagp Boundary check added, PTAL.

But I think the instance-norm (NHWC) implementation has room for improvement: it wastes a lot of memory and GPU compute units.
For memory: [n, c, 64, 2] is a huge waste; [n, c, 1, 2] should be enough for this case.
For compute units: it schedules n * c / components * 64 threads; I tested that n * c / components * 1 also works.
I will do some perf tests on this and, if possible, fix it in another PR.
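The memory claim above can be checked with quick arithmetic (the n and c values here are assumed examples, not from the PR; float32 = 4 bytes): the [n, c, 64, 2] intermediate is exactly 64x the size of a [n, c, 1, 2] one.

```typescript
// Rough sizing of the intermediate buffer for example dimensions.
const n = 1, c = 256, bytesPerFloat = 4;
const current  = n * c * 64 * 2 * bytesPerFloat; // [n, c, 64, 2]
const proposed = n * c * 1 * 2 * bytesPerFloat;  // [n, c, 1, 2]
console.log(current, proposed, current / proposed); // 131072 2048 64
```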

fs-eire (Contributor) commented Mar 19, 2024

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

fs-eire (Contributor) commented Mar 19, 2024

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-binary-size-checks-ci-pipeline,Big Models,Android CI Pipeline

fs-eire (Contributor) commented Mar 19, 2024

/azp run iOS CI Pipeline,ONNX Runtime React Native CI Pipeline


Azure Pipelines successfully started running 1 pipeline(s).

2 similar comments

@fs-eire fs-eire merged commit 4c6a6a3 into microsoft:main Mar 19, 2024
45 checks passed
fs-eire pushed a commit that referenced this pull request Mar 20, 2024
…19387)

The added case will be NAN because of the un-initialized buffer.
YUNQIUGUO pushed a commit that referenced this pull request Mar 29, 2024
### Description
This PR is a preview of cherry-picks for ort-web to `rel-1.17.3` based
on `rel-1.17.2`.

<details>

<summary>Changes of ort-web to cherry-pick</summary>

The following commits are from main branch.

`o` stands for pick, and `x` stands for skip.
```
o   2e0a388 [js/webgpu] Add HardSigmoid support (#19215)
o   d226e40 [js/webgpu] set query type in onRunStart (#19202)
o   61610ff [js/webgpu] Add FusedConv clip test case (#18900)
o   a33b5bd [JS/WebGPU] Added Uniforms to SkipLayerNorm. (#18788)
o   591f90c [js/webgpu] Fix issue of timestamp query (#19258)
o   7252c6e [WebNN EP] Support WebNN async API with Asyncify (#19145)
o   5b06505 [js/webgpu] Fix Tanh explosion (#19201)
o   656ca66 [js/webgpu] Support uniforms for conv, conv transpose, conv grouped (#18753)
o   a3f0e24 [js/webgpu] Support f16 uniform (#19098)
o   9e69606 fix f16 for attention, enable slice and flatten for more types (#19262)
o   624b4e2 [js/webgpu] Remove enableShapesUniforms (#19279)
o   90883a3 [js/webgpu] Add hardSigmoid activation for fusedConv (#19233)
o   85cef0a [js/webgpu] Support capture and replay for jsep (#18989)
o   d73131c [js/webgpu] Use DataType as uniform cpu type (#19281)
o   dd1f6cc [js/webgpu] resolve codescan alert (#19343)
o   3a2ab19 [js/webgpu] Refactor createTensorShapeVariables (#18883)
o   efc17e7 [js/webgpu] Fix the undefined push error (#19366)
 x  50806a7 [js/web] support external data in npm test (#19377)
o   ccbe264 [js/webgpu] Add LeakyRelu activation for fusedConv (#19369)
o   5ff27ef [js/webgpu] support customop FastGelu (#19392)
 x  03be65e [js/web] fix types exports in package.json (#19458)
o   06269a3 [js/webgpu] allow uint8 tensors for webgpu (#19545)
o   dfeda90 [JS/WebGPU] Add MatMulNBits (#19446)
o   1b48054 [js/webgpu] Create Split indices helpers by rank, not by shape (#19554)
o   3fe2c13 [js] small fix to workaround formatter (#19400)
 x  70567a4 [js/web] use ApiTensor insteadof onnxjs Tensor in TensorResultValidator (#19358)
o   6e04e36 [js/common] upgrade tsc in common from 4.9.5 to 5.2.2 (#19317)
o   58f4921 [js] changes to allow Float16Array if any polyfill is available (#19305)
o   57d6819 [js/web] Fix fused-conv is not included in npm test (#19581)
o   ebd220b Misspelling in README.md (#19433)
o   38c3432 Bump ip from 1.1.8 to 1.1.9 in /js/react_native (#19582)
o   fe82fcc [js/webgpu] Fix Conv2DTransposeMatMul f16 compilation failure (#19596)
o   76a2a48 Bump ip from 1.1.8 to 1.1.9 in /js/react_native/e2e (#19583)
o   29b1106 [node] Switch to setImmediate to avoid starving the Node.js event loop (#19610)
o   ae3d73c [JS/WebGPU] Fix Split and Where to handle corner cases. (#19613)
o   aec2389 [js/webgpu] allows a ProgramInfo's RunData to use zero sized output (#19614)
o   bb43a0f [js/webgpu] minor fixes to make tinyllama work (#19564)
o   0edb035 [js/web] fix suite test list for zero sized tensor (#19638)
o   3cb81cd [js/common] move 'env.wasm.trace' to 'env.trace' (#19617)
o   e30618d [js/webgpu] use Headless for webgpu test by default (#19702)
o   f06164e [js/web] transfer input buffer back to caller thread (#19677)
 x  a788514 [js/web] dump debug logs for karma for diagnose purpose (#19785)
o   24b72d2 [JS/WebGPU] Preserve zero size input tensor dims. (#19737)
o   4538d31 [js/webgpu] expose a few properties in WebGPU API (#19857)
o   53de2d8 [js/webgpu] Enable GroupedConvVectorize path (#19791)
o   ed250b8 [JS/WebGPU] Optimize MatMulNBits (#19852)
 x  e771a76 [js/test] align web test runner flags with ort.env (#19790)
o   79e50ae [js/web] rewrite backend resolve to allow multiple EPs (#19735)
o   acb0df2 Fix #19931 broken Get Started link of "ONNX Runtime JavaScript API" page (#19932)
o   b29849a [js/common] fix typedoc warnings (#19933)
o   afdab62 Bump follow-redirects from 1.15.4 to 1.15.6 in /js/web (#19949)
o   28ad6c3 Bump follow-redirects from 1.15.4 to 1.15.6 in /js/node (#19951)
o   7e0d424 accumulate in fp32 for Reduce* (#19868)
o   4c6a6a3 [js/webgpu] Fix NAN caused by un-initialized buffer in instance-norm (#19387)
o   01c7aaf [js/webgpu] allow setting env.webgpu.adapter (#19940)
o   c45cff6 [js/webgpu] fix maxpool / fp16 (#19981)
```

</details>

<details>
<summary>Cherry-pick commandlines</summary>

```sh
git cherry-pick 2e0a388
git cherry-pick d226e40
git cherry-pick 61610ff
git cherry-pick a33b5bd
git cherry-pick 591f90c
git cherry-pick 7252c6e
git cherry-pick 5b06505
git cherry-pick 656ca66
git cherry-pick a3f0e24
git cherry-pick 9e69606
git cherry-pick 624b4e2
git cherry-pick 90883a3
git cherry-pick 85cef0a  #<<<<< Note: conflicts
git cherry-pick d73131c
git cherry-pick dd1f6cc
git cherry-pick 3a2ab19
git cherry-pick efc17e7
git cherry-pick ccbe264
git cherry-pick 5ff27ef
git cherry-pick 06269a3
git cherry-pick dfeda90
git cherry-pick 1b48054
git cherry-pick 3fe2c13
git cherry-pick 6e04e36
git cherry-pick 58f4921
git cherry-pick 57d6819
git cherry-pick ebd220b
git cherry-pick 38c3432
git cherry-pick fe82fcc
git cherry-pick 76a2a48
git cherry-pick 29b1106
git cherry-pick ae3d73c
git cherry-pick aec2389
git cherry-pick bb43a0f
git cherry-pick 0edb035
git cherry-pick 3cb81cd
git cherry-pick e30618d
git cherry-pick f06164e
git cherry-pick 24b72d2
git cherry-pick 4538d31
git cherry-pick 53de2d8
git cherry-pick ed250b8
git cherry-pick 79e50ae
git cherry-pick acb0df2
git cherry-pick b29849a
git cherry-pick afdab62
git cherry-pick 28ad6c3
git cherry-pick 7e0d424
git cherry-pick 4c6a6a3
git cherry-pick 01c7aaf
git cherry-pick c45cff6
```
</details>

<details>
<summary>Cherry-pick conflicts</summary>

- 85cef0a #18989
this change is for enabling graph capture feature for JSEP, and it is
done after ROCM EP enabled graph capture feature. However, the ROCM EP
graph capture feature is not cherry-picked in rel-1.17.2.
</details>

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Jiajia Qin <[email protected]>
Co-authored-by: Xu Xing <[email protected]>
Co-authored-by: satyajandhyala <[email protected]>
Co-authored-by: Yang Gu <[email protected]>
Co-authored-by: Wanming Lin <[email protected]>
Co-authored-by: Jiajie Hu <[email protected]>
Co-authored-by: Guenther Schmuelling <[email protected]>
Co-authored-by: Matttttt <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Segev Finer <[email protected]>
Co-authored-by: Belem Zhang <[email protected]>
TedThemistokleous pushed a commit to TedThemistokleous/onnxruntime that referenced this pull request May 7, 2024
…icrosoft#19387)

The added case will be NAN because of the un-initialized buffer.
siweic0 pushed a commit to siweic0/onnxruntime-web that referenced this pull request May 9, 2024
…icrosoft#19387)

The added case will be NAN because of the un-initialized buffer.
@axinging axinging deleted the fix_gpumemory_v2 branch June 25, 2024 02:59
Labels: ep:WebGPU ort-web webgpu provider
4 participants