
[regression][gpu] Regression due to cb59389 #19436

Open
pdhirajkumarprasad opened this issue Dec 10, 2024 · 3 comments
Assignees
Labels
bug 🐞 Something isn't working

Comments


pdhirajkumarprasad commented Dec 10, 2024

What happened?

We have 19 models failing after cb59389; all of them were passing at d88d0a7.

Model list:

migraphx_mlperf__resnet50_v1
model--Bartlarge--Shubham09
model--M-TurQA-convbert-base-turkish-cased-finetuned-toqad-aug--meetyildiz
model--TinyStories-1M--roneneldan
model--TinyStories-3M--roneneldan
model--TinyStories-8M--roneneldan
model--gemma-tiny-random--yujiepan
model--ia-detection-tiny-random-gptj--arincon
model--my_awesome_gptj_model--anandshende
model--qa_tquad_convbert-base-turkish--Izzet
model--qa_ytu_convbert-base-turkish--Izzet
model--really-tiny-falcon-testing--fxmarty
model--tiny-gpt2--taufeeque
model--tiny-gpt2-magicprompt--pszemraj
model--tiny-random-ConvBertForQuestionAnswering--hf-tiny-model-private
model--tiny-random-FalconForCausalLM--illuin
model--tiny-random-GPTJForQuestionAnswering--hf-tiny-model-private
model--tiny-random-llama--IlyasMoutawwakil
model--tiny-testing-falcon-alibi--fxmarty

IR:

module {
  func.func @main_graph(%arg0: !torch.vtensor<[?,?],si64>, %arg1: !torch.vtensor<[?,?],si64>) -> !torch.vtensor<[?,2,?,?],f32> attributes {torch.onnx_meta.ir_version = 7 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "2.4.0"} {
    %0 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_transformer.wte.weight> : tensor<50257x32xf32>} : () -> !torch.vtensor<[50257,32],f32> 
    %1 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_transformer.wpe.weight> : tensor<1024x32xf32>} : () -> !torch.vtensor<[1024,32],f32> 
    %2 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_transformer.h.0.ln_1.weight> : tensor<32xf32>} : () -> !torch.vtensor<[32],f32> 
    %3 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_transformer.h.0.ln_1.bias> : tensor<32xf32>} : () -> !torch.vtensor<[32],f32> 
    %4 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_transformer.h.0.attn.c_attn.weight> : tensor<32x96xf32>} : () -> !torch.vtensor<[32,96],f32> 
    %5 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_transformer.h.0.attn.c_attn.bias> : tensor<96xf32>} : () -> !torch.vtensor<[96],f32> 
    %none = torch.constant.none
    %6 = torch.operator "onnx.Shape"(%arg0) : (!torch.vtensor<[?,?],si64>) -> !torch.vtensor<[2],si64> 
    %7 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %8 = torch.operator "onnx.Gather"(%6, %7) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[2],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %9 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__1> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %10 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__2> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %11 = torch.operator "onnx.Unsqueeze"(%8, %10) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %12 = torch.operator "onnx.Concat"(%9, %11) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[2],si64> 
    %13 = torch.operator "onnx.Reshape"(%arg0, %12) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[?,?],si64>, !torch.vtensor<[2],si64>) -> !torch.vtensor<[?,?],si64> 
    %14 = torch.operator "onnx.Gather"(%0, %13) : (!torch.vtensor<[50257,32],f32>, !torch.vtensor<[?,?],si64>) -> !torch.vtensor<[?,?,32],f32> 
    %15 = torch.operator "onnx.Gather"(%1, %arg1) : (!torch.vtensor<[1024,32],f32>, !torch.vtensor<[?,?],si64>) -> !torch.vtensor<[?,?,32],f32> 
    %16 = torch.operator "onnx.Add"(%14, %15) : (!torch.vtensor<[?,?,32],f32>, !torch.vtensor<[?,?,32],f32>) -> !torch.vtensor<[?,?,32],f32> 
    %17 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<-1> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %18 = torch.operator "onnx.ReduceMean"(%16, %17) : (!torch.vtensor<[?,?,32],f32>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[?,?,1],f32> 
    %19 = torch.operator "onnx.Sub"(%16, %18) : (!torch.vtensor<[?,?,32],f32>, !torch.vtensor<[?,?,1],f32>) -> !torch.vtensor<[?,?,32],f32> 
    %20 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__3> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %21 = torch.operator "onnx.Pow"(%19, %20) : (!torch.vtensor<[?,?,32],f32>, !torch.vtensor<[],f32>) -> !torch.vtensor<[?,?,32],f32> 
    %22 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<-1> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %23 = torch.operator "onnx.ReduceMean"(%21, %22) : (!torch.vtensor<[?,?,32],f32>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[?,?,1],f32> 
    %24 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__4> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %25 = torch.operator "onnx.Add"(%23, %24) : (!torch.vtensor<[?,?,1],f32>, !torch.vtensor<[],f32>) -> !torch.vtensor<[?,?,1],f32> 
    %26 = torch.operator "onnx.Sqrt"(%25) : (!torch.vtensor<[?,?,1],f32>) -> !torch.vtensor<[?,?,1],f32> 
    %27 = torch.operator "onnx.Div"(%19, %26) : (!torch.vtensor<[?,?,32],f32>, !torch.vtensor<[?,?,1],f32>) -> !torch.vtensor<[?,?,32],f32> 
    %28 = torch.operator "onnx.Mul"(%27, %2) : (!torch.vtensor<[?,?,32],f32>, !torch.vtensor<[32],f32>) -> !torch.vtensor<[?,?,32],f32> 
    %29 = torch.operator "onnx.Add"(%28, %3) : (!torch.vtensor<[?,?,32],f32>, !torch.vtensor<[32],f32>) -> !torch.vtensor<[?,?,32],f32> 
    %30 = torch.operator "onnx.Shape"(%29) : (!torch.vtensor<[?,?,32],f32>) -> !torch.vtensor<[3],si64> 
    %31 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__5> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %32 = torch.operator "onnx.Gather"(%30, %31) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[3],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %33 = torch.operator "onnx.Shape"(%29) : (!torch.vtensor<[?,?,32],f32>) -> !torch.vtensor<[3],si64> 
    %34 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__6> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %35 = torch.operator "onnx.Gather"(%33, %34) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[3],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %36 = torch.operator "onnx.Shape"(%29) : (!torch.vtensor<[?,?,32],f32>) -> !torch.vtensor<[3],si64> 
    %37 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__7> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %38 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__8> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %39 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__9> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %40 = torch.operator "onnx.Slice"(%36, %38, %39, %37) : (!torch.vtensor<[3],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %41 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__10> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %42 = torch.operator "onnx.Squeeze"(%40, %41) : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[],si64> 
    %43 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__11> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %44 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__12> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %45 = torch.operator "onnx.Unsqueeze"(%42, %44) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %46 = torch.operator "onnx.Concat"(%43, %45) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[2],si64> 
    %47 = torch.operator "onnx.Reshape"(%29, %46) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[?,?,32],f32>, !torch.vtensor<[2],si64>) -> !torch.vtensor<[?,32],f32> 
    %48 = torch.operator "onnx.Gemm"(%47, %4, %5) {torch.onnx.alpha = 1.000000e+00 : f32, torch.onnx.beta = 1.000000e+00 : f32} : (!torch.vtensor<[?,32],f32>, !torch.vtensor<[32,96],f32>, !torch.vtensor<[96],f32>) -> !torch.vtensor<[?,96],f32> 
    %49 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__13> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %50 = torch.operator "onnx.Unsqueeze"(%32, %49) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %51 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__14> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %52 = torch.operator "onnx.Unsqueeze"(%35, %51) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %53 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__15> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %54 = torch.operator "onnx.Concat"(%50, %52, %53) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[3],si64> 
    %55 = torch.operator "onnx.Reshape"(%48, %54) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[?,96],f32>, !torch.vtensor<[3],si64>) -> !torch.vtensor<[?,?,96],f32> 
    %56 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__16> : tensor<3xsi64>} : () -> !torch.vtensor<[3],si64> 
    %57:3 = torch.operator "onnx.Split"(%55, %56) {torch.onnx.axis = 2 : si64} : (!torch.vtensor<[?,?,96],f32>, !torch.vtensor<[3],si64>) -> (!torch.vtensor<[?,?,32],f32>, !torch.vtensor<[?,?,32],f32>, !torch.vtensor<[?,?,32],f32>) 
    %58 = torch.operator "onnx.Shape"(%57#0) : (!torch.vtensor<[?,?,32],f32>) -> !torch.vtensor<[3],si64> 
    %59 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__17> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %60 = torch.operator "onnx.Gather"(%58, %59) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[3],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %61 = torch.operator "onnx.Shape"(%57#0) : (!torch.vtensor<[?,?,32],f32>) -> !torch.vtensor<[3],si64> 
    %62 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__18> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %63 = torch.operator "onnx.Gather"(%61, %62) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[3],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %64 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__19> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %65 = torch.operator "onnx.Unsqueeze"(%60, %64) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %66 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__20> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %67 = torch.operator "onnx.Unsqueeze"(%63, %66) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %68 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__21> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %69 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__22> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %70 = torch.operator "onnx.Concat"(%65, %67, %68, %69) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[4],si64> 
    %71 = torch.operator "onnx.Reshape"(%57#0, %70) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[?,?,32],f32>, !torch.vtensor<[4],si64>) -> !torch.vtensor<[?,?,2,16],f32> 
    %72 = torch.operator "onnx.Transpose"(%71) {torch.onnx.perm = [0 : si64, 2 : si64, 1 : si64, 3 : si64]} : (!torch.vtensor<[?,?,2,16],f32>) -> !torch.vtensor<[?,2,?,16],f32> 
    %73 = torch.operator "onnx.Shape"(%57#1) : (!torch.vtensor<[?,?,32],f32>) -> !torch.vtensor<[3],si64> 
    %74 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__23> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %75 = torch.operator "onnx.Gather"(%73, %74) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[3],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %76 = torch.operator "onnx.Shape"(%57#1) : (!torch.vtensor<[?,?,32],f32>) -> !torch.vtensor<[3],si64> 
    %77 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__24> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %78 = torch.operator "onnx.Gather"(%76, %77) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[3],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %79 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__25> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %80 = torch.operator "onnx.Unsqueeze"(%75, %79) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %81 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__26> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %82 = torch.operator "onnx.Unsqueeze"(%78, %81) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %83 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__27> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %84 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__28> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %85 = torch.operator "onnx.Concat"(%80, %82, %83, %84) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[4],si64> 
    %86 = torch.operator "onnx.Reshape"(%57#1, %85) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[?,?,32],f32>, !torch.vtensor<[4],si64>) -> !torch.vtensor<[?,?,2,16],f32> 
    %87 = torch.operator "onnx.Shape"(%72) : (!torch.vtensor<[?,2,?,16],f32>) -> !torch.vtensor<[4],si64> 
    %88 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__29> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %89 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__30> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %90 = torch.operator "onnx.Slice"(%87, %88, %89) : (!torch.vtensor<[4],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %91 = torch.operator "onnx.Cast"(%90) {torch.onnx.to = 1 : si64} : (!torch.vtensor<[1],si64>) -> !torch.vtensor<[1],f32> 
    %92 = torch.operator "onnx.Sqrt"(%91) : (!torch.vtensor<[1],f32>) -> !torch.vtensor<[1],f32> 
    %93 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__31> : tensor<1xf32>} : () -> !torch.vtensor<[1],f32> 
    %94 = torch.operator "onnx.Div"(%93, %92) : (!torch.vtensor<[1],f32>, !torch.vtensor<[1],f32>) -> !torch.vtensor<[1],f32> 
    %95 = torch.operator "onnx.Cast"(%94) {torch.onnx.to = 1 : si64} : (!torch.vtensor<[1],f32>) -> !torch.vtensor<[1],f32> 
    %96 = torch.operator "onnx.Transpose"(%86) {torch.onnx.perm = [0 : si64, 2 : si64, 3 : si64, 1 : si64]} : (!torch.vtensor<[?,?,2,16],f32>) -> !torch.vtensor<[?,2,16,?],f32> 
    %97 = torch.operator "onnx.Sqrt"(%95) : (!torch.vtensor<[1],f32>) -> !torch.vtensor<[1],f32> 
    %98 = torch.operator "onnx.Mul"(%72, %97) : (!torch.vtensor<[?,2,?,16],f32>, !torch.vtensor<[1],f32>) -> !torch.vtensor<[?,2,?,16],f32> 
    %99 = torch.operator "onnx.Sqrt"(%95) : (!torch.vtensor<[1],f32>) -> !torch.vtensor<[1],f32> 
    %100 = torch.operator "onnx.Mul"(%96, %99) : (!torch.vtensor<[?,2,16,?],f32>, !torch.vtensor<[1],f32>) -> !torch.vtensor<[?,2,16,?],f32> 
    %101 = torch.operator "onnx.MatMul"(%98, %100) : (!torch.vtensor<[?,2,?,16],f32>, !torch.vtensor<[?,2,16,?],f32>) -> !torch.vtensor<[?,2,?,?],f32> 
    return %101 : !torch.vtensor<[?,2,?,?],f32>
  }
}

command:

iree-compile model.torch_onnx.mlir --iree-hal-target-backends=rocm --iree-hip-target=gfx942 -o compiled_model.vmfb 


iree-run-module --module='compiled_model.vmfb' --device=hip --function='main_graph' --input='1x128xi64=@input.0.bin' --input='1x128xi64=@input.1.bin' --output=@'output.0.bin' --expected_output='1x2x128x128xf32=@golden_output.0.bin'
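For convenience, the compile and run commands above can be wrapped in one script. This is only a sketch: the flags are taken verbatim from the commands above, the `1x128xi64` input spec is an inference from the `1x2x128x128xf32` expected output (the original input specs were garbled in this report), and the tool-availability guard is an addition.

```shell
#!/usr/bin/env bash
# Reproduce sketch for this issue: compile the ONNX-imported MLIR for
# gfx942 and compare the module output against the attached golden file.
# Assumes model.torch_onnx.mlir and the attached .bin files are in the
# current directory.
set -u

compile_and_run() {
  iree-compile model.torch_onnx.mlir \
    --iree-hal-target-backends=rocm \
    --iree-hip-target=gfx942 \
    -o compiled_model.vmfb || return 1

  iree-run-module \
    --module=compiled_model.vmfb \
    --device=hip \
    --function=main_graph \
    --input='1x128xi64=@input.0.bin' \
    --input='1x128xi64=@input.1.bin' \
    --output=@output.0.bin \
    --expected_output='1x2x128x128xf32=@golden_output.0.bin'
}

if command -v iree-compile >/dev/null 2>&1; then
  compile_and_run
else
  echo "iree-compile not found on PATH; build or install IREE first"
fi
```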

golden_output.0.bin.txt
input.0.bin.txt
input.1.bin.txt
model.torch_onnx.mlir.txt

Steps to reproduce your issue

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

What component(s) does this issue relate to?

Runtime

Version information

No response

Additional context

No response

@pdhirajkumarprasad pdhirajkumarprasad added the bug 🐞 Something isn't working label Dec 10, 2024
@jerryyin
Member

@pdhirajkumarprasad I cannot reproduce the failure. This is my local result on the PR branch:

iree-run-module --module='compiled_model.vmfb' --device=hip --function='main_graph' --input='1x128xi64=@input.0.bin' --input='1x128xi64=@input.1.bin' --output=@'output.0.bin' --expected_output='1x2x128x128xf32=@golden_output.0.bin'
EXEC @main_graph
[SUCCESS] all function outputs matched their expected values.

May I ask what the failure signature looks like?


jerryyin commented Dec 10, 2024

To understand what the regression looks like, I checked out the tip of the main branch (8cdcb7e), ran the test procedure again, and can confirm that the tip of main does have an accuracy failure.

This proves that:

  • The reproduction instructions are good
  • The issue doesn't come from my commit; it comes from something after my commit
  • The tip of the main branch indeed has a regression

I am unassigning myself based on this.

@pdhirajkumarprasad pdhirajkumarprasad changed the title [regression][gpu] Regression due to d2c8e5e [regression][gpu] Regression due to cb59389 Dec 10, 2024
@jerryyin jerryyin removed their assignment Dec 10, 2024
pdhirajkumarprasad (Author) commented:

Bisect didn't point to the correct CL, so with my local script I was able to find the CL and updated the issue title accordingly.
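For reference, a bisect between the last-good and first-bad commits named in this report can be automated with `git bisect run`. This is only a sketch: `run_repro.sh` is a hypothetical wrapper around the compile-and-compare commands above that exits nonzero on the accuracy failure; it is not part of the repository.

```shell
# Hypothetical bisect driver; cb59389 (first bad) and d88d0a7 (last good)
# are the commits from this report, run_repro.sh is a stand-in name.
bisect_regression() {
  git bisect start
  git bisect bad cb59389         # first commit with the accuracy failure
  git bisect good d88d0a7        # last commit where all 19 models passed
  git bisect run ./run_repro.sh  # script must exit nonzero on failure
  git bisect reset
}

# Run inside an IREE checkout:
# bisect_regression
```

Note that `git bisect run` treats exit code 0 as "good" and 1–127 (except 125) as "bad", so the wrapper script only needs to propagate the `iree-run-module` exit status.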
