[Mono] Restore old code to solve the recent SpanHelpers regressions #75917
Conversation
Tagging subscribers to this area: @dotnet/area-system-memory
/azp run runtime-wasm
Azure Pipelines successfully started running 1 pipeline(s).
Plenty of the CI legs failed with "Git fetch failed with exit code: 128" |
/azp run runtime
Azure Pipelines successfully started running 1 pipeline(s).
/backport to release/7.0-rc2
Started backporting to release/7.0-rc2: https://github.com/dotnet/runtime/actions/runs/3093620874
@adamsitnik I tried comparing against SpanHelpers.T.cs at a couple of points in history, and I'm not seeing a 1:1 match with what is brought in here for SpanHelpers.Mono.cs.
Can you edit the issue description to link to the baseline that was used to copy from and provide any other noteworthy details?
@jeffhandley done
The microbenchmark results are surprising. Some methods are improving, while others are regressing with the current state of this PR.

- WASM AOT: better: 4, geomean: 1.293
- WASM: better: 5, geomean: 1.116

I don't know why it is this way, but I am sure that we should not be merging this PR right now. Again, I am sorry for the disappointment. I've not followed my usual perf regression routine in this case (profile => identify => solve), but blindly listened to "just copy all the code".
@radekdoulik has shared different results offline that show the fix actually helps. He ran the benchmarks in a different way, using the perf pipeline developed by @radical.
The results I mentioned in Teams are from our bench sample measurements, run on a dedicated Arm SBC, so they are not coming from the CI pipeline. They have smaller coverage, though. In our data the regression was visible in the Span.IndexOf measurements and in one of the Json measurements, which we have in the bench sample. @adamsitnik do you also have your microbenchmark results for the regression itself? Was it showing only regressions, or some improvements too?
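For reference, below is a minimal BenchmarkDotNet-style sketch of the kind of Span.IndexOf microbenchmark being discussed. It is an illustrative stand-in, not the dotnet/performance harness that produced the numbers above; the class name, buffer length, and needle placement are assumptions.

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class SpanIndexOfBenchmark
{
    private char[] _buffer = Array.Empty<char>();

    // Hypothetical buffer length; the real suites sweep several sizes.
    [Params(512)]
    public int Length { get; set; }

    [GlobalSetup]
    public void Setup()
    {
        _buffer = new char[Length];
        Array.Fill(_buffer, 'a');
        _buffer[^1] = 'z'; // needle at the end, so the whole span is scanned
    }

    [Benchmark]
    public int IndexOfChar() => _buffer.AsSpan().IndexOf('z');
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<SpanIndexOfBenchmark>();
}
```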
[Mono] Restore old code to solve the recent SpanHelpers regressions (#75917)

* bring back the old code...
* bring back more old code
* Use an ifdef around clr code instead of a separate file
* Delete SpanHelpers.Clr.cs
* Remove a remaining INumber<T> helper from mono

Co-authored-by: Jeff Handley <[email protected]>
/backport to release/7.0-rc2
Started backporting to release/7.0-rc2: https://github.com/dotnet/runtime/actions/runs/3102128783
Measurement of dotnet/runtime#75917
Fixes #75709
As discovered in dotnet/perf-autofiling-issues#7976 (and #74395), the SpanHelpers vectorization work in #73768 that went into RC1 caused severe Span.IndexOf performance regressions on mono. This PR restores the previous implementation for mono while retaining the performance gains seen for coreclr.
This is a temporary workaround for the regression on mono, but .NET 7.0 will ship with this workaround in place. For .NET 8, we have #75801 to explore a better long-term resolution.
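To make the shape of the change concrete, here is a minimal sketch of the ifdef approach mentioned in the merge commit ("Use an ifdef around clr code instead of a separate file"): Mono gets the restored scalar implementation while CoreCLR keeps the vectorized one. The define name, method signature, and both bodies are placeholders assumed for illustration; they are not the actual SpanHelpers sources.

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

internal static partial class SpanHelpers
{
#if !MONO
    // CoreCLR keeps the new vectorized path from #73768 (shown here as a trivial stand-in).
    public static int IndexOfChar(ref char searchSpace, char value, int length)
        => MemoryMarshal.CreateReadOnlySpan(ref searchSpace, length).IndexOf(value);
#else
    // Mono gets back the restored, pre-#73768 style scalar loop.
    public static int IndexOfChar(ref char searchSpace, char value, int length)
    {
        for (int i = 0; i < length; i++)
        {
            if (Unsafe.Add(ref searchSpace, i) == value)
                return i;
        }
        return -1;
    }
#endif
}
```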
This PR does not include the changes introduced in the following PRs:
- #73481 (it caused the JSON comment parsing regression: #74442)
- #73368 and #73469, where I removed the AdvSimd.Arm64 code path and started using Vector128/256 everywhere. The reason is that WASM/Mono does not support Vector128/256 for all configs (#73469 (comment)), so these changes are not included either (see the sketch after this list).

cc @jkotas @vargaz
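For context, here is a hedged sketch contrasting the two styles mentioned in the second list item above: a hardware-specific AdvSimd.Arm64 path versus the cross-platform Vector128 path that Mono/WASM cannot accelerate in all configs. The method names, shapes, and simplified tail handling are assumptions for illustration only, not the code from #73368 or #73469.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.Arm;

static class ContainsByteSketch
{
    // Hardware-specific path: only the Arm64 AdvSimd hardware route is fast here.
    static bool ContainsAdvSimd(ReadOnlySpan<byte> span, byte value)
    {
        if (!AdvSimd.Arm64.IsSupported || span.Length < Vector128<byte>.Count)
            return span.IndexOf(value) >= 0; // scalar fallback (unaligned tail omitted below too)

        ref byte start = ref MemoryMarshal.GetReference(span);
        Vector128<byte> target = Vector128.Create(value);
        for (nuint i = 0; i <= (nuint)(span.Length - Vector128<byte>.Count); i += (nuint)Vector128<byte>.Count)
        {
            Vector128<byte> eq = AdvSimd.CompareEqual(Vector128.LoadUnsafe(ref start, i), target);
            if (AdvSimd.Arm64.MaxAcross(eq).ToScalar() != 0)
                return true;
        }
        return false;
    }

    // Cross-platform path: Vector128.* compiles everywhere, but on Mono/WASM configs that
    // cannot accelerate it, the emulated fallback can be slower than a plain scalar loop.
    static bool ContainsVector128(ReadOnlySpan<byte> span, byte value)
    {
        if (!Vector128.IsHardwareAccelerated || span.Length < Vector128<byte>.Count)
            return span.IndexOf(value) >= 0;

        ref byte start = ref MemoryMarshal.GetReference(span);
        Vector128<byte> target = Vector128.Create(value);
        for (nuint i = 0; i <= (nuint)(span.Length - Vector128<byte>.Count); i += (nuint)Vector128<byte>.Count)
        {
            if (Vector128.EqualsAny(Vector128.LoadUnsafe(ref start, i), target))
                return true;
        }
        return false;
    }
}
```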