2x regression in indexing benchmarks due to 'Remove references to non opaque pointers in codegen and LLVM passes (#54853)' #55090
The regression is not present on master with 5e1bcdf reverted and 4eef1be, 756e72f cherry-picked (here is the branch: https://github.com/Zentrik/julia/tree/test-54853). Those two commits should be sufficient for LLVM 18. Looking at

```julia
function perf_sumelt(A)
    s = zero(eltype(A))
    for a in A
        s += a
    end
    return s
end

C = rand(Int32, 4, 500, 500)
A = view(C, 1, :, :)
@benchmark perf_sumelt($A)
```

the unoptimized LLVM IR is identical apart from 4 instructions.
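For context, here is a minimal sketch (not taken from the original comment) of how the unoptimized IR for the reproducer above can be dumped and saved for diffing between builds; it assumes `perf_sumelt` and `A` are defined as in the snippet, and the output file name is made up:

```julia
using InteractiveUtils  # provides @code_llvm and code_llvm

# Print the *unoptimized* LLVM IR for the reproducer; debuginfo=:none keeps
# the output free of line annotations so diffs between builds stay small.
@code_llvm optimize=false debuginfo=:none perf_sumelt(A)

# Capture the same IR as a String and save it, e.g. to diff against another build.
ir = sprint(io -> code_llvm(io, perf_sumelt, Tuple{typeof(A)};
                            optimize=false, debuginfo=:none))
write("perf_sumelt_unopt.ll", ir)  # hypothetical file name
```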
The Julia memory model is always inbounds for GEP. This makes the code in #55090 look almost the same as it did before the change. Locally I wasn't able to reproduce the regression, but given it's vectorized code I suspect it is backend sensitive.

Fixes #55090

Co-authored-by: Zentrik <[email protected]>
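As a quick sanity check that GEPs come out with the `inbounds` flag after such a change, one can grep the emitted IR. This is only a heuristic sketch assuming the `perf_sumelt`/`A` reproducer from above; it is not part of the PR itself:

```julia
using InteractiveUtils

# Capture the optimized IR as a String and count how many getelementptr
# instructions carry the inbounds flag.
ir = sprint(io -> code_llvm(io, perf_sumelt, Tuple{typeof(A)};
                            optimize=true, debuginfo=:none))
total    = count("getelementptr", ir)
inbounds = count("getelementptr inbounds", ir)
println("$inbounds of $total GEPs are inbounds")
```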
Sounds like this is still an issue despite #55107.
While the regression hasn't been fixed, there's probably not much to be done. #55412 reverted the relevant part of the commit that caused the regression, but with the LLVM 18 upgrade that revert is now a net negative.
Through bisection I identified 5e1bcdf as causing the regressions below. This change also led to some improvements, but it caused many more regressions. I believe the two commits 4eef1be and 756e72f were my more minimal version of the identified commit, so it would be good to test whether they also cause the regression.
A subset of the results is below; for the full results see https://tealquaternion.camdvr.org/compare.html?start=a14cc38512b6daab6b8417ebb8a64fc794ff89cc&end=323e725c1e4848414b5642b8f54c24916b9ddd9e&stat=min-wall-time or https://github.com/JuliaCI/NanosoldierReports/blob/master/benchmark/by_date/2024-07/05/report.md.
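For anyone reproducing this locally, here is a sketch of comparing one benchmark across two builds with BenchmarkTools; the saved-result file name is a placeholder, and the Nanosoldier reports linked above are generated separately from this:

```julia
using BenchmarkTools

# Run the reproducer on the current build and compare against a result that
# was saved on another build with BenchmarkTools.save (placeholder file name).
new = minimum(@benchmark perf_sumelt($A))
old = minimum(BenchmarkTools.load("perf_sumelt_before_5e1bcdf.json")[1])

# judge classifies the change as :regression, :improvement, or :invariant
# relative to the default time tolerance.
println(judge(new, old))
```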