add inference lattice element for Tuples #28955
Conversation
Here's a case that still needs to be updated: https://github.com/JuliaLang/julia/pull/28955/files#diff-493d285f483504268eb7cf24ea13b9aaR174
base/compiler/inferenceresult.jl
Outdated
@@ -13,32 +13,27 @@ mutable struct InferenceResult
        else
            result = linfo.rettype
        end
-       return new(linfo, EMPTY_VECTOR, result, nothing)
+       return new(linfo, compute_inf_result_argtypes(linfo), result, nothing)
Originally, I was going to basically rename get_argtypes(::InferenceResult) to init_argtypes, but then I couldn't find a place where it ever looked like we wanted to construct an InferenceResult without initializing the argtypes. Hence this change, and getting rid of get_argtypes(::InferenceResult) altogether.

Am I correct here that - currently, anyway - in all cases where we construct an InferenceResult, we also want to initialize argtypes?
(force-pushed from 2eb0905 to d812365)
I'm looking at this. @jrevels can you provide a test case or two?
base/compiler/ssair/inlining.jl
Outdated
@@ -808,7 +803,9 @@ function assemble_inline_todo!(ir::IRCode, linetable::Vector{LineInfoNode}, sv::
        # and if rewrite_apply_exprargs can deal with this form
        ok = true
        for i = 3:length(atypes)
-           typ = widenconst(atypes[i])
+           typ = atypes[i]
+           typ isa PartialTuple && continue
This looks wrong; a PartialTuple can still be vararg, I think?
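To make the concern concrete, here is a hedged sketch of the check the loop might need instead (names `atypes` and `ok` come from the diff above; `isvatuple`, `widenconst`, and `PartialTuple.typ` are existing compiler names, but the exact handling in the final PR may differ):

    for i = 3:length(atypes)
        typ = atypes[i]
        # a PartialTuple carries its widened tuple type in `typ.typ`, and that
        # type can still be Tuple{..., Vararg{...}}, so skipping all checks for
        # PartialTuple elements would miss the vararg case
        wtyp = isa(typ, PartialTuple) ? typ.typ : widenconst(typ)
        if !(isa(wtyp, DataType) && wtyp.name === Tuple.name) || isvatuple(wtyp)
            ok = false  # bail out (or handle the vararg form explicitly)
            break
        end
    end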
base/compiler/inferenceresult.jl
Outdated
    vargs = nothing
    if !toplevel && caller_argtypes !== nothing
        for i in 1:length(caller_argtypes)
            a = maybe_widen_conditional(given_argtypes[i])
Is this meant to be caller_argtypes[i]?
Ah, yup! Thanks for that catch.
Previously these sorts of functions would block constant propagation. Hopefully #28955 will just fix this, but until then, add a surgical fix and a test.
I think this test case is at least representative of the issue this PR seeks to resolve (e.g. JuliaLabs/Cassette.jl#71), though it might end up being e.g. necessary but insufficient:

julia> a(f, args...) = f(args...)
a (generic function with 1 method)

julia> b(args::Tuple) = a(args...)
b (generic function with 1 method)

julia> c(args...) = b(args)
c (generic function with 1 method)

julia> d(f, x, y) = c(f, Bool, x, y)
d (generic function with 1 method)

julia> f(::Type{Bool}, x, y) = x
f (generic function with 1 method)

julia> f(::DataType, x, y) = y
f (generic function with 2 methods)

# I want this to infer exactly Int64 as the output type
julia> @code_typed optimize=false d(f, 1, 2.0)
CodeInfo(
1 1 ─ %1 = (Main.c)(f, Main.Bool, x, y)::Union{Float64, Int64} │
  └──      return %1                                           │
) => Union{Float64, Int64}
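For contrast, the result being asked for would look roughly like this once the constant `Bool` propagates through the varargs chain (illustrative output only, not an actual run):

julia> @code_typed optimize=false d(f, 1, 2.0)
CodeInfo(
1 1 ─ %1 = (Main.c)(f, Main.Bool, x, y)::Int64 │
  └──      return %1                           │
) => Int64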
(force-pushed from 91463b8 to 7c36651)
Okay, I added the test case I constructed above, which now passes. Made some other changes based on @JeffBezanson's feedback (thanks!) that let a bunch of other failing cases pass now. Let's see if CI passes. In the meantime, I'll see if this fixes the Cassette problem cases that motivated this...
Okay, looks like this PR does resolve the OP in JuliaLabs/Cassette.jl#71, but even with this fix, Cassette still requires those hacky manual primitives just to force specialization in order to pass its inferrability tests. The overarching goal is to get rid of those entirely, but at least this is an improvement.
@nanosoldier
:( no nanosoldier?
@nanosoldier ...let's try again
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan
Oof. Was definitely hoping to see some better perf improvements here. Instead, it looks like this causes a bunch of regressions... I'll do some debugging tomorrow.
Previously these sorts of functions would block constant propagation. Hopefully #28955 will just fix this, but until then, add a surgical fix and a test.
So @vtjnash helped me fix the performance problems from the previous benchmarks - it turns out I was using […]. The most recent commit is an attempt at more correctly handling the better type info we have floating around the cache. I skipped CI on it, though, because I messed something up somewhere and am getting a bootstrap error.
(force-pushed from 437334b to 856ab90)
base/compiler/inferenceresult.jl
Outdated
        isva_result_argtypes[nargs] = tuple_tfunc(result_argtypes[(nargs - 1):end])
        return isva_result_argtypes
    end
    return result_argtypes
Hmmm, replacing this whole method body with a call to matching_cache_argtypes(linfo, nothing) at least allows it to build past the point where I was getting a BoundsError before (though of course it probably results in a bunch of cache misses).
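Spelled out, the workaround described above would be something like this one-liner (hypothetical; the final PR restructures this code rather than committing the workaround):

    compute_inf_result_argtypes(linfo::MethodInstance) = matching_cache_argtypes(linfo, nothing)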
base/compiler/inferenceresult.jl
Outdated
    for i in 1:length(given_argtypes)
        a = given_argtypes[i]
        result_argtypes[i] = !(isa(a, PartialTuple) || isa(a, Const)) ? widenconst(a) : a
    end
@vtjnash Just wanted to make sure this change is correct; the cache uses maybe_widen_conditional(a) for these argtypes, so it seemed like we should match that here.
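A minimal sketch of the invariant being matched, using only names that appear in the diffs above (the helper name `cache_safe_argtype` is made up for illustration; this is not the actual method body):

    function cache_safe_argtype(a)
        # Conditionals are call-site-local and must not leak into the cache
        a = maybe_widen_conditional(a)
        # keep exact constants and partially constant tuples; widen the rest
        return (isa(a, Const) || isa(a, PartialTuple)) ? a : widenconst(a)
    end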
lol gg pkg
(load a copy of inference with the above diff into a working build of this branch, then BAM)
I am unable to reproduce the bitarray CI failure.
With the most recent commit, this PR now fixes JuliaLabs/Cassette.jl#71 again.
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan
Hmm... Check out this example:

[…]

On master:

[…]

Here:

[…]

If we look at […]. If we look at […]. If we look at […]. Anybody see what's going on here? I don't...
The actual code for the accumulate might be different?
D'oh, I need to get my first cup of coffee down before posting GitHub comments. Thanks.
Hmm. I see statements like this floating around in the IR on this branch (looking at […]):

[…]

Looks like the recent widenconst additions to the isa_tfunc might've been too aggressive? I'll play around with it.

EDIT: hmmm, seems to be something else...

EDIT 2: oh look, it's our good buddy, mister unreachable invalid invoke.
@nanosoldier |
FWIW, LGTM once CI and nanosoldier are ok.
Ok, looks like this is basically passing. @jrevels Do you want to squash this down a bit? (Probably combine most of the PartialTuple stuff into one commit, but maybe leave all the things we found along the way - codegen changes, the PiNode thing - separate and prior in the commit list.)
(force-pushed from 9ab2caf to ccaf850)
Squashed.
Argh. Not sure what went wrong but my rebase somehow messed something up. Time to consult the reflog...
add inference lattice element for "partially constant" tuples

Previously, we hacked in an additional `InferenceResult` field to store varargs type information in order to facilitate better constant propagation through varargs methods. There were many other places, however, where constants moving in/out of tuples/varargs thwarted constant propagation. This commit removes the varargs hack, replacing it with a new inference lattice element (`PartialTuple`) that represents tuples where some (but not all) of the elements are constants. This allows us to follow through with constant propagation in more situations involving tuple construction/destructuring, and also enabled a clean-up of the `InferenceResult` caching code.
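For readers following along, the new lattice element has roughly this shape (a sketch based on the description above; in later Julia versions this was generalized into `PartialStruct`):

    struct PartialTuple
        typ                  # the widened tuple type, e.g. Tuple{DataType, Int64}
        fields::Vector{Any}  # per-element lattice values, e.g. Any[Const(Bool), Int64]
    end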
E.g. if we had `PiNode(1, CartesianIndex)`, we would eliminate that because `1` was a constant. However, leaving this in allows the compiler to realize that this code is unreachable, as well as guarding codegen against having to work through invalid IR.
Unfortunately, we cannot always rely on :invokes to have argument values that match the declared SSA types (e.g. if the :invoke is dynamically unreachable). Because of that, we cannot assert here, but must instead emit a runtime trap.
(force-pushed from ccaf850 to 29d8f08)
Okay, I'm not sure how it happened, but:

[…]

lol. Anyway, should be fixed now...
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan
Looks good to me. @jrevels wanna do the honors?
Thanks again to @vtjnash, @Keno, and @JeffBezanson for your help here!
Previously these sorts of functions would block constant propagation. Hopefully #28955 will just fix this, but until then, add a surgical fix and a test.
w/ @vtjnash
There are probably still some cases to handle and tests to add here... this is just a dump of what we currently have.
Once this PR is complete, it should allow for more robust constant propagation of varargs/tuple elements.