Inference discards bounds on abstract parameters #36454
Comments
The code to fix this is all there, it seems, just disabled: Lines 1100 to 1123 in 6185d24. If this came up in your invalidation hunt, maybe it's time to reconsider this?
Nice find. It's a difficult design decision; on one hand, doing more accurate inference makes inference slower. On the other hand, doing more accurate inference reduces the "vulnerability" of code to invalidation, because fewer MethodInstances end up having abstract signatures. It's quite difficult to come up with reasonable benchmarks for these things, since it depends entirely on what new methods you define. Therefore what works in practice is a bit of a cultural issue of which packages often get used in which combinations.
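For context on the invalidation side of that trade-off, here is a minimal hedged sketch (not taken from this thread) of how a MethodInstance compiled against an abstract signature can be invalidated by a later method definition:

```julia
# Minimal sketch of the invalidation mechanism (illustrative, not from the issue).
g(x::Integer) = 1
f(x) = g(x)

# Calling f with a value inference only knows as `Any` forces a MethodInstance
# of f to be inferred against an abstract signature, so the call to g is
# resolved against the methods that exist right now:
f(Any[1][1])

# Adding another applicable method of g later can invalidate that cached
# MethodInstance, because the earlier inference result no longer covers all
# applicable methods:
g(x::Bool) = 2
```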
JuliaLang/julia#36280 introduced the ability to pre-allocate the container used to track values of `f.(itr)` in `unique(f, itr)`. Particularly for containers with `Union` elements, this circumvents significant inference problems. Related: JuliaLang/julia#36454
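As a usage note: recent Julia releases expose a `seen` keyword on `unique(f, itr)` for supplying the tracking container; treating that keyword as the mechanism referenced above is an assumption here, not a quotation of #36280's API.

```julia
# Hedged sketch: pre-allocating the tracking container for unique(f, itr).
# The `seen` keyword is the one available in recent Julia releases; whether it
# is exactly what #36280 introduced is an assumption.
xs = Real[1, -1, 2.0, -2.0, 3]

# Supplying a concretely typed Set up front spares inference from having to
# work out the element type of the tracking container on its own:
unique(x -> x^2, xs; seen = Set{Real}())
```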
`cat` is often called with Varargs or heterogeneous inputs, and inference almost always fails. Even when all the arrays are of the same type, if the number of varargs isn't known, inference typically fails. The culprit is probably #36454. This reduces the number of failures considerably, by avoiding creation of vararg-length tuples in the shape-inference pipeline.
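To make the failure mode concrete, here is a small hedged illustration (not part of that commit): splatting a vector of arrays into `cat` means the number of arguments is unknown at compile time, and inference often cannot recover a concrete return type.

```julia
# Hedged illustration of the `cat` + Varargs inference problem.
as = [rand(2, 2) for _ in 1:3]

# Splatting `as` hides the number of arguments from the compiler; inference
# may only conclude `Any` here rather than Matrix{Float64}:
Base.return_types(as -> cat(as...; dims = 1), (Vector{Matrix{Float64}},))
```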
Inference loses track of `Tag` due to JuliaLang/julia#36454
Bump this up to triage to reconsider given the many changes to precompilation and the increased cost of invalidations?
I think that's a good idea, but probably first someone should collect some data to bring to the discussion.
Triage thinks that, as Tim said, this probably needs more data for a more informed decision.
The code is enabled now.
Apologies, I'm sure this must have been reported before, but a search didn't pick it up.
There appear to be cases where it would be "easy" to preserve bounds on abstract types. One that comes up a lot in my invalidation-squashing is `Iterators.Stateful`: given the definition at julia/base/iterators.jl, Lines 1243 to 1246 in d762e8c, inference discards the bound `_A<:AbstractString` from the inferred type.
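A hedged sketch of what the report describes; the printed result below is illustrative of the bound being dropped, not a verbatim quote from the issue:

```julia
# Asking inference for the return type of Iterators.Stateful on an abstract
# argument type; the output shown is illustrative (the _A<:AbstractString
# bound is discarded), not copied from the original report.
julia> Base.return_types(Iterators.Stateful, (AbstractString,))
1-element Vector{Any}:
 Base.Iterators.Stateful{_A, _B} where {_A, _B}

# The preserved-bound result one might hope for instead would look like:
#  Base.Iterators.Stateful{_A, _B} where {_B, _A<:AbstractString}
```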