Differences between descend_code_warntype and code_warntype in presence of recursion #500

jishnub opened this issue Sep 7, 2023 · 1 comment
jishnub commented Sep 7, 2023

This is similar to #256. On Julia v1.9.3:

julia> using StaticArrays

julia> @descend_code_warntype reduce(vcat, (SA[1], [1]))
reduce(op, itr; kw...) @ Base ~/packages/julias/julia-1.9/share/julia/base/reduce.jl:485
[ Info: This method only fills in default arguments; descend into the body method to see the full source.
485 reduce(op::typeof(vcat), itr::Tuple{SVector{1, Int64}, Vector{Int64}}; kw...)::AbstractVector = 
Select a call to descend into or ↩ to ascend. [q]uit. [b]ookmark.
Toggles: [w]arn, [h]ide type-stable statements, [t]ype annotations, [s]yntax highlight for Source/LLVM/Native, [j]ump to source always.
Show: [S]ource code, [A]ST, [T]yped code, [L]LVM IR, [N]ative code
Actions: [E]dit source code, [R]evise and redisplay
 • %1 = NamedTuple()::Core.Const(NamedTuple())
   %2 = pairs(::NamedTuple{(), Tuple{}})::Core.Const(Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}())
   %3 = #reduce#294(::Pairs{…},::#reduce,::#vcat,::Tuple{…})::AbstractVector
   

julia> @code_warntype reduce(vcat, (SA[1], [1]))
MethodInstance for reduce(::typeof(vcat), ::Tuple{SVector{1, Int64}, Vector{Int64}})
  from reduce(op, itr; kw...) @ Base ~/packages/julias/julia-1.9/share/julia/base/reduce.jl:485
Arguments
  #self#::Core.Const(reduce)
  op::Core.Const(vcat)
  itr::Tuple{SVector{1, Int64}, Vector{Int64}}
Body::Vector{Int64}
1 ─ %1 = Core.NamedTuple()::Core.Const(NamedTuple())
│   %2 = Base.pairs(%1)::Core.Const(Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}())
│   %3 = Base.:(var"#reduce#294")(%2, #self#, op, itr)::Vector{Int64}
└──      return %3

Cthulhu suggests that the result is not concretely inferred, whereas code_warntype suggests that it is. Descending into the method, I see

Inference terminated in an incomplete state due to argument-type changes during recursion

so perhaps the failure in type inference arises from this, although it is unclear why code_warntype is not affected by it.

I think the issue is that deeper down there's a sum(map(length, V)) call in _typed_vcat, which is again a mapreduce call, albeit with different types than the original; the sketch below shows one way to probe that inner call in isolation.
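(A minimal way to probe this, sketched here rather than taken from the original session: descend into the inner reduction over the lengths directly, with the same tuple of arrays. The variable V below is just a stand-in for the arguments that _typed_vcat receives.)

julia> using Cthulhu, StaticArrays

julia> V = (SA[1], [1]);  # stand-in for the arrays whose lengths _typed_vcat sums

julia> @descend_code_warntype sum(map(length, V))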

Edit: This appears to work correctly on v1.10:

julia> @descend_code_warntype reduce(vcat, (SA[1], [1]))
reduce(op, itr; kw...) @ Base ~/packages/julias/julia-1.10/share/julia/base/reduce.jl:490
[ Info: This method only fills in default arguments; descend into the body method to see the full source.
490 reduce(op::typeof(vcat), itr::Tuple{SVector{1, Int64}, Vector{Int64}}; kw...)::Vector{Int64} =
[...]

jishnub commented Sep 7, 2023

This seems improved on nightly, where Cthulhu reports the inferred result to be Union{Vector{Int64}, SVector{2, Int64}}. The reason for the SVector{2, Int64} appears to be a failure in constant-propagation analysis in afoldl:

julia> @descend_code_warntype Base.afoldl(Base.BottomRF(vcat), Base._InitialValue(), SA[1], [1])
[ Info: tracking Base
afoldl(op, a, bs...) @ Base ~/julia/base/operators.jl:540
540 function afoldl(op::Base.BottomRF{typeof(vcat)}, a::Base._InitialValue, bs::Tuple{SVector{1, Int64}, Vector{Int64}}...)::Union{Vector{Int64}, SVector{2, Int64}}
541     l::Core.Const(2) = length(bs::Tuple{SVector{1, Int64}, Vector{Int64}})::Int64
542     i =  0; y = a;            (l::Int64 == i::Int64)::Bool && return y
543     #@nexprs 31 i -> (y = op(y, bs[i]); l == i && return y)
544     i =  1; y = op(y, bs[i]); (l::Int64 == i::Int64)::Bool && return y
545     i =  2; y = op(y, bs[i]); (l::Int64 == i::Int64)::Bool && return y
546     i =  3; y = op(y, bs[i]); l == i && return y
547     i =  4; y = op(y, bs[i]); l == i && return y
548     i =  5; y = op(y, bs[i]); l == i && return y
549     i =  6; y = op(y, bs[i]); l == i && return y
550     i =  7; y = op(y, bs[i]); l == i && return y
551     i =  8; y = op(y, bs[i]); l == i && return y
552     i =  9; y = op(y, bs[i]); l == i && return y
553     i = 10; y = op(y, bs[i]); l == i && return y
554     i = 11; y = op(y, bs[i]); l == i && return y
555     i = 12; y = op(y, bs[i]); l == i && return y
556     i = 13; y = op(y, bs[i]); l == i && return y
557     i = 14; y = op(y, bs[i]); l == i && return y
558     i = 15; y = op(y, bs[i]); l == i && return y
559     i = 16; y = op(y, bs[i]); l == i && return y
560     i = 17; y = op(y, bs[i]); l == i && return y
561     i = 18; y = op(y, bs[i]); l == i && return y
562     i = 19; y = op(y, bs[i]); l == i && return y
563     i = 20; y = op(y, bs[i]); l == i && return y
564     i = 21; y = op(y, bs[i]); l == i && return y
565     i = 22; y = op(y, bs[i]); l == i && return y
566     i = 23; y = op(y, bs[i]); l == i && return y
567     i = 24; y = op(y, bs[i]); l == i && return y
568     i = 25; y = op(y, bs[i]); l == i && return y
569     i = 26; y = op(y, bs[i]); l == i && return y
570     i = 27; y = op(y, bs[i]); l == i && return y
571     i = 28; y = op(y, bs[i]); l == i && return y
572     i = 29; y = op(y, bs[i]); l == i && return y
573     i = 30; y = op(y, bs[i]); l == i && return y
574     i = 31; y = op(y, bs[i]); l == i && return y
575     for i in (i + 1):l
576         y = op(y, bs[i])
577     end
578     return y
579 end
Select a call to descend into or ↩ to ascend. [q]uit. [b]ookmark.
Toggles: [w]arn, [h]ide type-stable statements, [t]ype annotations, [s]yntax highlight for Source/LLVM/Native.
Show: [S]ource code, [A]ST, [T]yped code, [L]LVM IR, [N]ative code
Actions: [E]dit source code, [R]evise and redisplay
 • length(bs::Tuple{SVector{1, Int64}, Vector{Int64}})
               l::Int64 == i::Int64
   %11 = getindex(::Tuple{SVector{1, Int64}, Vector{Int64}},::Int64)::Union{Vector{Int64}, SVector{1, Int64}}
   %12 = call  Base.BottomRF{typeof(vcat)}(vcat)(::Base._InitialValue,::Union{Vector{Int64}, SVector{1, Int64}})::
    l::Int64 == i::Int64
   %19 = getindex(::Tuple{SVector{1, Int64}, Vector{Int64}},::Int64)::Union{Vector{Int64}, SVector{1, Int64}}
   %20 = call  Base.BottomRF{typeof(vcat)}(vcat)(,)::Union{Vector{Int64}, SVector{2, Int64}}
    l::Int64 == i::Int64
   

In this case, the getindex operations aren't concretely inferred, which makes it appear that index 1 may be accessed twice. This appears to be a regression, as the indices are constant-propagated correctly on v1.9.3 (see the transcript below; a standalone getindex sketch follows it):

julia> @descend_code_warntype Base.afoldl(Base.BottomRF(vcat), Base._InitialValue(), SA[1], [1])
[ Info: tracking Base
afoldl(op, a, bs...) @ Base ~/packages/julias/julia-1.9/share/julia/base/operators.jl:531
531 function afoldl(op::Base.BottomRF{typeof(vcat)}, a::Base._InitialValue, bs::Tuple{SVector{1, Int64}, Vector{Int64}}...)::Vector{Int64}
532     l::Core.Const(2) = length(bs::Tuple{SVector{1, Int64}, Vector{Int64}})::Int64
533     i =  0; y = a;            l::Int64 == i::Int64 && return y
534     #@nexprs 31 i -> (y = op(y, bs[i]); l == i && return y)
535     i =  1; y = op(y, bs[i]); l::Int64 == i::Int64 && return y
536     i =  2; y = op(y, bs[i]); l::Int64 == i::Int64 && return y
537     i =  3; y = op(y, bs[i]); l == i && return y
538     i =  4; y = op(y, bs[i]); l == i && return y
539     i =  5; y = op(y, bs[i]); l == i && return y
540     i =  6; y = op(y, bs[i]); l == i && return y
541     i =  7; y = op(y, bs[i]); l == i && return y
542     i =  8; y = op(y, bs[i]); l == i && return y
543     i =  9; y = op(y, bs[i]); l == i && return y
544     i = 10; y = op(y, bs[i]); l == i && return y
545     i = 11; y = op(y, bs[i]); l == i && return y
546     i = 12; y = op(y, bs[i]); l == i && return y
547     i = 13; y = op(y, bs[i]); l == i && return y
548     i = 14; y = op(y, bs[i]); l == i && return y
549     i = 15; y = op(y, bs[i]); l == i && return y
550     i = 16; y = op(y, bs[i]); l == i && return y
551     i = 17; y = op(y, bs[i]); l == i && return y
552     i = 18; y = op(y, bs[i]); l == i && return y
553     i = 19; y = op(y, bs[i]); l == i && return y
554     i = 20; y = op(y, bs[i]); l == i && return y
555     i = 21; y = op(y, bs[i]); l == i && return y
556     i = 22; y = op(y, bs[i]); l == i && return y
557     i = 23; y = op(y, bs[i]); l == i && return y
558     i = 24; y = op(y, bs[i]); l == i && return y
559     i = 25; y = op(y, bs[i]); l == i && return y
560     i = 26; y = op(y, bs[i]); l == i && return y
561     i = 27; y = op(y, bs[i]); l == i && return y
562     i = 28; y = op(y, bs[i]); l == i && return y
563     i = 29; y = op(y, bs[i]); l == i && return y
564     i = 30; y = op(y, bs[i]); l == i && return y
565     i = 31; y = op(y, bs[i]); l == i && return y
566     for i in (i + 1):l
567         y = op(y, bs[i])
568     end
569     return y
570 end
Select a call to descend into or ↩ to ascend. [q]uit. [b]ookmark.
Toggles: [w]arn, [h]ide type-stable statements, [t]ype annotations, [s]yntax highlight for Source/LLVM/Native, [j]ump to source always.
Show: [S]ource code, [A]ST, [T]yped code, [L]LVM IR, [N]ative code
Actions: [E]dit source code, [R]evise and redisplay
 • length(bs::Tuple{SVector{1, Int64}, Vector{Int64}})
   %5 = < concrete eval > ==(::Core.Const(2),::Core.Const(0))::Core.Const(false)
   %10 = < constprop > getindex(::Tuple{SVector{1, Int64}, Vector{Int64}},::Core.Const(1))::SVector{1, Int64}
   %11 = BottomRF(::Base._InitialValue,::SVector{1, Int64})::SVector{1, Int64}
   %12 = < concrete eval > ==(::Core.Const(2),::Core.Const(1))::Core.Const(false)
   %17 = < constprop > getindex(::Tuple{SVector{1, Int64}, Vector{Int64}},::Core.Const(2))::Vector{Int64}
   %18 = BottomRF(::SVector{1, Int64},::Vector{Int64})::Vector{Int64}
   %19 = < concrete eval > ==(::Core.Const(2),::Core.Const(2))::Core.Const(true)
   
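(The constant-propagation point can also be checked in isolation. A minimal sketch, not from the original session; t is just an example value and the exact output depends on the Julia version: with a plain Int index, inference over the heterogeneous tuple can only return the union of the element types, whereas a literal index lets constant propagation pick the exact element type.)

julia> using StaticArrays

julia> t = (SA[1], [1]);  # example heterogeneous tuple, same element types as in the issue

julia> Base.return_types(getindex, Tuple{typeof(t), Int})  # non-constant index: expect a Union of the element types

julia> Base.return_types(x -> x[1], Tuple{typeof(t)})      # literal index: expect the exact element type via constprop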
