
Do some constant propagation before type inference runs #5560

Closed
Keno opened this issue Jan 26, 2014 · 7 comments · Fixed by #24362
Labels
performance Must go faster

Comments

@Keno
Member

Keno commented Jan 26, 2014

It would be good to run some constant propagation before type inference for cases in which a type parameter is really constant, but we don't know that because we compute with it first. E.g.:

type foo{m}; end
inv{m}(x::foo{m}) = foo{-m}()
id{m}(x::foo{m}) = foo{m}()

For id, we get a nice

julia> code_typed(id,(foo{1},))
1-element Array{Any,1}:
 :($(Expr(:lambda, {:x}, {{},{{:x,foo{1},0}},{}}, quote  # /Users/kfischer/.julia/REPL/scripts/repl.jl, line 1:
        return $(Expr(:new, foo{1}))::foo{1}
    end)))

whereas for inv, we get

julia> code_typed(inv,(foo{1},))
1-element Array{Any,1}:
 :($(Expr(:lambda, {:x}, {{},{{:x,foo{1},0}},{}}, quote  # /Users/kfischer/.julia/REPL/scripts/repl.jl, line 1:
        return top(apply_type)(foo,top(box)(Int64,top(neg_int)(1))::Int64)::Type{_<:foo{m}}()::foo{m}
    end)))
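The pass being asked for is ordinary constant folding: when every argument of a pure primitive is a known constant, evaluate it at compile time so later stages see a literal instead of a call. A minimal, hypothetical sketch in Python (the expression encoding, the `FOLDABLE` table, and the function names are illustrative, not Julia's actual internals):

```python
# Pure primitives we are willing to evaluate at compile time.
FOLDABLE = {
    "neg_int": lambda x: -x,
    "add_int": lambda x, y: x + y,
}

def fold(expr):
    """Recursively fold an expression tree of the form (op, arg, ...) or an atom."""
    if not isinstance(expr, tuple):    # literal or symbol: nothing to fold
        return expr
    op, *args = expr
    args = [fold(a) for a in args]     # fold children first
    if op in FOLDABLE and all(isinstance(a, int) for a in args):
        return FOLDABLE[op](*args)     # evaluate the pure primitive now
    return (op, *args)                 # leave partially-constant calls alone

# neg_int(1) folds to the literal -1, so apply_type(foo, -1) has a constant
# parameter and inference could conclude the result type is foo{-1}.
print(fold(("apply_type", "foo", ("neg_int", 1))))  # → ('apply_type', 'foo', -1)
```

With the call to `neg_int` folded away, the `inv` case would look like the `id` case to inference.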
@Keno
Member Author

Keno commented Jan 26, 2014

The only problem I see is that you probably want:

Type Inference -> Inlining -> Constant Propagation -> Type Inference
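That pipeline can be read as iterating the passes until a fixpoint, with a round budget to stop it; a minimal sketch under that assumption (the pass interface here is hypothetical):

```python
def run_pipeline(code, passes, max_rounds=5):
    """Repeat a pass pipeline until nothing changes or the round budget runs out.

    `passes` is a list of functions code -> code, e.g. the
    infer -> inline -> const-prop sequence suggested above. The round
    budget is what keeps the "no limit to this" problem finite.
    """
    for _ in range(max_rounds):
        before = code
        for p in passes:
            code = p(code)
        if code == before:     # fixpoint reached: more rounds cannot help
            break
    return code
```

Each extra round can expose new constants to fold, which is exactly why examples can be constructed that need arbitrarily many rounds.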

@lindahua
Contributor

+1

@JeffBezanson
Sponsor Member

There is really no limit to this --- the more you iterate inference, inlining, and constant prop, the more cases you will optimize. You can construct examples that require arbitrary numbers of these passes.

Evaluating intrinsics like neg_int at compile time is a challenge. We could pre-compile a callable version of each, implement an interpreter for all of them, or call codegen and pass the result to LLVM's interpreter (though that sounds like it has a lot of overhead).

@ArchRobison
Contributor

An alternative to inlining to consider is context-sensitive inter-procedural analysis. That enables a compiler to do "what if I inlined?" analyses without necessarily paying the full cost of inlining. Of course context-sensitive analysis also has the "when to stop?" issue.

I'm wondering if there's a way to automatically generate the interpreter for the intrinsics from some kind of table.
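A table-driven interpreter for the intrinsics could look roughly like this; a hypothetical Python sketch (the table contents and the fixed 64-bit wrapping are illustrative assumptions, not how Julia actually implements its intrinsics):

```python
import operator

# Hypothetical table: intrinsic name -> Python operation. A real table would
# also record bit width and signedness per entry rather than assuming Int64.
INTRINSIC_TABLE = {
    "neg_int": operator.neg,
    "add_int": operator.add,
    "mul_int": operator.mul,
}

def make_interpreter(table, bits=64):
    """Generate one evaluator covering every intrinsic in the table."""
    mask = (1 << bits) - 1
    def interp(name, *args):
        raw = table[name](*args) & mask          # wrap to `bits` bits
        # reinterpret as two's-complement signed
        return raw - (1 << bits) if raw >> (bits - 1) else raw
    return interp

interp = make_interpreter(INTRINSIC_TABLE)
print(interp("neg_int", 1))                 # -1
print(interp("mul_int", 2**63 - 1, 2))      # -2, wrapping like Int64
```

The point of generating the interpreter from the table is that adding an intrinsic is one table entry, so the compile-time evaluator cannot drift out of sync with the list of intrinsics.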

JeffBezanson added a commit that referenced this issue Apr 7, 2016
this simplifies some of the code, and gives sharper type information.
fixes #5090, type inference of `&&` and `||` chains
also makes it easy to fix #5560 and #14324
JeffBezanson added a commit that referenced this issue Apr 10, 2016
this simplifies some of the code, and gives sharper type information.
fixes #5090, type inference of `&&` and `||` chains
also makes it easier to deal with #5560 and #14324
vtjnash pushed a commit that referenced this issue Apr 18, 2016
this simplifies some of the code, and gives sharper type information.
fixes #5090, type inference of `&&` and `||` chains
also makes it easier to deal with #5560 and #14324
@maleadt
Member

maleadt commented Oct 19, 2016

Just ran into this trying to pass a const computation to a generated function by means of a Val type:

@generated function foobar{D}(::Type{Val{D}})
    return :($D)
end

function working1()
    const size = 12
    return foobar(Val{size})
end

const size = 3*4
function working2()
    return foobar(Val{size})
end

function broken()
    const size = 3*4
    return foobar(Val{size})
end

println(@code_typed working1())
@inferred working1()

println(@code_typed working2())
@inferred working2()

println(@code_typed broken())
@inferred broken()

This broke some Julia/CUDA code where we can't perform the call to foobar dynamically. We use it to implement @cuSharedMemory(T, dims...), where we need a generated function to determine the type of T in order to allocate a certain amount of memory.

@timholy
Sponsor Member

timholy commented May 25, 2017

Might be worth pointing out that syntax like #22056 (esp. the first comment) might be a way for the programmer to specify exactly which kind of constant propagation should be performed. Presumably this makes it a finite problem.

@vtjnash
Sponsor Member

vtjnash commented May 25, 2017

Presumably this makes it a finite problem.

I'm pretty sure it doesn't. But with the new inference convergence, #21933 should do it.

This issue was closed.
7 participants