
guaranteed tail call elimination #271

Closed
rust-highfive opened this issue Sep 24, 2014 · 43 comments
Labels
postponed (RFCs that have been postponed and may be revisited at a later time) · T-lang (Relevant to the language team, which will review and decide on the RFC)

Comments

@rust-highfive

Tracking issue for postponed PR #81

@rust-highfive added the postponed label on Sep 24, 2014
@ranma42
Contributor

ranma42 commented Jan 12, 2015

In the PR several people mentioned that they preferred become over be.
Even though the PR has been postponed, would it be a good idea to reserve become for tail calls before releasing 1.0?

I'm not suggesting implementing tail call elimination, just reserving the keyword for the future.

@ticki
Contributor

ticki commented Dec 14, 2015

What's the state of this?

@steveklabnik
Member

@ticki status is "nobody has submitted a new RFC yet"

@ticki
Contributor

ticki commented Jan 21, 2016

I might make one when I get time, if no-one is working on this.

@hatahet

hatahet commented Feb 26, 2016

In Scala, annotating a method with @tailrec would make the compiler enforce that that method's implementation is tail recursive, otherwise it is a compiler error.

Would something similar be a good fit for Rust (e.g. tagging functions with #[tailrec], or something of that sort)?
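For example, something along these lines, where #[tailrec] is purely hypothetical syntax and gcd is only an illustration:

// Hypothetical attribute (not real Rust): reject the function at compile time
// unless the recursion can be turned into a loop, like Scala's @tailrec.
// #[tailrec]
fn gcd(a: u64, b: u64) -> u64 {
    // The recursive call is in tail position, so it can be compiled as a jump.
    if b == 0 { a } else { gcd(b, a % b) }
}

fn main() {
    assert_eq!(gcd(48, 36), 12);
}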

@steveklabnik
Member

@hatahet we reserved a 'become' keyword; the prevailing idea seems to be that return -> become would indicate that the function should guarantee TCO

@hatahet

hatahet commented Feb 26, 2016

Does that mean that having become at the tail position would always be required?

fn loop1() {
  // ...
  become loop1()
}

otherwise, what if it were mixed as follows:

fn loop1() {
  // ...
  if foo {
    // ...
    become loop1()
  } else {
    // ...
    loop1()
  }
}

@steveklabnik
Member

I would imagine so, but that's the work that an RFC proposing the syntax would have to work through :)

@nrc added the T-lang label on Aug 17, 2016
@ConnyOnny

How about we first implement this with a trampoline¹ as a slow cross-platform fallback implementation, and then successively implement faster methods for each architecture/platform?
This way the feature can be ready quite quickly, so people can use it for elegant programming. In a future version of rustc such code will magically become fast.

¹ see here for a small and to-be-generalized example trampoline: https://gist.github.com/ConnyOnny/5917e4e549056f6a3db01a5277480f6b
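For illustration, a minimal sketch of the trampoline idea (this is not the code from the gist; Step, trampoline and countdown are made-up names):

// Each step either yields a final value or the next step to run, so the
// driver loop keeps recursion depth constant no matter how many "calls" occur.
enum Step<T> {
    Done(T),
    Call(Box<dyn FnOnce() -> Step<T>>),
}

fn trampoline<T>(mut step: Step<T>) -> T {
    loop {
        match step {
            Step::Done(value) => return value,
            Step::Call(next) => step = next(),
        }
    }
}

// A "tail-recursive" countdown expressed as trampoline steps.
fn countdown(n: u64) -> Step<u64> {
    if n == 0 {
        Step::Done(0)
    } else {
        Step::Call(Box::new(move || countdown(n - 1)))
    }
}

fn main() {
    // Runs in constant stack space even for very large n, at the cost of one
    // heap allocation per step.
    println!("{}", trampoline(countdown(1_000_000)));
}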

@U007D

U007D commented Jan 8, 2017

I'm new to the Rust community, but would be forever sad if I didn't mention this before we committed to this path -- I am coming from the C++ community, where there is a strong (perhaps too strong) reluctance to introduce new keywords.

TL;DR: abandon the become keyword and automatically optimize return when applicable.

I understand that become has been reserved for quite some time for this purpose, but I propose that the Rust compiler recognize and optimize explicit and implicit return tail calls, freeing developers from the burden of having to specify TCO 'manually'. To borrow @ConnyOnny's phrasing, with this scheme, in a future version of rustc, existing code will magically become ~~fast~~ safe.

TCO would not be enabled by default for debug builds, but would be for (optimized) release builds. A compiler switch would be provided for explicit control (for example, when a user wanted to selectively disable just TCO but retain other optimizations).

If there is someone willing to guide a Rust newb, I'm willing to work on an RFC for this.

@ranma42
Contributor

ranma42 commented Jan 8, 2017

@BradleyGibson TCO can already happen in some cases (i.e. the compiler might optimise tail calls if it decides to). become is about guaranteed tail calls (i.e. getting a compilation error when the compiler is unable to perform TCO).

@ssokolow

ssokolow commented Jan 8, 2017

@BradleyGibson Last time I looked at it, the impression I got was that we're going in this direction:

  1. Optimize recognized tail-calls whether or not it's asked for
  2. become means "Throw a compile-time error if we cannot apply tail-call optimization here"

Given that become is already reserved, that would most definitely be in line with the approach Rust takes to this sort of thing.

EDIT: ...and @ranma42 beat me to it by less than a second.

@nagisa
Member

nagisa commented Jan 8, 2017

abandon become keyword and automatically optimize return when applicable

This kinda goes against the explicitness of Rust.

with this scheme, in a future version of rustc, existing code will magically become fast.

Guaranteed tail calls, if anything, make stuff slower. The only benefit they have is guaranteed bounded stack usage. And there's nothing preventing the compiler from replacing return with become in cases where it's known to work.

If there is someone willing to guide a Rust newb on this, I'm willing to work on an RFC for this.

There’s a considerable amount of work going on already. Most notably this and the corresponding prototyping effort (turned out to be quite a bag of snakes).

@U007D

U007D commented Jan 8, 2017

@ranma42 Ah, that is good for me to know. :) Ok, then, I'm cool with become as long as TCO is also performed by default where applicable on return (explicit or implicit).

@U007D

U007D commented Jan 8, 2017

This kinda goes against the explicitness of rust.

As always, it is a matter of degree. I don't think anyone would propose creating a new keyword requiring developers to opt in for every optimization, so (for me) the real question is: why would we require users to opt in to this particular optimization? I'm not sure we are (based on @ranma42's reply), but if this is what you mean, always-opt-in doesn't scale particularly well. And if not always, when?

Guaranteed tail calls, if anything, make stuff slower. The only benefit they have is guaranteed bounded stack usage. And there's nothing preventing the compiler from replacing return with become in cases where it's known to work.

In most cases the real benefit is the stack, not speed, I agree. There are exceptions, though, and ideally the optimizer will do the right thing under the right circumstances. In any case, you are correct and I've updated my post to reflect this.

There’s a considerable amount of work going on already.

Great, I'll definitely take a look. Thanks!

@U007D

U007D commented Jan 8, 2017

@ssokolow that sounds perfect to me. Thanks for the summary.

@andrestesti

A #[tailrec] attribute decorating the function doesn't require a reserved word, and is easier to read and declare. It is also declarative: you don't need to modify your executable code to request an optimization check. If the compiler can't optimize a #[tailrec] function, it raises an error (and maybe a suggestion). The @tailrec annotation works fine in Scala; I think Rust should follow the same approach.

@ConnyOnny

The problem with an annotation is that a function may include both an optimizable and a non-optimizable return. Consider:

fn foo(x: i32, accu: i32) -> i32 {
    if x < 0 {
        // not able to TCO because of the +accu after the call
        // but also not required because this happens only once => no stack overflow
        return foo(-x,1) + accu;
    } else {
        // needs TCO because the stack could easily overflow
        become foo(x-1, x*accu);
    }
}

If we'd try to solve this with an annotation and two regular return statements, what should the compiler do?

The become keyword is strictly superior to an annotation.

It is also superior to "just apply TCO wherever possible" because sometimes you know "if this is not tail call optimized, the stack will overflow for sure", so you must get a compile error about it, and not a runtime error!

@andrestesti

andrestesti commented Feb 10, 2017

The way Scala's @tailrec annotation works is by raising a compiler error if the function cannot be optimized into a loop. Thus you will never get a non-optimizable function, since it won't compile. Additionally, the compiler suggests how to change your code so that your return values are in optimizable position. Another quirk with the become keyword is that it is not symmetric with return-keyword omission. It clashes with expression-oriented/functional code style, even though recursion is a very functional construct.

fn foo(x: i32, accu: i32) -> i32 {
    if x < 0 {
        // return omission
        foo(-x,1) + accu
    } else {
        // asymmetry
        become foo(x-1, x*accu);
    }
}

@glaebhoerl
Contributor

#1888

@U007D

U007D commented Feb 10, 2017

@ConnyOnny addition is commutative, so with help from the optimizer, shouldn't return foo(-x,1) + accu be TCO-able as return accu + foo(-x,1)?

@OvermindDL1

@ConnyOnny addition is commutative, so with help from the optimizer, shouldn't return foo(-x,1) + accu be TCO-able as return accu + foo(-x,1)?

It's not that the + accu comes after the call visually, but that it runs after it computationally. From a functional perspective it is +(foo(-x,1), accu), which as you can see is no different from +(accu, foo(-x,1)), meaning it is not TCO-able at all (well, a simple native operation 'can' be made TCO-able with some transformations, but that is a different issue).
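To make the difference concrete (sum is only an illustrative function, not taken from the example above):

// `sum(n - 1) + n` is not a tail call: the addition runs after the inner call
// returns, so its stack frame must be kept alive; commuting the operands
// changes nothing.
fn sum(n: u64) -> u64 {
    if n == 0 { 0 } else { sum(n - 1) + n }
}

// Folding the pending addition into an accumulator puts the recursive call in
// tail position, the form a guaranteed-TCO `become` could accept.
fn sum_acc(n: u64, acc: u64) -> u64 {
    if n == 0 { acc } else { sum_acc(n - 1, acc + n) }
}

fn main() {
    assert_eq!(sum(10), sum_acc(10, 0));
}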

@alkis

alkis commented Jan 1, 2018

What's blocking this RFC? Are there unresolved issues?

@00imvj00

Any update on this RFC? Please provide some update if possible. @steveklabnik @alexcrichton @aturon

@steveklabnik
Member

@alkis @0zero0zero this isn't an RFC, it's an issue. As @glaebhoerl said above, #1888 is the actual RFC. You can see the latest status over there.

@kephas

kephas commented Mar 1, 2018

@nagisa, I thought it had been established 40 years ago that TCO doesn't make anything slower. It's actually a technique to make things faster… (AFAIK, nothing can be faster than a JMP!)

@U007D, just making TCO a possibility pretty much defeats the purpose. If I write a state machine with TCO in mind, there could be billions of recursive calls. If TCO isn't guaranteed, that program is pretty much guaranteed to stack overflow.
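A sketch of the kind of code in question (the state machine is illustrative, not from a real program):

// Two states expressed as mutually recursive functions. Every recursive call
// is syntactically in tail position, yet without guaranteed TCO each
// transition may still consume a stack frame.
fn state_a(remaining: u64) -> &'static str {
    if remaining == 0 { "halted in A" } else { state_b(remaining - 1) }
}

fn state_b(remaining: u64) -> &'static str {
    if remaining == 0 { "halted in B" } else { state_a(remaining - 1) }
}

fn main() {
    // Kept small so the example runs either way; with billions of transitions
    // the program only stays within the stack if the calls become jumps, which
    // is exactly what a `become`-style guarantee would promise.
    println!("{}", state_a(10_000));
}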

@U007D

U007D commented Mar 1, 2018

@kephas yes, agreed that mandatory TCO via a specialized keyword is very useful so that the stack does not overflow on recursion. In addition, I would like to see conventional recursive calls take advantage of TCO whenever possible, as per @ssokolow's summary above.

@le-jzr

le-jzr commented Mar 1, 2018

@U007D If TCO is possible but not guaranteed, there must be an artificial recursion limit when TCO is used. Otherwise you'd have code that works or doesn't work depending on random compiler heuristics.

@ssokolow

ssokolow commented Mar 1, 2018

@le-jzr There are problems with that:

  1. "works or not works depending on random compiler heuristics" is the norm for existing optimizations and requiring new optimizations to be held to a higher standard would have a chilling effect on new optimizations. (ie. It's unfair to single out TCO to be held to a higher standard.)
  2. Recursion depth is generally dependant on non-constant values, which means you'd need to enforce the limit at runtime, which means adding overhead.

That said, I'd be all for it if you meant "in debug builds, ensure that the non-mandatory TCO optimization is either disabled or instrumented to continue enforcing the recursion limit". That'd be analogous to the overflow/underflow checks we already have.
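A hand-written sketch of what such debug-only instrumentation could amount to (the depth parameter and the limit value are illustrative; a compiler-inserted check would not change the function's signature):

const RECURSION_LIMIT: u64 = 100_000;

fn countdown(n: u64, depth: u64) -> u64 {
    // debug_assert! compiles away in release builds, mirroring the existing
    // overflow/underflow checks: the limit is only enforced in debug builds,
    // where the non-mandatory TCO is unlikely to fire anyway.
    debug_assert!(depth < RECURSION_LIMIT, "recursion limit exceeded");
    if n == 0 { n } else { countdown(n - 1, depth + 1) }
}

fn main() {
    println!("{}", countdown(10_000, 0));
}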

@le-jzr

le-jzr commented Mar 1, 2018

@ssokolow I'm not aware of any other optimization that behaves like this. Can you give an example for your first point?

For the second one, incrementing a counter is much less overhead than an un-TCO'd function call, so the net performance is still a lot better than you have any right to expect from an unreliable optimization.

That being said, if the recursion limit is enforced only in debug builds, that's fine by me.

@U007D

U007D commented Mar 1, 2018

@le-jzr For example, inlining optimizations can behave like this, depending on register pressure, hot-path threshold analysis or resulting binary size constraints.

My thinking is that non-mandatory TCO won't make the situation worse than not having any TCO at all in terms of stack usage, so I'm looking at it exactly as you and @ssokolow are suggesting--an optimization.

From the perspective of predictability, I agree there could be an issue with this (like many optimizations) and also support the ability to disable and/or enforce recursion limits in debug.

@ssokolow

ssokolow commented Mar 1, 2018

"like this" in what sense?

If you don't explicitly ask for guaranteed TCO, whether your program exhausts the stack depends on whether the TCO optimizer kicks in. If you don't explicitly warn the compiler about accesses to volatile memory, whether your program works or silently corrupts data or crashes is entirely dependent on the internal implementation details of passes like the loop-invariant code motion and redundant instruction combining optimizations.

When it comes to benchmarking, one of the first things you'll be taught to watch out for is the loop-deletion pass removing your entire test because the only used result it produces is the time it took to run.

On the embedded side, whether the code fits into the microcontroller can be determined by the vagaries of optimizer internals.
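As an aside, the usual guard against that benchmarking hazard looks something like this (the loop body is illustrative; std::hint::black_box is the standard library's opaque-value hint):

use std::hint::black_box;
use std::time::Instant;

fn main() {
    let start = Instant::now();
    let mut acc: u64 = 0;
    for i in 0..10_000_000u64 {
        // black_box keeps the value opaque to the optimizer, so loop deletion
        // or closed-form summation cannot remove the work being measured.
        acc = acc.wrapping_add(black_box(i));
    }
    black_box(acc);
    println!("sum = {}, took {:?}", acc, start.elapsed());
}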

@le-jzr

le-jzr commented Mar 1, 2018

@ssokolow What I mean is that TCO effectively removes a resource limit. It does not just increase the limit by a finite amount; it eliminates it completely (turns it infinite), and this happens in safe code (in contrast to your examples).

In practice, what might happen is that you test and debug your application with arbitrarily large data and everything works flawlessly, and then it doesn't work at all on the user's side, and you are left confused, unable to reproduce the issue.

@ssokolow

ssokolow commented Mar 3, 2018

I see your point and it definitely sounds like it would be well-suited to being handled the same way overflow checks are.

@kephas

kephas commented Mar 9, 2018

@U007D, does inlining change the correctness of the program? I don't think it does. Whether the code is inlined or not, the program should still pass or fail the same tests. That is not true if you write TCO-dependent code: whether it works depends on whether TCO is actually done.

@OvermindDL1

Exactly that, some algorithms depend on guaranteed TCO.

@U007D

U007D commented Mar 9, 2018

@kephas I agree completely. I brought up inlining in response to the question of what other optimizations "work or don't work depending on random compiler heuristics". Of course no good optimization would change the correctness of the program.

I am saying that in addition to the needs of TCO-dependent code, non-TCO-dependent code can also benefit--I would like to see that happen too.

@Centril
Contributor

Centril commented May 15, 2018

We're not doing postponement issues anymore; just continue discussion over at #1888.
Therefore, closing this to reduce the backlog.

@nixpulvis

@Centril can we reopen #1888 then? This is currently in a graveyard of sorts.

@Centril
Contributor

Centril commented May 10, 2019

@nixpulvis It's probably not on the roadmap this year.

@nixpulvis

nixpulvis commented May 11, 2019

That's fine, I'm interested in Rust for the long haul ;) Been watching these issues for years now.

@shirshak55

Is there any planning for proper TCO?

@matu3ba

matu3ba commented Jul 20, 2020

@shirshak55 look at #1888
