
should struct fields and array elements be dropped in reverse declaration order (a la C++) #744

Closed
steveklabnik opened this issue Jan 27, 2015 · 69 comments
Labels
T-lang Relevant to the language team, which will review and decide on the RFC.

Comments

@steveklabnik
Member

Issue by pnkfelix
Thursday Aug 21, 2014 at 23:34 GMT

For earlier discussion, see rust-lang/rust#16661

This issue was labelled with: in the Rust repository


should struct fields and array elements be dropped in reverse declaration order (a la C++) ?

rust-lang/rust#16493 made us internally consistent with respect to struct fields, but chose to make the drops follow declaration order.

Note struct construction follows the order of the fields given at the construction site, so there's no way to ensure that the drop order is the reverse of the order used at the construction site.

But we should still think about whether we should change that. That is, if local variables are dropped in reverse declaration order (even though they can be initialized in any order), then it seems like it would make sense to do the same thing for struct fields.
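
For concreteness, a minimal sketch of the asymmetry being discussed (this example is not from the original issue; the printed order matches the behaviour RFC 1857 later stabilized, i.e. struct fields in declaration order, locals in reverse declaration order, and the type names are illustrative):

struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

struct Pair {
    first: Noisy,
    second: Noisy,
}

fn main() {
    // Locals: declared first..second, dropped second, then first.
    {
        let _first = Noisy("local first");
        let _second = Noisy("local second");
    }
    // Struct fields: dropped first, then second (declaration order),
    // regardless of the order used at the construction site.
    let _pair = Pair {
        second: Noisy("field second"),
        first: Noisy("field first"),
    };
}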

@pnkfelix
Member

(If we're going to do this, we probably need to do it for 1.0. And thus I suspect we just will not do it. Which I am fine with.)

@pnkfelix
Member

pnkfelix commented Mar 9, 2015

Although, today I did run into this fun case:

use std::thread;

struct D(u8);

impl Drop for D {
    fn drop(&mut self) {
        println!("Dropping {}", self.0);
    }
}

fn main() {
    fn die() -> D { panic!("Oh no"); }
    let g = thread::spawn(|| {
        let _nested = vec![vec![D( 1), D( 2), D( 3), D( 4)],
                           vec![D( 5), D( 6), die(), D( 7)],
                           vec![D( 8), D( 9), D(10), D(11)]];
    });
    assert!(g.join().is_err());
}

The above prints:

Dropping 6
Dropping 5
Dropping 1
Dropping 2
Dropping 3
Dropping 4

So ... when we panic in the middle of the vec! construction, we drop in reverse order ... but then we drop fully constructed vecs in normal order (?) Does this really make sense?

@pnkfelix
Member

A recent discussion with @nikomatsakis pointed out tuple structs as a case bordering the issues here.

http://is.gd/4mxF0Z

Namely, let a = ...; let b = ...; must drop "b, then a".

For consistency, we then dictate that let (a, b) = ...; also drops "b, then a". (This seems to me like it is not subject to debate nor change.)

The oddity is that then let p = (A, B); will drop the left component of the struct before the right component when it eventually drops p itself. I.e. that looks like dropping "a, then b"; which seems inconsistent with the two cases above.
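
A small self-contained reconstruction of that comparison (the is.gd link above is a playground snippet; the comments reflect the order described here, which RFC 1857 later stabilized):

struct Noisy(u32);
impl Drop for Noisy {
    fn drop(&mut self) { println!("dropping {}", self.0); }
}

fn main() {
    {
        // Destructuring bindings behave like two locals: drops b (1), then a (0).
        let (a, b) = (Noisy(0), Noisy(1));
    }
    {
        // The tuple is dropped as a single value: element 0 first, then element 1.
        let p = (Noisy(0), Noisy(1));
    }
}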

@nikomatsakis
Contributor

I feel pretty strongly that we ought to drop everything back-to-front. It seems to be overall most consistent. My mental model is that everything in life is a stack, and things in stacks get popped, and that's how it is. :P Now, it may be that for backwards compatibility reasons we don't want to change this though.

@nikomatsakis
Contributor

I feel pretty strongly that we ought to drop everything back-to-front.

To clarify, I mean "as much as possible". I'm sure there are weird corner cases I'm not thinking about. But I would like to be as consistent with C++ as possible (I understand Swift took a different order, but I'm stuck in my ways.)

@steveklabnik
Member Author

It seems to be overall most consistent

👍

@nagisa
Member

nagisa commented Aug 18, 2015

I personally don’t care as long as it drops most of the time. Is there any code that would break due to non-specified order?

@glaebhoerl
Contributor

I have the same stack intuition as @nikomatsakis, fwiw.

@nikomatsakis
Contributor

@nagisa it's very hard to know whether changing the current behavior of the compiler would affect the correctness of code.

@nagisa
Member

nagisa commented Aug 18, 2015

@nikomatsakis I’m more interested in use cases that would get fixed going from the current behaviour to any new one. That is, is there any case where current behaviour is not correct? If not, there’s moderately less incentive to do anything IMO.

@nikomatsakis
Contributor

@nagisa I don't think there is anything that is fundamentally "enabled" by any particular order; you could always reorder your fields under any rules. The appeal of changing is that the current rules were not particularly "designed" and are sort of a patchwork of conventions, making it hard to predict what will happen in any scenario without actually just trying it.

@glaebhoerl
Contributor

Doesn't this all interact with dropck somewhere, even if only on the theoretical-hypothetical plane?

@pnkfelix
Member

@glaebhoerl are you anticipating some world with safe interior pointers within a struct by having dropck know that the fields are dropped in some order?

That seems like such a far off extension to the language that it need not influence decisions here. (Plus my gut tells me that it's an orthogonal issue)

@glaebhoerl
Contributor

Yeah, probably.

@nikomatsakis
Contributor

@glaebhoerl @pnkfelix yes to everything you guys just said :)

@aidancully

If there isn't a use case for this (and I don't see why there would be, yet, besides "surprise"?), I'd think leaving the order unspecified is the most forward-compatible approach? It's conceivable that a scenario we come up with in the future would be incompatible with whatever order gets decided on today...

For what it's worth, I'm worried about assigning unrelated meanings to the field declaration order. In a systems-programming context, one might worry about locality-of-reference in structure definitions: at the least, it affects cache performance. By using #[repr(C)], you can explicitly control locality-of-reference in a structure, but locality-of-reference is an orthogonal concern to field destruction order... so I'd think we'd want to use a different mechanism to control each concern independently?

@aidancully

One other point, is that I suspect that in-memory-order for calling destructors may perform somewhat better, as cache controllers expect forward-memory-accesses... So

// Foo, Bar, Baz stand in for some non-trivial field types.
struct Foo([u8; 64]);
struct Bar([u8; 64]);
struct Baz([u8; 64]);

struct MyStruct {
  a: Foo,
  b: Bar,
  c: Baz,
}
fn profile_1(m: &MyStruct) {
  // touch the fields in declaration (i.e. memory) order
  let _ = &m.a; let _ = &m.b; let _ = &m.c;
}
fn profile_2(m: &MyStruct) {
  // touch the fields in reverse order
  let _ = &m.c; let _ = &m.b; let _ = &m.a;
}

profile_1 may perform better than profile_2. This could easily be tested by profiling Vec, and a changed Vec that drops in reverse order, though I haven't tried to do so yet.

@nagisa
Member

nagisa commented Aug 21, 2015

That suspicion is not valid. In the past, on some x86 implementations, a backwards memcpy used to perform even better than a forward one.

I can't exactly give more info at the moment though; on mobile.

@aidancully

@nagisa I think that has to do with the specific memcpy operation. See section 2.2.5.4 of Intel's optimization manual... Quoting:

Two hardware prefetchers load data to the L1 DCache:

  • Data cache unit (DCU) prefetcher. This prefetcher, also known as the streaming prefetcher, is
    triggered by an ascending access to very recently loaded data. The processor assumes that this
    access is part of a streaming algorithm and automatically fetches the next line.
  • Instruction pointer (IP)-based stride prefetcher. This prefetcher keeps track of individual load
    instructions. If a load instruction is detected to have a regular stride, then a prefetch is sent to the
    next address which is the sum of the current address and the stride. This prefetcher can prefetch
    forward or backward and can detect strides of up to 2K bytes.

It's highly likely that memcpy could use the stride prefetcher, and would work just as well forwards and backwards. OTOH, a heterogeneous structure may not use the stride prefetcher for backwards-oriented reads, but forward-looking reads should be able to use the streaming prefetcher. (This also means that my suggestion of testing performance with Vec wouldn't have worked to demonstrate the performance issue I mentioned.)

@aidancully

I spent more time than I wanted on this, but I wrote a quick-n-dirty C++ test program (I know it's ugly and unidiomatic, sorry for that, I don't have much spare time) that (I think) shows that reverse order may actually be the least efficient order in which to clean up fields. When I run it, compiled with -O3, I get the following:

...some results not shown...
size: 2048; time: 631000 ns => 308.105469 (Reverse)
size: 2048; time: 565000 ns => 275.878906 (Random)
size: 2048; time: 533000 ns => 260.253906 (Forward)
size: 4096; time: 1577000 ns => 385.009766 (Reverse)
size: 4096; time: 1293000 ns => 315.673828 (Random)
size: 4096; time: 1101000 ns => 268.798828 (Forward)
size: 8192; time: 2932000 ns => 357.910156 (Reverse)
size: 8192; time: 2698000 ns => 329.345703 (Random)
size: 8192; time: 2257000 ns => 275.512695 (Forward)
size: 16384; time: 6782000 ns => 413.940430 (Reverse)
size: 16384; time: 6177000 ns => 377.014160 (Random)
size: 16384; time: 3852000 ns => 235.107422 (Forward)
size: 32768; time: 13016000 ns => 397.216797 (Reverse)
size: 32768; time: 10411000 ns => 317.718506 (Random)
size: 32768; time: 10884000 ns => 332.153320 (Forward)
size: 65536; time: 25647000 ns => 391.342163 (Reverse)
size: 65536; time: 23366000 ns => 356.536865 (Random)
size: 65536; time: 17553000 ns => 267.837524 (Forward)
size: 131072; time: 53949000 ns => 411.598206 (Reverse)
size: 131072; time: 46924000 ns => 358.001709 (Random)
size: 131072; time: 35042000 ns => 267.349243 (Forward)
size: 262144; time: 108725000 ns => 414.752960 (Reverse)
size: 262144; time: 94490000 ns => 360.450745 (Random)
size: 262144; time: 71401000 ns => 272.373199 (Forward)
size: 524288; time: 216911000 ns => 413.724899 (Reverse)
size: 524288; time: 188521000 ns => 359.575272 (Random)
size: 524288; time: 142849000 ns => 272.462845 (Forward)

Basically, in the test I allocate an array of large structures, then read them in different orders: the "read" operation for an element always works the same way, walking that element's memory from front to back, but I select which element to read next using different strategies: sequential order, reverse order, and pseudo-random order. (This is intended to be similar to how Vec would operate on Drop, if Vec implemented different element-visiting strategies.) The last number in each row is the time spent per array element on the read operation. My results seem to indicate that dropping back-to-front is actually the worst case in terms of performance: iterating in reverse order is about 1.5x as slow as iterating in forward order. (There are some results I can't immediately explain, but I've spent too long on this anyway; the overall pattern seems reasonably clear and in line with my expectations, though I'm not 100% sure of the test program.)
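
The C++ test program itself isn't included in the thread; below is a rough Rust re-creation of the experiment as described (an array of large elements, each read front to back, with the element visiting order chosen per strategy). The element size, element count, LCG shuffle, and timing harness are assumptions of this sketch, not the original benchmark.

use std::time::Instant;

// ~2 KiB per element, roughly matching the per-element sizes quoted above (assumption).
const ELEM_WORDS: usize = 256;

struct Big([u64; ELEM_WORDS]);

// Read an element front to back, the way a field-by-field drop would walk it.
fn read_elem(e: &Big) -> u64 {
    e.0.iter().copied().fold(0, u64::wrapping_add)
}

fn run(label: &str, data: &[Big], order: &[usize]) {
    let start = Instant::now();
    let mut acc = 0u64;
    for &i in order {
        acc = acc.wrapping_add(read_elem(&data[i]));
    }
    println!("{:>8}: {} ns (checksum {})", label, start.elapsed().as_nanos(), acc);
}

fn main() {
    let n = 65_536;
    let data: Vec<Big> = (0..n).map(|i| Big([i as u64; ELEM_WORDS])).collect();

    let forward: Vec<usize> = (0..n).collect();
    let reverse: Vec<usize> = (0..n).rev().collect();

    // Deterministic pseudo-random visiting order (Fisher-Yates with a simple LCG).
    let mut random: Vec<usize> = (0..n).collect();
    let mut state: u64 = 0x2545_F491_4F6C_DD1D;
    for i in (1..n).rev() {
        state = state.wrapping_mul(6364136223846793005).wrapping_add(1);
        let j = (state >> 33) as usize % (i + 1);
        random.swap(i, j);
    }

    run("Forward", &data, &forward);
    run("Reverse", &data, &reverse);
    run("Random", &data, &random);
}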

@nagisa
Member

nagisa commented Aug 22, 2015

One very important thing to note here: the language already gives you almost full control over the drop order. The only two cases you can't quite control are primitive types (most relevantly, arrays) and the stack (already dropped in reverse).

This might be both a pro and a con for this issue. Namely “anybody who wants a different behaviour than the default can pretty trivially implement it most of the time” and “why bother with it, if anybody who cares can make it (most of the time) be in reverse anyway” respectively.

Finally, the drop order can never be a guarantee; only a guideline – anybody can drop things in whatever order they want anyway.


EDIT: rereading my post, I think I might have implied that it is impossible to drop the elements of an array in reverse order. That is not true; you can do that with a wrapper type.
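
For the record, a minimal sketch of such a wrapper type (the names are illustrative, not from the thread):

struct ReverseDrop<T>(Vec<T>);

impl<T> Drop for ReverseDrop<T> {
    fn drop(&mut self) {
        // Pop from the back, so elements are dropped last-to-first
        // instead of Vec's usual front-to-back order.
        while let Some(elem) = self.0.pop() {
            drop(elem);
        }
    }
}

struct Noisy(u32);
impl Drop for Noisy {
    fn drop(&mut self) { println!("dropping {}", self.0); }
}

fn main() {
    // Prints "dropping 2", "dropping 1", "dropping 0".
    let _v = ReverseDrop(vec![Noisy(0), Noisy(1), Noisy(2)]);
}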

@aidancully

It probably won't be a surprise, but I'm 👎 on this issue: the drop-order should be left unspecified. There are a few reasons I say so:

  1. Leaving it unspecified allows the compiler to optimize the strategy to the target platform. While I suspect all platforms of interest will either behave identically with forward-and-reverse drop strategies (if they don't have a good prefetcher), or will be more efficient in the forward direction, cache prefetcher behavior is clearly platform specific, and it's conceivable that different drop orders might perform better on different platforms. Optimizing these memory accesses is only possible if the order is unspecified. (If it must be specified, forward order will ~always be as fast or faster.)
  2. Leaving it unspecified now is forward compatible with specifying the strategy in the future, if it's determined we need to do so.
  3. There's a general design principle that we should try to represent user intent as clearly as possible. Users usually won't care what order we drop fields in, and if they do care, they will probably want to make their intention explicit. This is most compatible with the language not specifying drop order, but allowing mechanisms for users to define an order for cases in which they care; one such explicit mechanism is sketched below. (That is, I find @nagisa's "con" argument significantly more compelling than the "pro" argument.)

But those are just my opinions.
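
As one concrete sketch of the explicit mechanism mentioned in point 3, a type can pin down its own field drop order with ManuallyDrop (which postdates this comment; the field types here are illustrative):

use std::mem::ManuallyDrop;

struct Connection;
struct Logger;

struct Service {
    conn: ManuallyDrop<Connection>,
    log: ManuallyDrop<Logger>,
}

impl Drop for Service {
    fn drop(&mut self) {
        // Drop `log` before `conn`, regardless of declaration or memory order.
        // Safety: each field is dropped exactly once, here, and never used again.
        unsafe {
            ManuallyDrop::drop(&mut self.log);
            ManuallyDrop::drop(&mut self.conn);
        }
    }
}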

@petrochenkov
Contributor

@aidancully

One other point, is that I suspect that in-memory-order for calling destructors may perform somewhat better, as cache controllers expect forward-memory-accesses...

After layout optimization (field reordering), the declaration order may be different from the order in memory; this may be one more argument in favor of leaving the destruction order unspecified, at least for structs and tuples.

Otherwise,

I have the same stack intuition as @nikomatsakis, fwiw.

@nikomatsakis
Contributor

@petrochenkov

After layout optimization (field reordering), the declaration order may be different from the order in memory; this may be one more argument in favor of leaving the destruction order unspecified, at least for structs and tuples.

I don't think it really matters how things are laid out in memory. The whole stack thing is really just a metaphor.

@aidancully

@nikomatsakis I think you misunderstood? The memory access pattern is important to performance. I think I've shown that the CPU will perform better when memory accesses on contiguous data structures always go in the forward direction. If we want drop to perform as efficiently as possible, then it should also go in memory-forward order. If we have field re-ordering and we don't specify drop order, then the compiler is free to drop fields in memory order, as opposed to declaration order or reverse declaration order. This will be faster: in my earlier results, it looks like about 140ns is spent blocked on a cache fill (which is hit every time in my analogue of memory-reverse drop order), for an operation that usually takes 270ns to read almost 2KB of data (when using the analogue of memory-forward drop order). That's about 1.5x slower. And since I still don't see what problem is solved for users by defining the drop order, I struggle to understand the motivation.

Edit: Actually, let me put it differently. The argument from performance, in current Rust, suggests that we should drop in forward declaration order, since that corresponds to memory order. As @petrochenkov points out, in future Rust with field re-ordering, the argument from performance still suggests that we drop in memory order; but since the mapping from memory order to declaration order isn't controlled by users, that implies the optimally-performing drop order also won't be known to users. We get the best performance by declaring that the drop order is unspecified.

@glaebhoerl
Contributor

Given that field order is also undefined for structs, having the drop order be undefined as well does make sense. Would it be that unexpected for real-world programs to grow a behavioral dependence on the drop order used for arrays, though?

@nikomatsakis
Contributor

Nominating for lang team discussion.

@strega-nil

strega-nil commented Jun 13, 2016

@nikomatsakis Can we please just change this and specify? It's really weird and unexpected that

struct Dropper(u32);
impl Drop for Dropper {
  fn drop(&mut self) { println!("{}", self.0); }
}

fn main() {
  {
    let x0 = Dropper(0);
    let x1 = Dropper(1);
    let x2 = Dropper(2);
  }
  // 2
  // 1
  // 0
  {
    let x = (
      Dropper(0),
      Dropper(1),
      Dropper(2)
    );
  }
  // 0
  // 1
  // 2
}

will result in opposite drop orders; and unsafe code needs, IMO, to be able to rely on drop ordering to be memory-safe.

@strega-nil

I've seen too many people asking about drop order to think it's okay to not specify it anymore.

@strega-nil

strega-nil commented Jun 13, 2016

Just an example:

struct ReadsOnDrop<'a>(&'a u32);
impl<'a> Drop for ReadsOnDrop<'a> {
  fn drop(&mut self) { println!("{}", self.0); }
}

// for this to be safe, you must order it like this
fn test0() {
  let x = Box::new(0);
  let y = ReadsOnDrop(&*x);
}

// for *this* to be safe, you must order it like this
struct InternalRef {
  y: ReadsOnDrop<'static>, // always valid as long as InternalRef is alive
  x: Box<u32>, // is a Box so that the address stays the same
}

fn test1() {
  let x = Box::new(1);
  let y = &*x as *const u32;
  let ref_ = InternalRef { x: x, y: ReadsOnDrop(unsafe { &*y }) };
}

// Now it's the same ordering as test0, what a normal person might expect to do
struct InternalRefUnsafe {
  x: Box<u32>,
  y: ReadsOnDrop<'static>,
}

fn test2() {
  let x = Box::new(2);
  let y = &*x as *const u32;
  let ref_ = InternalRefUnsafe { x: x, y: ReadsOnDrop(unsafe { &*y }) };
}

fn main() {
  test0();
  test1();
  test2();
}
// 0
// 1
// segfault! (use-after-free: the Box is dropped before the reference that reads it)

This is weird and confusing.

nikomatsakis added the T-lang label on Jun 14, 2016
@nikomatsakis
Contributor

I agree we should specify a drop order. I don't know that we should change the current one, though I would. Also, it seems that this nomination tag from April got overlooked due to the lack of a T-lang tag. :(

I'm not sure what the best way is to test whether we could change this. We could certainly attempt a "crater-like" run that also runs tests, but I don't have a lot of confidence in that.

@strega-nil

@nikomatsakis We don't specify it now, therefore we can break it. We should run it through nightlies, but any code that relies on it now is already broken. Specifying it the "wrong" way just because of an accident of the implementation would hurt Rust forever with confusing semantics, whereas changing it hurts only in the short term, by breaking code that was relying on something we specifically left unspecified so that we could change it.

@strega-nil

Specifically, we should advertise this change hard, but it shouldn't harm the long-term goodness of the language.

@sfackler
Member

There was a window to make this change, and it passed a year ago. It seems astonishingly irresponsible to change drop order a year into Rust being a stable language that people should use for real things. Is there something I'm missing or is this purely an aesthetic concern?

Real code in the wild does rely on the current drop order, including rust-openssl, and there is no upgrade path if we reverse it. Old versions of the libraries will be subtly broken when compiled with new rustc, and new versions of the libraries will be broken when compiled with old rustc.

Unless I am misremembering, our stability guarantee was not "we will make arbitrary changes as long as we post about them on reddit and users.rust-lang.org first".

@strega-nil

@sfackler We have made no promises about drop order. Specifically, we have promised that there is no guarantee about drop order. That is different from stability; that is a promise of instability. If someone were relying on box patterns, it would not be a problem to break their code, because we have said that box patterns are unstable. It is the same situation here. We can fix the code if need be; we can make the transition as easy as possible. I don't agree that we have any promise of stability in this area.

@arielb1
Contributor

arielb1 commented Jun 14, 2016

Quoting RFC #1122:

Order of dtor execution within a data structure is somewhat inconsistent (see #744).

@pnkfelix
Member

pnkfelix commented Jul 7, 2016

This issue is ready for some RFC.

One such potential RFC would be to specify and stabilize the drop order as currently implemented.

Another such RFC would state a different drop order, and just state that we plan to change it overnight at some point. (I believe this is an example of an approach that @sfackler stated would be irresponsible.)

Another one would be an RFC that specifies a forward path to change the drop order, in a slow, controlled way.

  • For example (sort of off the top of my head), we could add a struct attribute that specifies to use the current drop order for the fields of that struct, and then also add a lint that warns for any struct without that attribute that has more than one field with associated destruction effects.

In any case, this issue has been discussed for some time, so it seems like a good time for us to finally take action.

@nrc nrc removed the I-nominated label Jul 7, 2016
@nrc
Member

nrc commented Jul 7, 2016

Further to @pnkfelix's comment (which, to clarify, was because we discussed this issue at the lang team meeting this week), the team felt that this is an important issue and one where we would like to make progress sooner rather than later. We felt that an RFC is the best way to make further progress here.

The team felt that simply changing the drop order without a migration plan, or with advertising alone, was not an option due to the silent and subtle nature of the failures.

That leaves either:

  • an RFC to stabilise the current behaviour,
  • an RFC to change the current behaviour by a well-defined process.

At this stage the lang team does not have a consensus over which direction to move in. In particular, we feel that making a call on the trade-off in any approach would require specifics about the details of that approach. (Personally, I would prefer change, if an acceptable plan can be formulated).

If anyone involved in this thread would like to write either RFC, we'd be very happy! If you would like to write one and need any help or advice please ping someone from the lang team and we'd be happy to help.

@carllerche
Member

I have already written code that relies on the current drop order for correctness. It would be nice if it stays as is :)

@aochagavia
Contributor

aochagavia commented Dec 9, 2016

I think a possibility would be to:

  • Add something like #[drop_order(legacy)] struct Foo { ... }, in order to opt-in for the current unstable behavior.
  • Add something like #[drop_order(stack)] struct Foo { ... }, in order to opt-in for the new behavior (which still needs to be specified, so we may end up choosing a different name than stack).
  • Add a warning lint for drop implementations that appear in modules where unsafe is used, in case they don't opt-in to any drop ordering. It seems reasonable to assume that modules are the limit of the scope of unsafe code, as explained here.
  • Post a PSA indicating that the drop order is going to change in the future and that you are required to update your code in case you rely on the current (unspecified) one.

IMO the combination of a lint and a PSA is a great way to raise awareness for all people affected by the change.

Note that #[drop_order(legacy)] and #[drop_order(stack)] seem non-trivial to implement, since structs are not the only place where the drop order matters (e.g. what happens if you create a tuple in a function and rely on its destructor order at the end of the scope? You don't actually store it in a struct, but you still want to choose the drop order). Alternatives include specifying the drop order at a higher level (module or crate), or allowing it at the function level (though the semantics could be confusing in this case).
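
To make that parenthetical concern concrete, a small sketch (the guard type and names are illustrative; there is no struct here for a per-struct #[drop_order(..)] attribute to attach to):

struct Guard(&'static str);
impl Drop for Guard {
    fn drop(&mut self) { println!("releasing {}", self.0); }
}

fn main() {
    // A purely local tuple: it is never stored in a user-defined struct, so a
    // per-struct drop-order attribute would have nothing to attach to, yet the
    // order still matters if the guards depend on one another.
    let _guards = (Guard("outer lock"), Guard("inner lock"));
    // Dropped as one value at the end of the scope: "outer lock", then "inner lock"
    // (under the element order later specified by RFC 1857).
}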

@burdges

burdges commented Dec 9, 2016

Just a question: is the order specified as reverse in C++ to help reduce use-after-free? If so, would it matter less for Rust code which way the drop order gets standardized? I'd kinda think code that might care would need Arcs everywhere anyway. Interacting with C++ code might become an issue, of course.

@Amanieu
Member

Amanieu commented Dec 9, 2016

@burdges In C++, fields in a struct are always initialized in the order they are declared in the struct definition. Field destruction is therefore done in the reverse order since destructors should be called in the reverse of the order that objects were constructed in.

This doesn't apply to us since in Rust the programmer has direct control over the order that fields are initialized in (left-to-right expression evaluation order).
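
A minimal sketch of that difference (the names are illustrative; field initializers in a Rust struct expression run in the order written at each construction site):

struct Pair { a: u32, b: u32 }

fn init(name: &str, v: u32) -> u32 {
    println!("initializing {}", name);
    v
}

fn main() {
    // Field initializers run in the textual order written at the construction
    // site, so the same struct can be initialized a-then-b here...
    let _p = Pair { a: init("a", 0), b: init("b", 1) };
    // ...and b-then-a here. In C++, member initialization always follows the
    // member declaration order, which is why its destruction order can simply
    // be the reverse of that.
    let _q = Pair { b: init("b", 1), a: init("a", 0) };
}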

@aochagavia
Contributor

aochagavia commented Dec 12, 2016

@nrc I have summarized my thoughts on the subject in a blog post. Please let me know if you would like me to write an RFC based on it.

@mpartel

mpartel commented Dec 12, 2016

Perhaps this is a naive and impractical suggestion, but a possible refinement of the crate attribute idea in @aochagavia's blog post might be the following: any new crate must declare (in Cargo.toml) a language version. The drop behaviour for a struct would then depend on the language version of the crate that defines the struct.

Advantages:

  • New crates would default to the new behaviour. Older crates would start getting compiler warnings to set a language version.
  • We would get the new drop order, plus a mechanism for introducing other breaking changes in the future.

Disadvantages:

  • A codebase can now have mixed drop order: new language version crates can depend on old language version crates and vice versa.
  • Language semantics depending on a config setting adds complexity and can get out of hand if we're not careful.

@ahmedcharles

@Amanieu In C++, you don't get to control the expression evaluation order in the general case. In Rust, you do get to control the expression evaluation order; but since Rust doesn't have constructors, it has no analogue of C++'s initialization order, because C++ only defines the order in which member constructors run, not the order in which the expressions passed to those constructors are evaluated.

@jan-hudec

@ahmedcharles, in C++ you don't control the evaluation order of arguments within a single argument list, but between members you do: the arguments to the next member or base constructor are not evaluated before the previous constructor completes.

The Rust struct-construction expression is the analogue of the C++ initialiser list. But due to the difference in initialisation order, Rust can't derive a single static drop order for the type the way C++ does.

@aochagavia
Contributor

This issue can be closed now, as the drop order has been specified by RFC 1857
