Compilation of a crate using a large static map fails on latest i686-pc-windows-gnu Beta #36799
cc @brson just want to make sure you're aware of this.
Related to #36926 perhaps? ("1.12.0: High memory usage when linking in release mode with debug info")
@urschrei have you tried this on platforms other than Windows, out of curiosity?
I see. It seems to work on […]
I haven't tried to build i686 on Linux or OSX, but I easily could…
Well, I just did a run on my Linux box. The memory usage is certainly through the roof: https://gist.github.com/nikomatsakis/ea771dd69f12ebc5d3d5848fa59fb43a This is using nightly ([…]).
So @alexcrichton has executed a massif run: https://gist.github.com/alexcrichton/d20d685dd7475b1801a2ccac6ba15b08
The peak result (measurement 48) looks pretty similar. More memory used by MIR: […]
These numbers are just based on a quick scan of the gist.
It seems clear we need to revisit our handling of statics and constants in a big way, but then that has been clear for a long time. =) I'm wondering if we can find some kind of "quick fix" here. I also haven't compared with historical records -- but we would have been allocating most of the memory we see above before too, so I'm guessing this is a case of being pushed just over the threshold on i686 rather than a sudden spike.
Sizes of some types from MIR (gathered from play): […]
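For context, per-type numbers like these can be gathered on the playground with std::mem::size_of. A minimal sketch of the technique (the types below are ordinary stand-ins, not rustc's internal MIR types):

```rust
use std::mem::size_of;

fn main() {
    // size_of reports the in-memory size of a type in bytes.
    println!("Option<u32>: {} bytes", size_of::<Option<u32>>());
    println!("Vec<u8>:     {} bytes", size_of::<Vec<u8>>());
    println!("String:      {} bytes", size_of::<String>());
    println!("(u32, u64):  {} bytes", size_of::<(u32, u64)>());
}
```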
I am feeling torn here. It seems the best we can do short-term is to make some small changes to MIR/HIR and try to bring the peak down below 4GB. The long-term fix would be to revisit the overall representation, particularly around constants, and see what we can do to bring the size down. One thing I was wondering about (which is probably answerable from @alexcrichton's measurements) is what percentage of memory is being used just in the side tables vs. the HIR itself. In any case, 152 bytes for a MIR Rvalue is really quite big.
Looking again at the massif results, it looks like MIR is taking more memory than I initially thought. One place that seems to use quite a bit is the list of scopes.
Better numbers: […]
Just pinging @nikomatsakis to keep this P-high bug on his radar.
I've made basically no progress here, I'm afraid. I think the most likely path forward in the short term is to try to reduce memory usage in various data structures. Not very satisfying, though, and I'm not sure we can make up the gap that way.
Discussed in the compiler meeting. Conclusion: miri would be the proper fix, but maybe we can shrink MIR a bit in the short term, perhaps enough to get this case back under the limit. I personally probably don't have time for this just now (I have some other regressions to examine), hence re-assigning to @pnkfelix -- @arielb1, maybe you're also interested in doing something?
cc @nnethercote in case he might have input on ways to attack this.
Massif is slow and the output isn't the easiest to read, but it's the ideal tool for tackling this. The peak snapshot in @alexcrichton's data is number 48, and I've made a nicer, cut-down version of it here: https://gist.github.com/nnethercote/935db34ff2da854df8a69fa28c978497 You can see that ~30% of the memory usage comes from this arm of HIR lowering:

```rust
ExprKind::Tup(ref elts) => {
    // Every tuple expression allocates a fresh Vec for its lowered
    // sub-expressions.
    hir::ExprTup(elts.iter().map(|x| self.lower_expr(x)).collect())
}
```
Having a 32-bit MIR "shared object representation" […]. We can then intern […].
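The general interning idea here can be sketched as follows: store each distinct object once and hand out a 32-bit index to it, so every further use costs 4 bytes instead of a pointer-sized reference or an inline copy. A minimal sketch under that assumption (none of these names are rustc's actual types):

```rust
use std::collections::HashMap;
use std::hash::Hash;

/// A 4-byte handle replacing what would otherwise be an 8-byte
/// pointer (or a larger inline value) on 64-bit targets.
#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
struct Idx(u32);

struct Interner<T: Eq + Hash + Clone> {
    items: Vec<T>,
    lookup: HashMap<T, Idx>,
}

impl<T: Eq + Hash + Clone> Interner<T> {
    fn new() -> Self {
        Interner { items: Vec::new(), lookup: HashMap::new() }
    }

    /// Returns the existing index if the value was seen before,
    /// so duplicated objects are stored only once.
    fn intern(&mut self, value: T) -> Idx {
        if let Some(&idx) = self.lookup.get(&value) {
            return idx;
        }
        let idx = Idx(self.items.len() as u32);
        self.items.push(value.clone());
        self.lookup.insert(value, idx);
        idx
    }

    fn get(&self, idx: Idx) -> &T {
        &self.items[idx.0 as usize]
    }
}

fn main() {
    let mut strings = Interner::new();
    let a = strings.intern("x + y".to_string());
    let b = strings.intern("x + y".to_string());
    assert_eq!(a, b); // the duplicate costs 4 bytes, not a second copy
    println!("{}", strings.get(a));
}
```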
@arielb1 Oh right, with one CGU (codegen unit) memory usage should be the same unless there's some kind of bug.
encode region::Scope using fewer bytes

Now that region::Scope is no longer interned, its size is more important. This PR encodes region::Scope in 8 bytes instead of 12, which should speed up region inference somewhat (perf testing needed) and should improve the margins on #36799 by 64MB (that's not a lot; I did this PR mostly to speed up region inference). This is a perf-sensitive PR. Please don't roll me up. r? @eddyb

This is based on #44743 so I could get more accurate measurements on #36799.
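To illustrate the kind of packing such a change involves, a hedged sketch (the variant and field names are illustrative, not the PR's actual code): an enum whose largest variant carries two u32 fields needs a separate tag word and comes out at 12 bytes, while folding the tag into one of the u32s by hand brings it down to 8.

```rust
use std::mem::size_of;

// Naive layout: the compiler stores a tag word next to the payload,
// and the two-field variant pushes the total to 12 bytes.
#[allow(dead_code)]
enum ScopeBefore {
    Node(u32),
    CallSite(u32),
    Destruction(u32),
    Remainder(u32, u32),
}

// Hand-packed layout: the variant tag lives in the low bits of `data`
// and any extra payload in the high bits, for 8 bytes total.
#[allow(dead_code)]
struct ScopeAfter {
    id: u32,
    data: u32,
}

fn main() {
    println!("before: {} bytes", size_of::<ScopeBefore>()); // 12
    println!("after:  {} bytes", size_of::<ScopeAfter>()); // 8
}
```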
@petrochenkov's span reform PR cost us 400MB of space here, which makes getting this to compile that much harder. Maybe we should just use enough bits for the crate when encoding spans?
Or use 40-bit spans, which should be enough to handle 64MB crates?
@arielb1 I'm not sure I understand: the span PR makes this test case use 400MB more -- or less?
It makes the test case use 400MB more, because the span just overflows the 24-bit length we have.
The current peak is during LLVM translation, where we are using memory from both MIR and LLVM. I think that after miri lands, we could easily store constants as byte arrays, which should bring this well into the green zone.
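As a rough sketch of the "constants as byte arrays" idea (the names here are invented, not miri's actual representation): once a constant is fully evaluated, it can be kept as flat bytes, the way it will eventually sit in the object file, instead of as a tree of heap-allocated expression nodes.

```rust
// Hypothetical flat representation of an evaluated constant.
struct ConstBytes {
    bytes: Vec<u8>, // little-endian encoding of the value
    align: usize,
}

// Encode a table of (u32, u32) pairs as one contiguous byte array,
// with no per-entry node overhead.
fn encode_pairs(pairs: &[(u32, u32)]) -> ConstBytes {
    let mut bytes = Vec::with_capacity(pairs.len() * 8);
    for &(a, b) in pairs {
        bytes.extend_from_slice(&a.to_le_bytes());
        bytes.extend_from_slice(&b.to_le_bytes());
    }
    ConstBytes { bytes, align: 4 }
}

fn main() {
    let table = encode_pairs(&[(1, 2), (3, 4)]);
    assert_eq!(table.bytes.len(), 16); // 2 entries * 8 bytes each
    assert_eq!(table.align, 4);
}
```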
triage: P-medium. It seems like we are going to wait until we can fix this properly.
miri has landed. I'm not entirely sure how to reproduce this though.
Is it on the latest nightly? If so, I'll kick off an AppVeyor build in a couple of hours.
Yup. It's even in beta (although somewhat broken).
We have new nightlies!
i686 is failing on ac3c228 (2018-04-18) with exit code 3221225501: https://ci.appveyor.com/project/urschrei/lonlat-bng/build/job/qaipuqj88243xs4c#L115 (which is a memory-exhaustion error, I think?)
Looks like it. Because the x86_64 […]
@oli-obk Oh, I had no idea – can you point me at some details?
You need to install the cross toolchain via rustup and invoke cargo with the target flag for that cross target; see the commands below.
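Concretely, something like this (assuming an x86_64 host and the i686 target from this issue):

```
rustup target add i686-pc-windows-gnu
cargo test --target i686-pc-windows-gnu
```

For the -gnu target you also need a matching MinGW toolchain installed; on non-Windows hosts the analogous 32-bit target would be e.g. i686-unknown-linux-gnu.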
Not sure whether this is exactly the same issue, but compiling the test suite for v0.8.5 of the […] crate fails similarly. Future versions of that crate will work around the failure by changing the test. I don't know how to run the more detailed RAM diagnostics @nikomatsakis and @alexcrichton report above, but I offer this as another example of excessive memory allocation that fails on 32-bit platforms. I can also report this as a separate issue, if that would be useful.
I'm trying to build a cdylib (https://github.com/urschrei/lonlat_bng) which requires the https://github.com/urschrei/ostn15_phf crate. Building on AppVeyor, on i686-pc-windows-gnu, using the latest beta, is failing with an OOM error.

Details

The ostn15_phf crate is essentially just a very big static map, built using PHF (the generated, uncompiled map is around 42.9MB).

The build passed when running cargo test using rustc 1.12.0-beta.3 (341bfe43c 2016-09-16): https://ci.appveyor.com/project/urschrei/lonlat-bng/build/105/job/3y1llt6luqs3phs3

It's now failing when running cargo test using rustc 1.12.0-beta.6 (d3eb33ef8 2016-09-23): https://ci.appveyor.com/project/urschrei/lonlat-bng/build/job/27pgrkx2cnn2gw50

The failure occurs when compiling ostn15_phf, with: fatal runtime error: out of memory
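For reference, a PHF static map of the kind ostn15_phf contains looks roughly like this (a minimal sketch using the phf crate's phf_map! macro, with invented keys and values; the real generated file has vastly more entries, and its exact key/value types may differ):

```rust
use phf::phf_map; // requires the `phf` crate with its `macros` feature

// The compiler must evaluate the entire static at once; with ~42.9MB
// of generated entries, that is where the memory blow-up occurs.
static SHIFTS: phf::Map<&'static str, (i32, i32)> = phf_map! {
    "651307,313255" => (102, -78),
    "651307,313256" => (103, -78),
};

fn main() {
    if let Some(&(e, n)) = SHIFTS.get("651307,313255") {
        println!("shift: ({}, {})", e, n);
    }
}
```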