Source map generation causes build to go out of memory. #9568
If this is, in fact, a memory management problem as a result of
If someone wants to investigate upgrading to
If no one else has already started on this, I'll put my ✋ up.
The great news is that this code will only need to run in Node 8, which natively supports WASM:

```
% meteor node -p WebAssembly.compile
[Function: compile]
```
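The same check works in plain Node without the `meteor` wrapper, since Node 8+ exposes `WebAssembly` as a global:

```javascript
// Sanity check that this Node runtime supports WebAssembly natively.
// On Node 8+ this prints "function".
console.log(typeof WebAssembly.compile);
```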
Same issue here: 1.6.0 builds in under 10 minutes, 1.6.1 takes over an hour or renders an out-of-memory error.
If we spent some time instrumenting
Also, just to be clear, build times of more than a few minutes (even ~10) are unacceptable, and should always be regarded as a bug. These apps aren't written in C++ or Scala or some other language known for long compile times. Something's definitely wrong here.
I also just got a similar error on a rebuild of
@jamesmillerburgess Do you see any more clues in your stack trace about where the string/buffer slicing might have been happening?
@benjamn Not really, but maybe the error could have something to do with me deleting a project file during a rebuild? I don't remember if this was exactly the build where I deleted the file, but that was one of the last things I did before seeing the error.
With regards to updating
The Meteor Tool makes a few (see lines 1091 to 1093 in 1fec23f).
Adapting this code to work with Promises is likely not going to be straightforward, however, since Fiber yields are blocked by
Unless I'm missing something? Suggestions are welcome! 🤔
Yes, didn't check this first since our Jenkins was building, but those got the
Yes
Yes, and then it finishes in about the same time as 1.6.0 (still about 10 minutes). So the slowdown is probably from the memory getting used up. Might it be that the new

Anyway, there's something in 1.6.1 that results in higher memory usage than 1.6.0. Unfortunately we're working towards a new release of our application and I've got little time to spend on these kinds of problems right now, so I'm just reporting my findings.

One thing (unrelated to the above) I just noticed is that almost all time in the
Interesting! I'll try upping the memory limit on our builds as well. Actual build & deploy times for our app used to be around 6 minutes; the whole CircleCI run would take about 10. It might also be worth noting that we experienced a similar issue when using standard-minifier-js 2.2.3 instead of 2.2.1 in Meteor 1.6.0.
@hwillson Since Meteor's
@abernix I had tried, but unfortunately the
Just to add @abernix - removing the

P.S.

> Working through the
Hmm. Commit e7167e5 leaves a lot to be understood (grr, squash!) about the reasoning for the
(meteor/tools/tool-env/isopackets.js, lines 296 to 298 in 070acc0)

I think that would allow the `await()` you're needing to add in this case (meteor/tools/isobuild/bundler.js, lines 1972 to 1977 in 070acc0). It may need to be more surgical, but I'd be curious how the test suite behaves with that change. 😸
Thanks @abernix - that's exactly where I'm headed, and things are looking promising. I'm prepping a PR, so we'll be able to kick the tires shortly.
Promising, hehe. Great to see there's progress here; would love to try it out. Currently abroad, but planning on testing the latest changes to Cordova and the build process next week.
My case:
This is a weird error. On a new machine (MBP 15" 2017), when trying to make a build (deploy to Galaxy, --production flag, ...), it runs out of memory:
However, the same exact project builds on older machines without issues 🤷‍♂️ :
Any ideas? Logic tells me it must be something with the new machine, but I have no idea... The only change we made to the older machines was related to this issue, which seems to be solved in newer versions of Meteor: #6952
Upping max-old-space-size to 3GB in CircleCI resolves the issue; the build now completes in +/- 12 minutes on a free instance. To test this yourself on CircleCI (v2), add `TOOL_NODE_FLAGS: --max-old-space-size=3072` to the environment within your image.
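For a shell-based CI step, the same workaround can be sketched like this (the 3072 MB value comes from the comment above; tune it to your build machine):

```shell
# TOOL_NODE_FLAGS is read by the `meteor` command, so a later
# `meteor build` in the same step inherits the larger Node heap limit.
export TOOL_NODE_FLAGS="--max-old-space-size=3072"
echo "$TOOL_NODE_FLAGS"
```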
I had this happen to me,
@benjamn I've read the details about WebAssembly, and if my interpretation is correct this could soon open up the possibility of (near) native performance of JavaScript applications. Just checking here, before I create a FR: Meteor could, at some point, update its build process to compile the bundle into WebAssembly and serve that alongside the current bundle, allowing browsers which support WebAssembly to run Meteor at (near) native speed. I'm probably missing some steps still, but aside from that this would be possible, or not?
@KoenLav Compiling JavaScript to WebAssembly isn't something people are likely to do anytime soon, because WebAssembly has no built-in garbage collection, and JS is a language that relies on GC. Languages like Rust and C++ that don't rely on GC are currently the most attractive options for compiling to WebAssembly. Also, JavaScript works as-is in the browser, so there's no need to compile it to something else (besides the usual compilation of non-native syntax, a la Babel).
The issue is still a problem.
@macrozone Can you test with 1.9-rc.2, which includes #10798?
This commit implements an --exclude-archs arg to the run command, which allows you to pass a comma-separated list of architectures to exclude.

Motivation: improve rebuild speeds by allowing you to leave out certain web architectures during development. Meteor rebuild speeds, although greatly improved in recent versions, are still a common point of frustration, as seen in the issues (meteor#10800) and on the forum (https://forums.meteor.com/t/2x-to-3x-longer-build-time-also-on-testing-since-1-8-2-update/51028/3).

One reason for slow rebuilds is high memory pressure (meteor#9568). The web.browser.legacy architecture is currently always being built, while most of us will only test it every once in a while. So while we don't need it most of the time during development, it does add to the high memory usage. Furthermore, even though these builds are delayed, long-running synchronous tasks have the potential to block the server startup (meteor#10261). This last issue can be partially, but not fully, improved by meteor#10427. A commonly requested feature has therefore been to disable the legacy build (meteor/meteor-feature-requests#333).

However, we may also want to temporarily disable Cordova builds, and we may even want to disable just the web.browser build if we're debugging a bug in the legacy build. Therefore I chose to create the more generic --exclude-archs option.
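Going by the commit message, usage would look something like this (the architecture names are taken from the text above; treat this as a sketch rather than the final merged syntax):

```shell
# Skip the legacy browser and Cordova architectures while developing,
# reducing build memory pressure and rebuild time.
run_cmd="meteor run --exclude-archs web.browser.legacy,web.cordova"
echo "$run_cmd"
```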
I still see a problem here. Tested with v1.10.2. It seems switching off the GC in source map generation (WASM, #10798 (comment)) causes a spike at the end of the build that goes over the 2048M value you'd expect with this setting. Apparent options for people stumbling upon this (correct me if I'm wrong):
@opyh This issue stems from a time when we were using the non-WASM version. The slow, non-WASM version was using at least as much memory. The WASM comes from Rust code, which does its own memory management; GC isn't likely to help here. So that leaves options 1 and 3.
Maybe we could use: https://github.com/parcel-bundler/source-map
In the meantime, how about displaying a helpful warning before source map generation? It's a frustrating kind of error, as it happens at the very end of the build process, and it is very likely to happen on typical CI setups. You get no sensible error message (or none at all, depending on your Docker build tool), and when you stumble upon it, it's unclear where to start searching. Theoretically, the build process could even foresee the issue and abort the build on start, before you wait 10 minutes for an error message to appear. Suggestion:

```js
console.log(`
Your build machine has ${availableMemoryInMegabytes}M memory available for building.
If this process should abort with an out-of-memory (OOM) or “non-zero exit code 137”
error message, please increase the machine’s available memory.
See https://github.com/meteor/meteor/issues/9568 for details.
`);
```
Is there a way to limit used memory in the Rust code? I think it's generally a good idea to have fewer non-WASM native dependencies, to be more future-proof. Maybe related:
Meteor’s source map generation uses no GC, which can produce out-of-memory errors in CI environments with <4G RAM. If this happens, the reason for killed containers can be difficult to track down. Some environments display an ‘OOMError’, some show a cryptic error message (‘non-zero exit code 137’), some show no error at all. This adds a warning so you have a clue from the logs when the container is killed. Related Meteor issue: meteor/meteor#9568
Another idea: Are fibers absolutely necessary for this build step? Couldn't the build process generate them in separate child processes that don't use fibers, for example? |
#11143 looks like this might be a bigger issue in the near future - if more node modules begin to use WASM, you won't be able to use some of them without OOM crashes in Meteor. |
I'm also getting this issue while building for production on Meteor 2.1.
I'm closing this issue. If this problem shows up again, I will be really happy to reopen it :)
I'm trying to verify if #9552 is fixed in 1.6.1, but it takes forever and then goes out of memory.

Meteor 1.6.0 starts/builds my `test-packages` command in about 7 minutes. Meteor 1.6.1 hangs for about 25 minutes on the `Build` step, and finally breaks down. Running the application normally (`meteor run`) seems to work.

I did have to modify and include some packages to get 1.6.1 to work due to the babel changes (which turned out to be a very painful upgrade for a patch release IMO). Other than that, the code is the same.