
Release 1.9 (Node.js 12) #10527

Merged
benjamn merged 137 commits into master on Jan 9, 2020
Conversation

@benjamn
Contributor

benjamn commented Apr 12, 2019

Since Node.js 12 is scheduled to become the LTS version on October 1st, 2019, Meteor 1.9 will update the Node.js version used by Meteor from 8.16.1 (in Meteor 1.8.2) to 12.10.0 (the most recent Current release).

If you would like to help test the latest prerelease version of Meteor 1.9, you can run the following command to create a fresh application:

meteor create --release 1.9-rc.1 new-19rc1-app

The first casualty of this upgrade is that 32-bit Linux can no longer be supported. In fact, 32-bit Linux support was dropped in Node 10, so this is not exactly breaking news: nodejs/build#885

@benjamn added the "in-development (We are already working on it)" label Apr 12, 2019
@benjamn added this to the Release 1.9.0 milestone Apr 12, 2019
@benjamn self-assigned this Apr 12, 2019
@benjamn changed the title from "Release 1.9" to "[WIP] Release 1.9" Apr 12, 2019
@benjamn changed the base branch from devel to master Apr 12, 2019 22:47
@benjamn changed the title from "[WIP] Release 1.9" to "[WIP] Release 1.9 (Node.js 11-going-on-12)" Apr 13, 2019
@KoenLav
Contributor

KoenLav commented Apr 13, 2019

Just updated one of our more complicated apps to Meteor 1.9; at first look everything seems to work just fine.

The only thing holding us back was fourseven:scss (which relies on an older version of node-sass, which doesn't support Node 11). Created this PR to update node-sass in meteor-scss: Meteor-Community-Packages/meteor-scss#290

For anyone wanting to try out Meteor 1.9 with fourseven:scss:
cd packages
git clone https://github.com/KoenLav/meteor-scss.git
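Note: Meteor prefers packages found in an app's local packages/ directory over published versions, so after cloning you can simply restart the app. To double-check which copy is in use (meteor list marks locally built packages with a +, if I remember correctly):

meteor list | grep scss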

Contributor

Shouldn't this be 1.9-alpha.0?

Contributor Author

@benjamn Apr 14, 2019

The modules test app runs from a checkout during Circle CI self-tests, so the .meteor/release file doesn't really matter (although we totally could update the version to METEOR@1.9-alpha.0). The reason it shows up in the changes for this PR is that I recently merged release-1.8.2 into release-1.9 (and both PRs are targeting master, which doesn't have the change).
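For context: .meteor/release is just a one-line file naming the release an app runs, and pinning it is a one-liner anyway, e.g.:

meteor update --release 1.9-alpha.0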

@SimonSimCity
Contributor

Node.js 12 will be out by the time Meteor 1.9 is finalized [...]

According to nodejs/node#25082 the initial version of Node.js 12 will be released today, but LTS will start sometime in October, so I guess you mean the latter.

@benjamn
Contributor Author

benjamn commented Apr 27, 2019

@SimonSimCity Right, I was mostly thinking about the LTS schedule. That said, we will update this branch to Node.js 12 as soon as this fibers/V8 issue has been addressed: laverdet/node-fibers#409

@sebakerckhof
Contributor

The only thing holding us back was fourseven:scss (which relies on an older version of node-sass, which doesn't support Node 11). Created this PR to update node-sass in meteor-scss: Meteor-Community-Packages/meteor-scss#290

Thanks. Just released a new version.

@KoenLav
Contributor

KoenLav commented May 1, 2019

@benjamn are there any specific areas of interest you would like to have tested?

This way people will know what to look out for.

@sakulstra
Contributor

sakulstra commented May 3, 2019

I'm not sure if this is worth mentioning, but we've been struggling with Meteor build times for quite some time (20min+). Out of curiosity I ran meteor update --release 1.9-alpha.0 to check whether Node 11/12 improves anything for us.

I'm not sure yet if anything is broken - it doesn't seem like it - but development cold start time was cut by at least 5x (before, I could go and make a coffee; now it was almost instant) - I didn't benchmark it yet though.

Our build time without the build cache (which I think is the important one when using Galaxy) dropped from 861s to 243s. Is there any explanation for this massive improvement?
I didn't change anything between the two runs except running meteor update --all-packages && meteor update --release 1.9-alpha.0 (which first bumped webapp one minor version before updating Meteor).

// before
| other Target#minifyJs..................................465,274 ms (3)
| files.createTarball app.tar.gz...................169,961 ms (1)

// after
| other Target#minifyJs..................................162,104 ms (3)
| files.createTarball app.tar.gz....................43,075 ms (1)
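These timings come from Meteor's build profiler; to reproduce them, setting the METEOR_PROFILE environment variable to a millisecond threshold should print a similar breakdown (the output directory here is just a placeholder):

METEOR_PROFILE=1 meteor build ../output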

Looking forward to these improvements in a stable release 🚀 💃

edit: for the people wondering how 861s relates to (20min+): when running the build via meteor deploy in our CI, it's apparently a few minutes slower than when running meteor build locally.

@benjamn
Contributor Author

benjamn commented Jan 2, 2020

@gunn It sounds like you've been testing with the alphas/betas and maybe there's a cached file lying around somewhere? I wasn't able to reproduce this starting from a 1.9-alpha.7 application, for what it's worth. I suspect meteor reset would solve the problem (note: resetting will delete your development Mongo database, in case you care about that).
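If the development database does matter, a crude backup is to copy the data directory before resetting - a sketch, assuming the default project layout:

cp -r .meteor/local/db ~/app-db-backup
meteor reset

and copy it back afterwards (with the app stopped).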

@paulincai
Contributor

@benjamn I confirm that CPU usage has been dramatically reduced with 1.9-rc.1. Well done!

The only question that remains is academic: does this actually fix the problem, or merely treat the symptoms of a design flaw in the Node.js 12 garbage collector?

@benjamn some observations from the monitoring tools. I use a no-traffic project with a pretty large code bundle. On "standby", I want to highlight how memory and processor usage moved from 1.8.x to 1.9-rc.0, 1.9-rc.1 and 1.9-rc.2:

1.8.x to 1.9-rc.0
Memory usage keeps growing. [screenshot]
Processor stable over many days. [screenshot]

1.9-rc.1
I think this is self-explanatory. There are CPU spikes for every memory drop. I don't know how big the spikes would get in a heavy-traffic project. Maybe we can get some graphs from others.
You can clearly see where I deployed 1.9-rc.1, as the memory usage pattern changes and CPU drops. [screenshots]

1.9-rc.2 behaves exactly like 1.9-rc.1.

@sebakerckhof
Contributor

@paulincai A couple of observations regarding those graphs:

  1. A no-traffic project isn't a very good measure in this case. The changes are related to triggering V8 global garbage collection when a fiber starts/exits, and Meteor handles each incoming request from each connection in a separate fiber. With no traffic, not a lot of fibers start/exit, so the changes between rc.0 and rc.2 shouldn't really make a difference.

  2. The time scale of your graphs (between the top and bottom) is very different, which can give a skewed view in Kadira.
    E.g. this is a graph over a week on one of my projects:
    [screenshot]

This is the same project over the last 8 hours of the above graph:
[screenshot]

So the spikes are just not visible when you view a larger time range.
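For anyone trying to reproduce this on an otherwise idle deployment: since the rc.1/rc.2 changes only kick in when fibers start and exit, you would need to generate steady traffic first - a crude sketch against a hypothetical local instance:

while true; do curl -s http://localhost:3000/ > /dev/null; sleep 0.1; done

then compare CPU/memory graphs over the same time window on each release.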

@rj-david
Contributor

rj-david commented Jan 4, 2020

@benjamn we are getting an issue with 1.9-rc.2. The build was not starting in production.

Upon starting the Meteor service, we get:

bundle/programs/server/node_modules/fibers/bin/linux-x64-72-glibc/fibers.node is missing.

Upon checking, the one available is

bundle/programs/server/node_modules/fibers/bin/linux-x64-57-glibc/fibers.node

72 and 57 refer to the NODE_MODULE_VERSION.
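For reference, you can print the ABI version of whichever node binary is on your PATH with:

node -p process.versions.modules

Node 8 reports 57 and Node 12 reports 72, so running this on the deploy machine quickly shows which Node performed the npm install.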

We did not get this problem with 1.9-rc.1

@benjamn
Contributor Author

benjamn commented Jan 5, 2020

@rj-david What's your production environment? Galaxy, or something else? That problem can happen if you use a different version of Node to run the npm install command in bundle/programs/server (which is one of the necessary steps for running a bundled Meteor application in production) than the version of Node you use to run the main.js script. You should be using Node 12 (or 12.14.0 specifically) in both cases.
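A minimal sketch of those steps, assuming the same Node 12.14.0 binary is active for both commands (the env values are placeholders):

cd bundle/programs/server && npm install
cd ../../..
MONGO_URL=mongodb://... ROOT_URL=https://... node bundle/main.js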

@rj-david
Contributor

rj-david commented Jan 5, 2020

And that was the case. We were running npm install before updating the Node version in our deploy script. Thanks.

P.S. 1.9-rc.2 was running without issues

@paulincai
Contributor

@sebakerckhof I do appreciate your explanation about the fibers - I have followed your contributions in this area - but I am not sure I understand the rest. Let me please explain it again.
In the first image I need a long time range to show how the memory builds up slowly. If you would like to see them side by side, you could compare the first image with the third and observe a change of pattern. Then, to 'zoom in' and see what is in the new pattern, you have the 4th image with some beautiful waves.
These are "observations" and are not subject to good or bad measurement. It's more like: "Look, a change of pattern. Nice."

@sebakerckhof
Contributor

@paulincai What I meant to say is that you shouldn't see much of a difference between 1.9-rc.0 and rc.1/rc.2 on a deployment without traffic (although a difference from 1.8.x is plausible, since that's a much older Node version). I also have a test deployment with no traffic and don't see the same things as you between rc.0 and rc.2. So I'm trying to find an explanation for why you do see a difference.

I understood you're saying you have CPU spikes on rc.1/rc.2 that you did not have on rc.0. But the rc.0 graphs show a 5-day time window, while the graph of rc.2 with the peaks is an 8-hour window. So I was pointing out that the spikes you see might have been there on rc.0 as well if you looked at the same time range, since the graphs in Kadira will not show the peaks over a large time range.

filipenevola and others added 5 commits January 7, 2020 10:52
This includes laverdet/node-fibers#429, fixing the
CPU spikes reported and discussed here: #10527 (comment)

Using an official fibers release rather than a GitHub URL is preferable
because it doesn't require building fibers from source when deploying a
Meteor app to production, and also doesn't rely on GitHub being
operational, though of course it does rely on other networked services
like npm.
@benjamn benjamn merged commit 2809237 into master Jan 9, 2020
@vlasky
Contributor

vlasky commented Jan 9, 2020

@benjamn I have just updated to release-1.9. I am getting similar issues concerning node-fibers when deploying under CentOS 7.

It seems to have been compiled with a more recent version of G++, against a more recent libstdc++.so.6 that is not available under CentOS 7.

Jan 09 12:44:07 ns557985 node[5223]: ## There is an issue with `node-fibers` ##
Jan 09 12:44:07 ns557985 node[5223]: `/var/www/html/fleety/meteor/new_dispatchprod-20200109-124257/bundle/programs/server/node_modules/fibers/bin/linux-x64-72-glibc/fibers.node` is missing.
Jan 09 12:44:07 ns557985 node[5223]: Try running this to fix the issue: /usr/bin/node /var/www/html/fleety/meteor/new_dispatchprod-20200109-124257/bundle/programs/server/node_modules/fibers/build
Jan 09 12:44:07 ns557985 node[5223]: Error: /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /var/www/html/fleety/meteor/new_dispatchprod-20200109-124257/bundle/programs/server/node_modules/fibers/bin/linux-x64-72-glibc/fibers.node)
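To see which CXXABI versions the system libstdc++ actually provides, a quick check is:

strings /lib64/libstdc++.so.6 | grep CXXABI

CentOS 7's stock libstdc++ (from GCC 4.8) tops out below CXXABI_1.3.9, which is why the load fails.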

@benjamn we are getting an issue with 1.9-rc.2. The build was not starting in production

upon starting meteor service, we get:

bundle/programs/server/node_modules/fibers/bin/linux-x64-72-glibc/fibers.node is missing.

upon checking, the one available is

bundle/programs/server/node_modules/fibers/bin/linux-x64-57-glibc/fibers.node

72 and 57 point to NODE_MODULE_VERSION.

We did not get this problem with 1.9-rc.1

@xet7
Contributor

xet7 commented Jan 9, 2020

@vlasky

CentOS 7 and RHEL 7 are also problematic for Meteor 1.8.2; Snap updates don't work properly:
wekan/wekan-snap#103 (comment)

I don't know whether it's just that Meteor does not work well on CentOS 7 and RHEL 7.

@vlasky
Contributor

vlasky commented Jan 9, 2020

@xet7 no, I've been happily using CentOS 7 for all Meteor development since I first began using Meteor in 2015.

This issue is simply because the binary was compiled with too recent a version of G++, breaking compatibility with older environments - e.g. the package maintainer may have accidentally built it on an up-to-date Linux distribution like Fedora or Ubuntu that has cutting-edge versions of the GNU tools.

CentOS 7 uses GNU GCC 4.8.5, which was released in 2015.

@benjamn
Contributor Author

benjamn commented Jan 10, 2020

@vlasky When you run npm install in bundle/programs/server, it's supposed to run node npm-rebuild.js, which passes the --update-binary flag to npm, which I thought should recompile the fibers binary if necessary… but maybe something's wrong with that.

In the meantime, this seems like something you could potentially work around in your deployment environment, perhaps by deleting bundle/programs/server/node_modules/fibers/bin and running npm rebuild as an extra step? This has not been a problem on Galaxy, to my knowledge.
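A sketch of that workaround, assuming the usual bundle layout:

cd bundle/programs/server
rm -rf node_modules/fibers/bin
npm rebuild fibers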

@xet7
Contributor

xet7 commented Jan 10, 2020

About building fibers:

  • On Ubuntu 16.04 it requires build-essential and libcurl3.
  • On Ubuntu 18.04 it requires build-essential and libcurl4.

https://askubuntu.com/questions/1058517/mongod-error-while-loading-shared-libraries-libcurl-so-4-cannot-open-shared
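On Ubuntu 18.04, for example, that would presumably be:

sudo apt-get install build-essential libcurl4

with libcurl3 instead on 16.04.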

I'm currently trying to figure out how to build Wekan Snap with correct dependencies to get Ubuntu 16.04-based Snap to have working MongoDB:
https://github.com/wekan/wekan/blob/master/snapcraft.yaml#L81-L101

Sure, I also have many other wishes for what to get working in Snap: wekan/wekan-snap#103 (comment)

I don't know whether libssl.so.1 could conflict on CentOS: wekan/wekan-snap#117

@xet7
Contributor

xet7 commented Jan 10, 2020

So for CentOS 7, some equivalent of build-essential would need to be installed first.
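Presumably something like:

sudo yum groupinstall "Development Tools"

which pulls in gcc, g++, make and friends.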

@xet7
Contributor

xet7 commented Jan 10, 2020

Before I added libcurl3, snap errors looked like this:

 sudo snap logs wekan.mongodb
2020-01-10T00:39:03Z wekan.mongodb[32472]: ATTACHMENTS_STORE_PATH= (default value)
2020-01-10T00:39:03Z wekan.mongodb[32472]: mongodb bind options:  --bind_ip 127.0.0.1 --port 27019
2020-01-10T00:39:03Z wekan.mongodb[33222]: mongod: error while loading shared libraries: libcurl.so.4: cannot open shared object file: No such file or directory
2020-01-10T00:39:03Z systemd[1]: snap.wekan.mongodb.service: Main process exited, code=exited, status=127/n/a
2020-01-10T00:39:03Z systemd[1]: snap.wekan.mongodb.service: Failed with result 'exit-code'.
2020-01-10T00:39:03Z systemd[1]: snap.wekan.mongodb.service: Scheduled restart job, restart counter is at 5.
2020-01-10T00:39:03Z systemd[1]: Stopped Service for snap application wekan.mongodb.
2020-01-10T00:39:03Z systemd[1]: snap.wekan.mongodb.service: Start request repeated too quickly.
2020-01-10T00:39:03Z systemd[1]: snap.wekan.mongodb.service: Failed with result 'exit-code'.
2020-01-10T00:39:03Z systemd[1]: Failed to start Service for snap application wekan.mongodb.

And at syslog:

Jan 10 02:39:08 x wekan.wekan[34749]: LOGOUT_ON_MINUTES= (default value)
Jan 10 02:39:08 x wekan.wekan[34749]: DEFAULT_AUTHENTICATION_METHOD= (default value)
Jan 10 02:39:08 x wekan.wekan[34749]: ATTACHMENTS_STORE_PATH= (default value)
Jan 10 02:39:08 x wekan.wekan[34749]: MONGO_URL=mongodb://127.0.0.1:27019/wekan
Jan 10 02:39:08 x wekan.wekan[35480]: /snap/wekan/717/programs/server/node_modules/fibers/future.js:313
Jan 10 02:39:08 x wekan.wekan[35480]: #011#011#011#011#011#011throw(ex);
Jan 10 02:39:08 x wekan.wekan[35480]: #011#011#011#011#011#011^
Jan 10 02:39:08 x wekan.wekan[35480]: MongoNetworkError: failed to connect to server [127.0.0.1:27019] on first connect [Error: connect ECONNREFUSED 127.0.0.1:27019
Jan 10 02:39:08 x wekan.wekan[35480]:     at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1134:16) {
Jan 10 02:39:08 x wekan.wekan[35480]:   name: 'MongoNetworkError',
Jan 10 02:39:08 x wekan.wekan[35480]:   errorLabels: [Array],
Jan 10 02:39:08 x wekan.wekan[35480]:   [Symbol(mongoErrorContextSymbol)]: {}
Jan 10 02:39:08 x wekan.wekan[35480]: }]
Jan 10 02:39:08 x wekan.wekan[35480]:     at Pool.<anonymous> (/snap/wekan/717/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/topologies/server.js:431:11)
Jan 10 02:39:08 x wekan.wekan[35480]:     at Pool.emit (events.js:223:5)
Jan 10 02:39:08 x wekan.wekan[35480]:     at /snap/wekan/717/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/pool.js:557:14
Jan 10 02:39:08 x wekan.wekan[35480]:     at /snap/wekan/717/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connect.js:39:11
Jan 10 02:39:08 x wekan.wekan[35480]:     at callback (/snap/wekan/717/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connect.js:261:5)
Jan 10 02:39:08 x wekan.wekan[35480]:     at Socket.<anonymous> (/snap/wekan/717/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connect.js:286:7)
Jan 10 02:39:08 x wekan.wekan[35480]:     at Object.onceWrapper (events.js:313:26)
Jan 10 02:39:08 x wekan.wekan[35480]:     at Socket.emit (events.js:223:5)
Jan 10 02:39:08 x wekan.wekan[35480]:     at emitErrorNT (internal/streams/destroy.js:92:8)
Jan 10 02:39:08 x wekan.wekan[35480]:     at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
Jan 10 02:39:08 x wekan.wekan[35480]:     at processTicksAndRejections (internal/process/task_queues.js:81:21) {
Jan 10 02:39:08 x wekan.wekan[35480]:   name: 'MongoNetworkError',
Jan 10 02:39:08 x wekan.wekan[35480]:   errorLabels: [ 'TransientTransactionError' ],
Jan 10 02:39:08 x wekan.wekan[35480]:   [Symbol(mongoErrorContextSymbol)]: {}
Jan 10 02:39:08 x wekan.wekan[35480]: }
Jan 10 02:39:08 x systemd[1]: snap.wekan.wekan.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 02:39:08 x systemd[1]: snap.wekan.wekan.service: Failed with result 'exit-code'.
Jan 10 02:39:09 x systemd[1]: snap.wekan.wekan.service: Scheduled restart job, restart counter is at 5.
Jan 10 02:39:09 x systemd[1]: Stopped Service for snap application wekan.wekan.
Jan 10 02:39:09 x systemd[1]: snap.wekan.wekan.service: Start request repeated too quickly.
Jan 10 02:39:09 x systemd[1]: snap.wekan.wekan.service: Failed with result 'exit-code'.
Jan 10 02:39:09 x systemd[1]: Failed to start Service for snap application wekan.wekan.

@xet7
Contributor

xet7 commented Jan 10, 2020

I don't know whether it's possible to go directly from MongoDB 3.2.22 to 4.2.2, or whether the database would need some conversion steps.
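For what it's worth, MongoDB's documented upgrade path goes one major release at a time (3.2 → 3.4 → 3.6 → 4.0 → 4.2), bumping the feature compatibility version after each step, e.g.:

mongo --eval 'db.adminCommand({ setFeatureCompatibilityVersion: "3.4" })'

so a direct 3.2.22 → 4.2.2 jump would presumably not work.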

@filipenevola
Collaborator

Meteor 1.9 is now recommended 🎉 🎉 🎉

@vlasky
Contributor

vlasky commented Jan 16, 2020

@benjamn I rebooted the server, did a clean install of Meteor, and then it started working normally.

@vlasky When you run npm install in bundle/programs/server, it's supposed to run node npm-rebuild.js, which passes the --update-binary flag to npm, which I thought should recompile the fibers binary if necessary… but maybe something's wrong with that.

In the meantime, this seems like something you could potentially work around in your deployment environment, perhaps by deleting bundle/programs/server/node_modules/fibers/bin and running npm rebuild as an extra step? This has not been a problem on Galaxy, to my knowledge.
