docker: add openssl 1.0.2 and zlib #1052

Merged: 6 commits from rvagg/docker-host into master on Feb 14, 2018
Conversation

@rvagg (Member) commented Dec 28, 2017

These are operational in CI now; see https://ci.nodejs.org/job/node-test-commit-linux-linked/1131/ for master and https://ci.nodejs.org/job/node-test-commit-linux-linked/1133/ for v4.x.

Next up: libuv, cares and nghttp2... maybe http-parser too, but I'm not sure anyone's actually using that shared functionality.
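
For reference, a linked build against these container-supplied libraries amounts to roughly the following. The configure flags are Node's standard shared-library options; the install prefix is only illustrative, not necessarily where the Dockerfiles actually put things:

```sh
# Build against the container's OpenSSL 1.0.2 and zlib instead of the bundled
# copies. /opt/openssl-1.0.2 is an assumed install prefix, purely for the sketch.
./configure --shared-zlib \
  --shared-openssl \
  --shared-openssl-includes=/opt/openssl-1.0.2/include \
  --shared-openssl-libpath=/opt/openssl-1.0.2/lib
make -j"$(nproc)"
```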

@rvagg (Member, Author) commented Dec 29, 2017

I've resized the two DigitalOcean droplets running this to 8 CPU / 16G and replaced the Joyent VM with an 8 CPU one, so we have some room for expansion now. The Softlayer one is still the original size, but it only has the Alpine containers plus one of these "sharedlibs" containers on it, so it won't be too heavily taxed. We should probably upgrade that one when we get the next batch of containers into Jenkins though.

@gibfahn (Member) commented Jan 2, 2018

What's the run time for these Docker containers going to be like? When I checked last week it seemed to be about 45-60 minutes, but the last few builds have been 15 minutes. If it's going to be 60, that will double our build times (which I think are around 30 minutes when nothing goes wrong).

@rvagg (Member, Author) commented Jan 3, 2018

It should be closer to 15; the cache warm-up takes a while with these because they are scattered across a bunch of hosts. Every one of those containers needs to warm its cache up before it can run at speed, and that needs to happen across all of our active branches. The daily runs on the main branches are really helpful for this and get us going in a matter of days.

When they've all warmed up, it's the Debug builds that take much longer than the rest, since they not only do debug but also release, so it's a double build, or maybe a ~2.2x build because of the extra time a debug compile takes.
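
If I understand the current behaviour correctly, a debug run amounts to something like this, which is where the roughly-double figure comes from (sketch only):

```sh
# Configuring with --debug makes the default make target build both
# configurations, so each run compiles the tree twice (and the debug
# objects compile slower than release ones).
./configure --debug
make -j"$(nproc)"
ls out/Release/node out/Debug/node   # both binaries end up being built
```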

One thing that could improve the caching situation is to share .ccache directories across all containers on a host. I suppose either mounting a single directory at /home/iojs/.ccache on them all, or symlinking the .ccache in each of the host-mounted /home/iojs/ directories to a single common .ccache (I'm not sure if that works with Docker though). I can experiment with this when I find the time.
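
For the first option, a minimal sketch of what the mount could look like; the image name and the rest of the run options are placeholders, not what the CI host setup actually uses:

```sh
# One ccache directory on the host, bind-mounted into every container at the
# default location ccache checks ($HOME/.ccache for the iojs user).
# Image name and other run options are illustrative only.
mkdir -p /home/iojs/.ccache
docker run -d \
  -v /home/iojs/.ccache:/home/iojs/.ccache \
  node-ci:ubuntu1604_sharedlibs_x64
```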

@gibfahn (Member) commented Jan 3, 2018

When they've all warmed up, it's the Debug builds that take much longer than the rest, since they not only do debug but also release, so it's a double build, or maybe a ~2.2x build because of the extra time a debug compile takes.

Do we need the debug for every run? Maybe that could be switched on when needed.

mounting a single directory at /home/iojs/.ccache on them all

That sounds like it should work (and seems reasonably straightforward).

@rvagg (Member, Author) commented Jan 8, 2018

Do we need the debug for every run? Maybe that could be switched on when needed.

No, but if we make it optional will it ever actually get run? We had a situation late last year where debug builds broke because of a single commit and it took someone external to discover it. If debug builds are the slowest thing in CI then I'm fine with optimising them away somehow, as long as they actually get run occasionally.

Here's a thought: how about an "exhaustive" flag that you can turn on in node-test-commit, which trickles down and enables the things that take longer, like this? Then we can turn it on for the regular daily branch builds at least.
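
Roughly what I have in mind, assuming the flag reaches the build step as an environment variable (the name EXHAUSTIVE and the wiring are made up):

```sh
# Only do the slow extras (e.g. the debug configure/build) when the
# hypothetical EXHAUSTIVE parameter is set, as it would be for the
# daily branch builds.
if [ "${EXHAUSTIVE:-false}" = "true" ]; then
  ./configure --debug
else
  ./configure
fi
make -j"$(nproc)"
```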

Another thought: to be honest I don't know why it builds both Release and Debug when doing this; perhaps that's a bug and it should only be compiling once. I'd love for someone to investigate and report back on the reasoning here.

@gibfahn (Member) commented Jan 8, 2018

Here's a thought: how about an "exhaustive" flag that you can turn on in node-test-commit, which trickles down and enables the things that take longer, like this? Then we can turn it on for the regular daily branch builds at least.

Makes sense, but we need to make sure someone actually checks the nightly builds (which AFAIK doesn't happen).

Another thought: to be honest I don't know why it builds both Release and Debug when doing this; perhaps that's a bug and it should only be compiling once. I'd love for someone to investigate and report back on the reasoning here.

Sounds like we should raise an issue in core to investigate.

@gibfahn closed this Jan 8, 2018
@gibfahn reopened this Jan 8, 2018
@maclover7 (Contributor) commented:

ping @rvagg

@rvagg (Member, Author) commented Feb 11, 2018

@gibfahn check the latest; I've set up a shared .ccache directory and Docker seems to handle the nested mounts just fine. I've cleaned out test-softlayer-ubuntu1604_docker-x64-1 and run this script against it, and it's now using the shared setup and working nicely.

I also added an Ubuntu 18.04 container. It's a couple of months early, but I wanted to have it available in CI for tinkering with, as a heads-up for the next important Linux release ahead of us. I've not hooked it up to any of the regular jobs, but it can be added temporarily if anyone's interested. 18.04 is still undergoing some pretty heavy work, so I wouldn't be surprised if we uncover some problems.

@rvagg (Member, Author) commented Feb 11, 2018

18.04 is looking good with master, FWIW: https://ci.nodejs.org/job/node-test-commit-linux/16318/nodes=ubuntu1804/

@rvagg (Member, Author) commented Feb 13, 2018

.ccache/tmp/ was getting conflicts amongst the containers sharing it, so I've added a custom CCACHE_TEMPDIR to each container to give each one a unique temp dir.
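
In other words, each container now gets something to this effect; the exact path and how it's injected depend on the container setup, this is just the shape of it:

```sh
# The cache itself stays shared, but ccache's temporary files go to a
# container-local path so in-flight files from different containers
# can't collide. Paths and image name are illustrative.
docker run -d \
  -v /home/iojs/.ccache:/home/iojs/.ccache \
  -e CCACHE_TEMPDIR=/home/iojs/.ccache-tmp \
  node-ci:ubuntu1604_sharedlibs_x64
```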

@rvagg (Member, Author) commented Feb 13, 2018

The last fix seems to be working really nicely! Lots of green for the containered builds.

@maclover7 (Contributor) commented:

@rvagg If things are looking good, feel free to land.

@rvagg (Member, Author) commented Feb 14, 2018

Well, as it so happens, I added OpenSSL 1.1.1-pre1 as per nodejs/node#18770 and pushed that commit. Someone had better cast an eye over it before I land this, because there's a lot of stuff building up in the sharedlibs containers and I'm the only one driving that; if you disagree with this approach then out with it!

@rvagg merged commit 87bf5a6 into master on Feb 14, 2018
@rvagg deleted the rvagg/docker-host branch on Feb 14, 2018 at 19:51
@rvagg (Member, Author) commented Feb 14, 2018

Merged because I need these in master as the latest versions, so folks have easy access to the current Dockerfiles.
