git-nodesource-update-reference has no disk space #2028
Some things on the main NFS host got out of hand without me noticing; I've done a major cleanup and it'll hopefully stay under control now. I haven't touched the Pis' storage, and I don't see any bloat there, which is great; they've been nice and stable.
FYI, arm-fanned is still offline. I've been having a ton of trouble booting the Pis in recent months, so I updated the boot kernel and all boot-related resources to the latest Raspbian version, which is based on Debian Buster, while the Pis themselves are all on Stretch. They're booting really nicely now, but Docker won't start, probably a kernel mismatch or something similar. I've also tried an in-place upgrade to Buster on one of the Pi 3s, but it won't boot now. So they're all going to be offline until I can find time for some more major investigation or surgery, perhaps reprovisioning everything. @nodejs/releasers: Node 8's ARMv6 binaries still depend on building on the Pis, so if you're planning a release, or a security release, I'll need to get onto that ASAP; please let me know if there's urgency.
Happening again, e.g. https://ci.nodejs.org/job/git-nodesource-update-reference/22704/console.
Is there anything I can reasonably do to help resolve the space issue as a member of the WG with access to
Unfortunately this is all on me: the NFS server ran out of space because I hadn't removed all of the old roots when I set up new ones, so the disks filled up quickly. The codebase is also getting bigger, which makes this more acute. I've removed all of the old roots for now, but it's something I'm going to have to monitor and figure out a better management strategy for. I don't have any "spare" SSDs that can offer more space than the two currently in use, so we'll have to be "smarter".
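For the monitoring side, even a periodic free-space check on the export would catch this earlier next time. A minimal sketch, assuming a hypothetical mount point and threshold (neither reflects the actual layout on the NFS host):

```python
#!/usr/bin/env python3
"""Minimal free-space check for the NFS export hosting the Pi roots.

A sketch only: the mount point and threshold below are assumptions,
not the actual paths or limits used on the nfs host.
"""
import shutil
import sys

MOUNT_POINT = "/export/pi-roots"   # hypothetical path to the NFS export
MIN_FREE_GB = 20                   # warn when free space drops below this

usage = shutil.disk_usage(MOUNT_POINT)
free_gb = usage.free / 1024 ** 3

if free_gb < MIN_FREE_GB:
    print(f"WARNING: only {free_gb:.1f}G free on {MOUNT_POINT}", file=sys.stderr)
    sys.exit(1)

print(f"OK: {free_gb:.1f}G free on {MOUNT_POINT}")
```

Run from cron or a small Jenkins job, the non-zero exit could be wired up to send a warning before the disk actually fills.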
Ran out of space again today, which caused @MylesBorins' release woes #2094 (comment). It's running now @MylesBorins: https://ci-release.nodejs.org/job/iojs+release/4938/nodes=pi1-docker/

Today's strategy for dealing with this is to leave half of the Pi1s disabled, all on one of the disks. I think Refael had a similar strategy previously and it seemed to work well. But everything's just getting so much bigger; the .git directories alone for these binary test jobs are on the order of 3-8G each (they contain actual compiled binaries, so I don't think it's something we can just shallow-clone our way out of).

When Node 8 is EOL, the release machines can go; that takes out two of them. We've committed to testing Pi1s for Node 10, but we can do that with half of them since the test runs are infrequent and we don't need a large degree of parallelisation. I don't think this will be the last we see of disk-full problems though.
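For deciding where to reclaim space, ranking the .git directories by size under the workspace roots is a quick way to spot the worst offenders. A sketch only; `WORKSPACE_ROOT` is a placeholder, not the real directory layout on the NFS host:

```python
#!/usr/bin/env python3
"""List the largest .git directories under a workspace root.

A sketch only: WORKSPACE_ROOT is an assumed path, not the actual
layout of the Pi roots on the NFS server.
"""
import os
from pathlib import Path

WORKSPACE_ROOT = Path("/export/pi-roots")  # hypothetical workspace root


def dir_size(path: Path) -> int:
    """Total size in bytes of all regular files under `path`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total


# Collect every .git directory and report the largest first.
git_dirs = [p for p in WORKSPACE_ROOT.rglob(".git") if p.is_dir()]
sizes = sorted(((dir_size(p), p) for p in git_dirs), reverse=True)

for size, path in sizes[:20]:
    print(f"{size / 1024 ** 3:6.2f}G  {path}")
```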
@rvagg how big are the SSDs you already have? They seem pretty cheap these days, so I'm wondering if we should ask the Foundation to buy some and send them to you?
One 224G, and one shared with a root disk but currently using 100G of that. I need to remove the two release Pi1s now that 8 is gone, along with the shared ccache files that were only used to compile releases (we do other compiles on the cross-compiler now), so that's some space to be saved. We now only have Node 10 using the Pi1s, so we could power a few of them down to just a skeleton crew to run the occasional tests, because we don't need high test throughput for non-master branches. Let me try jiggering space around a bit more before we throw more resources at it, but I'll keep that in mind as an option.
@rvagg thanks for the update.
arm-fanned is back online; we're on a new 500GB SSD I came into possession of but don't need for anything else. It's probably not as fast as the disks it was on before, and everything is running off the single disk now, so that'll slow things down a little too, but at least there's plenty of space now, which gives us breathing room. I've run the tests across the release lines and a few flaky machines have been taken out (there's also one Pi 3 that's entirely busted and needs reprovisioning), but mostly it's working well. There's a single test perma-fail that was introduced during the downtime; I've notified the author of it @ nodejs/node#31617.
See e.g. https://ci.nodejs.org/job/git-nodesource-update-reference/22525/console
@nodejs/build-infra