fork: retry: Resource temporarily unavailable #23
Comments
But I still think we have the larger "CPUs versus Kubernetes" issue that came up in the code in coreos/coreos-assembler#1287.
Thinking about this actually, in rpm-ostree in particular our tests do this: … Ah, but in rpm-ostree … Actually, related to this, one really nice thing …
Yeah, sadly right now we're responsible for bridging build tools' view of the resources available and the Kubernetes world. I think most of our workloads do have this bridging now, but it looks like RPM building slipped through. It would be nice if, in the future, …
Otherwise, it defaults to `_SC_NPROCESSORS_ONLN` (via `%make_build` -> `%_smp_mflags` -> `%_smp_build_ncpus` -> `%{getncpus}` -> https://github.com/rpm-software-management/rpm/blob/48c0f28834eb377a54f27ee0b6950af7e6d537b8/rpmio/macro.c#L583). And that's going to be wrong in Kubernetes because we're constrained via cgroups. The `%_smp_build_ncpus` macro allows overriding this logic via `RPM_BUILD_NCPUS`.
See: coreos/coreos-ci#23
See: coreos/coreos-assembler#632
See: coreos/coreos-assembler#1287
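To make that concrete, here's a sketch of how a CI wrapper could compute the effective CPU count from the container's cgroup quota and hand it to RPM via `RPM_BUILD_NCPUS`. This is a sketch only: the paths below assume cgroups v1 (on cgroups v2 the same information lives in `/sys/fs/cgroup/cpu.max`), and `example.spec` is a placeholder.

```bash
#!/bin/bash
# Sketch: derive the CPU count from the cgroup CPU quota rather than the
# host's online CPU count, then pass it to rpmbuild via RPM_BUILD_NCPUS.
set -euo pipefail

ncpus=$(nproc)  # fallback: the host's online CPU count (_SC_NPROCESSORS_ONLN)

quota_file=/sys/fs/cgroup/cpu/cpu.cfs_quota_us
period_file=/sys/fs/cgroup/cpu/cpu.cfs_period_us
if [ -r "$quota_file" ] && [ -r "$period_file" ]; then
    quota=$(cat "$quota_file")
    period=$(cat "$period_file")
    # quota is -1 when the cgroup has no CPU limit
    if [ "$quota" -gt 0 ] && [ "$period" -gt 0 ]; then
        ncpus=$(( (quota + period - 1) / period ))  # round up
    fi
fi

# %_smp_build_ncpus honors RPM_BUILD_NCPUS before falling back to %{getncpus}.
export RPM_BUILD_NCPUS="$ncpus"
rpmbuild -ba example.spec  # example.spec is a placeholder
```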
See coreos/coreos-ci#23. We're doing this manually in rpm-ostree CI; let's standardize on this.
Let's do the build variants and unit testing in Prow, saving the bare metal capacity of CentOS CI for our VM testing. CC coreos/coreos-ci#23
In cosa CI, we're hitting:

> runtime: failed to create new OS thread

I think this is another instance of non-Kubernetes-aware multiprocessing like in coreos/coreos-ci#23. Let's expose the `resources` knob for building images like we already do for `pod`. This will allow us in cosa to request a specific amount and then ask golang to respect it.
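On the golang side, the usual lever for this is `GOMAXPROCS`. The following is a sketch of the general idea rather than the actual cosa/pipeline change; the value `4` is a placeholder for whatever CPU amount the pod requests.

```bash
# Sketch: make the Go runtime's parallelism follow the pod's CPU request
# instead of the node's CPU count. Replace 4 with the requested amount.
export GOMAXPROCS=4

# The go tool and any Go binaries it launches inherit this environment
# variable, so their scheduling parallelism stays within the requested CPUs.
go build ./...
```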
We're seeing e.g.

`./libtool: fork: retry: Resource temporarily unavailable`

in the rpm-ostree CI jobs. One thing I notice is that RPM's `%{make_build}` macro is detecting 40 CPUs: `/usr/bin/make -O -j40`. We might even be hitting PID limits, or perhaps per-user limits?

Googling around a bit, I found openSUSE/obs-build#425, and looking at the rpm macros, it does seem likely that we could inject `-D _smp_ncpus_max=8` or so?
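For concreteness, injecting that cap could look something like this (a sketch, not a tested change; the value `8` is arbitrary and `example.spec` is a placeholder):

```bash
# Per-invocation: cap %_smp_build_ncpus at 8 regardless of how many CPUs
# the macro detects on the node.
rpmbuild --define "_smp_ncpus_max 8" -ba example.spec

# Or bake the cap into the build environment so every rpmbuild picks it up.
echo '%_smp_ncpus_max 8' >> /etc/rpm/macros
```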