CgroupsV2 - possible migration issue #4481
Comments
@debarshiray FYI |
@giuseppe PTAL - I don't think there are migration issues around systemd/cgroupfs, so this is probably something separate... @returntrip If the cgroup manager is set to systemd, do new containers work and migrated containers fail, or do all containers fail? |
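(For anyone reproducing: a hedged way to answer that question is to compare a freshly created container against a migrated one while the systemd manager is configured; the image name below is only an example.)

```sh
# With cgroup_manager = "systemd" in libpod.conf:
# 1) a brand-new container -- does it start? (image name is an arbitrary example)
podman run --rm registry.fedoraproject.org/fedora:31 true
# 2) a migrated container -- does it fail with the sd-bus error?
podman start fedora-toolbox-31
```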
Oh, wait, wait, wait... This could be the container's CGroup parent. That won't be migrated by our current code... Damn. |
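(A quick sketch for checking what cgroup parent a migrated container carries; the `--format` field name is an assumption, so a plain `grep` over the inspect output is included as a fallback.)

```sh
# Show the migrated container's configured cgroup parent (field name assumed)
podman inspect --format '{{ .HostConfig.CgroupParent }}' fedora-toolbox-31
# Fallback: search the raw inspect output for anything cgroup-related
podman inspect fedora-toolbox-31 | grep -i cgroup
```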
With
With
|
Alright, it's probably CGroup parent, then. We may want to force cgroupfs driver for migrated systems, but it's not clear how we would identify that the system should retain the cgroupfs driver. |
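(One way to test that theory without rewriting libpod.conf is to override the manager for a single invocation via podman's global `--cgroup-manager` option; a sketch, using the container name from this report.)

```sh
# Force the cgroupfs manager for this one command only
podman --cgroup-manager cgroupfs start fedora-toolbox-31
```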
This is also odd: why can I not re-enter a freshly …
|
Can you reproduce that consistently? That's a separate bug that we haven't found a consistent repro for yet. |
Not sure this tests consistency well, but it never enters the toolbox:
I also guess we need to reopen #4198 because my test was made using toolbox (migrated) images created with … Adding some logs:
|
I am getting OCI errors during a …
I opened containers/crun#187 in crun. |
What podman command are you executing? |
I think it is the same issue we are tracking in crun |
Yes, see containers/crun#187 |
@giuseppe so the issue I logged is related to containers/crun#187? If so, should we close one of the two? |
This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days. |
Looks like this is fixed in master. |
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
After having migrated my Silverblue 30 toolboxes, `libpod.conf` populates `cgroup_manager` with `cgroupfs`. If, for whatever reason, a user stops the toolbox container, deletes `libpod.conf`, and re-runs `toolbox enter`, a new `libpod.conf` is generated with `cgroups_manager = "systemd"`, and all migrated containers fail with `Error: unable to start container "fedora-toolbox-31": sd-bus call: Invalid argument: OCI runtime error` when they are started. This affects any migrated container, not only toolboxes, as far as I can see. Replacing `systemd` with `cgroupfs` restores functionality.
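A minimal sketch of the workaround described above, assuming the path quoted in this report (the key may be spelled `cgroup_manager` or `cgroups_manager` depending on the snippet above):

```sh
# Edit the regenerated config: set the cgroup manager value back to cgroupfs
sed -i 's/"systemd"/"cgroupfs"/' ~/.config/containers/libpod.conf
# Migrated containers should start again
podman start fedora-toolbox-31
```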
Steps to reproduce the issue:

1. In Silverblue 31, run `toolbox enter` to migrate pre-existing toolbox containers.
2. Stop the toolbox container with `podman stop <container>`.
3. Delete `.config/containers/libpod.conf`.
4. Re-enter the toolbox with `toolbox enter`.
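Condensed into a shell transcript (container name taken from the error message above):

```sh
toolbox enter                          # migrates the pre-existing toolbox containers
podman stop fedora-toolbox-31          # stop the migrated container
rm ~/.config/containers/libpod.conf    # delete the generated config
toolbox enter                          # regenerates libpod.conf with the systemd manager and fails
```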
Describe the results you received:
The steps above result in:
`Error: unable to start container "fedora-toolbox-31": sd-bus call: Invalid argument: OCI runtime error`
Describe the results you expected:
Toolbox should open up as usual. `libpod.conf` should be created with the relevant `cgroups_manager`, or maybe with some dynamic `cgroups_manager` allocation (if technically possible).

Additional information you deem important (e.g. issue happens only occasionally):
The issue happens systematically on `libpod.conf` creation.

Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Additional environment details (AWS, VirtualBox, physical, etc.):
Physical Silverblue 31