
[v0.60] Backport a number of PRs from main, bump c/image to v5.32.1 #2118

Conversation

TomSweeneyRedHat
Member

This cherry-picks a number of PRs from main to the v0.60 branch in preparation for Podman v5.2.1. It also vendors in the latest c/image, v5.32.1, which includes the latest zstd:chunked functionality.

Once this merges, I'll bump c/common's version.

renovate bot and others added 12 commits August 9, 2024 15:02
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Signed-off-by: tomsweeneyredhat <[email protected]>
Instead of passing a nil system context when adding to a manifest
list, use a valid one, ensuring that settings like auth and TLS
verification are passed along and respected.

Fixes containers/podman#23410

Signed-off-by: Matt Heon <[email protected]>
Signed-off-by: tomsweeneyredhat <[email protected]>
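To illustrate the idea (a hedged sketch, not the actual c/common change): build a *types.SystemContext that carries the auth file and TLS-verification settings and pass it through instead of nil. The addInstance helper and the package name below are hypothetical stand-ins for the real libimage code path.

```go
package manifestadd

import (
	"github.com/containers/image/v5/types"
)

// addInstance stands in for the c/common code path that adds an image to a
// manifest list; the real logic lives in libimage.
func addInstance(sys *types.SystemContext) error {
	// ... resolve and copy the image using sys so that credentials and
	// TLS-verification settings are honored ...
	_ = sys
	return nil
}

// example builds a SystemContext from caller-supplied options instead of
// passing nil, so auth and TLS settings reach the manifest-list operation.
func example(authFile string, skipTLSVerify bool) error {
	sys := &types.SystemContext{
		AuthFilePath:                authFile,
		DockerInsecureSkipTLSVerify: types.NewOptionalBool(skipTLSVerify),
	}
	return addInstance(sys)
}
```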
Podman can request the pod cgroup cleanup from different processes.

Do not report an error if the cgroup is already stopped.

Signed-off-by: Giuseppe Scrivano <[email protected]>
Signed-off-by: tomsweeneyredhat <[email protected]>
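A minimal sketch of the intent, assuming cgroup removal boils down to deleting the cgroup directory (the real c/common cgroups code is more involved, and stopPodCgroup is a hypothetical name): treat an already-removed cgroup as success so that concurrent cleanups do not report spurious errors.

```go
package cgroupcleanup

import (
	"errors"
	"io/fs"
	"os"
)

// stopPodCgroup removes the (empty) cgroup directory for a pod. Several
// Podman processes may race to call it on the same path.
func stopPodCgroup(cgroupPath string) error {
	if err := os.Remove(cgroupPath); err != nil && !errors.Is(err, fs.ErrNotExist) {
		return err
	}
	// Either we removed the cgroup or it was already gone; both count as a
	// successful cleanup, so no error is reported.
	return nil
}
```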
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
When I wrote this originally I thought we must avoid leaking the netns, so I tried to decrement first. However, I now think this is wrong, because Podman actually calls into the cleanup function again on the next cleanup attempt if it returned an error. As a result we ended up doing a double decrement, and the ref counter went below zero, causing all sorts of issues [1].

If we instead have a bug the other way around, where we fail to decrement, it is much less of a problem: it simply means we leak one netns file and the pasta/slirp4netns process, which isn't a problem beyond using a bit of resources.

[1] containers/podman#21569

Signed-off-by: Paul Holzinger <[email protected]>
Signed-off-by: tomsweeneyredhat <[email protected]>
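A simplified sketch of the ordering concern, not the real netns code (refCounter and teardown are hypothetical): decrementing only after the teardown succeeds means a retried cleanup cannot decrement twice, at the cost of possibly leaking one netns if a decrement is ever missed.

```go
package netnsref

import "fmt"

// refCounter is a hypothetical stand-in for the rootless-netns ref counter.
type refCounter struct {
	count int
}

// cleanup runs the teardown work (stopping pasta/slirp4netns, unmounting the
// netns, ...) before touching the counter. Podman calls cleanup again on the
// next attempt if it returned an error, so decrementing first would lead to
// a double decrement and a counter below zero.
func (r *refCounter) cleanup(teardown func() error) error {
	if err := teardown(); err != nil {
		// Nothing was decremented yet, so a retried cleanup stays correct.
		return fmt.Errorf("netns teardown: %w", err)
	}
	if r.count > 0 {
		r.count--
	}
	return nil
}
```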
The Run() function is used to run long-running commands in the netns; namely, podman unshare --rootless-netns uses it. As such, the function actually unlocks while the main command runs, as otherwise a user could hold the lock forever, effectively causing deadlocks.

Because we unlock, the ref count might change during that time, and the fact that we created the netns doesn't mean there are no other users of it by now. Therefore the unconditional cleanup in runInner() was wrong in that case and caused problems for other running containers.

To fix this, make sure we do not clean up in the Run() case unless the count is 0.

Signed-off-by: Paul Holzinger <[email protected]>
Signed-off-by: tomsweeneyredhat <[email protected]>
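A condensed sketch of that guard, with runner and refCount as hypothetical stand-ins for the real rootless-netns state: after the long-running command returns and the lock is re-acquired, the netns is only torn down when no users remain.

```go
package netnsrun

// runner is a hypothetical stand-in for the rootless-netns manager; the real
// type carries a lock and the on-disk ref-count file.
type runner struct {
	refCount int
}

// Run executes a long-running command (e.g. podman unshare --rootless-netns)
// inside the netns. The lock is dropped while toRun executes so the user
// cannot hold it forever.
func (r *runner) Run(toRun func() error) error {
	// ... join the netns, then unlock ...
	runErr := toRun()
	// ... re-acquire the lock; the ref count may have changed meanwhile ...

	// Only tear the netns down when nobody else uses it. Cleaning up just
	// because we created the netns broke containers that started using it
	// while the lock was released.
	if r.refCount == 0 {
		// ... tear down the netns ...
	}
	return runErr
}
```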
Podman might call us more than once on the same path. If the path is not mounted or does not exist, simply return no error.

Second, retry the unmount/remove until the unmount succeeds. For some reason we must use MNT_DETACH, as otherwise the unmount call fails every time. However, MNT_DETACH means the unmount happens asynchronously in the background. If we then call remove on the file before the unmount has finished, it fails with EBUSY. In that case we try again until it works or we get a different error.

This should help containers/podman#19721

Signed-off-by: Paul Holzinger <[email protected]>
Signed-off-by: tomsweeneyredhat <[email protected]>
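A hedged approximation of that unmount/remove loop (the retry count and delay are arbitrary for illustration, and unmountAndRemove is not the real function name): missing or unmounted paths are ignored, the unmount uses MNT_DETACH, and the removal is retried while it fails with EBUSY because the lazy detach has not completed yet.

```go
package netnsunmount

import (
	"errors"
	"io/fs"
	"os"
	"time"

	"golang.org/x/sys/unix"
)

func unmountAndRemove(path string) error {
	if err := unix.Unmount(path, unix.MNT_DETACH); err != nil &&
		!errors.Is(err, unix.EINVAL) && !errors.Is(err, unix.ENOENT) {
		// EINVAL: not mounted; ENOENT: already removed. Both are fine,
		// since Podman may call us more than once on the same path.
		return err
	}
	for i := 0; i < 50; i++ {
		err := os.Remove(path)
		if err == nil || errors.Is(err, fs.ErrNotExist) {
			return nil
		}
		if !errors.Is(err, unix.EBUSY) {
			return err
		}
		// The lazy unmount has not finished yet; try again shortly.
		time.Sleep(10 * time.Millisecond)
	}
	return errors.New("timed out waiting for lazy unmount of " + path)
}
```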
Signed-off-by: Paul Holzinger <[email protected]>
Signed-off-by: tomsweeneyredhat <[email protected]>
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
As the title says, in preparation for
Podman v5.2.1

Signed-off-by: tomsweeneyredhat <[email protected]>
Contributor

openshift-ci bot commented Aug 9, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: TomSweeneyRedHat

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved label Aug 9, 2024
@rhatdan
Member

rhatdan commented Aug 10, 2024

/lgtm

@openshift-ci openshift-ci bot added the lgtm label Aug 10, 2024
@openshift-merge-bot openshift-merge-bot bot merged commit d9e12cc into containers:v0.60 Aug 10, 2024
12 checks passed
@Luap99
Member

Luap99 commented Aug 12, 2024

Why did you pick the dependency updates? AFAIK we never updated dependencies in backports unless there is a bug or CVE to fix. I don't see anything particularly wrong with them here, but I think we should have a consistent policy around how we do these things.
cc @mheon

@mheon
Member

mheon commented Aug 12, 2024

Tend to agree that we shouldn't drag in dependabot updates as a matter of course, but this particular set seems harmless enough that I don't want to bother removing them.

@TomSweeneyRedHat
Member Author

My main reason for the dependency updates is the number of CVEs we've received over the past year for several of them. I thought including the latest/greatest now might keep us ahead of the curve. A much smaller consideration was that I also wasn't sure if any of the code that we cared about backporting might have a dependency on some of them.

If you'd prefer me to not backport those in the future, I'm happy to go with that. It would be quicker, that's for sure.

@TomSweeneyRedHat TomSweeneyRedHat deleted the dev/tsweeney/backport_4_podman_5.2.1 branch August 12, 2024 14:00