
podman image prune should act recursively #7872

Closed
srcshelton opened this issue Oct 1, 2020 · 6 comments · Fixed by #7887
Labels: In Progress (this issue is actively being worked by the assignee) · kind/bug · locked - please file new issue/PR

Comments

@srcshelton (Contributor):

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

As other issues have noted, podman image ls can be very slow (although this has improved recently); what is still noticeable is that image ls completes much faster when fewer images are present.

However, podman image prune appears to remove only images which were unreferenced at the moment it was invoked, and doesn't recurse to also remove the images that removing the first batch frees up.

I wrote a quick script to loop until no further images are freed; the output from a real-world run appears below:

Starting to prune podman images ...
1: Removed 1077 images on this pass...
2: Removed 35 images on this pass...
3: Removed 35 images on this pass...
4: Removed 35 images on this pass...
5: Removed 35 images on this pass...
6: Removed 35 images on this pass...
7: Removed 35 images on this pass...
8: Removed 35 images on this pass...
9: Removed 35 images on this pass...
10: Removed 35 images on this pass...
11: Removed 35 images on this pass...
12: Removed 35 images on this pass...
13: Removed 35 images on this pass...
14: Removed 35 images on this pass...
15: Removed 35 images on this pass...
16: Removed 35 images on this pass...
17: Removed 35 images on this pass...
18: Removed 34 images on this pass...
19: Removed 33 images on this pass...
20: Removed 33 images on this pass...
21: Removed 33 images on this pass...
22: Removed 33 images on this pass...
23: Removed 33 images on this pass...
24: Removed 33 images on this pass...
25: Removed 33 images on this pass...
26: Removed 0 images on this pass.

Removed 1902 images in total

... so, in this case, the initial podman image prune removed only about half of the prunable images. Surely this isn't the intention?
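The loop itself is straightforward. A minimal sketch of such a script follows (a hypothetical reconstruction, not the reporter's actual code; the function name `prune_until_empty` is introduced here for illustration):

```shell
#!/bin/sh
# Hypothetical reconstruction of the pruning loop described above.
# `podman image prune -f` prints one removed image ID per line, so
# counting output lines counts removed images.
prune_until_empty() {
    # "$@" is the prune command to run on each pass.
    total=0
    pass=1
    while :; do
        count="$("$@" | wc -l | tr -d ' ')"
        total=$((total + count))
        if [ "$count" -eq 0 ]; then
            echo "${pass}: Removed ${count} images on this pass."
            break
        fi
        echo "${pass}: Removed ${count} images on this pass..."
        pass=$((pass + 1))
    done
    echo "Removed ${total} images in total"
}

# Real-world invocation:
#   prune_until_empty podman image prune -f
```

Each pass can free parent images that became unreferenced only after the previous pass removed their children, which is why the loop keeps finding work until a pass removes nothing.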

Output of podman version:

Version:      2.1.1
API Version:  2.0.0
Go Version:   go1.14.7
Git Commit:   9f6d6ba0b314d86521b66183c9ce48eaa2da1de2
Built:        Tue Sep 29 16:34:31 2020
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.16.1
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: Unknown
    path: /usr/bin/conmon
    version: 'conmon version 2.0.21, commit: 35a2fa83022e56e18af7e6a865ba5d7165fa2a4a'
  cpus: 8
  distribution:
    distribution: gentoo
    version: unknown
  eventLogger: file
  hostname: dellr330
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.8.9-gentoo
  linkmode: dynamic
  memFree: 2493095936
  memTotal: 8063447040
  ociRuntime:
    name: crun
    package: Unknown
    path: /usr/bin/crun
    version: |-
      crun version 0.15-dirty
      commit: 56ca95e61639510c7dbd39ff512f80f626404969
      spec: 1.0.0
      +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 24785993728
  swapTotal: 25769787392
  uptime: 64h 27m 2.06s (Approximately 2.67 days)
registries:
  search:
  - docker.io
  - docker.pkg.github.com
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 15
    paused: 0
    running: 11
    stopped: 4
  graphDriverName: overlay
  graphOptions:
    overlay.ignore_chown_errors: "false"
  graphRoot: /space/podman/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 180
  runRoot: /space/podman/run
  volumePath: /space/podman/volumes
version:
  APIVersion: 2.0.0
  Built: 1601393671
  BuiltTime: Tue Sep 29 16:34:31 2020
  GitCommit: 9f6d6ba0b314d86521b66183c9ce48eaa2da1de2
  GoVersion: go1.14.7
  OsArch: linux/amd64
  Version: 2.1.1

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Oct 1, 2020
@vrothberg vrothberg self-assigned this Oct 1, 2020
@vrothberg (Member):

Thanks for opening the issue, @srcshelton!

I'll try to have a look tomorrow. There are certain code paths that have not yet been optimized, but we plan to. This issue is a motivator to tackle prune :)

@rhatdan rhatdan added the Good First Issue This issue would be a good issue for a first time contributor to undertake. label Oct 1, 2020
@vrothberg vrothberg added In Progress This issue is actively being worked by the assignee, please do not work on this at this time. and removed Good First Issue This issue would be a good issue for a first time contributor to undertake. labels Oct 2, 2020
@vrothberg (Member):

Removed the good-first-issue label, as I find this non-trivial to fix without knowing the code.

vrothberg added a commit to vrothberg/libpod that referenced this issue Oct 2, 2020
Make sure to remove images until there's nothing left to prune.
A single iteration may not be sufficient.

Fixes: containers#7872
Signed-off-by: Valentin Rothberg <[email protected]>
@vrothberg (Member):

Opened #7887 to fix the issue. Note that it does not improve performance yet. There are some CPU cycles we could squeeze out, but that would require a longer refactoring. It's still on the roadmap, though, along with other code paths.

@srcshelton (Contributor, Author):

Just a quick note to say that while #7887 fixes this issue with podman image prune, the problem still seems to be apparent when images are pruned during podman system prune!

Could this issue please be re-opened to consider this case also?

@vrothberg (Member):

Can you share a reproducer?

The image-pruning code is shared, so this seems to be a separate issue.

@vrothberg (Member):

I suggest opening a new issue, ideally with a reproducer.

mheon pushed a commit to mheon/libpod that referenced this issue Oct 14, 2020
Make sure to remove images until there's nothing left to prune.
A single iteration may not be sufficient.

Fixes: containers#7872
Signed-off-by: Valentin Rothberg <[email protected]>
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 22, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023