
Can't reuse names of stopped containers #4666

Closed
twmr opened this issue Dec 9, 2019 · 7 comments · Fixed by #4692
Assignees
Labels
kind/bug
locked - please file new issue/PR

Comments

@twmr

twmr commented Dec 9, 2019

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

For our Docker-based container setup at work we use container names that are hardcoded in config files, because they are needed for docker exec. From time to time we have to restart the containers, reusing those hardcoded container names.

However, podman has problems when a container is started via podman run with the name of a container that was stopped before. (We don't have this problem with docker-ce.)

Steps to reproduce the issue:

  1. $ podman run --name foobar -it busybox /bin/sh
    / # %

  2. $ podman stop foobar
    b31e7a917e15f2ee87bc977b77866b0c6a27816a97b927c59b90e9f6b2510290

  3. $ podman run --name foobar -it busybox /bin/sh
    Error: error creating container storage: the container name "foobar" is already in use by "b31e7a917e15f2ee87bc977b77866b0c6a27816a97b927c59b90e9f6b2510290". You have to remove that container to be able to reuse that name.: that name is already in use

  4. $ podman inspect foobar # maybe this helps debugging the issue
    [
    {
    "Id": "b31e7a917e15f2ee87bc977b77866b0c6a27816a97b927c59b90e9f6b2510290",
    "Created": "2019-12-09T21:54:11.057240122+01:00",
    "Path": "/bin/sh",
    "Args": [
    "/bin/sh"
    ],
    "State": {
    "OciVersion": "1.0.1-dev",
    "Status": "exited",
    "Running": false,
    "Paused": false,
    "Restarting": false,
    "OOMKilled": false,
    "Dead": false,
    "Pid": 0,
    "ExitCode": 137,
    "Error": "",
    "StartedAt": "2019-12-09T21:54:11.387440834+01:00",
    "FinishedAt": "2019-12-09T21:54:33.010894755+01:00",
    "Healthcheck": {
    "Status": "",
    "FailingStreak": 0,
    "Log": null
    }
    },
    "Image": "b534869c81f05ce6fbbdd3a3293e64fd032e059ab4b28a0e0d5b485cf904be4b",
    "ImageName": "docker.io/library/busybox:latest",
    "Rootfs": "",
    "Pod": "",
    "ResolvConfPath": "/run/user/1001/vfs-containers/b31e7a917e15f2ee87bc977b77866b0c6a27816a97b927c59b90e9f6b2510290/userdata/resolv.conf",
    "HostnamePath": "/run/user/1001/vfs-containers/b31e7a917e15f2ee87bc977b77866b0c6a27816a97b927c59b90e9f6b2510290/userdata/hostname",
    "HostsPath": "/run/user/1001/vfs-containers/b31e7a917e15f2ee87bc977b77866b0c6a27816a97b927c59b90e9f6b2510290/userdata/hosts",
    "StaticDir": "/home/thomas/.local/share/containers/storage/vfs-containers/b31e7a917e15f2ee87bc977b77866b0c6a27816a97b927c59b90e9f6b2510290/userdata",
    "OCIConfigPath": "/home/thomas/.local/share/containers/storage/vfs-containers/b31e7a917e15f2ee87bc977b77866b0c6a27816a97b927c59b90e9f6b2510290/userdata/config.json",
    "OCIRuntime": "runc",
    "LogPath": "/home/thomas/.local/share/containers/storage/vfs-containers/b31e7a917e15f2ee87bc977b77866b0c6a27816a97b927c59b90e9f6b2510290/userdata/ctr.log",
    "ConmonPidFile": "/run/user/1001/vfs-containers/b31e7a917e15f2ee87bc977b77866b0c6a27816a97b927c59b90e9f6b2510290/userdata/conmon.pid",
    "Name": "foobar",
    "RestartCount": 0,
    "Driver": "vfs",
    "MountLabel": "",
    "ProcessLabel": "",
    "AppArmorProfile": "",
    "EffectiveCaps": [
    "CAP_CHOWN",
    "CAP_DAC_OVERRIDE",
    "CAP_FSETID",
    "CAP_FOWNER",
    "CAP_MKNOD",
    "CAP_NET_RAW",
    "CAP_SETGID",
    "CAP_SETUID",
    "CAP_SETFCAP",
    "CAP_SETPCAP",
    "CAP_NET_BIND_SERVICE",
    "CAP_SYS_CHROOT",
    "CAP_KILL",
    "CAP_AUDIT_WRITE"
    ],
    "BoundingCaps": [
    "CAP_CHOWN",
    "CAP_DAC_OVERRIDE",
    "CAP_FSETID",
    "CAP_FOWNER",
    "CAP_MKNOD",
    "CAP_NET_RAW",
    "CAP_SETGID",
    "CAP_SETUID",
    "CAP_SETFCAP",
    "CAP_SETPCAP",
    "CAP_NET_BIND_SERVICE",
    "CAP_SYS_CHROOT",
    "CAP_KILL",
    "CAP_AUDIT_WRITE"
    ],
    "ExecIDs": [],
    "GraphDriver": {
    "Name": "vfs",
    "Data": null
    },
    "Mounts": [],
    "Dependencies": [],
    "NetworkSettings": {
    "Bridge": "",
    "SandboxID": "",
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "Ports": [],
    "SandboxKey": "",
    "SecondaryIPAddresses": null,
    "SecondaryIPv6Addresses": null,
    "EndpointID": "",
    "Gateway": "",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "IPAddress": "",
    "IPPrefixLen": 0,
    "IPv6Gateway": "",
    "MacAddress": ""
    },
    "ExitCommand": [
    "/usr/bin/podman",
    "--root",
    "/home/thomas/.local/share/containers/storage",
    "--runroot",
    "/run/user/1001",
    "--log-level",
    "error",
    "--cgroup-manager",
    "cgroupfs",
    "--tmpdir",
    "/run/user/1001/libpod/tmp",
    "--runtime",
    "runc",
    "--storage-driver",
    "vfs",
    "--events-backend",
    "journald",
    "container",
    "cleanup",
    "b31e7a917e15f2ee87bc977b77866b0c6a27816a97b927c59b90e9f6b2510290"
    ],
    "Namespace": "",
    "IsInfra": false,
    "Config": {
    "Hostname": "b31e7a917e15",
    "Domainname": "",
    "User": "",
    "AttachStdin": false,
    "AttachStdout": false,
    "AttachStderr": false,
    "Tty": true,
    "OpenStdin": true,
    "StdinOnce": false,
    "Env": [
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "TERM=xterm",
    "HOSTNAME=b31e7a917e15",
    "container=podman",
    "HOME=/root"
    ],
    "Cmd": [
    "/bin/sh"
    ],
    "Image": "docker.io/library/busybox:latest",
    "Volumes": null,
    "WorkingDir": "/",
    "Entrypoint": "",
    "OnBuild": null,
    "Labels": null,
    "Annotations": {
    "io.container.manager": "libpod",
    "io.kubernetes.cri-o.ContainerType": "sandbox",
    "io.kubernetes.cri-o.Created": "2019-12-09T21:54:11.057240122+01:00",
    "io.kubernetes.cri-o.TTY": "true",
    "io.podman.annotations.autoremove": "FALSE",
    "io.podman.annotations.init": "FALSE",
    "io.podman.annotations.privileged": "FALSE",
    "io.podman.annotations.publish-all": "FALSE",
    "org.opencontainers.image.stopSignal": "15"
    },
    "StopSignal": 15
    },
    "HostConfig": {
    "Binds": [],
    "ContainerIDFile": "",
    "LogConfig": {
    "Type": "k8s-file",
    "Config": null
    },
    "NetworkMode": "default",
    "PortBindings": {},
    "RestartPolicy": {
    "Name": "",
    "MaximumRetryCount": 0
    },
    "AutoRemove": false,
    "VolumeDriver": "",
    "VolumesFrom": null,
    "CapAdd": [],
    "CapDrop": [],
    "Dns": [],
    "DnsOptions": [],
    "DnsSearch": [],
    "ExtraHosts": [],
    "GroupAdd": [],
    "IpcMode": "",
    "Cgroup": "",
    "Cgroups": "default",
    "Links": null,
    "OomScoreAdj": 0,
    "PidMode": "",
    "Privileged": false,
    "PublishAllPorts": false,
    "ReadonlyRootfs": false,
    "SecurityOpt": [],
    "Tmpfs": {},
    "UTSMode": "",
    "UsernsMode": "",
    "ShmSize": 65536000,
    "Runtime": "oci",
    "ConsoleSize": [
    0,
    0
    ],
    "Isolation": "",
    "CpuShares": 0,
    "Memory": 0,
    "NanoCpus": 0,
    "CgroupParent": "",
    "BlkioWeight": 0,
    "BlkioWeightDevice": null,
    "BlkioDeviceReadBps": null,
    "BlkioDeviceWriteBps": null,
    "BlkioDeviceReadIOps": null,
    "BlkioDeviceWriteIOps": null,
    "CpuPeriod": 0,
    "CpuQuota": 0,
    "CpuRealtimePeriod": 0,
    "CpuRealtimeRuntime": 0,
    "CpusetCpus": "",
    "CpusetMems": "",
    "Devices": [],
    "DiskQuota": 0,
    "KernelMemory": 0,
    "MemoryReservation": 0,
    "MemorySwap": 0,
    "MemorySwappiness": 0,
    "OomKillDisable": false,
    "PidsLimit": 0,
    "Ulimits": [
    {
    "Name": "RLIMIT_NOFILE",
    "Soft": 1024,
    "Hard": 1024
    }
    ],
    "CpuCount": 0,
    "CpuPercent": 0,
    "IOMaximumIOps": 0,
    "IOMaximumBandwidth": 0
    }
    }
    ]

Describe the results you expected:

I know that it is possible to run podman rm to fix the issue, but I think this is a bug in podman, because something is not being cleaned up properly.
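
The manual workaround is simply to remove the stopped container before reusing its name, e.g. (using the container from the steps above):

$ podman rm foobar
$ podman run --name foobar -it busybox /bin/sh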

Additional information you deem important (e.g. issue happens only occasionally):

I reproduced this issue at work (Fedora 31 host) and at home (Ubuntu 19.04).


**Output of `podman version`:**

Version: 1.6.2
RemoteAPI Version: 1
Go Version: go1.10.4
OS/Arch: linux/amd64


**Output of `podman info --debug`:**

debug:
  compiler: gc
  git commit: ""
  go version: go1.10.4
  podman version: 1.6.2
host:
  BuildahVersion: 1.11.3
  CgroupVersion: v1
  Conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.1, commit: unknown'
  Distribution:
    distribution: ubuntu
    version: "19.10"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  MemFree: 7383703552
  MemTotal: 16505286656
  OCIRuntime:
    name: runc
    package: 'cri-o-runc: /usr/lib/cri-o-runc/sbin/runc'
    path: /usr/lib/cri-o-runc/sbin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 32626647040
  SwapTotal: 34118561792
  arch: amd64
  cpus: 4
  eventlogger: journald
  hostname: thomas-XPS-13-9360
  kernel: 5.3.0-23-generic
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: 'slirp4netns: /usr/bin/slirp4netns'
    Version: |-
      slirp4netns version 0.4.2
      commit: unknown
  uptime: 228h 54m 22.36s (Approximately 9.50 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /home/thomas/.config/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: vfs
  GraphOptions: {}
  GraphRoot: /home/thomas/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 5
  RunRoot: /run/user/1001
  VolumePath: /home/thomas/.local/share/containers/storage/volumes

**Package info (e.g. output of `rpm -q podman` or `apt list podman`):**

podman/now 1.6.2-1ubuntu19.04ppa1 amd64 [installed,local]

@openshift-ci-robot openshift-ci-robot added the kind/bug label Dec 9, 2019
@mheon
Member

mheon commented Dec 9, 2019

I'm not sure what you were doing with Docker, but this should never work on either Docker or Podman. Container names cannot be reused until the container in question has been removed.

When I run your reproducer on Docker, for example, I get a very similar error:
/usr/bin/docker-current: Error response from daemon: Conflict. The container name "/foobar" is already in use by container 7cb12aebca9c2b4fce18c3b8b1c7f25c2acff656e84eb64cb1e5904b2e31d258. You have to remove (or rename) that container to be able to reuse that name..

You must be doing something different if you're seeing different results - perhaps using the --rm flag to run? I would expect your example to work if run with podman run --rm --name foobar ...
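
For example, a minimal sketch of that variant (not verified against 1.6.2): with --rm, exiting the shell should auto-remove the container and free the name for reuse:

$ podman run --rm --name foobar -it busybox /bin/sh
/ # exit
$ podman run --rm --name foobar -it busybox /bin/sh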

@twmr
Author

twmr commented Dec 9, 2019

Thanks for your quick reply! You are right, we do indeed use the --rm flag. You are also right that my example works if podman is run with --rm. However, at work (I'm currently writing this issue report on my laptop at home) I have issues even with --rm. I'll try to find a minimal example tomorrow.

@twmr
Author

twmr commented Dec 9, 2019

Note that from time to time I get ERRO messages in the terminal where podman run --rm ... was called.

I have 3 terminal windows:

1: podman run --rm --name foobar -it busybox /bin/sh
2: podman exec -it foobar /bin/sh
3: podman stop foobar

When I run the command in terminal 3, the following message is output in terminal 1:

 # ERRO[0018] Error removing container d75fcafb6556d4beaf96fd33da4750865ae86e1b2cf487a101f436b0a45666de: container d75fcafb6556d4beaf96fd33da4750865ae86e1b2cf487a101f436b0a45666de does not exist in database: no such container 

In one test case, the following was output instead:

# ERRO[0008] Error removing container aa5e8171ffcf91dfbe802bb1305b277e3eadeb1ca9fb0826cf00c258faf64309: cannot remove container aa5e8171ffcf91dfbe802bb1305b277e3eadeb1ca9fb0826cf00c258faf64309 as it has active exec sessions: container state improper
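
For reference, a rough single-terminal approximation of the above (an unverified sketch; `sleep 300` and the backgrounded exec session stand in for the interactive shells):

$ podman run --rm -d --name foobar busybox sleep 300
$ podman exec foobar sleep 300 &
$ podman stop foobar
$ podman run --rm -d --name foobar busybox sleep 300   # may hit the "name is already in use" error on affected versions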

@twmr
Author

twmr commented Dec 10, 2019

I can reproduce the issue from my previous comment on my Fedora 31 host at work.

@mheon
Member

mheon commented Dec 10, 2019

Hmmm. I think I know what's going on here.

Self-assigning.

@mheon
Member

mheon commented Dec 12, 2019

#4692 fixes the issue on my machine.

mheon added a commit to mheon/libpod that referenced this issue Dec 12, 2019
We currently rely on exec sessions being removed from the state
by the Exec() API itself, on detecting the session stopping. This
is not a reliable method, though. The Podman frontend for exec
could be killed before the session ended, or another Podman
process could be holding the lock and prevent update (most
notable in `run --rm`, when a container with an active exec
session is stopped).

To resolve this, add a function to reap active exec sessions from
the state, and use it on cleanup (to clear sessions after the
container stops) and remove (to do the same when --rm is passed).
This is a bit more complicated than it ought to be because Kata
and company exist, and we can't guarantee the exec session has a
PID on the host, so we have to plumb this through to the OCI
runtime.

Fixes containers#4666

Signed-off-by: Matthew Heon <[email protected]>
@twmr
Author

twmr commented Dec 13, 2019

@mheon thanks so much for the quick fix. (I hope that it will be part of the 1.7 release, so that I can test it soon ;) )

rhatdan pushed a commit to rhatdan/podman that referenced this issue Dec 15, 2020
rhatdan pushed a commit to rhatdan/podman that referenced this issue Dec 15, 2020
mheon added a commit to mheon/libpod that referenced this issue Jan 5, 2021

MH: Zstream backport to v1.6.4-rhel branch for RHBZ1841485

Signed-off-by: Matthew Heon <[email protected]>
@github-actions github-actions bot added the locked - please file new issue/PR label Sep 23, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 23, 2023