Can't reuse names of stopped containers #4666
I'm not sure what you were doing with Docker, but this should never work on either Docker or Podman. Container names cannot be reused until the container in question has been removed. When I run your reproducer on Docker, for example, I get a very similar error. You must be doing something different if you're seeing different results - perhaps using the `--rm` flag?
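To illustrate the rule (a minimal sketch; the container and image names here are hypothetical, not from the report):

```sh
# A name stays reserved while the container record exists, even when stopped.
podman run -d --name demo docker.io/library/alpine sleep 600
podman stop demo
podman run -d --name demo docker.io/library/alpine sleep 600   # fails: name "demo" is still in use
podman rm demo                                                 # removing the container frees the name
podman run -d --name demo docker.io/library/alpine sleep 600   # now succeeds
```

With `docker run --rm` the container is removed automatically on exit, which can make the name appear to be immediately reusable.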
Thx for your quick reply! You are right, we indeed use the `--rm` flag.
Note that I still get the error. I have 3 terminal windows; when I run the command in terminal 3, the following message is output in terminal 1. In one test case a further message was output.
I can reproduce the issue from my previous comment on my Fedora 31 host at work.
Hmmm. I think I know what's going on here. Self-assigning.
#4692 fixes the issue on my machine.
We currently rely on exec sessions being removed from the state by the Exec() API itself, on detecting the session stopping. This is not a reliable method, though. The Podman frontend for exec could be killed before the session ended, or another Podman process could be holding the lock and prevent update (most notable in `run --rm`, when a container with an active exec session is stopped).

To resolve this, add a function to reap active exec sessions from the state, and use it on cleanup (to clear sessions after the container stops) and remove (to do the same when `--rm` is passed). This is a bit more complicated than it ought to be because Kata and company exist, and we can't guarantee the exec session has a PID on the host, so we have to plumb this through to the OCI runtime.

Fixes containers#4666

Signed-off-by: Matthew Heon <[email protected]>
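In CLI terms, the two paths the commit adds reaping to correspond roughly to the commands below (hypothetical container name; `podman container cleanup` is the command conmon normally invokes when a container exits):

```sh
# After this change, both paths also reap leftover exec sessions from the
# state, so a stopped container can be removed and its name freed even if
# an exec session was still registered when it stopped.
podman container cleanup demo   # normally invoked automatically on container exit
podman rm demo                  # explicit removal (and the --rm path) also reaps stale sessions
```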
@mheon thx so much for the quick fix. (I hope that it will be part of the 1.7 release, so that I can soon test it ;) )
MH: Zstream backport of the above commit to the v1.6.4-rhel branch for RHBZ1841485.

Signed-off-by: Matthew Heon <[email protected]>
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
For our docker-based container setup at work we use hardcoded container names (in config files), which are needed for `docker exec`. From time to time we have to restart the containers, and those hardcoded container names are reused. However, `podman` has problems when a container is started using `podman run` with the name of a container that was stopped before. (We don't have this problem with docker-ce.)

Steps to reproduce the issue:
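A minimal sketch of the kind of sequence described, assuming the hardcoded names and `--rm`/`exec` usage mentioned above (all names are hypothetical):

```sh
# Terminal 1: start a container with a fixed, reused name
podman run --rm --name myservice docker.io/library/alpine sleep 600

# Terminal 2: open an exec session into it (per the hardcoded-name setup)
podman exec -it myservice sh

# Terminal 3: stop the container while the exec session is still active,
# then try to start it again under the same name
podman stop myservice
podman run --rm --name myservice docker.io/library/alpine sleep 600
# -> before the fix, this reports that the name is already in use
```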
Describe the results you expected:
I know that it is possible to run `podman rm` to fix the issue, but I think that this is a bug in podman, because something is not properly cleaned up.
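The workaround, with the same hypothetical name as above:

```sh
# Explicitly removing the stale container record frees the name again.
podman rm myservice
podman run --rm --name myservice docker.io/library/alpine sleep 600
```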
Additional information you deem important (e.g. issue happens only occasionally):
I reproduced this issue at work (fedora 31 host) and at home (ubuntu 19.04).
Output of `podman version`:

```
Version:            1.6.2
RemoteAPI Version:  1
Go Version:         go1.10.4
OS/Arch:            linux/amd64
```
Output of `podman info --debug`:

```yaml
debug:
  compiler: gc
  git commit: ""
  go version: go1.10.4
  podman version: 1.6.2
host:
  BuildahVersion: 1.11.3
  CgroupVersion: v1
  Conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.1, commit: unknown'
  Distribution:
    distribution: ubuntu
    version: "19.10"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  MemFree: 7383703552
  MemTotal: 16505286656
  OCIRuntime:
    name: runc
    package: 'cri-o-runc: /usr/lib/cri-o-runc/sbin/runc'
    path: /usr/lib/cri-o-runc/sbin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 32626647040
  SwapTotal: 34118561792
  arch: amd64
  cpus: 4
  eventlogger: journald
  hostname: thomas-XPS-13-9360
  kernel: 5.3.0-23-generic
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: 'slirp4netns: /usr/bin/slirp4netns'
    Version: |-
      slirp4netns version 0.4.2
      commit: unknown
  uptime: 228h 54m 22.36s (Approximately 9.50 days)
registries:
  blocked: null
  insecure: null
  search:
store:
  ConfigFile: /home/thomas/.config/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: vfs
  GraphOptions: {}
  GraphRoot: /home/thomas/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 5
  RunRoot: /run/user/1001
  VolumePath: /home/thomas/.local/share/containers/storage/volumes
```
Package info (output of `apt list podman`):

```
podman/now 1.6.2-1~ubuntu19.04~ppa1 amd64 [installed,local]
```