podman hangs when attempting to mount the same volume on multiple mountpoints #8221
Labels: In Progress, kind/bug, locked - please file new issue/PR
Comments
openshift-ci-robot added the kind/bug label on Nov 2, 2020
Confirmed on master. Debug log and stack trace:
$ ./bin/podman --log-level=debug run -it --rm -v testvol:/vol1 -v testvol:/vol2 alpine sh
...
DEBU[0000] Creating new volume testvol for container
DEBU[0000] Validating options for local driver
[here is where it hangs. I press ^\ to kill it, and get]
^\SIGQUIT: quit
PC=0x471b01 m=0 sigcode=128
goroutine 0 [idle]:
runtime.futex(0x2a9a0a8, 0x80, 0x0, 0x0, 0x0, 0xc000000000, 0x1f0dbb8, 0x1f0dbb8, 0x7ffc45641d18, 0x4156df, ...)
/usr/lib/golang/src/runtime/sys_linux_amd64.s:567 +0x21
runtime.futexsleep(0x2a9a0a8, 0x7ffc00000000, 0xffffffffffffffff)
/usr/lib/golang/src/runtime/os_linux.go:45 +0x46
runtime.notesleep(0x2a9a0a8)
/usr/lib/golang/src/runtime/lock_futex.go:151 +0x9f
runtime.stoplockedm()
/usr/lib/golang/src/runtime/proc.go:2010 +0x88
runtime.schedule()
/usr/lib/golang/src/runtime/proc.go:2493 +0x4a6
runtime.park_m(0xc000336300)
/usr/lib/golang/src/runtime/proc.go:2729 +0x9d
runtime.mcall(0x16a9140)
/usr/lib/golang/src/runtime/asm_amd64.s:318 +0x5b
goroutine 1 [syscall, 2 minutes, locked to thread]:
github.com/containers/podman/v2/libpod/lock/shm._Cfunc_lock_semaphore(0x7f09b8050000, 0x3f7, 0xc000000000)
/dev/shm/go-build222687428/b524/_cgo_gotypes.go:197 +0x4d
github.com/containers/podman/v2/libpod/lock/shm.(*SHMLocks).LockSemaphore(0xc000489690, 0xc0000003f7, 0xc000795158, 0xc000000180)
libpod/lock/shm/shm_lock.go:214 +0x63
github.com/containers/podman/v2/libpod/lock.(*SHMLock).Lock(0xc0005d0f60)
libpod/lock/shm_lock_manager_linux.go:113 +0x38
github.com/containers/podman/v2/libpod.(*Runtime).setupContainer(0xc00054a9c0, 0x1db8280, 0xc0006d57d0, 0xc0001141e0, 0x0, 0x0, 0x0)
libpod/runtime_ctr.go:350 +0xc82
github.com/containers/podman/v2/libpod.(*Runtime).newContainer(0xc00054a9c0, 0x1db8280, 0xc0006d57d0, 0xc000400c80, 0xc000400c00, 0x10, 0x10, 0x0, 0x0, 0x0)
libpod/runtime_ctr.go:144 +0x272
github.com/containers/podman/v2/libpod.(*Runtime).NewContainer(0xc00054a9c0, 0x1db8280, 0xc0006d57d0, 0xc000400c80, 0xc000400c00, 0x10, 0x10, 0x0, 0x0, 0x0)
libpod/runtime_ctr.go:46 +0xeb
github.com/containers/podman/v2/pkg/specgen/generate.MakeContainer(0x1db8280, 0xc0006d57d0, 0xc00054a9c0, 0xc0004c6b00, 0x2ac6778, 0x0, 0x0)
pkg/specgen/generate/container_create.go:140 +0xb16
github.com/containers/podman/v2/pkg/domain/infra/abi.(*ContainerEngine).ContainerRun(0xc00041c400, 0x1db8280, 0xc0006d57d0, 0x0, 0x0, 0x0, 0x1a6f042, 0xd, 0xc000010020, 0xc000010010, ...)
pkg/domain/infra/abi/containers.go:845 +0x19e
github.com/containers/podman/v2/cmd/podman/containers.run(0x2a0b9c0, 0xc00052ad80, 0x2, 0x8, 0x0, 0x0)
cmd/podman/containers/run.go:181 +0x566
github.com/spf13/cobra.(*Command).execute(0x2a0b9c0, 0xc00003c0e0, 0x8, 0x8, 0x2a0b9c0, 0xc00003c0e0)
vendor/github.com/spf13/cobra/command.go:850 +0x453
github.com/spf13/cobra.(*Command).ExecuteC(0x2a1ef80, 0xc000042198, 0x18489e0, 0x2ac6778)
vendor/github.com/spf13/cobra/command.go:958 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
vendor/github.com/spf13/cobra/command.go:895
github.com/spf13/cobra.(*Command).ExecuteContext(...)
vendor/github.com/spf13/cobra/command.go:888
main.Execute()
cmd/podman/root.go:88 +0xec
main.main()
cmd/podman/main.go:78 +0x18c
goroutine 38 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x2a985a0)
vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
vendor/k8s.io/klog/klog.go:411 +0xd6
goroutine 37 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x2a98780)
vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
vendor/k8s.io/klog/v2/klog.go:416 +0xd6
goroutine 16 [syscall, 2 minutes]:
os/signal.signal_recv(0x0)
/usr/lib/golang/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
/usr/lib/golang/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.Notify.func1
/usr/lib/golang/src/os/signal/signal.go:127 +0x44
goroutine 67 [select, 2 minutes]:
github.com/containers/podman/v2/libpod/shutdown.Start.func1()
libpod/shutdown/handler.go:39 +0xcb
created by github.com/containers/podman/v2/libpod/shutdown.Start
libpod/shutdown/handler.go:38 +0x114
rax 0xca
rbx 0x2a99f60
rcx 0x471b03
rdx 0x0
rdi 0x2a9a0a8
rsi 0x80
rbp 0x7ffc45641ce0
rsp 0x7ffc45641c98
r8 0x0
r9 0x0
r10 0x0
r11 0x286
r12 0x0
r13 0x1
r14 0xc0006a6180
r15 0x0
rip 0x471b01
rflags 0x286
cs 0x33
fs 0x0
gs 0x0
I'll take this.
rhatdan added the In Progress label on Nov 3, 2020
This one's pretty bad - we're leaking locks. I think container creation is deadlocking somewhere - looking further.
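For illustration only (this is not libpod code): the per-volume lock is non-reentrant, so a second acquisition from the same goroutine blocks forever, which is the hang the stack trace above shows inside LockSemaphore. A minimal Go sketch, with sync.Mutex standing in for the SHM semaphore and TryLock (Go 1.18+) used so the example prints instead of hanging:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Stand-in for the per-volume SHM lock; like the SHM semaphore in the
	// stack trace above, a sync.Mutex is not reentrant.
	var volumeLock sync.Mutex

	volumeLock.Lock() // first mount of the volume acquires the lock
	fmt.Println("first acquisition succeeded")

	// A second plain Lock() from the same goroutine would block forever;
	// TryLock just makes the failure visible instead of hanging this example.
	if !volumeLock.TryLock() {
		fmt.Println("second acquisition of the same lock would block forever")
	}

	volumeLock.Unlock()
}
```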
Note to self: I should do a
mheon added a commit to mheon/libpod that referenced this issue on Nov 11, 2020:
When making containers, we want to lock all named volumes we are adding to the container, to ensure they aren't removed from under us while we are working. Unfortunately, this code did not account for a container having the same volume mounted in multiple places, so it could deadlock. Add a map to ensure that we don't lock the same name more than once to resolve this.
Fixes containers#8221
Signed-off-by: Matthew Heon <[email protected]>
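A minimal Go sketch of the map-based deduplication the commit message describes (not the actual libpod patch; namedVolume and lockVolumesOnce are simplified stand-ins, and a sync.Mutex replaces the per-volume SHM semaphore):

```go
package main

import (
	"fmt"
	"sync"
)

// namedVolume is a simplified stand-in for libpod's volume type; the real
// code locks a per-volume SHM semaphore rather than a sync.Mutex.
type namedVolume struct {
	name string
	mu   sync.Mutex
}

// lockVolumesOnce locks each distinct volume exactly once, even when the
// same volume backs several mount points, mirroring the map-based check
// described in the commit message.
func lockVolumesOnce(mounts []*namedVolume) []*namedVolume {
	seen := make(map[string]bool)
	var locked []*namedVolume
	for _, v := range mounts {
		if seen[v.name] {
			continue // same volume mounted at another path; already locked
		}
		v.mu.Lock()
		seen[v.name] = true
		locked = append(locked, v)
	}
	return locked
}

func main() {
	vol := &namedVolume{name: "testvol"}
	// testvol mounted at /vol1 and /vol2 appears twice in the mount list.
	locked := lockVolumesOnce([]*namedVolume{vol, vol})
	fmt.Printf("locked %d volume(s)\n", len(locked)) // prints: locked 1 volume(s)
	for _, v := range locked {
		v.mu.Unlock()
	}
}
```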
github-actions bot added the locked - please file new issue/PR label on Sep 22, 2023
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I am trying to mount the same named volume on multiple points within a container, like this:
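(The command-line example is not reproduced above; the following is a plausible reconstruction based on the mount points described below, with myvol as a placeholder volume name.)

```console
# hypothetical reconstruction; "myvol" is a placeholder volume name
$ podman run -it --rm -v myvol:/mnt1 -v myvol:/mnt2 alpine sh
```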
This causes podman to hang and the container never starts.
Steps to reproduce the issue:
1. Run the command line provided above
2. Watch podman get stuck
Describe the results you received:
Podman got stuck.
Describe the results you expected:
I expected the named volume to mount successfully on /mnt1 and /mnt2 inside the container.
Output of podman version:
Output of podman info --debug:
Package info (e.g. output of rpm -q podman or apt list podman):