podman hangs when attempting to mount the same volume on multiple mountpoints #8221

Closed
larsks opened this issue Nov 2, 2020 · 4 comments · Fixed by #8307
Assignees: mheon
Labels: In Progress, kind/bug, locked - please file new issue/PR

Comments

@larsks
Contributor

larsks commented Nov 2, 2020

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I am trying to mount the same named volume on multiple points within a container, like this:

podman run -it --rm -v testvol:/mnt1 -v testvol:/mnt2 alpine sh

This causes podman to hang and the container never starts.

Steps to reproduce the issue:

  1. Run the command line provided above

  2. Watch podman get stuck

Describe the results you received:

Podman got stuck.

Describe the results you expected:

I expected the named volume to mount successfully on /mnt1 and /mnt2 inside the container.

Output of podman version:

Version:      2.2.0-dev
API Version:  2.0.0
Go Version:   go1.13.15
Built:        Tue Oct 27 09:24:53 2020
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.16.4
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: Unknown
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.7, commit: 712d9f7cab967fda23547f49a01b44dfbbd13d57'
  cpus: 8
  distribution:
    distribution: fedora
    version: "31"
  eventLogger: journald
  hostname: madhatter
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.8.16-100.fc31.x86_64
  linkmode: dynamic
  memFree: 23024332800
  memTotal: 33572761600
  ociRuntime:
    name: crun
    package: crun-0.15-5.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.15
      commit: 56ca95e61639510c7dbd39ff512f80f626404969
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.4-1.fc31.x86_64
    version: |-
      slirp4netns version 1.1.4
      commit: b66ffa8e262507e37fca689822d23430f3357fe8
      libslirp: 4.1.0
      SLIRP_CONFIG_VERSION_MAX: 1
  swapFree: 16890458112
  swapTotal: 16890458112
  uptime: 16m 10.85s
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/lars/.config/containers/storage.conf
  containerStore:
    number: 8
    paused: 0
    running: 3
    stopped: 5
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.2.0-1.fc31.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 1.1.0
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  graphRoot: /home/lars/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 783
  runRoot: /run/user/1000/containers
  volumePath: /home/lars/.local/share/containers/storage/volumes
version:
  APIVersion: 2.0.0
  Built: 1603805093
  BuiltTime: Tue Oct 27 09:24:53 2020
  GitCommit: ""
  GoVersion: go1.13.15
  OsArch: linux/amd64
  Version: 2.2.0-dev

Package info (e.g. output of rpm -q podman or apt list podman):

podman-2.2.0-0.39.dev.git287edd4.fc31.x86_64
@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Nov 2, 2020
@edsantiago
Member

Confirmed on master. Debug log and stack trace:

$ ./bin/podman --log-level=debug  run -it --rm -v testvol:/vol1 -v testvol:/vol2 alpine sh
...
DEBU[0000] Creating new volume testvol for container
DEBU[0000] Validating options for local driver

[here is where it hangs. I press ^\ to kill it, and get]

^\SIGQUIT: quit
PC=0x471b01 m=0 sigcode=128

goroutine 0 [idle]:
runtime.futex(0x2a9a0a8, 0x80, 0x0, 0x0, 0x0, 0xc000000000, 0x1f0dbb8, 0x1f0dbb8, 0x7ffc45641d18, 0x4156df, ...)
        /usr/lib/golang/src/runtime/sys_linux_amd64.s:567 +0x21
runtime.futexsleep(0x2a9a0a8, 0x7ffc00000000, 0xffffffffffffffff)
        /usr/lib/golang/src/runtime/os_linux.go:45 +0x46
runtime.notesleep(0x2a9a0a8)
        /usr/lib/golang/src/runtime/lock_futex.go:151 +0x9f
runtime.stoplockedm()
        /usr/lib/golang/src/runtime/proc.go:2010 +0x88
runtime.schedule()
        /usr/lib/golang/src/runtime/proc.go:2493 +0x4a6
runtime.park_m(0xc000336300)
        /usr/lib/golang/src/runtime/proc.go:2729 +0x9d
runtime.mcall(0x16a9140)
        /usr/lib/golang/src/runtime/asm_amd64.s:318 +0x5b

goroutine 1 [syscall, 2 minutes, locked to thread]:
github.com/containers/podman/v2/libpod/lock/shm._Cfunc_lock_semaphore(0x7f09b8050000, 0x3f7, 0xc000000000)
        /dev/shm/go-build222687428/b524/_cgo_gotypes.go:197 +0x4d
github.com/containers/podman/v2/libpod/lock/shm.(*SHMLocks).LockSemaphore(0xc000489690, 0xc0000003f7, 0xc000795158, 0xc000000180)
        libpod/lock/shm/shm_lock.go:214 +0x63
github.com/containers/podman/v2/libpod/lock.(*SHMLock).Lock(0xc0005d0f60)
        libpod/lock/shm_lock_manager_linux.go:113 +0x38
github.com/containers/podman/v2/libpod.(*Runtime).setupContainer(0xc00054a9c0, 0x1db8280, 0xc0006d57d0, 0xc0001141e0, 0x0, 0x0, 0x0)
        libpod/runtime_ctr.go:350 +0xc82
github.com/containers/podman/v2/libpod.(*Runtime).newContainer(0xc00054a9c0, 0x1db8280, 0xc0006d57d0, 0xc000400c80, 0xc000400c00, 0x10, 0x10, 0x0, 0x0, 0x0)
        libpod/runtime_ctr.go:144 +0x272
github.com/containers/podman/v2/libpod.(*Runtime).NewContainer(0xc00054a9c0, 0x1db8280, 0xc0006d57d0, 0xc000400c80, 0xc000400c00, 0x10, 0x10, 0x0, 0x0, 0x0)
        libpod/runtime_ctr.go:46 +0xeb
github.com/containers/podman/v2/pkg/specgen/generate.MakeContainer(0x1db8280, 0xc0006d57d0, 0xc00054a9c0, 0xc0004c6b00, 0x2ac6778, 0x0, 0x0)
        pkg/specgen/generate/container_create.go:140 +0xb16
github.com/containers/podman/v2/pkg/domain/infra/abi.(*ContainerEngine).ContainerRun(0xc00041c400, 0x1db8280, 0xc0006d57d0, 0x0, 0x0, 0x0, 0x1a6f042, 0xd, 0xc000010020, 0xc000010010, ...)
        pkg/domain/infra/abi/containers.go:845 +0x19e
github.com/containers/podman/v2/cmd/podman/containers.run(0x2a0b9c0, 0xc00052ad80, 0x2, 0x8, 0x0, 0x0)
        cmd/podman/containers/run.go:181 +0x566
github.com/spf13/cobra.(*Command).execute(0x2a0b9c0, 0xc00003c0e0, 0x8, 0x8, 0x2a0b9c0, 0xc00003c0e0)
        vendor/github.com/spf13/cobra/command.go:850 +0x453
github.com/spf13/cobra.(*Command).ExecuteC(0x2a1ef80, 0xc000042198, 0x18489e0, 0x2ac6778)
        vendor/github.com/spf13/cobra/command.go:958 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
        vendor/github.com/spf13/cobra/command.go:895
github.com/spf13/cobra.(*Command).ExecuteContext(...)
        vendor/github.com/spf13/cobra/command.go:888
main.Execute()
        cmd/podman/root.go:88 +0xec
main.main()
        cmd/podman/main.go:78 +0x18c

goroutine 38 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x2a985a0)
        vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
        vendor/k8s.io/klog/klog.go:411 +0xd6

goroutine 37 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x2a98780)
        vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
        vendor/k8s.io/klog/v2/klog.go:416 +0xd6

goroutine 16 [syscall, 2 minutes]:
os/signal.signal_recv(0x0)
        /usr/lib/golang/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
        /usr/lib/golang/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.Notify.func1
        /usr/lib/golang/src/os/signal/signal.go:127 +0x44

goroutine 67 [select, 2 minutes]:
github.com/containers/podman/v2/libpod/shutdown.Start.func1()
        libpod/shutdown/handler.go:39 +0xcb
created by github.com/containers/podman/v2/libpod/shutdown.Start
        libpod/shutdown/handler.go:38 +0x114

rax    0xca
rbx    0x2a99f60
rcx    0x471b03
rdx    0x0
rdi    0x2a9a0a8
rsi    0x80
rbp    0x7ffc45641ce0
rsp    0x7ffc45641c98
r8     0x0
r9     0x0
r10    0x0
r11    0x286
r12    0x0
r13    0x1
r14    0xc0006a6180
r15    0x0
rip    0x471b01
rflags 0x286
cs     0x33
fs     0x0
gs     0x0

@mheon
Member

mheon commented Nov 2, 2020

I'll take this.

@mheon mheon self-assigned this Nov 2, 2020
@rhatdan rhatdan added the In Progress This issue is actively being worked by the assignee, please do not work on this at this time. label Nov 3, 2020
@mheon
Member

mheon commented Nov 11, 2020

This one's pretty bad - we're leaking locks. I think container creation is deadlocking somewhere - looking further.
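
To make the failure mode concrete, here is a minimal, self-contained sketch (a plain sync.Mutex stands in for Podman's per-volume SHM lock; this is not libpod code): when the same named volume shows up once per mount entry, the second acquisition of its lock can never succeed.

package main

import "sync"

func main() {
    // Stand-in for the per-volume lock (assumption: the real SHM lock is
    // likewise non-reentrant, as the stack trace above suggests).
    var volumeLock sync.Mutex

    volumeLock.Lock() // first mount entry for "testvol": lock acquired
    volumeLock.Lock() // second mount entry for "testvol": blocks forever
    // In this toy program the Go runtime detects that every goroutine is
    // blocked and aborts; in a long-running process with other goroutines,
    // such as podman, the creating goroutine simply hangs, as seen above.
}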

@mheon
Member

mheon commented Nov 11, 2020

Note to self: I should do a podman system locks or similar to list off in-use locks, what is using them, and identify locks that are marked as in-use but not assigned to any object.

mheon added a commit to mheon/libpod that referenced this issue Nov 11, 2020
When making containers, we want to lock all named volumes we are
adding to the container, to ensure they aren't removed from under
us while we are working. Unfortunately, this code did not account
for a container having the same volume mounted in multiple places,
so it could deadlock. Add a map to ensure that we don't lock the
same name more than once to resolve this.

Fixes containers#8221

Signed-off-by: Matthew Heon <[email protected]>
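
A minimal sketch of the approach the commit message describes (illustrative only; not the literal patch from the referenced PR): deduplicate volume names with a map so a volume mounted at several paths is locked exactly once. The namedVolume type and lockVolumes helper below are hypothetical stand-ins, not libpod APIs.

package main

import (
    "fmt"
    "sync"
)

// namedVolume is a hypothetical stand-in for libpod's named-volume type.
type namedVolume struct {
    name string
    lock sync.Mutex
}

// lockVolumes locks each distinct volume exactly once, even when the same
// volume backs several mountpoints, and returns a function that unlocks them.
func lockVolumes(mounts []*namedVolume) (unlock func()) {
    locked := make(map[string]*namedVolume)
    for _, v := range mounts {
        if _, seen := locked[v.name]; seen {
            continue // same volume, later mountpoint: already locked
        }
        v.lock.Lock()
        locked[v.name] = v
    }
    return func() {
        for _, v := range locked {
            v.lock.Unlock()
        }
    }
}

func main() {
    testvol := &namedVolume{name: "testvol"}
    // The same volume appears twice, as with -v testvol:/mnt1 -v testvol:/mnt2.
    unlock := lockVolumes([]*namedVolume{testvol, testvol})
    fmt.Println("both mountpoints handled without deadlocking")
    unlock()
}

With the map in place, the second -v testvol:/mnt2 entry is skipped at lock time instead of blocking on a lock the same goroutine already holds.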
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 22, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023