Podman run fails to run when using named volume #3952
Can you add

Thanks for the quick response.

This might also be helpful:

This also, sorry...
Alright, we are definitely calling the create volume code. The container is being created and mounted successfully, but... wait, why are we looking in the specific container's

That from your

We are definitely using a bad volume path. Two issues here:

Eeek. Our static dir is also being set to a relative path. I'm honestly amazed things aren't exploding much earlier.

/etc/containers/libpod.conf. No

My suspicion here is that we have a

@rhatdan @giuseppe How do you think we should handle this case?
We should hard code the defaults if these are set to "". |
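A minimal Go sketch of that fallback, assuming a hypothetical helper and an illustrative default directory (neither is libpod's actual code):

```go
package main

import "fmt"

// Illustrative assumption, not libpod's real compiled-in default.
const defaultStaticDir = "/var/lib/containers/storage/libpod"

// fallbackIfEmpty returns a hard-coded default when the configured
// value is the empty string, which is what c/storage reports when a
// path is explicitly set to "" in its config.
func fallbackIfEmpty(configured, def string) string {
	if configured == "" {
		return def
	}
	return configured
}

func main() {
	fmt.Println(fallbackIfEmpty("", defaultStaticDir))
	fmt.Println(fallbackIfEmpty("/custom/static", defaultStaticDir))
}
```

The point is simply that the empty string never propagates into path joins, where it would silently produce relative paths.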
Ack. Easy enough.

FYI, this issue also applies to 1.5.1.

Fix pending in #3954 - however, it will only apply to new installs. It changes the default locations of several core paths and could break existing installations if we applied it unconditionally. You'd have to delete

You were right about storage.conf, here's the pastebin. I forgot about that until you mentioned it @mheon; that was done while troubleshooting a different issue for Buildah.
If c/storage paths are explicitly set to "" (the empty string) it will use compiled-in defaults. However, it won't tell us this via `storage.GetDefaultStoreOptions()` - we just get the empty string (which can put our defaults, some of which are relative to c/storage, in a bad spot). Hardcode a sane default for cases like this. Furthermore, add some sanity checks to paths, to ensure we don't use relative paths for core parts of libpod. Fixes containers#3952 Signed-off-by: Matthew Heon <[email protected]>
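The sanity checks the commit message describes could look roughly like this Go sketch; `validateAbsolute` is a hypothetical helper for illustration, not the actual libpod function:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// validateAbsolute rejects relative paths for core directories, so a
// bad value inherited from config can't be joined onto the wrong
// working directory later. Hypothetical helper, for illustration only.
func validateAbsolute(name, path string) error {
	if !filepath.IsAbs(path) {
		return fmt.Errorf("%s %q must be an absolute path", name, path)
	}
	return nil
}

func main() {
	// A relative path (e.g. from a bad config) is rejected early.
	fmt.Println(validateAbsolute("volume path", "storage/volumes"))
	// An absolute path passes.
	fmt.Println(validateAbsolute("volume path", "/var/lib/containers/storage/volumes"))
}
```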
Appears that the volume path was indeed the issue. I re-installed from distro packages with all configuration files removed, and after re-creating policy.json and registries.conf from defaults, this is working. I'm closing this issue as it appears resolved to me. If you're troubleshooting and find this issue, ensure that the
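For anyone landing here, the setting to check lives in /etc/containers/libpod.conf; a sketch of a sane, absolute value follows (the exact directory is an assumption, check your distro's defaults):

```toml
# /etc/containers/libpod.conf -- the directory below is illustrative
volume_path = "/var/lib/containers/storage/volumes"
```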
Great and useful summation @SwitchedToGitlab, thanks for adding it.
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Podman runtime error when attempting to run a container with a named volume. It does not seem to matter whether `podman run` creates the volume or `podman volume create` does, nor whether the container is a custom build or a remote image.

Steps to reproduce the issue:
1. Run any image, specifying a named volume, e.g. `podman run -it --rm -v test:/test hello-world`
2. Get error output.
3. Be sad.
Describe the results you received:
Describe the results you expected:
The container mounts with the persistent named volume attached at the specified mount point; world domination ensues shortly thereafter...
Additional information you deem important (e.g. issue happens only occasionally):
Consistent behavior every time on this machine.
Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Additional environment details (AWS, VirtualBox, physical, etc.):
Lenovo Thinkpad Flex 15, bare metal install
Thanks in advance!