
lchown error on podman pull #2542

Closed
KamiQuasi opened this issue Mar 5, 2019 · 39 comments

Labels
kind/bug · locked - please file new issue/PR · rootless

Comments

@KamiQuasi

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description
After logging in to our locally hosted repository and attempting to podman pull our latest image, I received a couple of errors. One, related to transport, was fixed by adding docker:// to the call; the error below is still present (contact me for the URL to the image):

ERRO[0011] Error while applying layer: ApplyLayer exit status 1 stdout:  stderr: lchown /var/www/drupal/web/config/active: invalid argument 
Failed
(0x183b040,0xc00052b600)

Steps to reproduce the issue:

  1. podman login -p {SECRET KEY} -u unused {IMAGE REPO}

  2. podman pull docker://{IMAGE REPO}

  3. Error

Describe the results you received:
Error instead of an image

Describe the results you expected:
Image to be used

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 1.2.0-dev

Output of podman info --debug:

  MemFree: 511528960
  MemTotal: 5195935744
  OCIRuntime:
    package: Unknown
    path: /usr/local/sbin/runc
    version: |-
      runc version 1.0.0-rc6+dev
      commit: f79e211b1d5763d25fb8debda70a764ca86a0f23
      spec: 1.0.1-dev
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  hostname: penguin
  kernel: 4.19.4-02480-gd44d301822f0
  os: linux
  rootless: true
  uptime: 136h 10m 42.4s (Approximately 5.67 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - {IMAGE REPO}
store:
  ConfigFile: /home/ldary/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions: null
  GraphRoot: /home/ldary/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 0
  RunRoot: /run/user/1000
  VolumePath: /home/ldary/.local/share/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.):
This is a Debian sandbox on a Pixelbook. We found that one error, which also appeared when run without the transport, was removed by adding the docker:// prefix. @vbatts also had me run findmnt -T /home/ldary/.local/share/containers/storage, with this output:

/      /dev/vdb[/lxd/storage-pools/default/containers/penguin/rootfs] btrfs  rw,relatime,discard,space_cache,user_subvol_rm_allowed,subvolid=266,subvol=/lxd/storage-pools/default/containers/penguin/rootfs
@openshift-ci-robot added the kind/bug label Mar 5, 2019
@mheon (Member) commented Mar 5, 2019

@giuseppe PTAL

@mheon added the rootless label Mar 5, 2019
@giuseppe (Member) commented Mar 5, 2019

Yes, probably there are not enough IDs mapped into the namespace (we require 65k) and the image is using some higher ID. What is {IMAGE REPO}?
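
For anyone hitting this, a quick way to see how many IDs are mapped into the rootless namespace (a minimal sketch; podman unshare is available in newer Podman releases):

$ podman unshare cat /proc/self/uid_map
# a single line like "0 1000 1" means only your own UID is mapped;
# a second line such as "1 100000 65536" means a 65k subordinate range is available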

@giuseppe (Member) commented Mar 5, 2019

If you cannot share the image, can you please create a container as the root user from that image and run this command:

find / -xdev -printf "%U:%G\n" | sort | uniq

What is the output?
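
For context, that command lists every unique UID:GID pair that owns files in the image; hypothetical output might look like:

0:0
33:33
1001410000:0

Any ID at or beyond the end of the rootless map (65536 IDs by default) cannot be represented in the namespace, so the lchown performed during layer extraction fails with "invalid argument".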

@KamiQuasi (Author)

@giuseppe I wasn't able to create it with root either. I'll email you the internal image repo details.

@KamiQuasi (Author)

@giuseppe here is the content of the Dockerfile for the image:

# This is a data container so keep the image as small as possible
FROM alpine:3.4

# Make the directory structure that will be exposed as volumes by this data container
RUN mkdir -p /var/www/drupal/web/sites/default/files \
    /var/www/drupal/web/config/active \
    /docker-entrypoint-initdb.d \
    /drupal-data

COPY drupal-db.sql.gz /docker-entrypoint-initdb.d
ADD drupal-filesystem.tar.gz /drupal-data
RUN rm -rf /drupal-data/files/css /drupal-data/files/js /drupal-data/files/php

RUN cp -r /drupal-data/config/lightning/* /var/www/drupal/web/config/active
RUN cp -r /drupal-data/files/* /var/www/drupal/web/sites/default/files
CMD true

@giuseppe (Member) commented Mar 5, 2019

What file from the host is copied to '/var/www/drupal/web/config/active'? Can you stat it?

@giuseppe (Member) commented Mar 5, 2019

do you get exactly the same error when running as root?

@KamiQuasi (Author)

@giuseppe same error when running as root, correct

@giuseppe (Member) commented Mar 5, 2019

@KamiQuasi can I get access to the image?

@KamiQuasi (Author)

@giuseppe let me see if I can find out who has that permission; it shouldn't be a problem though.

@KamiQuasi (Author)

@giuseppe I believe you should have access to the image now at the URL I sent in email

@giuseppe (Member) commented Mar 6, 2019

I've not received any email. Did you send to [email protected]?

@KamiQuasi (Author)

@giuseppe Subject is "Github Issue 2542"; I re-sent it to make sure.

@giuseppe (Member) commented Mar 6, 2019

I can confirm the issue is that there are not enough IDs in the namespace; it works for me as root:

$ sudo podman run --rm -ti drupal-data ls -ln /var/www/drupal/web/config
total 136
drwxrwx---    2 1001410000 0           135168 Feb 28 07:25 active

Could you change the image to use smaller IDs?

@KamiQuasi (Author)

@giuseppe sorry for my ignorance, but I don't actually know how to do that. Is it something I can modify in the Dockerfile?

@giuseppe (Member) commented Mar 7, 2019

@KamiQuasi you can chown the files so they do not have that UID.

What user is going to read them? Are they owned by root?
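
A hedged sketch of what that fix could look like in the Dockerfile above (assuming the files only need to be readable by root inside the container):

RUN cp -r /drupal-data/config/lightning/* /var/www/drupal/web/config/active \
    && chown -R 0:0 /var/www/drupal/web/config/active

With every file owned by an ID inside the first 65536, the layer extracts cleanly under a default rootless mapping.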

@giuseppe (Member) commented Mar 8, 2019

Since we found out the issue is in the image, I am going to close this issue. Please feel free to reopen it or add more comments.

@giuseppe closed this as completed Mar 8, 2019
@vsoch (Contributor) commented Apr 12, 2019

I just hit this issue as well. I'm not using a custom image, just testing the fedora:latest referenced in this post. I am on Ubuntu 16.04, so I installed podman via apt-get install... The version is podman version 1.3.0-dev.

Here is the non-sudo pull attempt; note the same error reported above:

$ podman pull docker://fedora:latest
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids 
Trying to pull docker://fedora:latest...Getting image source signatures
Copying blob 01eb078129a0 done
Copying config d09302f77c done
Writing manifest to image destination
Storing signatures
ERRO[0010] Error while applying layer: ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 192:192 for /run/systemd/netif): lchown /run/systemd/netif: invalid argument 
ERRO[0011] Error pulling image ref //fedora:latest: Error committing the finished image: error adding layer with blob "sha256:01eb078129a0d03c93822037082860a3fefdc15b0313f07c6e1c2168aef5401b": ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 192:192 for /run/systemd/netif): lchown /run/systemd/netif: invalid argument 
Failed
(0x189ade0,0xc0007caa20)

and then with sudo, all is well!

$ sudo podman pull fedora:latest
[sudo] password for vanessa: 
Trying to pull docker://fedora:latest...Getting image source signatures
Copying blob 01eb078129a0 done
Copying config d09302f77c done
Writing manifest to image destination
Storing signatures
d09302f77cfcc3e867829d80ff47f9e7738ffef69730d54ec44341a9fb1d359b

Thanks in advance for your help! This is the very first time I'm using podman, so I'm a super noob.

@vsoch (Contributor) commented Apr 12, 2019

Let me know if it's better practice to open a new issue, happy to do that too!

@rhatdan (Member) commented Apr 12, 2019

This looks like you don't have any range of UIDs in /etc/subuid. Therefore your containers can only handle root-owned content; any other UID is going to cause failures. Add a range of UIDs to /etc/subuid and you should be fine.
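
A minimal sketch of adding such a range (the start offset and count are illustrative; pick a range that does not overlap another user's):

$ sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 $USER
$ grep $USER /etc/subuid /etc/subgid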

@vsoch (Contributor) commented Apr 12, 2019

Thanks @rhatdan, I peeked at that but I do appear to have a range (should the range be different?)

$ cat /etc/subuid
vanessa:100000:65536
$ cat /etc/subgid
vanessa:100000:65536

They look similar to the ones in this example, but it's likely that I missed a step if the above is not correct. Could you point me to the docs that show the user how to set this up correctly? Here is the trail that I followed:

  1. I started at the main podman site and clicked on the "Install" tab
  2. This took me to the install.md in this repo, where I scrolled down to Ubuntu
  3. I then didn't see any further setup, and jumped over to this post.

If there are additional steps required to get it working, some users will currently only figure this out via the error message. I'd like to suggest adding documentation to the install guide to address this.

@rhatdan (Member) commented Apr 12, 2019

What does
podman run fedora cat /proc/self/uid_map
show?
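
For comparison, a correctly configured rootless uid_map typically has two lines, something like the following (host IDs depend on your /etc/subuid; format is container-ID, host-ID, length):

0 1000 1
1 100000 65536

i.e. your own UID mapped to container root, plus a 65536-ID subordinate range starting at container ID 1.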

@vsoch (Contributor) commented Apr 12, 2019

Ah, more evidence! The original command needed docker:// to specify the registry:

$ podman run fedora cat /proc/self/uid_map
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids 
Error: unable to pull fedora: image name provided is a short name and no search registries are defined in /etc/containers/registries.conf.

and then when specified, we get the same error (but with an extra tidbit of evidence!) See the last lines.

$ podman run docker://fedora cat /proc/self/uid_map
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids 
Trying to pull docker://fedora...Getting image source signatures
Copying blob 01eb078129a0 done
Copying config d09302f77c done
Writing manifest to image destination
Storing signatures
ERRO[0012] Error while applying layer: ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 192:192 for /run/systemd/netif): lchown /run/systemd/netif: invalid argument 
ERRO[0012] Error pulling image ref //fedora:latest: Error committing the finished image: error adding layer with blob "sha256:01eb078129a0d03c93822037082860a3fefdc15b0313f07c6e1c2168aef5401b": ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 192:192 for /run/systemd/netif): lchown /run/systemd/netif: invalid argument 
Failed
Error: unable to pull docker://fedora: unable to pull image: Error committing the finished image: error adding layer with blob "sha256:01eb078129a0d03c93822037082860a3fefdc15b0313f07c6e1c2168aef5401b": ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 192:192 for /run/systemd/netif): lchown /run/systemd/netif: invalid argument

So you don't have to scroll:

"sha256:01eb078129a0d03c93822037082860a3fefdc15b0313f07c6e1c2168aef5401b": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 192:192 for /run/systemd/netif): lchown /run/systemd/netif: invalid argument

@giuseppe (Member)

We downgraded the error about not having multiple UIDs to the warning you are getting:

WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids

Are newuidmap and newgidmap installed? I think you may need to install them separately on Ubuntu.
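
A quick way to check (on Debian/Ubuntu the two binaries ship in the uidmap package, as confirmed below):

$ command -v newuidmap newgidmap || sudo apt-get install -y uidmap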

@vsoch (Contributor) commented Apr 12, 2019

Boum! That did the trick :)

$ sudo apt-get install -y uidmap
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  snapd-login-service
Use 'sudo apt autoremove' to remove it.
The following NEW packages will be installed:
  uidmap
0 upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
Need to get 64.8 kB of archives.
After this operation, 336 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 uidmap amd64 1:4.2-3.1ubuntu5.3 [64.8 kB]
Fetched 64.8 kB in 0s (204 kB/s)
Selecting previously unselected package uidmap.
(Reading database ... 455142 files and directories currently installed.)
Preparing to unpack .../uidmap_1%3a4.2-3.1ubuntu5.3_amd64.deb ...
Unpacking uidmap (1:4.2-3.1ubuntu5.3) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up uidmap (1:4.2-3.1ubuntu5.3) ...
$ podman pull docker://fedora:latest
Trying to pull docker://fedora:latest...Getting image source signatures
Copying blob 01eb078129a0 done
Copying config d09302f77c done
Writing manifest to image destination
Storing signatures
d09302f77cfcc3e867829d80ff47f9e7738ffef69730d54ec44341a9fb1d359b

Should we add this here? (this is in install.md)

[screenshot of the Ubuntu section of install.md]

@vbatts (Collaborator) commented Apr 12, 2019

@vsoch yes please!

@rhatdan (Member) commented Apr 12, 2019

We need more contributors running on Ubuntu desktops...

@vsoch (Contributor) commented Apr 12, 2019

I got lots of those :)

@jcaesar commented Jul 18, 2019

I had this same issue (on ArchLinux). I think the cause was that I had run podman before creating /etc/sub{u,g}id. After killing all running podman-related process and a (probably over-zealous) sudo rm -rf ~/.{config,local/share}/containers /run/user/$(id -u)/{libpod,runc,vfs-*}, the issue disappeared.

@runapp commented Jul 23, 2019

I'm on openSUSE Leap 15.1 and can confirm @jcaesar's steps are effective. To be more specific, I found that killing the existing podman (cache process?) and rm /run/user/$UID/libpod/pause.pid is enough for me. I guess it forces podman to reload /etc/sub?id.

@qmeeus commented Aug 20, 2019

Full procedure:

sudo touch /etc/sub{u,g}id
sudo usermod --add-subuids 10000-75535 $(whoami)
sudo usermod --add-subgids 10000-75535 $(whoami)
rm /run/user/$(id -u)/libpod/pause.pid
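
To verify that the new mapping took effect (a hedged check; podman unshare needs a recent Podman, and the ranges will mirror /etc/subuid), the output should look something like this, with your own UID on the first line:

$ podman unshare cat /proc/self/uid_map
0 1000 1
1 10000 65536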

@runapp commented Aug 21, 2019

It seems that running podman system migrate instead of deleting the pid file would be more elegant?

@kailin4u

I had this same issue (on ArchLinux). I think the cause was that I had run podman before creating /etc/sub{u,g}id. After killing all running podman-related process and a (probably over-zealous) sudo rm -rf ~/.{config,local/share}/containers /run/user/$(id -u)/{libpod,runc,vfs-*}, the issue disappeared.


works for me on Ubuntu 18.04

@dinokov commented Oct 6, 2019

Wanted to build a simple local WordPress environment for development according to https://docs.docker.com/compose/wordpress/.
Was getting this error when using podman-compose on Manjaro (kernel 5.1.21-1):

ERRO[0085] Error while applying layer: ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 0:42 for /etc/gshadow): lchown /etc/gshadow: invalid argument 
  ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 0:42 for /etc/gshadow): lchown /etc/gshadow: invalid argument

What I did to get rid of the error:

  • aurman -S crun  (install crun)
  • enable user namespaces permanently:
    echo 'kernel.unprivileged_userns_clone=1' | sudo tee /etc/sysctl.d/userns.conf
  • restart
  • podman-compose down  (stop the pod)
  • buildah images  (find out which images were created)
  • buildah rmi da86e6ba6ca1  (delete the previously created image)
  • pkill -9 podman  (kill podman processes)
  • sudo touch /etc/sub{u,g}id  (create the missing files)
  • sudo usermod --add-subuids 10000-75535 $(whoami)  (create subuids)
  • sudo usermod --add-subgids 10000-75535 $(whoami)  (create subgids)
  • rm /run/user/$(id -u)/libpod/pause.pid  (delete the locking file)
  • cd /home/damir/Containers/wordpress-1  (go to where the docker-compose.yaml file is)
  • podman-compose -t 1podfw -f ./docker-compose.yaml up  (recreate the pod)

Thank you all for helping me figure this out!

@hscspring

Full procedure:

sudo touch /etc/sub{u,g}id
sudo usermod --add-subuids 10000-75535 $(whoami)
sudo usermod --add-subgids 10000-75535 $(whoami)
rm /run/user/$(id -u)/libpod/pause.pid

This works on Ubuntu too.

The reason is mainly that the username changed.

@giuseppe (Member)

rm /run/user/$(id -u)/libpod/pause.pid

It is safer to use podman system migrate, as containers need to be restarted as well.
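
So the safer sequence after fixing /etc/subuid and /etc/subgid would be (a sketch):

$ podman system migrate
$ podman pull docker://fedora:latest   # retry the pull that previously failed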

@corbym commented Sep 17, 2021

I am getting the same error on macOS:

> podman build -t logos_grizzly -f path/to/Dockerfile path/to
Error: potentially insufficient UIDs or GIDs available in user namespace (requested 110536116:110536116 for /var/tmp/libpod_builder528965084/build/Dockerfile): Check /etc/subuid and /etc/subgid: lchown /var/tmp/libpod_builder528965084/build/Dockerfile: invalid argument

Dockerfile:

FROM alpine:3.10 AS jq-builder
# Based on https://github.com/wesley-dean-flexion/busybox-jq-latest/blob/master/Dockerfile
WORKDIR /workdir
RUN apk update && apk add --no-cache git autoconf automake libtool build-base
RUN git clone https://github.com/stedolan/jq.git
WORKDIR /workdir/jq
RUN git submodule update --init && autoreconf -fi && ./configure --disable-docs --disable-maintainer-mode --with-oniguruma && make -j8 LDFLAGS=-all-static && strip jq

FROM golang:1.16.5
COPY --from=jq-builder /workdir/jq/jq /bin/

RUN go get github.com/grafana/grizzly/cmd/grr
ADD run.sh /run.sh

ENTRYPOINT /run.sh
> podman system info 
host:
  arch: amd64
  buildahVersion: 1.22.3
  cgroupControllers: []
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.29-2.fc34.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: '
  cpus: 1
  distribution:
    distribution: fedora
    version: "34"
  eventLogger: journald
  hostname: localhost
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.13.13-200.fc34.x86_64
  linkmode: dynamic
  memFree: 1522155520
  memTotal: 2061852672
  ociRuntime:
    name: crun
    package: crun-1.0-1.fc34.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.0
      commit: 139dc6971e2f1d931af520188763e984d6cdfbf8
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-2.fc34.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 0
  swapTotal: 0
  uptime: 1h 12m 15.9s (Approximately 0.04 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 1
  runRoot: /run/user/1000/containers
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 3.3.1
  Built: 1630356396
  BuiltTime: Mon Aug 30 20:46:36 2021
  GitCommit: ""
  GoVersion: go1.16.6
  OsArch: linux/amd64
  Version: 3.3.1

The same thing happens if I follow these instructions: https://github.com/containers/podman/blob/main/docs/tutorials/mac_experimental.md

Should I open a new issue instead of commenting here? Forgive my ignorance.

@mheon (Member) commented Sep 17, 2021

That is an unrelated error. It should already be fixed upstream. We are cutting a 3.3.2 release either today or Monday that includes the fix.

@senorsmile

I had the same error, and after trying lots of stuff, I finally found that the perms on /etc/subuid and /etc/subgid were -rw-rw----. I did a chmod 0644 /etc/sub*id, then got errors about inaccessible files under ~/.local/share/containers. I sudo rm'd that dir and now rootless is working for me!
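
The fix described above, as commands (a sketch; note that removing ~/.local/share/containers deletes all rootless images and containers):

$ sudo chmod 0644 /etc/subuid /etc/subgid
$ sudo rm -rf ~/.local/share/containers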

@github-actions bot added the locked - please file new issue/PR label Sep 20, 2023
@github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023