Podman 2.2.1 rootless: cannot play podman-generated YAML file #8710

Closed · ghost opened this issue Dec 14, 2020 · 34 comments
Labels: kind/bug, locked - please file new issue/PR

ghost commented Dec 14, 2020

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description
YAML files generated by podman cannot be run using podman play kube.

Steps to reproduce the issue:

  1. I have a YAML file I generated a few months ago; it worked until recently, when I updated from 2.1.1 to 2.2.1 (the file has been anonymized):
---

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: pod
  name: pod
spec:
  containers:
  - name: myapp
    image: myapp:1.1.1
    ports:
    - containerPort: 5555
      hostPort: 6666
      protocol: TCP
    securityContext:
      allowPrivilegeEscalation: false
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: true
      runAsUser: 1111
      seLinuxOptions:
        type: spc_t
    volumeMounts:
    - mountPath: /data/myfolder/
      name:  myfolder

  volumes:
  - name: myfolder
    hostPath:
      path: /data/myfolder/
      type: Directory

When I play it I get:

Error: error starting some containers: some containers failed

It creates the pod but the main container is not running.
When I try to podman start myapp I get this error:

podman start myapp
Error: OCI runtime error: unable to start container "*****************************": container_linux.go:367: starting container process caused: process_linux.go:459: container init caused: failed to set /proc/self/attr/keycreate on procfs: write /proc/self/attr/keycreate: invalid argument
  2. I played around a bit: I created the pod and the container manually:
podman pod create -n pod
podman run --pod pod -d --rm -ti -u 1111 -v /data/myfolder/:/data/myfolder/:Z myapp:1.1.1

Then I exported it: podman generate kube <pod_id> > generate.yaml. Now the YAML file looks similar to this:

...
    securityContext:
      allowPrivilegeEscalation: true
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_NET_RAW
        - CAP_AUDIT_WRITE
      privileged: false
      readOnlyRootFilesystem: false
      runAsUser: 1111
      seLinuxOptions: {}
...

But when I try to play it, although the container starts, the process inside has no access to the volume (permission denied).

Describe the results you expected:
I understand the problem is somewhere around the SELinux settings (SELinux is in enforcing mode).
I would expect the spc_t type to continue working, or at least the generated YAML file should be playable without modification.

Output of podman version:

Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.13.15
Built:        Tue Dec  8 22:10:52 2020
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.21-1.el8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.21, commit: 3701b9d2b0bd16229427f6f372cb3b96243fe19b-dirty'
  cpus: 4
  distribution:
    distribution: '"centos"'
    version: "8"
  eventLogger: journald
  hostname: myhost
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 795
      size: 1
    - container_id: 1
      host_id: 100000
      size: 10001
    uidmap:
    - container_id: 0
      host_id: 989
      size: 1
    - container_id: 1
      host_id: 100000
      size: 10001
  kernel: 4.18.0-240.1.1.el8_3.x86_64
  linkmode: dynamic
  memFree: 148029440
  memTotal: 4118962176
  ociRuntime:
    name: runc
    package: runc-1.0.0-145.rc91.git24a3cf8.el8.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    path: /tmp/podman-run-989/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.el8.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.4.3
  swapFree: 1887543296
  swapTotal: 2147479552
  uptime: 65h 49m 5.95s (Approximately 2.71 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/myuser/.config/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 1
    stopped: 2
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/myuser/.local/share/containers/storage
  graphStatus: {}
  imageStore:
    number: 19
  runRoot: /tmp/run-989/containers
  volumePath: /home/myuser/.local/share/containers/storage/volumes
version:
  APIVersion: 2.1.0
  Built: 1607461852
  BuiltTime: Tue Dec  8 22:10:52 2020
  GitCommit: ""
  GoVersion: go1.13.15
  OsArch: linux/amd64
  Version: 2.2.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-2.2.1-1.el8.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

openshift-ci-robot added the kind/bug label Dec 14, 2020
mheon (Member) commented Dec 14, 2020

Can you provide more details: are you running Podman as root, or rootless? Does the container you ran generate kube on in the second example work, while the play kube container fails? Any chance you can try running the first container with the crun OCI runtime instead (podman play kube --runtime crun with it installed ought to work), as this could get us a better error message?

ghost changed the title from "Podman 2.1.1 rootless: cannot play podman-generated YAML file" to "Podman 2.2.1 rootless: cannot play podman-generated YAML file" Dec 15, 2020
ghost (Author) commented Dec 15, 2020

Hi @mheon, thanks for the advice. I am testing it with crun now. Yes, in the second scenario I was able to start the container with podman pod create... and podman run..., and it worked, but when I exported it and tried to play it, it failed with "permission denied".

I need some time for crun. I'll let you know the results soon.

ghost (Author) commented Dec 15, 2020

Hi @mheon.
Sorry, I made a typo in the title; it is about podman version 2.2.1. I run podman in rootless mode.
Unfortunately, using crun does not make much difference.

Scenario 1:
When I try to play the original YAML file (generated a few months ago) I get this:

podman play kube pod.yaml --runtime=crun
Error: error starting some containers: some containers failed

Here I end up with a pod in "Degraded" status where only the infra container is running; the main container is "Created".
Then I can try to start it:

podman start myapp
Error: OCI runtime error: unable to start container "**********": write file `/proc/thread-self/attr/exec`: Invalid argument
podman inspect myapp | grep OCIRun
       "OCIRuntime": "crun",

Scenario 2:
I conducted this test as well with crun: create pod, run container in pod, generate yaml from this pod, remove pod, play yaml (see the sketch below).
This again leads to a container that has no access to the volume (permission denied), but the environment variables are also mixed up: the ones set in the Dockerfile are not there when I do podman exec <container_name> env; however, the ARGs from the Dockerfile are present.
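
For reference, that sequence with the names from my original report is roughly this (a sketch; <pod_id> stands for the actual pod ID, as before):

podman pod create -n pod
podman run --pod pod -d --rm -ti -u 1111 -v /data/myfolder/:/data/myfolder/:Z myapp:1.1.1
podman generate kube <pod_id> > generate.yaml
podman pod rm -f pod
podman play kube generate.yaml --runtime=crun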

I cannot find more information in the logs, other than container start... container start....

ghost (Author) commented Dec 16, 2020

Hi, I did some more research - this time only with the original YAML file (scenario 1).
When I run the pod without crun like this:

podman play kube /app/podman/pod.yaml --log-level=debug

it leads to this error message:

WARN[0002] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0002] Received: -1
DEBU[0002] Cleaning up container 82d0c0fa495e6ec79ae98f3e5b1a7e9cf4f67dd07e25b4a6346ff5b26a76aaf4
DEBU[0002] unmounted container "82d0c0fa495e6ec79ae98f3e5b1a7e9cf4f67dd07e25b4a6346ff5b26a76aaf4"
Error: error starting some containers: some containers failed

The same with crun (I had to install kata-runtime first):

podman play kube /app/podman/pod.yaml --log-level=debug --runtime=crun
WARN[0002] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpuset: mkdir /sys/fs/cgroup/cpuset/libpod_parent: permission denied
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0002] Received: -1
DEBU[0002] Cleaning up container d8c93b618bd5ff343b32330d41252190835d5b5191635821de6db6e18cbc1a45
DEBU[0002] unmounted container "d8c93b618bd5ff343b32330d41252190835d5b5191635821de6db6e18cbc1a45"
Error: error starting some containers: some containers failed

mheon (Member) commented Dec 16, 2020

Can you check the journal for anything from conmon? It looks like the error is not being reported via the command line. Alternatively, you could try starting the containers individually with podman start (instead of podman pod start).
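
One way to pull conmon's messages out of the journal (an assumption on my part that conmon logs under its own syslog identifier):

journalctl -t conmon --since "10 minutes ago"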

ghost (Author) commented Dec 18, 2020

All I can see in the journal is what I see when I play kube --log-level=debug.
In the meantime I also tried podman system migrate and tried setting SELinux to permissive, but it didn't solve the problem; I get the same error messages.
After trying play, when I have a degraded pod, I can try to start the failing container, but it leads to the same error messages.
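
For reference, those two attempts correspond roughly to (setenforce 0 being the standard way to switch SELinux to permissive until reboot):

podman system migrate
sudo setenforce 0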

marcwittke commented:

I think I have a similar problem. I'm still new to podman and was developing a pod containing mssql-2019, 2x nginx, and aspnetcore-3. I had a problem with volumes and tried podman 2.2 (update from 2.1); now nothing works any more. Well, I thought "let's downgrade back to 2.1". But:

sudo dnf -y install podman-2.1.1
Last metadata expiration check: 0:15:53 ago on Fr 18 Dez 2020 22:29:43 -03.
Dependencies resolved.
==========================================================================================
 Package                       Arch    Version                          Repository   Size
==========================================================================================
Installing:
 podman                        x86_64  2:2.1.1-10.fc33                  fedora       11 M
Installing dependencies:
 conmon                        x86_64  2:2.0.21-3.fc33                  fedora       46 k
 container-selinux             noarch  2:2.151.0-1.fc33                 updates      38 k
 containernetworking-plugins   x86_64  0.9.0-1.fc33                     updates     9.3 M
 crun                          x86_64  0.16-1.fc33                      updates     155 k
 libnet                        x86_64  1.1.6-20.fc33                    fedora       64 k
Installing weak dependencies:
 catatonit                     x86_64  0.1.5-3.fc33                     fedora      304 k
 criu                          x86_64  3.15-1.fc33                      updates     519 k
 fuse-overlayfs                x86_64  1.3.0-1.fc33                     updates      74 k
 podman-plugins                x86_64  2:2.1.1-10.fc33                  fedora      1.4 M
 runc                          x86_64  2:1.0.0-279.dev.gitdedadbf.fc33  fedora      3.1 M
 slirp4netns                   x86_64  1.1.8-1.fc33                     updates      56 k

Transaction Summary
==========================================================================================
Install  12 Packages

Total size: 26 M
Installed size: 122 M
Downloading Packages:
[SKIPPED] container-selinux-2.151.0-1.fc33.noarch.rpm: Already downloaded                                                                                                                                                                                                        
[SKIPPED] containernetworking-plugins-0.9.0-1.fc33.x86_64.rpm: Already downloaded                                                                                                                                                                                                
[SKIPPED] criu-3.15-1.fc33.x86_64.rpm: Already downloaded                                                                                                                                                                                                                        
[SKIPPED] crun-0.16-1.fc33.x86_64.rpm: Already downloaded                                                                                                                                                                                                                        
[SKIPPED] fuse-overlayfs-1.3.0-1.fc33.x86_64.rpm: Already downloaded                                                                                                                                                                                                             
[SKIPPED] slirp4netns-1.1.8-1.fc33.x86_64.rpm: Already downloaded                                                                                                                                                                                                                
[SKIPPED] catatonit-0.1.5-3.fc33.x86_64.rpm: Already downloaded                                                                                                                                                                                                                  
[SKIPPED] conmon-2.0.21-3.fc33.x86_64.rpm: Already downloaded                                                                                                                                                                                                                    
[SKIPPED] libnet-1.1.6-20.fc33.x86_64.rpm: Already downloaded                                                                                                                                                                                                                    
[SKIPPED] podman-2.1.1-10.fc33.x86_64.rpm: Already downloaded                                                                                                                                                                                                                    
[SKIPPED] podman-plugins-2.1.1-10.fc33.x86_64.rpm: Already downloaded                                                                                                                                                                                                            
[SKIPPED] runc-1.0.0-279.dev.gitdedadbf.fc33.x86_64.rpm: Already downloaded                                                                                                                                                                                                      
Running transaction check
Transaction check succeeded.
Running transaction test
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: Transaction test error:
  file /usr/share/man/man5/containers-mounts.conf.5.gz from install of podman-2:2.1.1-10.fc33.x86_64 conflicts with file from package containers-common-1:1.2.0-10.fc33.x86_64

So, as I am stuck anyway... how can I help? All containers except the asp.net core one behave very strangely. This is the log of mssql2019 when started via podman play kube:

SQL Server 2019 will run as non-root by default.
This container is running as user root.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
[repeated every 2 seconds]

Note the contradiction between the statements about the user running the container process. When starting the container directly via podman run --rm image, it starts like this:

SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
2020-12-19 01:53:14.00 Server      The licensing PID was successfully processed. The new edition is [Express Edition].
2020-12-19 01:53:14.38 Server      Setup step is copying system data file 'C:\templatedata\master.mdf' to '/var/opt/mssql/data/master.mdf'.
2020-12-19 01:53:14.39 Server      Did not find an existing master data file /var/opt/mssql/data/master.mdf, copying the missing default master and other system database files. If you have moved the database location, but not moved the database files, startup may fail. To repair: shutdown SQL Server, move the master database to configured location, and restart.
2020-12-19 01:53:14.40 Server      Setup step is copying system data file 'C:\templatedata\mastlog.ldf' to '/var/opt/mssql/data/mastlog.ldf'.
2020-12-19 01:53:14.41 Server      Setup step is copying system data file 'C:\templatedata\model.mdf' to '/var/opt/mssql/data/model.mdf'.
2020-12-19 01:53:14.42 Server      Setup step is copying system data file 'C:\templatedata\modellog.ldf' to '/var/opt/mssql/data/modellog.ldf'.
2020-12-19 01:53:14.44 Server      Setup step is copying system data file 'C:\templatedata\msdbdata.mdf' to '/var/opt/mssql/data/msdbdata.mdf'.
2020-12-19 01:53:14.47 Server      Setup step is copying system data file 'C:\templatedata\msdblog.ldf' to '/var/opt/mssql/data/msdblog.ldf'.
...and so on ...

Both nginx containers output nothing at all before dying. However, they can be started directly via podman run without problems.

Fedora 33 btw.

oleastre commented:

I've got a lot of issues with play kube after updating to podman 2.2.1; I searched a bit and found this issue.
To help people reaching this issue: some problems are potentially solved by the following PRs: #8654 and #8666.

zhangguanzhang (Collaborator) commented:

Can you try compiling from the latest code branch?
I think the latest PRs may contain fixes.

dzintars commented Dec 24, 2020

In general, same issue here on a fresh Fedora 33 with a stock Podman 2.2.1 install. Rootless.
My old pod.yml files fail to start. If I remove the securityContext stanza I can create containers, but they fail to start.
podman start web-nginx --log-level=debug highlights 2 issues:

WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 

and

[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

I tried sudo dnf -y install kata-runtime kata-proxy kata-shim, but it just helps with the first warning, and I think it's not relevant here.

--runtime crun does not help.

$ podman run docker.io/nginx:latest runs the same container just fine.
So it's something specific to play kube.

But $ podman play kube pod.yml with the simple content of:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: docker.io/nginx:latest

fails to run.

UPDATE: 1
Switching to a Deployment instead of a Pod seems to work at first glance.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
          hostPort: 8080

Strange...
nginx:1.14.2 definitely works for me, but docker.io/nginx:latest or nginx:1.19.6 fails.
And it's not related to Pod vs Deployment.
As soon as I downgrade my nginx version to docker.io/nginx:1.14.2, my old pod.yml works.

UPDATE: 2
As soon as I add a resources stanza to the pod.yml

      resources:
        requests:
          cpu: 1
          memory: 1Gi
        limits:
          cpu: 2
          memory: 1Gi

$ podman play kube fails with

Error: error starting some containers: some containers failed

UPDATE: 3
It's definitely something related to the versions. For example, docker.io/php:7.0-fpm and some others I tried fail, but for example docker.io/php:7.0.9-fpm works just fine. Really strange. Not sure how to debug this.

UPDATE: 4

$ podman exec -it web-nginx bash
root@web:/# ls /var/www/html\:Z/
ls: cannot open directory '/var/www/html:Z/': Permission denied

mheon (Member) commented Dec 24, 2020 via email

mheon (Member) commented Dec 24, 2020 via email

mheon (Member) commented Dec 24, 2020

There are no errors out of Podman here - so it's definitely the app in the container.

dzintars commented Dec 25, 2020

sudo setenforce 0 revealed that one of my issues is SELinux. So... at least with SELinux disabled, I can run the php/nginx containers with volumeMounts.
But what's strange is that as soon as I swap the single image line docker.io/nginx:1.14.2 for docker.io/nginx:latest in pod.yml, I immediately get:

[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

subuid and subgid are dzintars:100000:65536
The shared volume is chowned with podman unshare chown 33:33 -R (in my case that's the ID):

....
volumeMounts:
  - name: public
    mountPath: /var/www/html
    readOnly: true

What is this [conmon:d]: thing, and how can it be related to the version of the image? How do I debug it?

oleastre commented:

If you are running stock podman from Fedora 33, you have to specify the full nginx command due to the two bugs I already mentioned.
With stock Fedora 33 podman 2.2.1, I have to add this to my nginx pod definition:

      command:
        - nginx
        - -g
        - daemon off;
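
For example, a minimal pod definition with that workaround applied could look like this (a sketch based on the web/nginx pod posted above):

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: docker.io/nginx:latest
      # workaround for #8654 / #8666: spell out the image's full command
      command:
        - nginx
        - -g
        - daemon off;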

dzintars commented Dec 27, 2020

@oleastre Yes, it seems you are right. I tried this on a Jenkins container and finally got it running.
Here is what I did:

  1. sudo setenforce 1

  2. Get to know under what UID the Jenkins service runs:

» podman exec -it jenkins-server bash
jenkins@jenkins:/$ cat /etc/passwd | grep jenkins
jenkins:x:1000:1000::/var/jenkins_home:/bin/bash
exit

or read the docs, or examine podman inspect output.

  3. Change the ownership of my Jenkins volume:
» podman unshare chown 1000:1000 -R /home/dzintars/containers/jenkins/volume/jenkins_home

  4. Get to know what commands/arguments are executed:
» podman inspect jenkins-server

[screenshot of podman inspect output]

  5. List those arguments explicitly in the pod/deployment.yml command stanza:
      command:
        - /sbin/tini
        - --
        - /usr/local/bin/jenkins.sh

  6. Make sure to add the :Z label to the mount:
      volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home:Z

  7. Add the runAsUser property:
      securityContext:
        runAsUser: 1000
        allowPrivilegeEscalation: true
        capabilities: {}
        privileged: false
        readOnlyRootFilesystem: false
        seLinuxOptions: {}

I hope I didn't miss anything, as I also made other system tweaks. Will check these steps on other containers tomorrow.
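
Pulling steps 5 through 7 together, the relevant part of the pod spec now looks roughly like this (a sketch assembled from the snippets above; the image reference is hypothetical):

spec:
  containers:
    - name: jenkins-server
      image: docker.io/jenkins/jenkins:lts  # hypothetical image reference
      command:
        - /sbin/tini
        - --
        - /usr/local/bin/jenkins.sh
      securityContext:
        runAsUser: 1000
      volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home:Z
  volumes:
    - name: jenkins-home
      hostPath:
        path: /home/dzintars/containers/jenkins/volume/jenkins_home
        type: Directory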

dzintars commented Dec 28, 2020

Any tips why this happens on some containers?
[screenshot]
I mean that html:z mount which is not accessible.
UPDATE: Heck... I just gave up and dnf -y swap podman docker. Kidding. :) As the volumes were pretty tiny and static, I just baked them directly into the images. I have no idea why most of my old pod.yml files stopped working after 3 or 4 stock Fedora releases, and I have no more gunpowder to spend on learning all these things that are obviously over-engineered from my simple-user-needs perspective. I really hope this is some play kube issue or API change; if so, it will be solved eventually. At least now I have a workaround for my use-cases.

ghost (Author) commented Jan 8, 2021

OK, I managed to take a step forward.
Based on this issue, I ran loginctl enable-linger podman and the permission denied error is gone.
But now I get this:

...
DEBU[0002] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox --enable-seccomp -c -e 3 -r 4 --netns-type=path /run/user/991/netns/cni-fb23b6ea-1523-5c29-3ef8-c997f0aeb258 tap0
DEBU[0002] rootlessport: time="2021-01-08T13:32:44+01:00" level=info msg="starting parent driver"
time="2021-01-08T13:32:44+01:00" level=info msg="opaque=map[builtin.readypipepath:/tmp/run-991/libpod/tmp/rootlessport170953142/.bp-ready.pipe builtin.socketpath:/tmp/run-991/libpod/tmp/rootlessport170953142/.bp.sock]"
DEBU[0002] rootlessport: time="2021-01-08T13:32:44+01:00" level=info msg="starting child driver in child netns (\"/proc/self/exe\" [containers-rootlessport-child])"
DEBU[0002] rootlessport: time="2021-01-08T13:32:44+01:00" level=info msg="waiting for initComplete"
DEBU[0002] rootlessport: time="2021-01-08T13:32:45+01:00" level=info msg="initComplete is closed; parent and child established the communication channel"
time="2021-01-08T13:32:45+01:00" level=info msg="exposing ports [{**** **** tcp }]"
time="2021-01-08T13:32:45+01:00" level=info msg="parent: listen: listen tcp 0.0.0.0:****: bind: address already in use\n"
time="2021-01-08T13:32:45+01:00" level=info msg="stopping parent driver"
...

Here's what my YAML file looks like:

---

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: pod
  name: pod
spec:
  containers:
  - name: myapp
    image: myapp:1.1.1
    ports:
    - containerPort: 5555
      hostPort: 6666
      protocol: TCP
    securityContext:
      allowPrivilegeEscalation: false
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: true
      runAsUser: 1111
      seLinuxOptions:
        type: spc_t
    volumeMounts:
    - mountPath: /data/myfolder/
      name:  myfolder

  volumes:
  - name: myfolder
    hostPath:
      path: /data/myfolder/
      type: Directory

mheon (Member) commented Jan 8, 2021

As the message says, the port (you've obscured the exact number) you want to bind to seems to be in use on the host. Likely another program on your system is already using it?

ghost (Author) commented Jan 8, 2021

Hi @mheon, there was no other process, as I only use that port for podman. If it had been podman using the port, I would have gotten a "pod is already there" kind of message.
But anyway, I cannot reproduce it anymore; now even when I use the loginctl command, I am back to where I started:

WARN[0002] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for blkio: mkdir /sys/fs/cgroup/blkio/libpod_parent: permission denied
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

Since it complains about some OOM thing, could it be related to #7853? I understand it's a new feature in 2.2.0 -- 2.1.1 still works for me.
However, I don't use memory or cpu limits in the yaml file.

mheon (Member) commented Jan 8, 2021

Both of those are expected warnings in rootless mode and are not really concerning.

ghost (Author) commented Jan 8, 2021

So it fails for some other reason, doesn't it?
I looked at the logs again:

WARN[0002] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0002] Received: -1
DEBU[0002] Cleaning up container c397df109116ac080e21f221586c363b9eb5b21ddd8d6eac8bf4a3aa785b200d
DEBU[0003] unmounted container "c397df109116ac080e21f221586c363b9eb5b21ddd8d6eac8bf4a3aa785b200d"
Error: error starting some containers: some containers failed

So the permission denied is not the problem; then we get Received: -1, and finally the container is cleaned up.
I guess the container died at the Received: -1 step. But what is that? It doesn't tell me anything.

mheon (Member) commented Jan 8, 2021

Can you provide the full log? I'm not seeing the actual cause of that error anywhere.

baude (Member) commented Jan 8, 2021

Perhaps you can provide a generic YAML file that reproduces the issue for you, so we can run it.

ghost (Author) commented Jan 8, 2021

OK, so now I tried to run nginx; I removed the volume part from the file.
The following file has not been anonymized:

---

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: pod
  name: pod
spec:
  containers:
  - name: nginx
    image: nginx
    env:
     - name: MYENV
       value: "MYVALUE"
    ports:
    - containerPort: 80
      hostPort: 8080
      protocol: TCP
    securityContext:
      allowPrivilegeEscalation: false
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: true
      runAsGroup: 101
      runAsUser: 101
      seLinuxOptions:
        type: spc_t
    resources: {}

podman play kube /app/podman/pod.yaml --log-level=debug
time="2021-01-08T16:40:59+01:00" level=info msg="podman filtering at log level debug"
time="2021-01-08T16:40:59+01:00" level=debug msg="Called kube.PersistentPreRunE(podman play kube /app/podman/pod.yaml --log-level=debug)"
time="2021-01-08T16:40:59+01:00" level=debug msg="Reading configuration file \"/usr/share/containers/containers.conf\""
time="2021-01-08T16:40:59+01:00" level=debug msg="Merged system config \"/usr/share/containers/containers.conf\": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.29.0 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CHOWN DAC_OVERRIDE FOWNER FSETID KILL NET_BIND_SERVICE SETFCAP SETGID SETPCAP SETUID SYS_CHROOT] DefaultSysctls:[net.ipv4.ping_group_range=0 0] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:true Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/991/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:runc OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/podman/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/991/libpod/tmp VolumePath:/home/podman/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/podman/.config/cni/net.d}}"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
time="2021-01-08T16:40:59+01:00" level=debug msg="Initializing boltdb state at /home/podman/.local/share/containers/storage/libpod/bolt_state.db"
time="2021-01-08T16:40:59+01:00" level=debug msg="Overriding run root \"/run/user/991/containers\" with \"/tmp/run-991/containers\" from database"
time="2021-01-08T16:40:59+01:00" level=debug msg="Overriding tmp dir \"/run/user/991/libpod/tmp\" with \"/tmp/run-991/libpod/tmp\" from database"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using graph driver vfs"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using graph root /home/podman/.local/share/containers/storage"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using run root /tmp/run-991/containers"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using static dir /home/podman/.local/share/containers/storage/libpod"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using tmp dir /tmp/run-991/libpod/tmp"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using volume path /home/podman/.local/share/containers/storage/volumes"
time="2021-01-08T16:40:59+01:00" level=debug msg="Set libpod namespace to \"\""
time="2021-01-08T16:40:59+01:00" level=debug msg="Not configuring container store"
time="2021-01-08T16:40:59+01:00" level=debug msg="Initializing event backend journald"
time="2021-01-08T16:40:59+01:00" level=debug msg="using runtime \"/usr/bin/runc\""
time="2021-01-08T16:40:59+01:00" level=warning msg="Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument"
time="2021-01-08T16:40:59+01:00" level=warning msg="Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument"
time="2021-01-08T16:40:59+01:00" level=info msg="Setting parallel job count to 13"
time="2021-01-08T16:40:59+01:00" level=info msg="podman filtering at log level debug"
time="2021-01-08T16:40:59+01:00" level=debug msg="Called kube.PersistentPreRunE(podman play kube /app/podman/pod.yaml --log-level=debug)"
time="2021-01-08T16:40:59+01:00" level=debug msg="Reading configuration file \"/usr/share/containers/containers.conf\""
time="2021-01-08T16:40:59+01:00" level=debug msg="Merged system config \"/usr/share/containers/containers.conf\": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.29.0 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CHOWN DAC_OVERRIDE FOWNER FSETID KILL NET_BIND_SERVICE SETFCAP SETGID SETPCAP SETUID SYS_CHROOT] DefaultSysctls:[net.ipv4.ping_group_range=0 0] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:true Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/991/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:runc OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/podman/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/991/libpod/tmp VolumePath:/home/podman/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/podman/.config/cni/net.d}}"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
time="2021-01-08T16:40:59+01:00" level=debug msg="Initializing boltdb state at /home/podman/.local/share/containers/storage/libpod/bolt_state.db"
time="2021-01-08T16:40:59+01:00" level=debug msg="Overriding run root \"/run/user/991/containers\" with \"/tmp/run-991/containers\" from database"
time="2021-01-08T16:40:59+01:00" level=debug msg="Overriding tmp dir \"/run/user/991/libpod/tmp\" with \"/tmp/run-991/libpod/tmp\" from database"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using graph driver vfs"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using graph root /home/podman/.local/share/containers/storage"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using run root /tmp/run-991/containers"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using static dir /home/podman/.local/share/containers/storage/libpod"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using tmp dir /tmp/run-991/libpod/tmp"
time="2021-01-08T16:40:59+01:00" level=debug msg="Using volume path /home/podman/.local/share/containers/storage/volumes"
time="2021-01-08T16:40:59+01:00" level=debug msg="Set libpod namespace to \"\""
time="2021-01-08T16:40:59+01:00" level=debug msg="[graphdriver] trying provided driver \"vfs\""
time="2021-01-08T16:40:59+01:00" level=debug msg="Initializing event backend journald"
time="2021-01-08T16:40:59+01:00" level=debug msg="using runtime \"/usr/bin/runc\""
time="2021-01-08T16:40:59+01:00" level=warning msg="Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument"
time="2021-01-08T16:40:59+01:00" level=warning msg="Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument"
time="2021-01-08T16:40:59+01:00" level=info msg="Setting parallel job count to 13"
time="2021-01-08T16:40:59+01:00" level=debug msg="Pod using default network mode"
time="2021-01-08T16:40:59+01:00" level=debug msg="Got pod cgroup as /libpod_parent/06861045b0ee56c19cf0849b8ac14351d00d6600db9673ec9f605c2473d5d0a1"
time="2021-01-08T16:40:59+01:00" level=debug msg="parsed reference into \"[vfs@/home/podman/.local/share/containers/storage+/tmp/run-991/containers]@80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c\""
time="2021-01-08T16:40:59+01:00" level=debug msg="exporting opaque data as blob \"sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c\""
time="2021-01-08T16:40:59+01:00" level=debug msg="Using [\"/pause\"] as infra container entrypoint"
time="2021-01-08T16:40:59+01:00" level=debug msg="Allocated lock 1 for container 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b"
time="2021-01-08T16:40:59+01:00" level=debug msg="parsed reference into \"[vfs@/home/podman/.local/share/containers/storage+/tmp/run-991/containers]@80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c\""
time="2021-01-08T16:40:59+01:00" level=debug msg="exporting opaque data as blob \"sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c\""
time="2021-01-08T16:40:59+01:00" level=debug msg="created container \"64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b\""
time="2021-01-08T16:40:59+01:00" level=debug msg="container \"64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b\" has work directory \"/home/podman/.local/share/containers/storage/vfs-containers/64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b/userdata\""
time="2021-01-08T16:40:59+01:00" level=debug msg="container \"64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b\" has run directory \"/tmp/run-991/containers/vfs-containers/64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b/userdata\""
time="2021-01-08T16:40:59+01:00" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf\""
time="2021-01-08T16:40:59+01:00" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf.d/shortnames.conf\""
time="2021-01-08T16:40:59+01:00" level=debug msg="parsed reference into \"[vfs@/home/podman/.local/share/containers/storage+/tmp/run-991/containers]@ae2feff98a0cc5095d97c6c283dcd33090770c76d63877caa99aefbbe4343bdd\""
time="2021-01-08T16:40:59+01:00" level=debug msg="exporting opaque data as blob \"sha256:ae2feff98a0cc5095d97c6c283dcd33090770c76d63877caa99aefbbe4343bdd\""
time="2021-01-08T16:40:59+01:00" level=debug msg="parsed reference into \"[vfs@/home/podman/.local/share/containers/storage+/tmp/run-991/containers]@ae2feff98a0cc5095d97c6c283dcd33090770c76d63877caa99aefbbe4343bdd\""
time="2021-01-08T16:40:59+01:00" level=debug msg="exporting opaque data as blob \"sha256:ae2feff98a0cc5095d97c6c283dcd33090770c76d63877caa99aefbbe4343bdd\""
time="2021-01-08T16:40:59+01:00" level=debug msg="using systemd mode: false"
time="2021-01-08T16:40:59+01:00" level=debug msg="setting container name pod-nginx"
time="2021-01-08T16:40:59+01:00" level=debug msg="adding container to pod pod"
time="2021-01-08T16:40:59+01:00" level=debug msg="Loading default seccomp profile"
time="2021-01-08T16:40:59+01:00" level=debug msg="Allocated lock 2 for container 2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd"
time="2021-01-08T16:40:59+01:00" level=debug msg="parsed reference into \"[vfs@/home/podman/.local/share/containers/storage+/tmp/run-991/containers]@ae2feff98a0cc5095d97c6c283dcd33090770c76d63877caa99aefbbe4343bdd\""
time="2021-01-08T16:40:59+01:00" level=debug msg="exporting opaque data as blob \"sha256:ae2feff98a0cc5095d97c6c283dcd33090770c76d63877caa99aefbbe4343bdd\""
time="2021-01-08T16:41:00+01:00" level=debug msg="created container \"2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd\""
time="2021-01-08T16:41:00+01:00" level=debug msg="container \"2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd\" has work directory \"/home/podman/.local/share/containers/storage/vfs-containers/2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd/userdata\""
time="2021-01-08T16:41:00+01:00" level=debug msg="container \"2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd\" has run directory \"/tmp/run-991/containers/vfs-containers/2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd/userdata\""
time="2021-01-08T16:41:00+01:00" level=debug msg="Strongconnecting node 2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd"
time="2021-01-08T16:41:00+01:00" level=debug msg="Pushed 2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd onto stack"
time="2021-01-08T16:41:00+01:00" level=debug msg="Recursing to successor node 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b"
time="2021-01-08T16:41:00+01:00" level=debug msg="Strongconnecting node 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b"
time="2021-01-08T16:41:00+01:00" level=debug msg="Pushed 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b onto stack"
time="2021-01-08T16:41:00+01:00" level=debug msg="Finishing node 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b. Popped 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b off stack"
time="2021-01-08T16:41:00+01:00" level=debug msg="Finishing node 2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd. Popped 2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd off stack"
time="2021-01-08T16:41:00+01:00" level=debug msg="Made network namespace at /run/user/991/netns/cni-41cd6280-62a3-30d4-0799-e03113692fb6 for container 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b"
time="2021-01-08T16:41:00+01:00" level=debug msg="mounted container \"64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b\" at \"/home/podman/.local/share/containers/storage/vfs/dir/c3837cb09652948c930f43857e9a9f1e071a81a377460bd52b8ff836a82bfea3\""
time="2021-01-08T16:41:00+01:00" level=debug msg="slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox --enable-seccomp -c -e 3 -r 4 --netns-type=path /run/user/991/netns/cni-41cd6280-62a3-30d4-0799-e03113692fb6 tap0"
time="2021-01-08T16:41:00+01:00" level=debug msg="rootlessport: time=\"2021-01-08T16:41:00+01:00\" level=info msg=\"starting parent driver\"\ntime=\"2021-01-08T16:41:00+01:00\" level=info msg=\"opaque=map[builtin.readypipepath:/tmp/run-991/libpod/tmp/rootlessport060625427/.bp-ready.pipe builtin.socketpath:/tmp/run-991/libpod/tmp/rootlessport060625427/.bp.sock]\"\n"
time="2021-01-08T16:41:00+01:00" level=debug msg="rootlessport: time=\"2021-01-08T16:41:00+01:00\" level=info msg=\"starting child driver in child netns (\\\"/proc/self/exe\\\" [containers-rootlessport-child])\"\n"
time="2021-01-08T16:41:00+01:00" level=debug msg="rootlessport: time=\"2021-01-08T16:41:00+01:00\" level=info msg=\"waiting for initComplete\"\n"
time="2021-01-08T16:41:00+01:00" level=debug msg="rootlessport: time=\"2021-01-08T16:41:00+01:00\" level=info msg=\"initComplete is closed; parent and child established the communication channel\"\ntime=\"2021-01-08T16:41:00+01:00\" level=info msg=\"exposing ports [{8080 80 tcp }]\"\n"
time="2021-01-08T16:41:00+01:00" level=debug msg="rootlessport: time=\"2021-01-08T16:41:00+01:00\" level=info msg=ready\n"
time="2021-01-08T16:41:00+01:00" level=debug msg="rootlessport: time=\"2021-01-08T16:41:00+01:00\" level=info msg=\"waiting for exitfd to be closed\"\n"
time="2021-01-08T16:41:00+01:00" level=debug msg="rootlessport is ready"
time="2021-01-08T16:41:00+01:00" level=debug msg="Created root filesystem for container 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b at /home/podman/.local/share/containers/storage/vfs/dir/c3837cb09652948c930f43857e9a9f1e071a81a377460bd52b8ff836a82bfea3"
time="2021-01-08T16:41:00+01:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode secret"
time="2021-01-08T16:41:00+01:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2021-01-08T16:41:00+01:00" level=debug msg="Created OCI spec for container 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b at /home/podman/.local/share/containers/storage/vfs-containers/64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b/userdata/config.json"
time="2021-01-08T16:41:00+01:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog"
time="2021-01-08T16:41:00+01:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b -u 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b -r /usr/bin/runc -b /home/podman/.local/share/containers/storage/vfs-containers/64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b/userdata -p /tmp/run-991/containers/vfs-containers/64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b/userdata/pidfile -n 06861045b0ee-infra --exit-dir /tmp/run-991/libpod/tmp/exits --socket-dir-path /tmp/run-991/libpod/tmp/socket -l k8s-file:/home/podman/.local/share/containers/storage/vfs-containers/64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /tmp/run-991/containers/vfs-containers/64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/run-991/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /tmp/run-991/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b]"
time="2021-01-08T16:41:00+01:00" level=warning msg="Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

time="2021-01-08T16:41:01+01:00" level=debug msg="Received: 5552"
time="2021-01-08T16:41:01+01:00" level=info msg="Got Conmon PID as 5540"
time="2021-01-08T16:41:01+01:00" level=debug msg="Created container 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b in OCI runtime"
time="2021-01-08T16:41:01+01:00" level=debug msg="Starting container 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b with command [/pause]"
time="2021-01-08T16:41:01+01:00" level=debug msg="Started container 64d80e60a1d5c79a4b9aa45e1b27bac453b2942c4cae90a1dafe67fd2fac8d7b"
time="2021-01-08T16:41:01+01:00" level=debug msg="mounted container \"2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd\" at \"/home/podman/.local/share/containers/storage/vfs/dir/c35f73549a90950286b1f1882afba006fa2ca612ce725bbd73823091bdcbd40b\""
time="2021-01-08T16:41:01+01:00" level=debug msg="Created root filesystem for container 2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd at /home/podman/.local/share/containers/storage/vfs/dir/c35f73549a90950286b1f1882afba006fa2ca612ce725bbd73823091bdcbd40b"
time="2021-01-08T16:41:01+01:00" level=debug msg="Not modifying container 2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd /etc/passwd"
time="2021-01-08T16:41:01+01:00" level=debug msg="Not modifying container 2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd /etc/group"
time="2021-01-08T16:41:01+01:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode secret"
time="2021-01-08T16:41:01+01:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2021-01-08T16:41:01+01:00" level=debug msg="Created OCI spec for container 2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd at /home/podman/.local/share/containers/storage/vfs-containers/2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd/userdata/config.json"
time="2021-01-08T16:41:01+01:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog"
time="2021-01-08T16:41:01+01:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd -u 2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd -r /usr/bin/runc -b /home/podman/.local/share/containers/storage/vfs-containers/2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd/userdata -p /tmp/run-991/containers/vfs-containers/2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd/userdata/pidfile -n pod-nginx --exit-dir /tmp/run-991/libpod/tmp/exits --socket-dir-path /tmp/run-991/libpod/tmp/socket -l k8s-file:/home/podman/.local/share/containers/storage/vfs-containers/2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /tmp/run-991/containers/vfs-containers/2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/run-991/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /tmp/run-991/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd]"
time="2021-01-08T16:41:01+01:00" level=warning msg="Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

time="2021-01-08T16:41:01+01:00" level=debug msg="Received: -1"
time="2021-01-08T16:41:01+01:00" level=debug msg="Cleaning up container 2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd"
time="2021-01-08T16:41:01+01:00" level=debug msg="unmounted container \"2497f31ded36be7ba2cb4c4b6ef9a530cacf36199b3789f6b76d11b1237e34dd\""
Error: error starting some containers: some containers failed

Current versions:

podman version
Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.14.7
Built:        Thu Dec 10 19:26:48 2020
OS/Arch:      linux/amd64
rpm -qa container*
container-selinux-2.145.0-1.el8.noarch
containernetworking-plugins-0.9.0-1.el8.x86_64
containers-common-1.2.0-15.el8.x86_64
rpm -qa runc
runc-1.0.0-145.rc91.git24a3cf8.el8.x86_64

mheon (Member) commented Jan 8, 2021

Found (part of) the problem: podman play kube as of 2.2.0 is completely discarding errors that occur when starting containers.

mheon (Member) commented Jan 8, 2021

After patching Podman to actually report errors, I'm getting the following:
error starting container 0912a70ca34175a20ce78da29aabb7b9014ad2074e39bd77de7ce1c3f32ab323: write file /proc/thread-self/attr/exec: Invalid argument: OCI runtime error

mheon (Member) commented Jan 8, 2021

I don't think I've ever seen this one before...

mheon (Member) commented Jan 8, 2021

Error handling fix in #8917

mheon (Member) commented Jan 8, 2021

(This will not resolve your issue, but it will print the actual errors that are occurring)

st1971 (Contributor) commented Jan 31, 2021

I think the problem is caused by an invalid SELinux label being generated. When I use play kube with a similar setup to the above, the container ends up with a label something like

system_u:spc_t:container_t:s0:c616,c890

which for some reason contains two types, and the role is missing. So I think there may be an issue with the mapping from the SELinux config in the kube YAML to the context object on the container.
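
For context, SELinux process labels follow the user:role:type:level layout, so a normally labeled container process would look something like the first line below (an illustration, not output from this system):

system_u:system_r:container_t:s0:c616,c890    (normal: system_r in the role slot)
system_u:spc_t:container_t:s0:c616,c890       (generated: spc_t has landed in the role slot)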

st1971 (Contributor) commented Feb 1, 2021

I had a look through some code last night; the problem would seem to be in pkg/specgen/generate/kube/kube.go: regardless of what's put in the SELinux options in the YAML file, it always ends up in the role position in the label.

I will try to submit a PR for this this week.

	if seopt := containerYAML.SecurityContext.SELinuxOptions; seopt != nil {
		if seopt.User != "" {
			s.SelinuxOpts = append(s.SelinuxOpts, fmt.Sprintf("role:%s", seopt.User))
		}
		if seopt.Role != "" {
			s.SelinuxOpts = append(s.SelinuxOpts, fmt.Sprintf("role:%s", seopt.Role))
		}
		if seopt.Type != "" {
			s.SelinuxOpts = append(s.SelinuxOpts, fmt.Sprintf("role:%s", seopt.Type))
		}
		if seopt.Level != "" {
			s.SelinuxOpts = append(s.SelinuxOpts, fmt.Sprintf("role:%s", seopt.Level))
		}
	}
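
The obvious correction is to use the matching prefix in each branch; a sketch of what the fix presumably looks like (the merged PR may differ in details):

	if seopt := containerYAML.SecurityContext.SELinuxOptions; seopt != nil {
		if seopt.User != "" {
			// the user from the kube YAML belongs in the "user" slot, not "role"
			s.SelinuxOpts = append(s.SelinuxOpts, fmt.Sprintf("user:%s", seopt.User))
		}
		if seopt.Role != "" {
			s.SelinuxOpts = append(s.SelinuxOpts, fmt.Sprintf("role:%s", seopt.Role))
		}
		if seopt.Type != "" {
			s.SelinuxOpts = append(s.SelinuxOpts, fmt.Sprintf("type:%s", seopt.Type))
		}
		if seopt.Level != "" {
			s.SelinuxOpts = append(s.SelinuxOpts, fmt.Sprintf("level:%s", seopt.Level))
		}
	}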

mheon (Member) commented Feb 1, 2021

SGTM! Thanks for taking this one.

mheon (Member) commented Feb 5, 2021

Fix has merged

mheon closed this as completed Feb 5, 2021
github-actions bot locked this as resolved and limited conversation to collaborators Sep 22, 2023