
podman: how to start a pod with kata? #2147

Closed

alicefr opened this issue Oct 23, 2019 · 4 comments
Labels
bug (Incorrect behaviour) · needs-review (Needs to be assessed by the team)

Comments


alicefr commented Oct 23, 2019

Hi,

I'm experimenting with Kata together with podman (on s390x). I successfully managed to start a single container. It's important to remember to use --security-opt label=disable to avoid the error Error: container create failed: rpc error: code = Unknown desc = selinux label is specified in config, but selinux is disabled or not supported. The next step is to try multiple containers inside a pod. Any ideas on how to do it?
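For reference, the single-container invocation that works for me looks roughly like this (alpine is just an example image):

$ podman run --runtime kata --security-opt label=disable -ti alpine sh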
What I tried:

$ podman pod create --infra=false
60ea15ddfa8a0200ed0bfa8e082ece6ee2d121cf3df01236509160299f466639
$ podman pod ps
POD ID         NAME               STATUS    CREATED         # OF CONTAINERS   INFRA ID
60ea15ddfa8a   admiring_meitner   Created   6 seconds ago   0                 
$ podman create --pod admiring_meitner --runtime kata -ti --security-opt label=disable alpine sh
afde02094f7deaf4d1e59d2d3c386e7557b88a2bd499c311d4188d8bf63f8aea
$ podman create --pod admiring_meitner --runtime kata -ti --security-opt label=disable alpine sh
2d80b6bad1f86a6bf92e11da2f613f222b3a4c199199e850c1cc53807cec3ba8
$ podman pod inspect admiring_meitner
{
     "Config": {
          "id": "60ea15ddfa8a0200ed0bfa8e082ece6ee2d121cf3df01236509160299f466639",
          "name": "admiring_meitner",
          "labels": {
               
          },
          "cgroupParent": "machine.slice",
          "sharesCgroup": true,
          "infraConfig": {
               "makeInfraContainer": false,
               "infraPortBindings": null
          },
          "created": "2019-10-23T13:21:02.697816099+02:00",
          "lockID": 58
     },
     "State": {
          "cgroupPath": "machine.slice/machine-libpod_pod_60ea15ddfa8a0200ed0bfa8e082ece6ee2d121cf3df01236509160299f466639.slice",
          "infraContainerID": ""
     },
     "Containers": [
          {
               "id": "2d80b6bad1f86a6bf92e11da2f613f222b3a4c199199e850c1cc53807cec3ba8",
               "state": "configured"
          },
          {
               "id": "afde02094f7deaf4d1e59d2d3c386e7557b88a2bd499c311d4188d8bf63f8aea",
               "state": "configured"
          }
     ]
}
$ podman pod start admiring_meitner
60ea15ddfa8a0200ed0bfa8e082ece6ee2d121cf3df01236509160299f466639
$ podman ps
CONTAINER ID  IMAGE                            COMMAND  CREATED             STATUS             PORTS  NAMES
2d80b6bad1f8  docker.io/library/alpine:latest  sh       About a minute ago  Up 13 seconds ago         confident_khorana
afde02094f7d  docker.io/library/alpine:latest  sh       About a minute ago  Up 11 seconds ago         practical_kare
$ ps -ef | grep qemu
root     53412 53387  4 13:22 ?        00:00:01 /usr/bin/qemu-system-s390x -name sandbox-2d80b6bad1f86a6bf92e11da2f613f222b3a4c199199e850c1cc53807cec3ba8 -uuid 0ca98455-fb08-4923-9d5a-e324b312a96f -machine s390-ccw-virtio,accel=kvm -cpu host -qmp unix:/run/vc/vm/2d80b6bad1f86a6bf92e11da2f613f222b3a4c199199e850c1cc53807cec3ba8/qmp.sock,server,nowait -m 2048M,slots=10,maxmem=20140M -device virtio-serial-ccw,id=serial0,devno=fe.0.0001 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/2d80b6bad1f86a6bf92e11da2f613f222b3a4c199199e850c1cc53807cec3ba8/console.sock,server,nowait -device virtio-scsi-ccw,id=scsi0,devno=fe.0.0002 -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng,rng=rng0,devno=fe.0.0003 -device virtserialport,chardev=charch0,id=channel0,name=agent.channel.0 -chardev socket,id=charch0,path=/run/vc/vm/2d80b6bad1f86a6bf92e11da2f613f222b3a4c199199e850c1cc53807cec3ba8/kata.sock,server,nowait -device virtio-9p-ccw,fsdev=extra-9p-kataShared,mount_tag=kataShared,devno=fe.0.0004 -fsdev local,id=extra-9p-kataShared,path=/run/kata-containers/shared/sandboxes/2d80b6bad1f86a6bf92e11da2f613f222b3a4c199199e850c1cc53807cec3ba8,security_model=none -netdev tap,id=network-0,fds=3 -device driver=virtio-net-ccw,netdev=network-0,mac=3a:bc:b3:bf:f2:c3,mq=on,devno=fe.0.0005 -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic -daemonize -kernel /usr/share/kata-containers/vmlinuz-4.19.75-54 -initrd /usr/share/kata-containers/kata-containers-initrd.img -append console=ttysclp0 quiet panic=1 nr_cpus=32 agent.use_vsock=false -pidfile /run/vc/vm/2d80b6bad1f86a6bf92e11da2f613f222b3a4c199199e850c1cc53807cec3ba8/pid -smp 1,cores=1,threads=1,sockets=32,maxcpus=32
root     53541 53516  4 13:22 ?        00:00:01 /usr/bin/qemu-system-s390x -name sandbox-afde02094f7deaf4d1e59d2d3c386e7557b88a2bd499c311d4188d8bf63f8aea -uuid 5cdd93b3-ec88-4588-a2e5-ae9a2e751b90 -machine s390-ccw-virtio,accel=kvm -cpu host -qmp unix:/run/vc/vm/afde02094f7deaf4d1e59d2d3c386e7557b88a2bd499c311d4188d8bf63f8aea/qmp.sock,server,nowait -m 2048M,slots=10,maxmem=20140M -device virtio-serial-ccw,id=serial0,devno=fe.0.0001 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/afde02094f7deaf4d1e59d2d3c386e7557b88a2bd499c311d4188d8bf63f8aea/console.sock,server,nowait -device virtio-scsi-ccw,id=scsi0,devno=fe.0.0002 -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng,rng=rng0,devno=fe.0.0003 -device virtserialport,chardev=charch0,id=channel0,name=agent.channel.0 -chardev socket,id=charch0,path=/run/vc/vm/afde02094f7deaf4d1e59d2d3c386e7557b88a2bd499c311d4188d8bf63f8aea/kata.sock,server,nowait -device virtio-9p-ccw,fsdev=extra-9p-kataShared,mount_tag=kataShared,devno=fe.0.0004 -fsdev local,id=extra-9p-kataShared,path=/run/kata-containers/shared/sandboxes/afde02094f7deaf4d1e59d2d3c386e7557b88a2bd499c311d4188d8bf63f8aea,security_model=none -netdev tap,id=network-0,fds=3 -device driver=virtio-net-ccw,netdev=network-0,mac=32:2d:62:f2:b4:bb,mq=on,devno=fe.0.0005 -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic -daemonize -kernel /usr/share/kata-containers/vmlinuz-4.19.75-54 -initrd /usr/share/kata-containers/kata-containers-initrd.img -append console=ttysclp0 quiet panic=1 nr_cpus=32 agent.use_vsock=false -pidfile /run/vc/vm/afde02094f7deaf4d1e59d2d3c386e7557b88a2bd499c311d4188d8bf63f8aea/pid -smp 1,cores=1,threads=1,sockets=32,maxcpus=32

I'm getting 2 containers, but not in the same VM. Am I doing something wrong?

alicefr added the bug (Incorrect behaviour) and needs-review (Needs to be assessed by the team) labels on Oct 23, 2019

alicefr commented Oct 23, 2019

I'm using podman pod create --infra=false because I couldn't find a way to specify --security-opt label=disable for the infra container.
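For what it's worth, I believe newer libpod versions have a global switch in /etc/containers/libpod.conf that would disable SELinux labeling for every container, including the infra one (the option name is an assumption on my part):

# /etc/containers/libpod.conf -- assumed knob to disable SELinux labeling globally
label = false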

alicefr changed the title from "podman: how to start a pod with kata" to "podman: how to start a pod with kata?" on Oct 23, 2019

zer0def commented Oct 26, 2019

Correct me if I'm wrong, but this question might be better raised with containers/libpod, since there are working examples with containerd/CRI, which would roughly equate to:

#!/bin/sh -ex
POD_NAME=asdf

# create *and* launch pod, as `crictl runp` would
podman pod create -n ${POD_NAME} && podman pod start ${POD_NAME}

# -d detaches each container so the first 'sleep 3600' doesn't block the loop
for i in $(seq 2); do podman run -d --pod ${POD_NAME} alpine /bin/sh -c 'sleep 3600'; done

Except the last line results in the following errors, leaving the containers created but not started (presumably each Kata VM tries to add a traffic-mirroring qdisc on the same interface inside the shared pod network namespace, and the second attempt collides):

Error: Failed to add qdisc for network index 4 : file exists: OCI runtime error
Error: Failed to add qdisc for network index 4 : file exists: OCI runtime error

With that said, I'd be interested in a solution to this, as well.


alicefr commented Oct 27, 2019

@zer0def, my goal is not just to start a pod, but to start one using podman. I've already been able to start a pod using cri-containerd and cri-o. What I'm getting is one container per VM, rather than all containers in the same VM.
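For comparison, the CRI flow that already works creates a single sandbox first and then adds containers to it; roughly (the JSON config file names are placeholders):

$ crictl runp pod-config.json                                   # one Kata VM for the whole pod
$ crictl create <pod-id> container-config.json pod-config.json  # container joins the existing sandbox
$ crictl start <container-id>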


zer0def commented Oct 27, 2019

I understand, but since cri-containerd already does this correctly, I suspect podman is confused about whether it should launch just a container shim rather than a distinct pod sandbox.

A quick auditd watch on kata-runtime shows that containerd calls kata-runtime create only for the pod's infra container, while podman does so per container (with distinct bundles), which might lead to the behaviour you're describing.
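For anyone who wants to reproduce that check, a watch along these lines should do (the kata-runtime path is an assumption):

$ sudo auditctl -w /usr/bin/kata-runtime -p x -k kata-runtime   # log every exec of the binary
$ sudo ausearch -k kata-runtime                                 # inspect the recorded invocations

If I recall correctly, kata-runtime tells a sandbox apart from a pod container via CRI annotations in the bundle's config.json (io.kubernetes.cri.container-type for containerd, io.kubernetes.cri-o.ContainerType for CRI-O), which podman presumably doesn't set.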
