UserGuide
THIS DOCUMENT IS A WORK IN PROGRESS
- Kata Containers User Guide
- Installation
- Configuration
- Workloads
- Appendix
This Kata Containers User Guide aims to be a comprehensive guide to understanding, installing, configuring, and using Kata Containers.
The Kata Containers source repositories contain significant amounts of other documentation, covering some subjects in more detail.
Kata Containers is, as defined in the community repo:
Kata Containers is an open source project and community working to build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs.
It is a drop-in, additional OCI-compatible container runtime, and can therefore be used with both Docker and Kubernetes.
Kata Containers is primarily a Linux-based application. It can be installed on the most common Linux distributions using their standard packaging tools.
Details on installation can be found in the documentation repository.
For the curious, adventurous, developers, or those using a distribution not presently supported with pre-built packages, Kata Containers can be installed from source. If you are on a distribution that is not presently supported, please feel free to reach out to the community to discuss adding support. Of course, contributions are most welcome.
Kata Containers can be installed into Docker as an additional container runtime. This does not remove any functionality from Docker. You can choose which container runtime is Docker's default if none is specified, and you can run Kata Containers in parallel with containers using a different container runtime (such as the default Docker runc runtime).
Instructions on how to configure Docker to add Kata Containers as a runtime can be found in the documentation repository.
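As a sketch of what such a configuration can look like, Docker can register an additional runtime via `/etc/docker/daemon.json`. The runtime name `kata-runtime` and the path `/usr/bin/kata-runtime` below are assumptions that depend on how Kata Containers was installed:

```shell
# Sketch: register Kata Containers as an extra Docker runtime.
# ASSUMPTIONS: the binary is installed at /usr/bin/kata-runtime and is
# registered under the name "kata-runtime"; adjust both to your install.
# NOTE: this overwrites any existing daemon.json; merge by hand if you
# already have one.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "runtimes": {
    "kata-runtime": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}
EOF
sudo systemctl restart docker

# Select the runtime per container:
docker run --runtime kata-runtime -it busybox sh
```

Containers started without `--runtime` continue to use Docker's default runtime, so both runtimes can be used side by side.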
It should be noted that Kata Containers may not presently function fully in all docker compose situations. In particular, Docker Compose makes use of network links to supply its own internal DNS service, which is difficult for Kata Containers to replicate. Work is ongoing, and the Kata Containers limitations document can be checked for the present state.
Kata Containers can be integrated as a runtime into Kubernetes. Kata Containers can be integrated via either CRI-containerd or CRI-O.
For details on configuring Kata Containers with CRI-containerd, see this document.
Note that pods differ in some functionality from plain Docker containers (such as memory and CPU scaling); such differences are noted throughout the document.
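As an illustrative sketch, the containerd CRI plugin can be pointed at Kata by adding a runtime entry to `/etc/containerd/config.toml`. This assumes a containerd version with shim v2 support and the `io.containerd.kata.v2` shim installed; section names can vary between containerd releases:

```shell
# Sketch: add a Kata runtime entry to containerd's CRI plugin config.
# ASSUMPTION: containerd with shim v2 support and the io.containerd.kata.v2
# shim installed; verify the section names against your containerd version.
sudo tee -a /etc/containerd/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
  runtime_type = "io.containerd.kata.v2"
EOF
sudo systemctl restart containerd
```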
Kata Containers can be used as a runtime for OpenStack by integrating with Zun. Details on how to set this integration up can be found in this document.
Kata Containers has a comprehensive TOML based configuration file. Much of the information on the available configuration options is contained directly in that file. This section expands on some of the details and finer points of the configuration.
Kata Containers supports rootfs images and initrd images. It also supports running with either the kata-agent as the init process, or systemd. The kata-containers-image package includes both a rootfs-based image and an initrd-based image. Currently, the default configuration.toml configuration file specifies a rootfs image using systemd as the init daemon.
To help decide which combination of image and init daemon is appropriate for your uses, consider the following table:
Image type | init | Boot speed | Image Size | Supports Factory? | Supports agent tracing? | Supports debug console? | Notes |
---|---|---|---|---|---|---|---|
rootfs | systemd | good | small | no | yes | yes | Flexible as easy to customise |
rootfs | agent | fastest | smaller | no | yes | no | |
initrd | agent | faster | smallest | yes | no | no | Not as flexible as systemd-based image |
initrd | systemd | n/a | n/a | n/a | n/a | no | Not supported |
Note:
To determine what type of image your system is configured for, run the following command and look at the "Image details" information:
$ sudo kata-collect-data.sh
Or, to just see the details, run:
$ sudo kata-collect-data.sh | sed -ne "/osbuilder:/, /\`\`\`/ p" | egrep "description:|agent-is-init-daemon:"
As Kata Containers runs containers inside VMs it differs from software containers in how memory is allocated and restricted to the container. VMs are allocated an amount of memory, whereas software containers can run either unconstrained (they can access, and share with other containers, all of the host memory), or they can have some constraints imposed upon them via hard or soft limits.
If no constraints are set, Kata Containers sets the VM memory size to the value in the runtime config file, which is 2048 MiB by default. If a memory constraint is requested, that constraint is added to the default.
Kata Containers gets the memory constraint information from the OCI JSON file passed to it by the orchestration layer. In the case of Docker, these can be set on the command line. For Kubernetes, you can set up memory limits and requests.
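The sizing rule above can be illustrated with some simple arithmetic. The 2048 MiB default comes from the runtime config file; the 256 MiB limit is an example value:

```shell
# Illustration only: estimate the VM memory for a container started with a
# 256 MiB limit (e.g. "docker run -m 256m ..."), given the default VM size.
default_mib=2048   # default memory size from the runtime configuration file
limit_mib=256      # example memory constraint from the OCI spec
vm_mib=$(( default_mib + limit_mib ))
echo "VM memory: ${vm_mib} MiB"   # prints: VM memory: 2304 MiB
```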
Note: we should detail how limits and requests map into Kata VMs.
If the container orchestrator provides CPU constraints, then Kata Containers configures the VM per those constraints (rounded up to the nearest whole CPU), plus one extra CPU to cater for any VM overheads. More details can be found in the CPU constraints document.
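The rounding rule can be sketched as follows; the 1.5 CPU request, expressed here in millicpus, is an example value:

```shell
# Illustration only: a container requesting 1.5 CPUs gets a VM with
# ceil(1.5) = 2 vCPUs for the workload, plus 1 vCPU for VM overheads.
requested_milli=1500                                # e.g. "docker run --cpus 1.5"
rounded_up=$(( (requested_milli + 999) / 1000 ))    # ceiling, in whole CPUs
vm_vcpus=$(( rounded_up + 1 ))
echo "VM vCPUs: ${vm_vcpus}"   # prints: VM vCPUs: 3
```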
Note: Kata maps in the rootfs differently depending on the host-side graph driver (block or 9p); this needs a more detailed explanation.
See this issue for more details on how to configure per-pod kernels
See this issue for more details on how to configure per-pod images
NEMU is a version of the QEMU hypervisor specifically tailored for lightweight cloud use cases. NEMU can be integrated and used with Kata Containers. A guide can be found in this document.
Kata Containers supports passthrough of SR-IOV devices to the container workloads. A guide on configuration can be found in this document.
Kata Containers supports direct GPU assignment to containers. Documentation can be found here.
ptys, file handles, network size.
Some legacy workloads, such as centos:6 and debian:7, require the kernel CONFIG_LEGACY_VSYSCALL_EMULATE option to be enabled in order to work with their older (pre-version 2.15) versions of glibc. By default the Kata Containers kernel does not enable this feature, which may result in such workloads failing (such as bash creating a core dump).
The vsyscall feature can be enabled in the Kata kernel without a recompile, by adding vsyscall=emulate to the kernel parameters in the Kata Containers config file.
Note: this change will affect all Kata Containers launched, and may reduce the security of your containers.
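As a sketch, the parameter can be appended in place. The configuration file path below is an assumption; it varies by distribution and install method:

```shell
# Sketch: append vsyscall=emulate to the kernel_params entry.
# ASSUMPTION: the config file lives at the path below; check your install.
sudo sed -i 's/^kernel_params = "\(.*\)"/kernel_params = "\1 vsyscall=emulate"/' \
  /usr/share/defaults/kata-containers/configuration.toml
```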
This section covers the configurations and variations of networking supported by Kata Containers.
It covers the default networking cases, as well as advanced use cases and acceleration techniques.
Kata Containers supports the CNI networking plugins by default. This is the preferred networking model for use with Kata Containers and Kubernetes.
Kata Containers does support the CNM networking model, but the CNI is the preferred model.
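For reference, a minimal CNI configuration for the standard bridge plugin looks like the following. The file name, network name, bridge device, and subnet are example values:

```shell
# Sketch: a minimal CNI bridge network definition (standard CNI spec fields);
# the name, bridge device, and subnet below are example values.
sudo tee /etc/cni/net.d/10-mynet.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "172.19.0.0/24"
  }
}
EOF
```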
Kata Containers can be used with DPDK. An example can be found in the Kata Containers VPP guide.
Kata Containers can use VPP. Instructions can be found in this document.
There are a number of different security tools and layers that can be applied at a number of different levels (such as on the host or inside the container) in Kata Containers. This section details which layers are supported and where, and if they are enabled by default or not.
There are plans to construct a host side SELinux profile for Kata Containers.
There has also been discussion and valid use cases proposed for enabling SELinux inside the containers, in particular in the case of multi-container pods, where SELinux isolation between the containers may be desirable.
seccomp is supported by Kata Containers inside the guest container, but is not enabled by default in the shipped rootfs (as it adds overheads to the system that not all users may want).
seccomp can be enabled by building a new rootfs image using osbuilder, with SECCOMP=true set in your environment.
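As a sketch, assuming the osbuilder scripts from the kata-containers/osbuilder repository (the script layout and the distro argument may differ between releases):

```shell
# Sketch: build a guest rootfs with seccomp support enabled.
# ASSUMPTIONS: osbuilder's rootfs-builder layout and the "clearlinux"
# distro target; check the osbuilder README for your release.
git clone https://github.com/kata-containers/osbuilder
cd osbuilder/rootfs-builder
sudo -E SECCOMP=true ./rootfs.sh clearlinux
```

The resulting rootfs then needs to be turned into an image and referenced from the Kata configuration file, as described in the osbuilder documentation.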
AppArmor support is not currently present in Kata Containers.
Some containers (workloads) may require special treatment to run under Kata Containers, in particular workloads that have close interactions with the host, such as those using --privileged mode or handing in host-side items such as sockets.
This section will detail known 'special' use cases, and where possible, additions, tweaks and workarounds that can be used to enable such workloads to function under Kata Containers.
The x11docker project is known to be able to run at least a subset of X11 applications under Kata Containers. See the project documentation for more details.
entropy