A fast and low-memory footprint OCI Container Runtime fully written in C.
crun conforms to the OCI Container Runtime specifications (https://github.com/opencontainers/runtime-spec).
The user documentation is available here.
While most of the tools used in the Linux containers ecosystem are written in Go, I believe C is a better fit for a lower-level tool like a container runtime. runc, the most used implementation of the OCI runtime specs, is written in Go; it re-execs itself and uses a module written in C to set up the environment before the container process starts.
crun also aims to be usable as a library that can be easily included in programs, without requiring an external process for managing OCI containers.
crun is faster than runc and has a much lower memory footprint.
This is the elapsed time on my machine for sequentially running 100 containers, each of which runs /bin/true:
|               | crun    | runc    | %      |
|---------------|---------|---------|--------|
| 100 /bin/true | 0:01.69 | 0:03.34 | -49.4% |
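These numbers can be approximated with a simple loop. This is only an illustrative sketch, not the exact harness used for the table above, and it assumes the current directory already contains a prepared OCI bundle (a config.json plus rootfs) whose process is /bin/true:
# time (for i in $(seq 100); do crun run "bench-crun-$i"; done)
# time (for i in $(seq 100); do runc run "bench-runc-$i"; done)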
crun requires fewer resources, so it is also possible to set stricter limits on the memory and number of PIDs allowed in the container:
# podman --runtime /usr/bin/runc run --rm --pids-limit 1 fedora echo it works
Error: container_linux.go:346: starting container process caused "process_linux.go:319: getting the final child's pid from pipe caused \"EOF\"": OCI runtime error
# podman --runtime /usr/bin/crun run --rm --pids-limit 1 fedora echo it works
it works
# podman --runtime /usr/bin/runc run --rm --memory 4M fedora echo it works
Error: container_linux.go:346: starting container process caused "process_linux.go:327: getting pipe fds for pid 13859 caused \"readlink /proc/13859/fd/0: no such file or directory\"": OCI runtime command not found error
# podman --runtime /usr/bin/crun run --rm --memory 4M fedora echo it works
it works
crun could go much lower than that and require less than 1 MB. The 4 MB used above is a hard limit set directly in Podman before calling the OCI runtime.
On Fedora these dependencies are required for the build:
$ sudo dnf install -y make python git gcc automake autoconf libcap-devel \
systemd-devel yajl-devel libseccomp-devel \
go-md2man glibc-static python3-libmount libtool
On RHEL/CentOS 8:
$ sudo yum --enablerepo='*' install -y make automake autoconf gettext \
    libtool gcc libcap-devel systemd-devel yajl-devel \
    libseccomp-devel python36
go-md2man is not available on RHEL/CentOS 8, so if you'd like to build the man page, you need to install go-md2man manually:
$ sudo yum --enablerepo='*' install -y golang
$ export GOPATH=$HOME/go
$ go get github.com/cpuguy83/go-md2man
$ export PATH=$PATH:$GOPATH/bin
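Once go-md2man is on the PATH, the build will pick it up when generating the man page. It can also be invoked by hand to check the installation; this assumes the man-page source is crun.1.md at the top of the source tree:
$ go-md2man -in crun.1.md -out crun.1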
On Ubuntu:
$ sudo apt-get install -y make git gcc build-essential pkgconf libtool \
    libsystemd-dev libcap-dev libseccomp-dev libyajl-dev \
    go-md2man autoconf python3 automake
On Alpine:
# apk add gcc automake autoconf libtool gettext pkgconf git make musl-dev \
python3 libcap-dev libseccomp-dev yajl-dev argp-standalone go-md2man
On openSUSE Tumbleweed:
# zypper install make automake autoconf gettext libtool gcc libcap-devel \
    systemd-devel yajl-devel libseccomp-devel python3 go-md2man
Note that Tumbleweed requires you to specify libseccomp's header file location as a compiler flag.
# ./autogen.sh
# ./configure CFLAGS='-I/usr/include/libseccomp'
# make
Unless you are also building the Python bindings, Python is needed only by libocispec to generate the C parser at build time; it won't be used afterwards.
Once all the dependencies are installed:
$ ./autogen.sh
$ ./configure
$ make
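The build produces a crun binary at the top of the build tree, which can be checked before installing:
$ ./crun --version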
To install into the default PREFIX (/usr/local):
$ sudo make install
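To install somewhere else, pass the standard autoconf --prefix option at configure time, for example:
$ ./configure --prefix=/usr
$ make
$ sudo make install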
The build instructions above do not enable shared libraries, so you will not be able to use libcrun. If you wish to build the shared libraries, change the ./configure invocation above to ./configure --enable-shared.
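A minimal sketch of a shared build follows; the .libs/ path is an assumption based on libtool's usual layout for intermediate shared objects before make install copies them to the prefix:
$ ./configure --enable-shared
$ make
$ ls .libs/libcrun.so*
$ sudo make install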
It is possible to build a statically linked binary of crun by using the officially provided nix package and its derivation within this repository. The builds are completely reproducible and create an x86_64/amd64 stripped ELF binary for glibc.
To build the binaries locally, install the nix package manager and run:
$ nix build -f nix/
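By default nix build leaves a result symlink in the working directory; assuming that layout, the produced binary can be inspected to confirm it is a statically linked, stripped ELF:
$ file result/bin/crun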
An Ansible role is also available to automate the installation of the statically linked binary described above on supported operating systems:
$ sudo su -
# mkdir -p ~/.ansible/roles
# cd ~/.ansible/roles
# git clone https://github.com/alvistack/ansible-role-crun.git crun
# cd ~/.ansible/roles/crun
# pip3 install --upgrade --ignore-installed --requirement requirements.txt
# molecule converge
# molecule verify