diff --git a/README.md b/README.md
index 19ef641da87..57ddbc2328a 100644
--- a/README.md
+++ b/README.md
@@ -4,6 +4,8 @@ containerOS (**cOS**) is a toolkit to build, ship and maintain cloud-init driven
 It is designed to reduce the maintenance surface, with a flexible approach to provide upgrades from container registries. It is cloud-init driven and also designed to be adaptive-first, making it easy to build changes on top.
 
+[Documentation is available at https://rancher-sandbox.github.io/cos-toolkit-docs/docs](https://rancher-sandbox.github.io/cos-toolkit-docs/docs)
+
 
 - [containerOS toolkit](#containeros-toolkit)
@@ -12,8 +14,6 @@ It is designed to reduce the maintenance surface, with a flexible approach to pr
   - [Design goals](#design-goals)
   - [Build cOS Locally](#build-cos-locally)
   - [First steps](#first-steps)
-  - [References](#references)
-    - [Derivatives](#derivatives)
   - [Samples](#samples)
   - [cOS development](#cos-development)
   - [License](#license)
@@ -22,7 +22,7 @@ It is designed to reduce the maintenance surface, with a flexible approach to pr
 
 ## In a nutshell
 
-cOS derivatives are built from containers, and completely hosted on image registries. The build process results in a single container image used to deliver regular upgrades in an OTA approach. Each derivative built with `cos-toolkit` inherits by default the [following featuresets](/docs/derivatives_featureset.md).
+cOS derivatives are built from containers, and completely hosted on image registries. The build process results in a single container image used to deliver regular upgrades in an OTA approach. Each derivative built with `cos-toolkit` inherits a default featureset.
 
 cOS supports different release channels; all the final and cache images used are tagged and pushed regularly [to the Quay Container Registry](https://quay.io/repository/costoolkit/releases-opensuse) and can be pulled for inspection from the registry as well.
 
@@ -58,9 +58,9 @@ ISO [from the Github Actions page](https://github.com/rancher-sandbox/cOS-toolki
 
 ### Build cOS Locally
 
-The starting point to use cos-toolkit is to see it in action with our [sample repository](https://github.com/rancher-sandbox/cos-toolkit-sample-repo) or check out our `examples` folder; see also [creating bootable images](/docs/creating_bootable_images.md).
+The starting point to use cos-toolkit is to see it in action with our [sample repository](https://github.com/rancher-sandbox/cos-toolkit-sample-repo) or check out our `examples` folder; see also [creating bootable images](https://rancher-sandbox.github.io/cos-toolkit-docs/docs/creating-derivatives/creating_bootable_images/).
 
-The only requirement to build derivatives with `cos-toolkit` is to have Docker installed; see [Development notes](/docs/dev.md) for more details on how to build `cos` itself instead.
+The only requirement to build derivatives with `cos-toolkit` is to have Docker installed; see [Development notes](https://rancher-sandbox.github.io/cos-toolkit-docs/docs/development/) for more details on how to build `cos` itself instead.
 
 ## First steps
 
@@ -75,17 +75,10 @@ $ source .envrc
 $ cos-build
 ```
 
-This command will build a container image which contains the required dependencies to build the custom OS and which will later be used to build the OS itself. The result will be a set of container images and an ISO which you can boot with your environment of choice. See [Creating derivatives](/docs/creating_derivatives.md) for more details about the process.
-
-If you are only looking to generate a container image that can be used for upgrades from the cOS vanilla images, see [creating bootable images](/docs/creating_bootable_images.md) and see also [how to drive upgrades with Fleet](https://github.com/rancher-sandbox/cos-fleet-upgrades-sample).
+This command will build a container image which contains the required dependencies to build the custom OS and which will later be used to build the OS itself. The result will be a set of container images and an ISO which you can boot with your environment of choice. See [Creating derivatives](https://rancher-sandbox.github.io/cos-toolkit-docs/docs/creating-derivatives/creating_derivatives/) for more details about the process.
 
-## References
+If you are only looking to generate a container image that can be used for upgrades from the cOS vanilla images, see [creating bootable images](https://rancher-sandbox.github.io/cos-toolkit-docs/docs/creating-derivatives/creating_bootable_images/) and see also [how to drive upgrades with Fleet](https://rancher-sandbox.github.io/cos-toolkit-docs/docs/tutorials/trigger_upgrades_with_fleet/).
 
-### Derivatives
-- [Creating derivatives](/docs/creating_derivatives.md)
-- [Creating bootable images](/docs/creating_bootable_images.md)
-- [Derivatives featureset](/docs/derivatives_featureset.md)
-- [Building AMI machines in AWS](/docs/building_aws_ami.md)
 
 ### Samples
 - [Sample repository](https://github.com/rancher-sandbox/cos-toolkit-sample-repo)
...
 - [Deploy Fleet on a cOS vanilla image](/docs/k3s_and_fleet_on_vanilla_image_example.md)
 
 ### cOS development
-- [Development notes](/docs/dev.md)
-- [High Level architecture](/docs/high_level_architecture.md)
 - [Github project](https://github.com/mudler/cOS/projects/1) for a short-term Roadmap
 
-### Usage hints
-
-- [Grub2 default boot entry setup](/docs/configure_grub.md)
-
 ## License
 
 Copyright (c) 2020-2021 [SUSE, LLC](http://suse.com)
diff --git a/docs/building_aws_ami.md b/docs/building_aws_ami.md
deleted file mode 100644
index 6ae43e0be97..00000000000
--- a/docs/building_aws_ami.md
+++ /dev/null
@@ -1,96 +0,0 @@
-# Build AMI machines for AWS based on cOS Vanilla image
-
-This section documents the procedure to deploy cOS (or derivative) images
-in the AWS public cloud provider by using the cOS Vanilla image.
-
-Requirements:
-
-* Packer
-* AWS access keys with the appropriate roles and permissions
-* A Vanilla AMI
-
-The suggested approach is based on using Packer templates to customize the
-deployment and automate the upload and publishing to AWS. For all the details
-and possibilities of Packer, check the [official documentation](https://www.packer.io/guides/hcl).
-
-## Run the build with Packer
-
-Publishing an AMI image in AWS on top of the latest cOS Vanilla image is
-fairly simple. In fact, it is only necessary to set the AWS credentials
-and run a `packer build` process to trigger the deployment and register the
-resulting snapshot as an AMI. In that case the latest cOS image will be
-deployed and configured with pure defaults. Consider:
-
-```bash
-# From the root of a cOS-toolkit repository checkout
-
-> export AWS_ACCESS_KEY_ID=
-> export AWS_SECRET_ACCESS_KEY=
-> export AWS_DEFAULT_REGION=
-
-> cd packer
-> packer build -only amazon-ebs.cos .
-```
-
-AWS keys can be passed as environment variables as above, or packer
-picks them up from the aws-cli configuration files (`~/.aws`), if any.
-Alternatively, one can define them in the variables file.
-
-The `-only amazon-ebs.cos` flag just tells packer which of the sources
-to use for the build. Note that the `packer/images.json.pkr.hcl` file defines a
-few other sources, such as `qemu` and `virtualbox`.
-
-## Customize the build with a variables file
-
-The packer template can be customized with the variables defined in
-`packer/variables.pkr.hcl`. These are the variables that can be set at run
-time using the `-var key=value` or `-var-file=path` flags. The variable file
-can be a json file including the desired variables. Consider the following example:
-
-```bash
-# From the packer folder of the cOS-toolkit repository checkout
-
-> cat << EOF > test.json
-{
-    "aws_cos_install_args": "cos-deploy",
-    "aws_launch_volume_size": 16,
-    "name": "MyTest"
-}
-EOF
-
-> packer build -only amazon-ebs.cos -var-file=test.json .
-```
-
-The above example runs the AMI Vanilla image on a 16GiB disk and calls the
-`cos-deploy` command to deploy the main OS. Once deployed, a snapshot is
-created and an AMI from this snapshot is registered in EC2. The created
-AMI artifact will be called `MyTest`; the name has no impact on the underlying
-OS.
-
-### Available variables for customization
-
-All the customizable variables are listed in `packer/variables.pkr.hcl`;
-variables with the `aws_` prefix are the ones related to the AWS Packer
-template. These are some of the relevant ones:
-
-* `aws_cos_install_args`: This is the command that will be executed once the
-  Vanilla image has booted. In this stage it is expected that the user sets a command
-  to install the desired cOS or derivative image. By default it is set to
-  `cos-deploy`, which will deploy the latest cOS image from the cOS repositories.
-  To deploy custom derivatives something like
-  `cos-deploy --docker-image ` should be sufficient.
-
-* `aws_launch_volume_size`: This sets the disk size of the VM that Packer
-  launches for the build. During the Vanilla image's first boot the system will
-  expand to the disk geometry. The layout is configurable with the user-data.
-
-* `aws_user_data_file`: This sets the user-data file that will be used for the
-  aws instance during the build process. It defaults to `aws/setup-disk.yaml`, and
-  the default file basically includes the disk expansion configuration. It
-  adds a `COS_STATE` partition that should be big enough to store about three times
-  the size of the image to deploy. Then it also creates a `COS_PERSISTENT`
-  partition with all the rest of the available space on disk.
-
-* `aws_source_ami_filter_name`: This is a filter to choose the AMI image for the
-  build process. It defaults to the `*cOS*Vanilla*` pattern to pick the latest cOS
-  Vanilla image available.
diff --git a/docs/configure_grub.md b/docs/configure_grub.md
deleted file mode 100644
index ffefe8fcf16..00000000000
--- a/docs/configure_grub.md
+++ /dev/null
@@ -1,34 +0,0 @@
-# Grub2 default boot entry setup
-
-cOS (since v0.5.8) makes use of the grub2 environment block, which can be used to define
-persistent grub2 variables across reboots.
-
-The default grub configuration loads the `/grubenv` of any available device
-and evaluates the `next_entry` and `saved_entry` variables. By default
-neither is set.
-
-The default boot entry is set to the value of `saved_entry`; in case the variable
-is not set, grub just defaults to the first menu entry.
-
-The `next_entry` variable can be used to overwrite the default boot entry for a single
-boot.
-If the `next_entry` variable is set, it is only used once: grub2 will
-unset it after reading it for the first time. This is helpful to define the menu entry
-to reboot to without having to make any permanent config change.
-
-Use the `grub2-editenv` command line utility to define the desired values.
-
-For instance, use the following command to reboot to the recovery system only once:
-
-```bash
-> grub2-editenv /oem/grubenv set next_entry=recovery
-```
-
-Or to set the default entry to the `fallback` system:
-
-```bash
-> grub2-editenv /oem/grubenv set default=fallback
-```
-
-These examples make use of the `COS_OEM` device; however, any device
-detected by grub2 that includes the file `/grubenv` could be used. First match wins.
-
diff --git a/docs/creating_bootable_images.md b/docs/creating_bootable_images.md
deleted file mode 100644
index e607483cf22..00000000000
--- a/docs/creating_bootable_images.md
+++ /dev/null
@@ -1,115 +0,0 @@
-# Creating bootable images
-
-This document describes the requirements for creating supported standard container images that can be used to deploy a `cOS` derivative.
-
-You can find the examples below in the `examples` folder.
-
-## From standard images
-
-Besides using the `cos-toolkit` toolchain, it's possible to create standard container images which are consumable by the vanilla `cOS` images (ISO, Cloud Images, etc.) during the upgrade and deploy phases.
-
-An example of a Dockerfile image can be:
-
-```
-ARG LUET_VERSION=0.16.7
-
-FROM quay.io/luet/base:$LUET_VERSION AS luet
-
-FROM opensuse/leap:15.3
-ARG ARCH=amd64
-ENV ARCH=${ARCH}
-RUN zypper in -y \
-    ...
-
-# Copy the luet config file pointing to the upgrade repository
-COPY conf/luet.yaml /etc/luet/luet.yaml
-
-# Copy luet from the official images
-COPY --from=luet /usr/bin/luet /usr/bin/luet
-
-RUN luet install -y \
-    toolchain/yip \
-    utils/installer \
-    system/cos-setup \
-    system/immutable-rootfs \
-    system/grub-config \
-    system/cloud-config \
-    utils/k9s \
-    utils/nerdctl
-
-COPY files/ /
-RUN mkinitrd
-
-```
-
-The important piece is that an image needs to ship at least `toolchain/yip`, `utils/installer` and `system/cos-setup`. `system/grub-config` is a default grub configuration, while you could likewise supply your own in `/etc/cos/bootargs.cfg`.
-
-You can also generate an image directly from the ones that the CI is publishing, or from scratch. See the full example in [examples/standard](/examples/standard).
-
-## Generating from CI image
-
-You can just use the published final images:
-
-```
-# Pick one version from https://quay.io/repository/costoolkit/releases-opensuse?tab=tags
-FROM quay.io/costoolkit/releases-opensuse:cos-system-0.5.3-5
-
-COPY files/ /
-
-...
-```
-
-See the full example in [examples/cos-official](/examples/cos-official).
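-
-As a quick sketch of how such an image could then be consumed (the `myorg/custom-cos` name and tag below are placeholders, not published images):
-
-```bash
-# Build and publish the derived image
-> docker build -t myorg/custom-cos:0.1 .
-> docker push myorg/custom-cos:0.1
-
-# On a running vanilla cOS system, upgrade directly to it
-# (--no-verify skips signature verification for unsigned images)
-> cos-upgrade --no-verify --docker-image myorg/custom-cos:0.1
-```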
-
-## From scratch
-
-The luet image `quay.io/luet/base` contains just luet, and can be used to bootstrap the base system from scratch:
-
-conf/luet.yaml:
-```yaml
-logging:
-  color: false
-  enable_emoji: false
-general:
-  debug: false
-  spinner_charset: 9
-repositories:
-- name: "cos"
-  description: "cOS official"
-  type: "docker"
-  enable: true
-  cached: true
-  priority: 1
-  verify: false
-  urls:
-  - "quay.io/costoolkit/releases-opensuse"
-```
-
-Dockerfile:
-```
-FROM quay.io/luet/base:latest
-# Copy the luet config file pointing to the cOS repository
-ADD conf/luet.yaml /etc/luet/luet.yaml
-
-ENV USER=root
-
-SHELL ["/usr/bin/luet", "install", "-y", "-d"]
-
-RUN system/cos-container
-
-SHELL ["/bin/sh", "-c"]
-RUN rm -rf /var/cache/luet/packages/ /var/cache/luet/repos/
-
-ENV TMPDIR=/tmp
-ENTRYPOINT ["/bin/sh"]
-```
-
-See the full example in [examples/scratch](/examples/scratch).
-
-## Customizations
-
-All the methods above imply that the generated image will be the booting one; there are, however, several configuration entrypoints that you should keep in mind while building the image:
-
-- Everything under `/system/oem` will be loaded during the various stages (boot, network, initramfs). You can check [here](https://github.com/rancher-sandbox/cOS-toolkit/tree/e411d8b3f0044edffc6fafa39f3097b471ef46bc/packages/cloud-config/oem) for the `cOS` defaults. See `00_rootfs.yaml` to customize the booting layout.
-- `/etc/cos/bootargs.cfg` contains the booting options required to boot the image with GRUB
-- `/etc/cos-upgrade-image` contains the default upgrade configuration for recovery and the booting system image
\ No newline at end of file
diff --git a/docs/creating_derivatives.md b/docs/creating_derivatives.md
deleted file mode 100644
index 77b47abd24d..00000000000
--- a/docs/creating_derivatives.md
+++ /dev/null
@@ -1,389 +0,0 @@
-# Creating derivatives
-
-This document summarizes references for creating immutable derivatives with `cos-toolkit`.
-
-`cos-toolkit` is a manifest to share a common abstract layer between derivatives inheriting the same [featureset](/docs/derivatives_featureset.md).
-
-`cos` is a [Luet tree](https://luet-lab.github.io/docs/docs/concepts/packages/specfile/#specfiles) and derivatives are Luet trees as well that inherit part of the compilation specs from `cos`.
-
-Those trees are then post-processed and converted to Dockerfiles when building packages, which in turn are used to build docker images and final artifacts.
-
-
-
-- [Creating derivatives](#creating-derivatives)
-  - [High level workflow](#high-level-workflow)
-  - [Example](#example)
-  - [Single image OS](#single-image-os)
-    - [Building](#building)
-  - [Additional packages](#additional-packages)
-  - [Templating](#templating)
-  - [Upgrades](#upgrades)
-  - [OEM Customizations](#oem-customizations)
-  - [Customizing GRUB boot cmdline](#customizing-grub-boot-cmdline)
-  - [Separate image recovery](#separate-image-recovery)
-  - [Building ISOs, Vagrant Boxes, OVA](#building-isos-vagrant-boxes-ova)
-  - [Known issues](#known-issues)
-    - [Building SELinux fails](#building-selinux-fails)
-    - [Multi-stage copy build fails](#multi-stage-copy-build-fails)
-
-
-
-## High level workflow
-
-The building workflow can be summarized in the following steps:
-
-- Build packages from container images.
-  This step generates build metadata (`luet build`).
-- Add repository metadata and create a repository from the build phase (`luet create-repo`).
-- (otherwise, optionally) Publish the repository and the artifacts along with it (`luet create-repo --push-images`).
-
-On the client side, the upgrade workflow is:
-- `luet install` (when upgrading from release channels) the latest cos on a pristine image file
-- or `luet util unpack` (when upgrading from specific docker images)
-
-*Note*: The manual build steps are not stable and will likely change until [we build a single CLI](https://github.com/rancher-sandbox/cOS-toolkit/issues/108) to encompass the `cos-toolkit` components; for the time being, use `source .envrc && cos-build` while iterating locally.
-
-## Example
-
-[The sample repository](https://github.com/rancher-sandbox/cos-toolkit-sample-repo) has the following layout:
-
-```
-├── Dockerfile
-├── .envrc
-├── .github
-│   └── workflows
-│       ├── build.yaml
-│       └── test.yaml
-├── .gitignore
-├── iso.yaml
-├── LICENSE
-├── .luet.yaml
-├── Makefile
-├── packages
-│   ├── sampleOS
-│   │   ├── 02_upgrades.yaml
-│   │   ├── 03_branding.yaml
-│   │   ├── 04_accounting.yaml
-│   │   ├── build.yaml
-│   │   ├── definition.yaml
-│   │   └── setup.yaml
-│   └── sampleOSService
-│       ├── 10_sampleOSService.yaml
-│       ├── build.yaml
-│       ├── definition.yaml
-│       └── main.go
-└── README.md
-```
-
-In detail:
-- The `packages` directory is the sample Luet tree that contains the [package definitions](https://luet-lab.github.io/docs/docs/concepts/packages/specfile/#build-specs) [1] which compose the derivative.
-  For an overview of the package syntax and build process, see the [official luet documentation](https://luet-lab.github.io/docs/docs/concepts/packages/).
-- `.luet.yaml` contains a configuration file for `luet` pointing to the `cos` repositories, used to fetch packages required in order to build the iso [2] **and** to fetch definitions from [3].
-- `Makefile` and `.envrc` are just wrappers around `luet build` and `luet create-repo`.
-- `iso.yaml` is a YAML file that describes what packages to embed in the final ISO.
-
-*Note*: There is nothing special in the layout, nor is the `packages` folder naming special. By convention we have chosen to put the compilation specs in the `packages` folder, and the `Makefile` just calls `luet` with a set of default parameters according to this setup.
-
-The `.envrc` is provided as an example to automate the build process: it will build a docker image with the required dependencies. Check the [development docs](/docs/dev.md) for the local requirements if you plan to build outside of docker.
-
-**[1]** _In the sample above we just declare two packages: `sampleOS` and `sampleOSService`. Their metadata are respectively in `packages/sampleOS/definition.yaml` and `packages/sampleOSService/definition.yaml`_
-
-**[2]** _We consume `live/systemd-boot` and `live/syslinux` from `cos` instead of building them from the sample repository_
-
-**[3]** _see also [using git submodules](https://github.com/rancher-sandbox/epinio-appliance-demo-sample#main-difference-with-cos-toolkit-sample-repo) instead_
-
-## Single image OS
-
-Derivatives are composed of a combination of specs that form a final package consumed as a single image OS.
-
-During installation and upgrade, the container image is converted to an image file with a backing ext2 filesystem.
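-
-Conceptually, this is similar to the following sketch (a simplification for illustration, not the exact installer logic):
-
-```bash
-# Create an empty image file with an ext2 filesystem on it
-> truncate -s 3G active.img
-> mkfs.ext2 -F active.img
-
-# Loop-mount it and unpack the container filesystem into it
-> mount -o loop active.img /mnt
-> luet util unpack quay.io/costoolkit/releases-opensuse:cos-system-0.5.3-5 /mnt
-> umount /mnt
-```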
-
-In the sample repository [we have defined `system/sampleOS`](https://github.com/rancher-sandbox/cos-toolkit-sample-repo/blob/master/packages/sampleOS/definition.yaml) as our package, which will later be converted to an image.
-
-Packages in luet have `runtime` and `buildtime` specifications in `definition.yaml` and `build.yaml` respectively, and in the buildtime we set:
-
-```yaml
-join:
-- category: "system"
-  name: "cos"
-  version: ">=0"
-- category: "app"
-  name: "sampleOSService"
-  version: ">=0"
-```
-
-This instructs `luet` to compose a new image from the results of the compilation of the specified packages, without any version constraints, and use it to run any `steps` and `prelude` on top of it.
-
-We later run arbitrary steps to tweak the image:
-
-```yaml
-steps:
-- ...
-```
-
-And we instruct luet to compose the final artifact as a `squash` of the resulting container image, composed of all the files:
-
-```yaml
-unpack: true
-```
-
-A detailed explanation of all the available keywords [is in the luet docs](https://luet-lab.github.io/docs/docs/concepts/packages/specfile/#keywords) along with the [supported build strategies](https://luet-lab.github.io/docs/docs/concepts/packages/specfile/#building-strategies).
-
-We then exclude a set of files that we don't want in the final package (regexps supported):
-
-```yaml
-excludes:
-- ..
-```
-
-__Note__: In the [EpinioOS sample](https://github.com/rancher-sandbox/epinio-appliance-demo-sample/blob/19c530ea53ad577e60adbae1d419781fcea808f5/packages/epinioOS/build.yaml#L1), we use `requires` instead of `join`:
-
-```yaml
-requires:
-- category: "system"
-  name: "cos"
-  version: ">=0"
-- name: "k3s"
-  category: "app"
-  version: ">=0"
-- name: "policy"
-  category: "selinux"
-  version: ">=0"
-```
-
-The difference is that with `requires` we use the _building_ container that was used to build the packages instead of creating a new image from their results: we are not consuming their artifacts in this case, but the environment used to build them. See also [the luet docs](https://luet-lab.github.io/docs/docs/concepts/packages/specfile/#package-source-image) for more details.
-
-### Building
-
-Referring to the `sampleOS` example, we set the [Makefile](https://github.com/rancher-sandbox/cos-toolkit-sample-repo/blob/8ed369c6ca76f1fc69e49d8001c689c8d0371d30/Makefile#L13) accordingly to compile the system package.
-
-With luet installed locally and docker running, in your git checkout you can also build it by running `luet build --tree packages system/sampleOS`. This will produce an artifact of `system/sampleOS`. Similarly, we could build the sample application separately with `luet build --tree packages app/sampleOSService`.
-
-The build process by default results in a `build` folder containing the package and the compilation metadata needed to generate a repository.
-
-_Note on reproducibility_: See [the difference between our two sample repositories](https://github.com/rancher-sandbox/epinio-appliance-demo-sample#main-difference-with-cos-toolkit-sample-repo) for an explanation of the implications of using a `.luet.yaml` file for building instead of a git submodule.
-
-## Additional packages
-
-In our sample repo we have split the logic of a separate application into `app/sampleOSService`.
-
-`sampleOSService` is just an HTTP server that we would like to have permanently in the system and running on boot.
-
-Thus we define it as a dependency in the `requires` section of `system/sampleOS`:
-
-```yaml
-requires:
-...
-- category: "app" - name: "sampleOSService" - version: ">=0" -``` - -_Note_ If you are wondering about copying just single files, there is [an upstream open issue](https://github.com/mudler/luet/issues/190) about it. - -In this way, when building our `sampleOS` package, `luet` will automatically apply the compilation spec of our package on top. - -## Templating - -The package `build` definition supports [templating](https://luet-lab.github.io/docs/docs/concepts/packages/templates/), and global interpolation of build files with multiple values files. - -Values file can be specified during build time in luet with the ```--values``` flag (also multiple files are allowed) and, if you are familiar with `helm` it using the same engine under the hood, so all the functions are available as well. - -`cos-toolkit` itself uses [default values files](https://github.com/rancher-sandbox/cOS-toolkit/tree/master/values) for every supported distributions. - -For a more complex example involving values file, [see the epinio appliance example](https://github.com/rancher-sandbox/epinio-appliance-demo-sample). - -Templates uses cases are for: resharing common pieces between flavors, building for different platforms and architectures, ... - -## Upgrades - -In order for the derivative to upgrade, it needs to be configured in order to download upgrades from a source. - -By default, `cos` derivatives if not specified will point to latest `cos-toolkit`. To override, you need to or overwrite the content of `/system/oem/02_upgrades.yaml` or supply an additional one, e.g. `/system/oem/03_upgrades.yaml` in the final image, see [an example here](https://github.com/rancher-sandbox/epinio-appliance-demo-sample/blob/master/packages/epinioOS/02_upgrades.yaml). - -The configuration need to point to a specific docker image or an upgrade channel, [a complete example and documentation is here](https://github.com/rancher-sandbox/epinio-appliance-demo-sample#images). - -## OEM Customizations - -There are several way to customize a cos-toolkit derivative: - -- declaratively in runtime with cloud-config file (by overriding, or extending) -- stateful, via build definition when running `luet build`. - -For runtime persistence configuration, the only supported way is with cloud-config files, [see the relevant docs](https://github.com/rancher-sandbox/cOS-toolkit/blob/master/docs/derivatives_featureset.md#persistent-changes). - -A derivative automatically loads and executes cloud-config files which are hooking into system stages. - -In this way the cloud-config mechanism works also as an emitter event pattern - running services or programs can emit new custom `stages` in runtime by running `cos-setup stage_name`. - -For an extensive list of the default OEM files that can be reused or replaced [see here](https://github.com/rancher-sandbox/cOS-toolkit/blob/master/docs/derivatives_featureset.md#oem-customizations). - -## Customizing GRUB boot cmdline - -Each bootable image have a default boot arguments which are defined in `/etc/cos/bootargs.cfg`. This file is used by GRUB to parse the cmdline used to boot the image. 
-
-For example:
-```
-set kernel=/boot/vmlinuz
-if [ -n "$recoverylabel" ]; then
-    # Boot arguments when the image is used as recovery
-    set kernelcmd="console=tty1 root=live:CDLABEL=$recoverylabel rd.live.dir=/ rd.live.squashimg=$img panic=5"
-else
-    # Boot arguments when the image is used as active/passive
-    set kernelcmd="console=tty1 root=LABEL=$label iso-scan/filename=$img panic=5 security=selinux selinux=1"
-fi
-
-set initramfs=/boot/initrd
-```
-
-You can tweak that file to suit your needs if you need to specify persistent boot arguments.
-
-## Separate image recovery
-
-A separate recovery image can be used during upgrades.
-
-To set a default recovery image or package, set `RECOVERY_IMAGE` in `/etc/cos-upgrade-image`. This allows overriding the default image/package used during upgrades.
-
-To make an ISO with a separate recovery image as squashfs, you can use the default from `cOS` by adding it to the iso yaml file:
-
-
-```yaml
-packages:
-  rootfs:
-  ..
-  uefi:
-  ..
-  isoimage:
-  ...
-  - recovery/cos-img
-```
-
-The installer will detect the squashfs file in the iso and will use it when installing the system. You can customize the recovery image as well by providing your own: see the `recovery/cos-img` package definition as a reference.
-
-## Building ISOs, Vagrant Boxes, OVA
-
-At the time of writing, building an iso relies on [luet-makeiso](https://github.com/mudler/luet-makeiso). It accepts a YAML file denoting the packages to bundle in an ISO and a list of luet repositories to download the packages from.
-
-A sample can be found [here](https://github.com/rancher-sandbox/cos-toolkit-sample-repo/blob/master/iso.yaml).
-
-To build an iso from a local repository (the build process automatically produces a repository in `build` in the local checkout):
-
-```bash
-luet-makeiso ./iso.yaml --local build
-```
-
-Where `iso.yaml` is the iso specification file, and `--local build` is an optional argument to also use the local repository in the build process.
-
-We are then free to refer to packages in the tree in the `iso.yaml` file.
-
-For Vagrant Boxes, OVA and QEMU images, at the time of writing we rely on [packer templates](https://github.com/rancher-sandbox/cOS-toolkit/tree/master/packer).
-
-## Known issues
-
-When building cOS or a cOS derivative, you could face various issues; this section describes the best-known ones and ways to work around them.
-
-### Building SELinux fails
-
-`cOS` by default has SELinux enabled in permissive mode. If you are building parts of cOS or cOS itself from scratch, you might encounter issues while building the SELinux module, like so:
-
-```
-Step 12/13 : RUN checkmodule -M -m -o cOS.mod cOS.te && semodule_package -o cOS.pp -m cOS.mod
- ---> Using cache
- ---> 1be520969ead
-Step 13/13 : RUN semodule -i cOS.pp
- ---> Running in c5bfa5ae92e2
- libsemanage.semanage_commit_sandbox: Error while renaming /var/lib/selinux/targeted/active to /var/lib/selinux/targeted/previous. (Invalid cross-device link).
-semodule: Failed!
- The command '/bin/sh -c semodule -i cOS.pp' returned a non-zero code: 1
- Error: while resolving join images: failed building join image: Failed compiling system/selinux-policies-0.0.6+3: failed building package image: Could not push image: raccos/sampleos:ffc8618ecbfbffc11cc3bca301cc49867eb7dccb623f951dd92caa10ced29b68 selinux-policies-system-0.0.6+3.dockerfile: Could not build image: raccos/sampleos:ffc8618ecbfbffc11cc3bca301cc49867eb7dccb623f951dd92caa10ced29b68 selinux-policies-system-0.0.6+3.dockerfile: Failed running command: : exit status 1
- Bailing out
-make: *** [Makefile:45: build] Error 1
-```
-
-The issue is possibly caused by https://github.com/docker/for-linux/issues/480. A workaround is to switch the storage driver of Docker: check if your storage driver is overlay2, and switch it to `devicemapper`.
-
-### Multi-stage copy build fails
-
-While processing images with several multi-stage copies, you could face the following:
-
-
-```
- 🐋 Building image raccos/sampleos:cc0aee4ff6c194f920a945c45ebcb487c3e22c5ab40e2634ea70c064dfab206d done
- 📦 8/8 system/cos-0.5.3+1 ⤑ 🔨 build system/selinux-policies-0.0.6+3 ✅ Done
- 🚀 All dependencies are satisfied, building package requested by the user system/cos-0.5.3+1
- 📦 system/cos-0.5.3+1 Using image: raccos/sampleos:cc0aee4ff6c194f920a945c45ebcb487c3e22c5ab40e2634ea70c064dfab206d
- 📦 system/cos-0.5.3+1 🐋 Generating 'builder' image from raccos/sampleos:cc0aee4ff6c194f920a945c45ebcb487c3e22c5ab40e2634ea70c064dfab206d as raccos/sampleos:builder-8533d659df2505a518860bd010b7a8ed with prelude steps
-🚧 warning Failed to download 'raccos/sampleos:builder-8533d659df2505a518860bd010b7a8ed'. Will keep going and build the image unless you use --fatal
-🚧 warning Failed pulling image: Error response from daemon: manifest for raccos/sampleos:builder-8533d659df2505a518860bd010b7a8ed not found: manifest unknown: manifest unknown
-: exit status 1
- 🐋 Building image raccos/sampleos:builder-8533d659df2505a518860bd010b7a8ed
- Sending build context to Docker daemon  9.728kB
- Step 1/10 : FROM raccos/sampleos:cc0aee4ff6c194f920a945c45ebcb487c3e22c5ab40e2634ea70c064dfab206d
- ---> f1122e79b17e
-Step 2/10 : COPY . /luetbuild
- ---> 4ff3e202951b
- Step 3/10 : WORKDIR /luetbuild
- ---> Running in 7ec571b96c6f
- Removing intermediate container 7ec571b96c6f
- ---> 9e05366f830a
-Step 4/10 : ENV PACKAGE_NAME=cos
- ---> Running in 30297dbd21a3
- Removing intermediate container 30297dbd21a3
- ---> 4c4838b629f4
- Step 5/10 : ENV PACKAGE_VERSION=0.5.3+1
- ---> Running in 36361b617252
- Removing intermediate container 36361b617252
- ---> 6ac0d3a2ff9a
-Step 6/10 : ENV PACKAGE_CATEGORY=system
- ---> Running in f20c2cf3cf34
- Removing intermediate container f20c2cf3cf34
- ---> a902ff95d273
- Step 7/10 : COPY --from=quay.io/costoolkit/build-cache:f3a333095d9915dc17d7f0f5629a638a7571a01dcf84886b48c7b2e5289a668a /usr/bin/yip /usr/bin/yip
- ---> 42fa00d9c990
- Step 8/10 : COPY --from=quay.io/costoolkit/build-cache:e3bbe48c6d57b93599e592c5540ee4ca7916158461773916ce71ef72f30abdd1 /usr/bin/luet /usr/bin/luet
- e3bbe48c6d57b93599e592c5540ee4ca7916158461773916ce71ef72f30abdd1: Pulling from costoolkit/build-cache
- 3599716b36e7: Already exists
- 24a39c0e5d06: Already exists
- 4f4fb700ef54: Already exists
- 4f4fb700ef54: Already exists
- 4f4fb700ef54: Already exists
- 378615c429f5: Already exists
- c28da22d3dfd: Already exists
- ddb4dd5c81b0: Already exists
- 92db41c0c9ab: Already exists
- 4f4fb700ef54: Already exists
- 6e0ca71a6514: Already exists
- 47debb886c7d: Already exists
- 4f4fb700ef54: Already exists
- 4f4fb700ef54: Already exists
- 4f4fb700ef54: Already exists
- d0c9d0f8ddb6: Already exists
- e5a48f1f72ad: Pulling fs layer
- 4f4fb700ef54: Pulling fs layer
- 7d603b2e4a37: Pulling fs layer
- 64c4d787e344: Pulling fs layer
- f8835d2e60d1: Pulling fs layer
- 64c4d787e344: Waiting
- f8835d2e60d1: Waiting
- e5a48f1f72ad: Download complete
- e5a48f1f72ad: Pull complete
- 4f4fb700ef54: Verifying Checksum
- 4f4fb700ef54: Download complete
- 4f4fb700ef54: Pull complete
- 7d603b2e4a37: Verifying Checksum
-7d603b2e4a37: Download complete
- 64c4d787e344: Verifying Checksum
-64c4d787e344: Download complete
- 7d603b2e4a37: Pull complete
- 64c4d787e344: Pull complete
- f8835d2e60d1: Verifying Checksum
- f8835d2e60d1: Download complete
- f8835d2e60d1: Pull complete
- Digest: sha256:9b58bed47ff53f2d6cc517a21449cae686db387d171099a4a3145c8a47e6a1e0
- Status: Downloaded newer image for quay.io/costoolkit/build-cache:e3bbe48c6d57b93599e592c5540ee4ca7916158461773916ce71ef72f30abdd1
- failed to export image: failed to create image: failed to get layer sha256:118537d8997a08750ab1ac3d8e8575e40fe60e8337e02633b0d8a1287117fe78: layer does not exist
- Error: while resolving join images: failed building join image: failed building package image: Could not push image: raccos/sampleos:cc0aee4ff6c194f920a945c45ebcb487c3e22c5ab40e2634ea70c064dfab206d cos-system-0.5.3+1-builder.dockerfile: Could not build image: raccos/sampleos:cc0aee4ff6c194f920a945c45ebcb487c3e22c5ab40e2634ea70c064dfab206d cos-system-0.5.3+1-builder.dockerfile: Failed running command: : exit status 1
- Bailing out
-make: *** [Makefile:45: build] Error 1
-```
-
-There is an issue open [upstream](https://github.com/moby/moby/issues/37965) about it. A workaround is to enable Docker BuildKit with `DOCKER_BUILDKIT=1`.
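-
-For instance, assuming the Makefile-driven build used in the examples above, the variable can be set inline for a single run:
-
-```bash
-# Enable BuildKit just for this build invocation
-> DOCKER_BUILDKIT=1 make build
-```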
\ No newline at end of file
diff --git a/docs/dependencies.md b/docs/dependencies.md
deleted file mode 100644
index 909786122b0..00000000000
--- a/docs/dependencies.md
+++ /dev/null
@@ -1,57 +0,0 @@
-### Installing required dependencies for local build
-
-To get the requirements installed locally, run:
-
-```bash
-$> make deps
-```
-
-or you need:
-
-- [`luet`](https://github.com/mudler/luet)
-- [`luet-makeiso`](https://github.com/mudler/luet-makeiso)
-- [`squashfs-tools`](https://github.com/plougher/squashfs-tools)
-  - `zypper in squashfs` on SLES or openSUSE
-- [`xorriso`](https://dev.lovelyhq.com/libburnia/web/wiki/Xorriso)
-  - `zypper in xorriso` on SLES or openSUSE
-- `yq` ([version `3.x`](https://github.com/mikefarah/yq/releases/tag/3.4.1)), installed via [packages/toolchain/yq](https://github.com/rancher-sandbox/cOS-toolkit/tree/master/packages/toolchain/yq) (optional)
-- [`jq`](https://stedolan.github.io/jq), installed via [packages/utils/jq](https://github.com/rancher-sandbox/cOS-toolkit/tree/master/packages/utils/jq) (optional)
-
-_Note_: Running `make deps` will install only `luet`, `luet-makeiso`, `yq` and `jq`. `squashfs-tools` and `xorriso` need to be provided by the OS.
-
-### Manually install dependencies
-
-To install luet locally, you can also run as root:
-```bash
-# curl https://raw.githubusercontent.com/rancher-sandbox/cOS-toolkit/master/scripts/get_luet.sh | sh
-```
-or build [luet from source](https://github.com/mudler/luet).
-
-You can find more luet components in the official [Luet repository](https://github.com/Luet-lab/luet-repo).
-
-
-#### luet-makeiso
-
-`luet-makeiso` comes [with cOS-toolkit](https://github.com/rancher-sandbox/cOS-toolkit/tree/master/packages/toolchain/luet-makeiso)
-and can be installed with `luet` locally:
-
-```bash
-$> luet install -y toolchain/luet-makeiso
-```
-
-You can also grab the binary from the [luet-makeiso](https://github.com/mudler/luet-makeiso) releases.
-
-
-#### yq and jq
-`yq` (version `3.x`) and `jq` are used to retrieve the list of
-packages to build in order to produce the final ISOs. They are not
-strictly required; see the note below.
-
-
-They are installable with:
-
-```bash
-$> luet install -y utils/jq toolchain/yq
-```
-
-_Note_: `yq` and `jq` are just used to generate the list of packages to build, and you don't need to have them installed if you manually specify the packages to be compiled.
diff --git a/docs/derivatives_featureset.md b/docs/derivatives_featureset.md
deleted file mode 100644
index 417747091fc..00000000000
--- a/docs/derivatives_featureset.md
+++ /dev/null
@@ -1,640 +0,0 @@
-# Derivatives featureset
-
-This document describes the shared featureset between derivatives that directly depend on `system/cos`.
-
-Every derivative shares a common configuration layer, along with a few default packages in the stack.
-
-
-- [Derivatives featureset](#derivatives-featureset)
-  - [Package stack](#package-stack)
-  - [Login](#login)
-    - [Examples](#examples)
-  - [Install](#install)
-  - [Upgrades](#upgrades)
-  - [Reset state](#reset-state)
-    - [Recovery partition](#recovery-partition)
-    - [Upgrading the recovery partition](#upgrading-the-recovery-partition)
-    - [From ISO](#from-iso)
-  - [File system layout](#file-system-layout)
-  - [Persistent changes](#persistent-changes)
-    - [Available stages](#available-stages)
-      - [initramfs](#initramfs)
-      - [boot](#boot)
-      - [fs](#fs)
-      - [network](#network)
-      - [reconcile](#reconcile)
-  - [Runtime features](#runtime-features)
-  - [OEM customizations](#oem-customizations)
-    - [Default OEM](#default-oem)
-  - [SELinux policy](#selinux-policy)
-  - [Configuration reference](#configuration-reference)
-    - [Compatibility with Cloud Init format](#compatibility-with-cloud-init-format)
-    - [stages.STAGE_ID.STEP_NAME.name](#stagesstage_idstep_namename)
-    - [stages.STAGE_ID.STEP_NAME.files](#stagesstage_idstep_namefiles)
-    - [stages.STAGE_ID.STEP_NAME.directories](#stagesstage_idstep_namedirectories)
-    - [stages.STAGE_ID.STEP_NAME.dns](#stagesstage_idstep_namedns)
-    - [stages.STAGE_ID.STEP_NAME.hostname](#stagesstage_idstep_namehostname)
-    - [stages.STAGE_ID.STEP_NAME.sysctl](#stagesstage_idstep_namesysctl)
-    - [stages.STAGE_ID.STEP_NAME.authorized_keys](#stagesstage_idstep_nameauthorized_keys)
-    - [stages.STAGE_ID.STEP_NAME.node](#stagesstage_idstep_namenode)
-    - [stages.STAGE_ID.STEP_NAME.users](#stagesstage_idstep_nameusers)
-    - [stages.STAGE_ID.STEP_NAME.ensure_entities](#stagesstage_idstep_nameensure_entities)
-    - [stages.STAGE_ID.STEP_NAME.delete_entities](#stagesstage_idstep_namedelete_entities)
-    - [stages.STAGE_ID.STEP_NAME.modules](#stagesstage_idstep_namemodules)
-    - [stages.STAGE_ID.STEP_NAME.systemctl](#stagesstage_idstep_namesystemctl)
-    - [stages.STAGE_ID.STEP_NAME.environment](#stagesstage_idstep_nameenvironment)
-    - [stages.STAGE_ID.STEP_NAME.environment_file](#stagesstage_idstep_nameenvironment_file)
-    - [stages.STAGE_ID.STEP_NAME.timesyncd](#stagesstage_idstep_nametimesyncd)
-    - [stages.STAGE_ID.STEP_NAME.commands](#stagesstage_idstep_namecommands)
-    - [stages.STAGE_ID.STEP_NAME.datasource](#stagesstage_idstep_namedatasource)
-
-
-
-## Package stack
-
-When building a `cos-toolkit` derivative, a common set of packages is already provided with a shared default configuration. Some of the most notable are:
-
-- systemd as init system
-- grub as boot loader
-- dracut for initramfs
-
-Each `cos-toolkit` flavor (opensuse, ubuntu, fedora) ships its own set of base packages depending on the distribution it is based on. You can find the list of packages in the `packages` keyword in the corresponding [values file for each flavor](https://github.com/rancher-sandbox/cOS-toolkit/tree/master/values)
-
-## Login
-
-By default you can log in as the user `root` with the password `cos`.
-
-You can change this by overriding `/system/oem/04_accounting.yaml` in the derivative spec file.
-
-### Examples
-- [Changing root password](https://github.com/rancher-sandbox/cos-toolkit-sample-repo/blob/00c0b4abf8225224c1c177f5b3bd818c7b091eaf/packages/sampleOS/build.yaml#L13)
-- [Example accounting file](https://github.com/rancher-sandbox/epinio-appliance-demo-sample/blob/master/packages/epinioOS/04_accounting.yaml)
-
-## Install
-
-To install, run `cos-installer <device>` (e.g. `/dev/sda`) to start the installation process. Once done, remove the ISO and reboot.
-
-_Note_: `cos-installer` supports other options as well.
-Run `cos-installer --help` to see the complete help.
-
-## Upgrades
-
-To upgrade an installed system, just run `cos-upgrade` and reboot.
-
-During installation, cOS sets up two `.img` image files in the `COS_STATE` partition:
-- `/cOS/active.img` labeled `COS_ACTIVE`: where `cOS` typically boots from
-- `/cOS/passive.img` labeled `COS_PASSIVE`: where `cOS` boots for fallback
-
-Those are used by the upgrade mechanism to prepare and install a pristine `cOS` each time an upgrade is attempted.
-
-To specify a single docker image to upgrade to instead of the regular upgrade channels, run `cos-upgrade --docker-image <image>`.
-
-_Note_: by default `cos-upgrade --docker-image` checks images against the notary registry server for valid signatures for the image tag. To disable image verification, run `cos-upgrade --no-verify --docker-image`.
-
-See the [sample repository](https://github.com/rancher-sandbox/cos-toolkit-sample-repo#system-upgrades) readme on how to tweak the upgrade channels for the derivative; [a further description is available here](https://github.com/rancher-sandbox/epinio-appliance-demo-sample#images)
-
-## Reset state
-
-cOS derivatives have a built-in recovery mechanism which can be leveraged to restore the system to a known point. At installation time, the recovery partition is created from the installation medium.
-
-### Recovery partition
-
-A derivative can be recovered at any time by booting into the `recovery` partition and running `cos-reset` from it.
-
-This command will regenerate the bootloader and the images in the `COS_STATE` partition by using the recovery image.
-
-### Upgrading the recovery partition
-
-The recovery partition can also be upgraded by running
-
-```bash
-cos-upgrade --recovery
-```
-
-It also supports specifying docker images directly:
-
-```bash
-cos-upgrade --recovery --docker-image <image>
-```
-
-*Note*: the command has to be run from the standard partitions used for boot (Active or Fallback).
-
-### From ISO
-The ISO can also be used as a recovery medium: run `cos-upgrade` from a LiveCD. It will then try to upgrade the image of the active partition installed in the system.
-
-## File system layout
-
-![Partitioning layout](https://docs.google.com/drawings/d/e/2PACX-1vR-I5ZwwB5EjpsymUfcNADRTTKXrNMnlZHgD8RjDpzYhyYiz_JrWJwvpcfMcwfYet1oWCZVWH22aj1k/pub?w=533&h=443)
-
-By default, a `cos` derivative will inherit an immutable setup.
-A running system will look like the following:
-
-```
-/usr/local - persistent (COS_PERSISTENT)
-/oem - persistent (COS_OEM)
-/etc - ephemeral
-/usr - read only
-/ immutable
-```
-
-Any changes that are not specified by cloud-init do not persist across reboots.
-
-## Persistent changes
-
-By default a derivative reads and executes cloud-init files in (lexicographic) sequence from `/system/oem`, `/usr/local/cloud-config` and `/oem` during boot. It is also possible to run a cloud-init file in a different location from the boot cmdline by using the `cos.setup=..` option.
-
-For example, if you want to change `/etc/issue` of the system persistently, you can create `/usr/local/cloud-config/90_after_install.yaml` with the following content:
-
-```yaml
-# The following is executed before the fs stage is set up:
-stages:
-   fs:
-     - name: "After install"
-       files:
-        - path: /etc/issue
-          content: |
-                    Welcome, have fun!
-          permissions: 0644
-          owner: 0
-          group: 0
-       systemctl:
-         disable:
-         - wicked
-     - name: "After install (second step)"
-       files:
-        - path: /etc/motd
-          content: |
-                    Welcome, have more fun!
-          permissions: 0644
-          owner: 0
-          group: 0
-```
-
-For more examples, `/system/oem` contains the files used to configure a pristine `cOS` on boot. Take care not to edit those directly; copy them, or apply local changes to `/usr/local/cloud-config` or `/oem` for system-wide persistent changes. See the OEM section below.
-
-### Available stages
-
-Cloud-init files are applied in 5 different phases: `boot`, `network`, `fs`, `initramfs` and `reconcile`. All the available cloud-init keywords can be used in each stage. Additionally, it's possible to hook before or after a stage has run; each one has a companion hook stage where steps can run: `boot.after`, `network.before`, `fs.after`, etc.
-
-#### initramfs
-
-This is the earliest stage, running before switching root. Here you can apply radical changes to the booting setup of `cOS`.
-
-#### boot
-
-This stage is executed after initramfs has switched root, during the `systemd` bootup process.
-
-#### fs
-
-This stage is executed when the fs is mounted and is guaranteed to have access to `COS_STATE` and `COS_PERSISTENT`.
-
-#### network
-
-This stage is executed when the network is available.
-
-#### reconcile
-
-This stage is executed `5m` after boot and periodically every `60m`.
-
-## Runtime features
-
-Default cloud-init configuration files are available under `/system/features` for example purposes, and to quickly enable testing features.
-
-Features are simply cloud-config yaml files in the above folder and can be enabled/disabled with `cos-feature`. For example, after install, to enable `k3s` it's sufficient to type `cos-feature enable k3s` and reboot. Similarly, adding a yaml file to the above folder will make it available for listing/enable/disable.
-
-See `cos-feature list` for the available features.
-
-
-```
-$> cos-feature list
-
-====================
-cOS features list
-
-To enable, run: cos-feature enable
-To disable, run: cos-feature disable
-====================
-
-- carrier
-- harvester
-- k3s
-- vagrant (enabled)
-...
-```
-
-## OEM customizations
-
-It is possible to install a custom cloud-init file during install by passing `--config` to `cos-installer`, or to add more files manually to the `/oem` folder after installation.
-
-### Default OEM
-
-The default cloud-init configuration files can be found under `/system/oem`. This is where e.g. the default root password and the upgrade channel are set up.
-
-
-```
-/system/oem/00_rootfs.yaml - defines the rootfs mountpoint layout setting
-/system/oem/01_defaults.yaml - systemd defaults (keyboard layout, timezone)
-/system/oem/02_upgrades.yaml - settings for channel upgrades
-/system/oem/03_branding.yaml - branding settings, derivative name, /etc/issue content
-/system/oem/04_accounting.yaml - default user/pass
-/system/oem/05_network.yaml - default network setup
-/system/oem/06_recovery.yaml - executes additional commands when booting in recovery mode
-```
-
-If you are building a cOS derivative and plan to release upgrades, you must override `/system/oem/02_upgrades.yaml` (or create a new file under `/system/oem`) pointing to the docker registry used to deliver upgrades.
-
-[See also the example appliance](https://github.com/rancher-sandbox/epinio-appliance-demo-sample#images)
-
-## SELinux policy
-
-By default, derivatives have `SELinux` enabled in permissive mode. You can use the [cos-toolkit](https://github.com/rancher-sandbox/cOS-toolkit/tree/master/packages/selinux-policies) default policy as a kickstart to customize on top.
-
-Copy the package (create a new folder with `build.yaml`, `definition.yaml` and `cOS.te`) into the derivative tree, customize it to suit your needs, and add it as a build requirement to your OS package.
-
-_Note_: the [cOS.te](https://github.com/rancher-sandbox/cOS-toolkit/blob/master/packages/selinux-policies/cOS.te) sample policy was created using the utility `audit2allow` after running some
-basic operations in permissive mode using system default policies. `audit2allow`
-translates audit messages into allow/dontaudit SELinux policies which can later be
-compiled as a SELinux module. This is the approach used in this illustrative
-example and mostly follows the `audit2allow` [man pages](https://linux.die.net/man/1/audit2allow).
-
-## Configuration reference
-
-Below is a reference of all keys available in the cloud-init style files.
-
-```yaml
-stages:
-   # "network" is the stage where network is expected to be up
-   # It is called internally when network is available from
-   # the cos-setup-network unit.
-   network:
-     # Here there is a list of
-     # steps to be run in the network stage
-     - name: "Some setup happening"
-       files:
-        - path: /tmp/foo
-          content: |
-                    test
-          permissions: 0777
-          owner: 1000
-          group: 100
-       commands:
-        - echo "test"
-       modules:
-       - nvidia
-       environment:
-         FOO: "bar"
-       sysctl:
-         debug.exception-trace: "0"
-       hostname: "foo"
-       systemctl:
-         enable:
-         - foo
-         disable:
-         - bar
-         start:
-         - baz
-         mask:
-         - foobar
-       authorized_keys:
-         user:
-         - "github:mudler"
-         - "ssh-rsa ...."
-       dns:
-         path: /etc/resolv.conf
-         nameservers:
-         - 8.8.8.8
-       ensure_entities:
-       -  path: /etc/passwd
-          entity: |
-                   kind: "user"
-                   username: "foo"
-                   password: "pass"
-                   uid: 0
-                   gid: 0
-                   info: "Foo!"
-                   homedir: "/home/foo"
-                   shell: "/bin/bash"
-       delete_entities:
-       -  path: /etc/passwd
-          entity: |
-                   kind: "user"
-                   username: "foo"
-                   password: "pass"
-                   uid: 0
-                   gid: 0
-                   info: "Foo!"
-                   homedir: "/home/foo"
-                   shell: "/bin/bash"
-       datasource:
-         providers:
-         - "aws"
-         - "digitalocean"
-         path: "/etc/cloud-data"
-```
-
-The default cloud-config format is split into *stages* (*initramfs*, *boot*, *network*, *fs*, *reconcile*, called generically **STAGE_ID** below) that are emitted internally during the various phases by calling `cos-setup STAGE_ID`, and *steps* (**STEP_NAME** below) defined for each stage that are executed in order.
-
-Each cloud-config file is loaded and executed only at the appropriate stage.
-
-This allows further components to emit their own stages at the desired time.
-
-### Compatibility with Cloud Init format
-
-A subset of the official [cloud-config spec](http://cloudinit.readthedocs.org/en/latest/topics/format.html#cloud-config-data) is implemented.
-
-If a yaml file starts with `#cloud-config`, it is parsed as standard cloud-init and automatically associated with the `boot` stage. For example:
-
-```yaml
-#cloud-config
-users:
-- name: "bar"
-  passwd: "foo"
-  groups: "users"
-  ssh_authorized_keys:
-  - faaapploo
-ssh_authorized_keys:
-- asdd
-runcmd:
-- foo
-hostname: "bar"
-write_files:
-- encoding: b64
-  content: CiMgVGhpcyBmaWxlIGNvbnRyb2xzIHRoZSBzdGF0ZSBvZiBTRUxpbnV4
-  path: /foo/bar
-  permissions: "0644"
-  owner: "bar"
-```
-
-is executed at boot, using the standard `cloud-config` format.
-
-### `stages.STAGE_ID.STEP_NAME.name`
-
-A description of the stage step. Used only when printing output to console.
-
-### `stages.STAGE_ID.STEP_NAME.files`
-
-A list of files to write to disk.
-
-```yaml
-stages:
-   default:
-     - files:
-        - path: /tmp/bar
-          content: |
-                    #!/bin/sh
-                    echo "test"
-          permissions: 0777
-          owner: 1000
-          group: 100
-```
-
-### `stages.STAGE_ID.STEP_NAME.directories`
-
-A list of directories to be created on disk. Runs before `files`.
-
-```yaml
-stages:
-   default:
-     - name: "Setup folders"
-       directories:
-       - path: "/etc/foo"
-         permissions: 0600
-         owner: 0
-         group: 0
-```
-
-### `stages.STAGE_ID.STEP_NAME.dns`
-
-A way to configure the `/etc/resolv.conf` file.
-
-```yaml
-stages:
-   default:
-     - name: "Setup dns"
-       dns:
-         nameservers:
-         - 8.8.8.8
-         - 1.1.1.1
-         search:
-         - foo.bar
-         options:
-         - ..
-         path: "/etc/resolv.conf.bak"
-```
-### `stages.STAGE_ID.STEP_NAME.hostname`
-
-A string representing the machine hostname. It sets it in the running system, updates `/etc/hostname` and adds the new hostname to `/etc/hosts`.
-
-```yaml
-stages:
-   default:
-     - name: "Setup hostname"
-       hostname: "foo"
-```
-### `stages.STAGE_ID.STEP_NAME.sysctl`
-
-Kernel configuration. It sets `/proc/sys/` keys accordingly, similarly to `sysctl`.
-
-```yaml
-stages:
-   default:
-     - name: "Setup exception trace"
-       sysctl:
-         debug.exception-trace: "0"
-```
-
-### `stages.STAGE_ID.STEP_NAME.authorized_keys`
-
-A list of SSH authorized keys that should be added for each user.
-SSH keys can be obtained from GitHub user accounts by using the format github:${USERNAME}; similarly for Gitlab with gitlab:${USERNAME}.
-
-```yaml
-stages:
-   default:
-     - name: "Setup authorized keys"
-       authorized_keys:
-         mudler:
-         - github:mudler
-         - ssh-rsa: ...
-```
-
-### `stages.STAGE_ID.STEP_NAME.node`
-
-If defined, the node hostname where this stage has to run; otherwise the execution is skipped. The node can also be a regexp in the Golang format.
-
-```yaml
-stages:
-   default:
-     - name: "Setup logging"
-       node: "bastion"
-```
-
-### `stages.STAGE_ID.STEP_NAME.users`
-
-A map of users and user info to set. Passwords can also be encrypted.
-
-The `users` parameter adds or modifies the specified list of users. Each user is an object which consists of the following fields. Each field is optional and of type string unless otherwise noted.
-If the user already exists, the entry is ignored.
-
-- **name**: Required. Login name of user
-- **gecos**: GECOS comment of user
-- **passwd**: Hash of the password to use for this user. Unencrypted strings are supported too.
-- **homedir**: User's home directory. Defaults to /home/*name*
-- **no-create-home**: Boolean. Skip home directory creation.
-- **primary-group**: Default group for the user. Defaults to a new group created named after the user.
-- **groups**: Add user to these additional groups
-- **no-user-group**: Boolean. Skip default group creation.
-- **ssh-authorized-keys**: List of public SSH keys to authorize for this user
-- **system**: Create the user as a system user. No home directory will be created.
-- **no-log-init**: Boolean. Skip initialization of lastlog and faillog databases.
-- **shell**: User's login shell.
-
-```yaml
-stages:
-   default:
-     - name: "Setup users"
-       users:
-          bastion:
-            passwd: "strongpassword"
-            homedir: "/home/foo"
-```
-
-### `stages.STAGE_ID.STEP_NAME.ensure_entities`
-
-A `user` or a `group` in the [entity](https://github.com/mudler/entities) format to be configured in the system.
-
-```yaml
-stages:
-   default:
-     - name: "Setup users"
-       ensure_entities:
-       -  path: /etc/passwd
-          entity: |
-                   kind: "user"
-                   username: "foo"
-                   password: "x"
-                   uid: 0
-                   gid: 0
-                   info: "Foo!"
- homedir: "/home/foo" - shell: "/bin/bash" -``` -### `stages.STAGE_ID.STEP_NAME.delete_entities` - -A `user` or a `group` in the [entity](https://github.com/mudler/entities) format to be pruned from the system - -```yaml -stages: - default: - - name: "Setup users" - delete_entities: - - path: /etc/passwd - entity: | - kind: "user" - username: "foo" - password: "x" - uid: 0 - gid: 0 - info: "Foo!" - homedir: "/home/foo" - shell: "/bin/bash" -``` -### `stages.STAGE_ID.STEP_NAME.modules` - -A list of kernel modules to load. - -```yaml -stages: - default: - - name: "Setup users" - modules: - - nvidia -``` -### `stages.STAGE_ID.STEP_NAME.systemctl` - -A list of systemd services to `enable`, `disable`, `mask` or `start`. - -```yaml -stages: - default: - - name: "Setup users" - systemctl: - enable: - - systemd-timesyncd - - cronie - mask: - - purge-kernels - disable: - - crond - start: - - cronie -``` -### `stages.STAGE_ID.STEP_NAME.environment` - -A map of variables to write in `/etc/environment`, or otherwise specified in `environment_file` - -```yaml -stages: - default: - - name: "Setup users" - environment: - FOO: "bar" -``` -### `stages.STAGE_ID.STEP_NAME.environment_file` - -A string to specify where to set the environment file - -```yaml -stages: - default: - - name: "Setup users" - environment_file: "/home/user/.envrc" - environment: - FOO: "bar" -``` -### `stages.STAGE_ID.STEP_NAME.timesyncd` - -Sets the `systemd-timesyncd` daemon file (`/etc/system/timesyncd.conf`) file accordingly. The documentation for `timesyncd` and all the options can be found [here](https://www.freedesktop.org/software/systemd/man/timesyncd.conf.html). - -```yaml -stages: - default: - - name: "Setup NTP" - systemctl: - enable: - - systemd-timesyncd - timesyncd: - NTP: "0.pool.org foo.pool.org" - FallbackNTP: "" - ... -``` - -### `stages.STAGE_ID.STEP_NAME.commands` - -A list of arbitrary commands to run after file writes and directory creation. - -```yaml -stages: - default: - - name: "Setup something" - commands: - - echo 1 > /bar -``` - -### `stages.STAGE_ID.STEP_NAME.datasource` - -Sets to fetch user data from the specified cloud providers. It populates -provider specific data into `/run/config` folder and the custom user data -is stored into the provided path. - - -```yaml -stages: - default: - - name: "Fetch cloud provider's user data" - datasource: - providers: - - "aws" - - "digitalocean" - path: "/etc/cloud-data" -``` diff --git a/docs/dev.md b/docs/dev.md deleted file mode 100644 index 9a58acaf9dc..00000000000 --- a/docs/dev.md +++ /dev/null @@ -1,149 +0,0 @@ -# cOS-Toolkit Developer Documentation - -Welcome! - -The cOS (containerized OS) distribution is entirely built over GitHub. You can check the pipelines in the `.github` folder to see how the process looks like. - -## Repository layout - -- `packages`: contain packages definition for luet -- `values`: interpolation files, needed only for multi-arch and flavor-specific build -- `assets`: static files needed by the iso generation process -- `packer`: Packer templates -- `tests`: cOS test suites -- `manifest.yaml`: Is the manifest needed used to generate the ISO and additional packages to build - -## Forking and test on your own - -By forking the `cOS-toolkit` repository, you already have the Github Action workflow configured to start building and pushing your own `cOS` fork. 
-## Building locally
-
-cOS provides a builder container image which can be used to build cOS locally, generating the cOS packages and the cOS ISO from your checkout.
-
-From your git folder:
-
-```bash
-$> docker build -t cos-builder .
-$> docker run --privileged=true --rm -v /var/run/docker.sock:/var/run/docker.sock -v $PWD:/cOS cos-builder
-```
-
-or use the `.envrc` file:
-
-```bash
-$> source .envrc
-$> cos-build
-```
-
-### Build all packages locally
-
-Building locally has a [set of dependencies](dependencies.md) that should be satisfied.
-
-Then you can run, as root:
-
-```
-# make build
-```
-
-To clean up from previous runs, run `make clean`.
-
-_Note_: The makefile uses [`yq` and `jq`](dev.md#yq-and-jq) to retrieve the packages to build from the ISO specfile.
-
-If you don't have `jq` and `yq` installed, you must pass the packages manually with `PACKAGES` (e.g. `PACKAGES="system/cos live/systemd-boot live/boot live/syslinux"`).
-
-You might want to build packages as `root` or with `sudo -E` if you intend to preserve file permissions in the resulting packages (mainly for `xattrs`, and so on).
-
-### Build ISO
-
-If using SLES or openSUSE, first install the required dependencies:
-
-```
-# zypper in -y squashfs xorriso dosfstools
-```
-
-and then simply run
-
-```
-# make local-iso
-```
-
-### Testing ISO changes
-
-To test changes against a specific set of packages, you can for example run:
-
-```
-# make PACKAGES="toolchain/yq" build local-iso
-```
-
-Root is required because we want to keep permissions on the output packages (not really required for experimenting).
-
-### Run with qemu
-
-After you have the ISO locally, run
-
-```
-$> QEMU=qemu-system-x86_64 make run-qemu
-```
-
-This will create a disk image at `.qemu/drive.img` and boot from the ISO.
-
->
-> If the image already exists, it will NOT be overwritten.
->
-> You need to run an explicit `make clean_run` to wipe the image and
-> start over.
->
-
-#### Installing
-
-With a fresh `drive.img`, `make run-qemu` will boot from the ISO. You can then log in as `root` with password `cos` and install cOS on the disk image with:
-
-```
-# cos-installer /dev/sda
-```
-
-#### Running
-
-After a successful installation of cOS on `drive.img`, you can boot the resulting system with
-
-```
-$> QEMU_ARGS="-boot c" make run-qemu
-```
-
-### Run tests
-
-Requires: VirtualBox or libvirt, vagrant, packer
-
-We have a test suite which runs over SSH.
-
-To create the vagrant image:
-
-```
-$> PACKER_ARGS="-var='feature=vagrant' -only virtualbox-iso" make packer
-```
-
-To run the tests:
-
-```
-$> make test
-```
diff --git a/docs/high_level_architecture.md b/docs/high_level_architecture.md
deleted file mode 100644
index 824324e707b..00000000000
--- a/docs/high_level_architecture.md
+++ /dev/null
@@ -1,42 +0,0 @@
-# cOS toolkit High level Architecture
-
-This page describes the structure of [`cos-toolkit`](https://github.com/rancher-sandbox/cOS-toolkit) and its high-level architecture, along with all the involved components.
-
-## Design goals
-
-- Blueprints to build immutable Linux derivatives from container images
-- A workflow to maintain, support and deliver custom OSes and upgrades to end systems
-- Derivatives share the same “foundation” manifest, making them easy to customize on top and to extend with packages: `systemd`, `dracut` and `grub` form the foundation stack
-- Upgrades delivered with container registry images (a workflow with `docker run` && `docker commit` is also supported, as sketched below!). The content of the container image is the system which is booted.
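-
-A minimal sketch of that `docker run` && `docker commit` workflow; the image references and registry below are hypothetical and depend on your setup:
-
-```bash
-# Start a container from the image the system currently boots from
-docker run -it --name custom-os quay.io/yourorg/releases:cos-system-x.y.z /bin/bash
-# ...customize it interactively, exit, then snapshot the result
-docker commit custom-os quay.io/yourorg/releases:cos-system-custom
-# Publish it; installed systems can then pull this image as an upgrade
-docker push quay.io/yourorg/releases:cos-system-custom
-```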
-
-## High level overview
-
-cOS-Toolkit encompasses several components required for building and distributing OS images. [This issue](https://github.com/rancher-sandbox/cOS-toolkit/issues/108) summarizes the current state and how we plan to integrate them into a single CLI to improve the user experience.
-
-cOS-Toolkit is also a manifest, which includes the package definitions describing how the underlying OS is composed. It forms an abstraction layer, which is then translated into Dockerfiles and (optionally) built by our CI for reuse. A derivative can be built from parts of the manifest, or by reusing it entirely, container images included.
-
-![High level overview](https://docs.google.com/drawings/d/e/2PACX-1vQQJOaISPbMxMYU44UT-M3ou9uGYOrzbXCRXMLPU8m7_ie3ke_08xCsyRLkFZJRB4VnzIeobPciEoQv/pub?w=942&h=532)
-
-The fundamental phases can be summarized in the following steps:
-
-- Build packages from container images (and optionally keep build caches)
-- Extract artefacts from the containers
-- Add metadata and create a repository
-- (optionally) Publish the repository and the artefacts
-
-The developer of the derivative applies a customization layer during the build, which is an augmentation layer in the same form as `cos-toolkit` itself. [An example repository is provided](https://github.com/rancher-sandbox/cos-toolkit-sample-repo) that shows how to build a custom OS that can be maintained with a container image registry.
-
-## Distribution
-
-The OS is delivered via container registries. A developer who wants to provide upgrades for the custom OS pushes the resulting container images to a container registry, which the installed system then uses to pull upgrades from.
-
-![](https://docs.google.com/drawings/d/e/2PACX-1vQrTArCYgu-iscf29v1sl1sEn2J81AqBpi9D5xpwGKr9uxR2QywoSqCmsSaJLxRRacoRr0Kq40a7jPF/pub?w=969&h=464)
-
-## Upgrade mechanism
-
-There are two different upgrade mechanisms available from a maintainer's perspective: (a) release channels, or (b) providing a container image reference (e.g. `my.registry.com/image:tag`) [that can be tweaked in the customization phases](https://github.com/rancher-sandbox/cOS-toolkit#default-oem) to achieve the desired effect.
-
\ No newline at end of file
diff --git a/docs/k3s_and_fleet_on_vanilla_image_example.md b/docs/k3s_and_fleet_on_vanilla_image_example.md
deleted file mode 100644
index 540acc8f40c..00000000000
--- a/docs/k3s_and_fleet_on_vanilla_image_example.md
+++ /dev/null
@@ -1,140 +0,0 @@
-# K3s + Fleet on top of cOS Vanilla image
-
-This is a work-in-progress example of how to deploy K3s + Fleet + System Upgrade Controller on top of a cOS vanilla image, using only `yip` YAML configuration files (cloud-init style). The config file reproduced here is meant to be included as user data in a cloud provider (AWS, GCP, Azure, etc.) or as part of a cdrom (cOS-Recovery will try to fetch the `/userdata` file from a cdrom device).
-
-A vanilla image is an image that only provides the cOS-Recovery system on a `COS_RECOVERY` partition. It does not include any other system; it is meant to be dumped to a bigger disk, after which a cOS system (or a derivative system) is deployed over the free space on the disk. cOS vanilla images are built as part of the CI workflow; see the CI artifacts to download one.
-
-The configuration file of this example has two purposes: first, it deploys cOS; second, it reboots into the deployed OS and deploys K3s + Fleet + System Upgrade Controller.
-
-On first boot, the cOS grub menu entry will fail to boot and the system falls back to the cOS-Recovery system. From there it partitions the vanilla image to create the main system partition (`COS_STATE`)
-and adds an extra partition for persistent data (`COS_PERSISTENT`). It will use the full disk; a disk of at least 20GiB
-is recommended. After partitioning, it deploys the main system on `COS_STATE` and reboots into it.
-
-On subsequent boots it will simply boot from `COS_STATE`. There it prepares the persistent areas of the system (arranging a few bind
-mounts inside `COS_PERSISTENT`) and then runs a standard installation of K3s, Fleet and System Upgrade Controller. A few
-minutes after the system is up, the K3s cluster is up and running.
-
-Note that this setup is similar to the [derivative example](https://github.com/rancher-sandbox/cos-fleet-upgrades-sample) using Fleet.
-The main difference is that this example does not require building any image; it is purely based on cloud-init configuration.
-
-### User data configuration file
-```yaml
-name: "Default deployment"
-stages:
-  rootfs.after:
-    - if: '[ -f "/run/cos/recovery_mode" ]'
-      name: "Repart image"
-      layout:
-        # It will partition a device including the given filesystem label or part label (filesystem label matches first)
-        device:
-          label: COS_RECOVERY
-        add_partitions:
-          - fsLabel: COS_STATE
-            # 15GiB for COS_STATE, so the disk should have at least 20GiB
-            size: 15360
-            pLabel: state
-          - fsLabel: COS_PERSISTENT
-            # unset size or 0 size means all available space
-            pLabel: persistent
-  initramfs:
-    - name: "Set /etc/hosts"
-      files:
-        - path: /etc/hosts
-          content: |
-            127.0.0.1       localhost
-    - if: '[ ! -f "/run/cos/recovery_mode" ]'
-      name: "Persist"
-      commands:
-        - |
-          target=/usr/local/.cos-state
-
-          # Always want the latest update of systemd conf from the image
-          # TODO: This might break the fallback system
-          mkdir -p "${target}/etc/systemd/"
-          rsync -av /etc/systemd/ "${target}/etc/systemd/"
-
-          # Only populate ssh conf once
-          if [ ! -e "${target}/etc/ssh" ]; then
-            mkdir -p "${target}/etc/ssh/"
-            rsync -av /etc/ssh/ "${target}/etc/ssh/"
-          fi
-
-          # undo /home /opt /root mount from cos immutable-rootfs module
-          # TODO: we could think of configuring custom overlay paths in
-          # immutable rootfs package. So this part could be omitted
-          for i in home opt root; do
-            sed -i "/overlay \/${i} /d" /etc/fstab
-            nsenter -m -t 1 -- umount "/sysroot/${i}"
-          done
-
-          # setup directories as persistent
-          # TODO: would it make sense defining persistent state overlayfs mounts
-          # as part of the immutable rootfs config?
-          for i in root opt home var/lib/rancher var/lib/kubelet etc/systemd etc/rancher etc/ssh; do
-            mkdir -p "${target}/${i}" "/${i}"
-            echo "${target}/${i} /${i} none defaults,bind 0 0" >> /etc/fstab
-            nsenter -m -t 1 -- mount -o defaults,bind "/sysroot${target}/${i}" "/sysroot/${i}"
-          done
-
-          # ensure /var/log/journal exists so it's labeled correctly
-          mkdir -p /var/log/journal
-  network.before:
-    - name: "Setup SSH keys"
-      authorized_keys:
-        root:
-          # It can download SSH keys from remote places, such as GitHub user keys (e.g. `github:my_user`)
-          - my_custom_ssh_key
-    - if: '[ ! -f "/run/cos/recovery_mode" ]'
-      name: "Fleet deployment"
-      files:
-        - path: /etc/k3s/manifests/fleet-config.yaml
-          content: |
-            apiVersion: helm.cattle.io/v1
-            kind: HelmChart
-            metadata:
-              name: fleet-crd
-              namespace: kube-system
-            spec:
-              chart: https://github.com/rancher/fleet/releases/download/v0.3.3/fleet-crd-0.3.3.tgz
-            ---
-            apiVersion: helm.cattle.io/v1
-            kind: HelmChart
-            metadata:
-              name: fleet
-              namespace: kube-system
-            spec:
-              chart: https://github.com/rancher/fleet/releases/download/v0.3.3/fleet-0.3.3.tgz
-  network:
-    - if: '[ -f "/run/cos/recovery_mode" ]'
-      name: "Deploy cos-system"
-      commands:
-        # Deploys the latest image available in the default channel (quay.io/costoolkit/releases-opensuse)
-        # use --docker-image to deploy a custom image
-        # e.g. `cos-deploy --docker-image quay.io/my_custom_repo:my_image`
-        - cos-deploy && shutdown -r now
-    - if: '[ ! -f "/run/cos/recovery_mode" ]'
-      name: "Setup k3s"
-      directories:
-        - path: "/usr/local/bin"
-          permissions: 0755
-          owner: 0
-          group: 0
-      commands:
-        - |
-          curl -sfL https://get.k3s.io | \
-          INSTALL_K3S_VERSION="v1.20.4+k3s1" \
-          INSTALL_K3S_EXEC="--tls-san {{.Values.node.hostname}}" \
-          INSTALL_K3S_SELINUX_WARN="true" \
-          sh -
-
-          # Install fleet
-          kubectl apply -f /etc/k3s/manifests/fleet-config.yaml
-          # Install system-upgrade-controller
-          kubectl apply -f https://raw.githubusercontent.com/rancher/system-upgrade-controller/v0.6.2/manifests/system-upgrade-controller.yaml
-```
diff --git a/docs/raw_image.md b/docs/raw_image.md
deleted file mode 100644
index b958daf274c..00000000000
--- a/docs/raw_image.md
+++ /dev/null
@@ -1,67 +0,0 @@
-
-# Instructions to boot the generated raw image on AWS
-
-Once you generate the raw image with `make raw_disk`, the resulting file needs to be imported into AWS. This file describes the steps to consume the image on AWS.
-
-1. Upload the image to an S3 bucket:
-```
-aws s3 cp <raw_image_file> s3://cos-images
-```
-
-2. Create the disk container JSON (`container.json`) as:
-
-```
-{
-  "Description": "cOS Testing image in RAW format",
-  "Format": "raw",
-  "UserBucket": {
-    "S3Bucket": "cos-images",
-    "S3Key": "<raw_image_file>"
-  }
-}
-```
-
-3. Import the disk as a snapshot:
-
-```
-aws ec2 import-snapshot --description "cOS PoC" --disk-container file://container.json
-```
-
-4. Follow the procedure described in the [AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html#creating-launching-ami-from-snapshot) to register an AMI from the snapshot. Use all default settings except for the firmware, which must be forced to UEFI boot.
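-
-For reference, a minimal sketch of the equivalent CLI call; the AMI name and snapshot ID below are hypothetical, and the console flow described in the linked docs works just as well:
-
-```
-aws ec2 register-image \
-    --name "cOS-raw-test" \
-    --architecture x86_64 \
-    --virtualization-type hvm \
-    --boot-mode uefi \
-    --root-device-name /dev/sda1 \
-    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-0123456789abcdef0}"
-```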
-
-5. Launch an instance with this simple user data:
-```
-name: "Default deployment"
-stages:
-  rootfs.after:
-    - name: "Repart image"
-      layout:
-        # It will partition a device including the given filesystem label or part label (filesystem label matches first)
-        device:
-          label: COS_RECOVERY
-        # Only the last partition can be expanded
-        # expand_partition:
-        #   size: 4096
-        add_partitions:
-          - fsLabel: COS_STATE
-            size: 8192
-            pLabel: state
-          - fsLabel: COS_PERSISTENT
-            # unset size or 0 size means all available space
-            # size: 0
-            # default filesystem is ext2 when omitted
-            # filesystem: ext4
-            pLabel: persistent
-  network:
-    - if: '[ -z "$(blkid -L COS_SYSTEM || true)" ]'
-      name: "Deploy cos-system"
-      commands:
-        - |
-          cos-deploy --docker-image quay.io/costoolkit/releases-opensuse:cos-system-0.5.3-3 && \
-          shutdown -r +1
-
-```
-
-You can log in with the default username/password: `root/cos`.
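-
-Once the instance is up, a quick sanity check could look like the following, assuming password logins are permitted and using your instance's address:
-
-```
-ssh root@<instance-address>   # password: cos
-cat /etc/os-release           # verify the deployed cOS system
-```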
-
-See also https://github.com/rancher-sandbox/cOS-toolkit/pull/235#issuecomment-853762476
\ No newline at end of file