diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 13d873224aa..c271f1eec45 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -5,12 +5,6 @@ GitHub pull requests. This document outlines some of the conventions on development workflow, commit message formatting, contact points and other resources to make it easier to get your contribution accepted. -# Tectonic Installer contributions - -Tectonic Installer provides specific guidelines for the modification of included Terraform modules. For more information, please see [Modifying Tectonic Installer][modify-installer]. - -For more information on Terraform, please see the [Terraform Documentation][tf-doc]. - ## Certificate of Origin By contributing to this project you agree to the Developer Certificate of @@ -91,9 +85,6 @@ second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools. - -[modify-installer]: Documentation/contrib/modify-installer.md -[tf-doc]: https://www.terraform.io/docs/index.html [golang-style]: https://github.com/golang/go/wiki/CodeReviewComments [disclosure]: https://coreos.com/security/disclosure/ [new-issue]: https://github.com/openshift/installer/issues/new diff --git a/Documentation/contrib/modify-installer.md b/Documentation/contrib/modify-installer.md deleted file mode 100644 index ece7eeb049f..00000000000 --- a/Documentation/contrib/modify-installer.md +++ /dev/null @@ -1,41 +0,0 @@ -# Modifying Tectonic Installer - -Modifications are great, but some might result in failed cluster creation, or be incompatible with the Tectonic Installer development process. This document provides an outline of changes to the Terraform modules, configs, and manifests included in Tectonic Installer that can and can not be modified successfully. - -Please also note that using "alpha features" through existing Beta or Stable APIs (even on your local resources) is discouraged in production. There is no guarantee that these features will survive a cluster upgrade. - -## Machine level modifications - -Always safe to modify: - -Never safe to modify: -* Kubelet configuration, including CNI. Modification of the kubelet configuration may result in an inability to start pods, or a failure in communication between cluster components. - -May be safe to modify, but must be managed individually: -* Changes to Ignition profiles, such as networking and mounting storage. The process by which these changes within your local fork will be merged back into a new release of Tectonic Installer has not yet been defined. - -## Kubernetes level modifications - -Always safe to modify: -* Pods, deployments, and any other local component of the cluster. It is always safe to modify your local instance of the Kubernetes cluster. -* Namespaces. It is always safe to add or modify local namespaces. -* Custom RBAC roles. It is always safe to add or modify local RBAC roles. - -Never safe to modify: -* Any manifests in `kube-system` or `tectonic-system`. Modifications to these manifests may result in an inability to perform a cluster upgrade. -* Default RBAC roles. Modifications to the default RBAC roles may prevent cluster control plane components from functioning. - -## Infrastructure level modifications - -Always safe to modify: - -Never safe to modify: -* Security group settings. -* Role permissions. Cloud Provider role permissions must meet or exceed documented requirements. (For example: [AWS IAM][iam].) -* EC2 block device mapping. -* EC2 AMIs. 
- -Modifying any of these settings might lead to invalid clusters. - - -[iam]: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html diff --git a/Documentation/design/dependency.md b/Documentation/design/dependency.md deleted file mode 100644 index dd5285c2ef0..00000000000 --- a/Documentation/design/dependency.md +++ /dev/null @@ -1,7 +0,0 @@ -# Dependency graph - -During the installation, the installer will generate a bunch of files. The dependency graph of those files is shown below. It is generated from [resource_dep.dot](./resource_dep.dot) by running: - - dot -Tsvg resource_dep.dot -o ./resource_dep.svg - -![Image depicting the resource dependency graph](./resource_dep.svg). diff --git a/Documentation/dev/audiences.md b/Documentation/dev/audiences.md deleted file mode 100644 index 21f24ad8750..00000000000 --- a/Documentation/dev/audiences.md +++ /dev/null @@ -1,17 +0,0 @@ -# Audiences and tools - -The Tectonic installer has two main use-cases and audiences – end-users that want to deploy clusters, and developers that want to extend, improve or modify the codebase. The user experience is important but different for these audiences, and is explained in detail below. - -## End-user experience - -The primary audience of this installer is end-users that want to deploy one or more clusters on supported platforms. The ideal UX is to download a release of the installer that requires minimal configuration of the user's machine, including dependencies. - -Freeing these users from installing many dependencies can isolate them from differences between platforms (macOS or Linux). This also reduces the documentation burden. - -We should strive to _never require_ end-users to use or install `make`, `npm`, etc to install a cluster. - -## Developer experience - -The developer workflow is reflective of how often clusters will be created and destroyed. This project makes heavy use of `make` to make these repetitive actions easier. - -It is expected that developers have a working knowledge of Terraform, including how to configure/use a `.terraformrc` and things of that nature. diff --git a/Documentation/dev/bazel-in-depth.md b/Documentation/dev/bazel-in-depth.md deleted file mode 100644 index 379613de5ea..00000000000 --- a/Documentation/dev/bazel-in-depth.md +++ /dev/null @@ -1,51 +0,0 @@ -# Bazel Under The Hood - -The goal of this document is to detail the steps taken by Bazel when building the Tectonic Installer project so that users better understand the process. Ultimately, a user building the project could elect to build the project without Bazel, either by hand or otherwise. *Note*: building without Bazel is not recommended because it will lead to non-hermetic builds, which could lead to unpredictable results. We strongly recommend using the build instructions outlined in the README for consistent, reproducible builds. - -This document covers the process of building the `tarball` Bazel target, which is the main target of the project. 
- -## Build Layout -As noted in [build.md](build.md), the goal of the build process is to produce an archive with the following file structure: - -``` -tectonic -├── config.tf -├── examples -├── modules -├── steps -└── tectonic-installer - ├── darwin - │ ├── tectonic - │ └── terraform - └── linux - ├── tectonic - └── terraform -``` - -## Steps -### Directories -Prepare the necessary output directories: - -* `tectonic` -* `tectonic/examples` -* `tectonic/modules` -* `tectonic/steps` -* `tectonic/tectonic-installer` -* `tectonic/tectonic-installer/darwin` -* `tectonic/tectonic-installer/linux` - -### Go Build -Build the Tectonic CLI Golang binary located in `tectonic/installer/cmd/tectonic` using `go build …` -The binary should be built for both Darwin and Linux and placed in the corresponding output directory, i.e. `tectonic/tectonic-installer/darwin`, or `tectonic/tectonic-installer/linux`. - -### Terraform Binaries -Download binaries for Terraform for both Darwin and Linux and place them in the corresponding output directories. - -### Terraform Configuration -Copy all required Terraform configuration files from their source directories and place them in the correct output directory. Specifically, `config.tf`, `modules` and `steps` should be copied to the output directory at `tectonic/config.tf`, `tectonic/modules`, and `tectonic/steps`, respectively. - -### Configuration Examples -Copy the Tectonic Installer configuration examples from `examples` to the output directory at `tectonic/examples`. - -### Archive -Lastly, archive and gzip the output directory using the `tar` utility to produce the final asset. diff --git a/Documentation/dev/build.md b/Documentation/dev/build.md deleted file mode 100644 index ba5e628a0dc..00000000000 --- a/Documentation/dev/build.md +++ /dev/null @@ -1,100 +0,0 @@ -# Building Tectonic Installer - -The Tectonic Installer leverages the [Bazel build system](https://bazel.build/) to build all artifacts, binaries, and documentation. - -## Getting Started - -Install Bazel > 0.11.x using the instructions [provided online](https://docs.bazel.build/versions/master/install.html). -*Note*: compiling Bazel from source requires Bazel. -*Note*: some Linux platforms may require installing _Static libraries for the GNU standard C++ library_ (on Fedora `dnf install libstdc++-static`) - -Clone the Tectonic Installer git repository locally: - -```sh -git clone git@github.com:coreos/tectonic-installer.git && cd tectonic-installer -``` - -## Quickstart - -To build Tectonic for development or testing, build the `tarball` target with Bazel: - -```sh -bazel build tarball -``` - -This will produce an archive named `tectonic.tar.gz` in the `bazel-bin` directory, containing all the assets necessary to bring up a Tectonic cluster, namely the: - -* Tectonic Installer binary; -* Terraform modules; -* Terraform binary; -* Terraform provider binaries; and -* examples - -To use the installer you can now do the following: - -```sh -cd bazel-bin -tar -xvzf tectonic.tar.gz -cd tectonic -``` - -Then proceed using the installer as documented on [coreos.com](https://coreos.com/tectonic/docs/). - -For more details on building a Tectonic release or other Tectonic assets as well as workarounds to some known issues, read on. 
- -## Building A Release Tarball - -To build a release tarball for the Tectonic Installer, issue the following command from the `tectonic-installer` root directory: - -```sh -bazel build tarball -``` - -*Note*: Bazel < 0.9.0 is known to [fail to build tarballs when using Python 3](https://github.com/bazelbuild/bazel/issues/3816); to avoid this issue, force Python 2 by using: - -```sh -bazel build --force_python=py2 --python_path=/usr/bin/python2 tarball -``` - -This will create a tarball named `tectonic.tar.gz` in the `bazel-bin` directory with the following directory structure: - -``` -tectonic -├── config.tf -├── examples -├── installer -├── modules -└── steps -``` - -In order to build a release tarball with the version string in the directory name within the tarball, export a `TECTONIC_VERSION` environment variable and then build the tarball while passing the variable to the build: - -```sh -export TECTONIC_VERSION=1.2.3-beta -bazel build tarball --action_env=TECTONIC_VERSION -``` - -This will create a tarball named `tectonic.tar.gz` in the `bazel-bin` directory with the following directory structure: - -``` -tectonic_1.2.3-beta -├── config.tf -├── examples -├── installer -├── modules -└── steps -``` - -*Note*: the generated tarball will not include the version string in its own name since output names must be known ahead of time in Bazel. To include the version in the tarball name, copy or move the archive with the desired name in the destination. - -## Cleaning - -You can cleanup all generated files by running: -```sh -bazel clean -``` - -Additionally you can remove all toolchains (in addition to the generated files) with: -```sh -bazel clean --expunge -``` diff --git a/Documentation/dev/node-bootstrap-flow.md b/Documentation/dev/node-bootstrap-flow.md deleted file mode 100644 index 31edb5a315a..00000000000 --- a/Documentation/dev/node-bootstrap-flow.md +++ /dev/null @@ -1,86 +0,0 @@ -# Node bootstrapping flow - -This is a development document which describes the bootstrapping flow for ContainerLinux nodes provisioned by the tectonic-installer as part of a Tectonic cluster. - -## Overview - -When a cluster node is being bootstrapped from scratch, it goes through several phases in the following order: - -1. first-boot OS configuration, via ignition (systemd units, node configuration, etc) -2. provisioning of additional assets (k8s manifests, TLS material), via either of: - * pushing from terraform file/remote-exec (SSH) - * pulling from private cloud stores (S3 buckets) -3. if needed, a node reboot is triggered to apply systemd-wide changes and to clean container runtime datadir - -Additionally, only on one of the master nodes the following kubernetes bootstrapping happens: - -1. `bootkube.service` is started after `kubelet.service` start -2. a static bootstrapping control-plane is deployed -3. a fully self-hosted control-plane starts and takes over the previous one -4. `bootkube.service` is completed with success -5. `tectonic.service` is started -6. a self-hosted tectonic control-plane is deployed -7. `tectonic.service` is completed with success - -## Systemd units - -The following systemd unit is deployed to a node by tectonic-installer and take part in the bootstrapping process: - -* `kubelet.service` is the main kubelet daemon. It is automatically started on boot. - -Additionally, only on one of the master nodes the following kubernetes bootstrapping happens: - -* `bootkube.service` deploys the initial bootstrapping control-plane. 
It is started only after `kubelet.service` _is started_. It is a oneshot unit and cannot crash, and it runs only during bootstrap -* `tectonic.service` deploys tectonic control-plane. It is started only after `bootkube.service` _has completed_. It is a oneshot unit and cannot crash, and it runs only during bootstrap - -## Service ordering - -Service ordering is enforced via systemd dependencies. This is the rationale for the settings, with relevant snippets: - -### `kubelet.service` - -``` -Restart=always -WantedBy=multi-user.target -``` - -This service is enabled by default and can crash-loop until success. -It is started on every boot. - -## Diagram - -This is a visual simplified representation of the overall bootstrapping flow. - -```bob -Legend: - * TF -> terraform provisioner - * IGN -> ignition - * k.s -> kubelet.service - * b.s -> bootkube.service - * t.s -> tectonic.service - -.-----------------------------------------------------------------------------------------------------------+ -| | -| Provision cloud/userdata +----------+ | -| ,---------------------------------------o| TF | | -| | +----------+ | -| | | -| | | -| | | -| | | -| V | -| +-------+ Before +------------+ Before | -| | IGN | .--------------->| k.s |o--------. | -| +-------+ | +------------+ | | -| | | ^ | | +-----+ Before +-------+ | -| '----------------------' | v '--->| b.s |o--------------->| t.s | | -| Enable '------' +-----+ +-------+ | -| | -| | -| o o | -| | | | -| | * Each boot | * First boot | -| | * All nodes | * Bootkube master | -| | | | -'---------------------------------------o----------------------------o--------------------------------------+ -``` diff --git a/README.md b/README.md index 573af119998..ce9b22059c3 100644 --- a/README.md +++ b/README.md @@ -1,96 +1,17 @@ -[![Build Status](https://travis-ci.org/openshift/installer.svg?branch=master)](https://travis-ci.org/openshift/installer) - # Openshift Installer -The CoreOS and OpenShift teams are now working together to integrate Tectonic and OpenShift into a converged platform. -See the CoreOS blog for any additional details: -https://coreos.com/blog/coreos-tech-to-combine-with-red-hat-openshift - -## Hacking - -These instructions can be used for AWS: - -1. Set you access-key and secret in `~/.aws/credentials`. - You can create credentials in [the IAM console][aws-iam-console], as documented [here][aws-cli-config] and [here][aws-cli-config-files]. - -2. Build the project - ```sh - bazel build tarball - ``` - - *Note*: the project can optionally be built without installing Bazel, provided Podman is installed: - ```sh - podman run --rm -v $PWD:$PWD:Z -w $PWD quay.io/coreos/tectonic-builder:bazel-v0.3 bazel --output_base=.cache build tarball - ``` - -3. Extract the tarball - ```sh - tar -zxf bazel-bin/tectonic-dev.tar.gz - ``` - -4. Create an alias for tectonic - ```sh - alias tectonic="${PWD}/tectonic-dev/installer/tectonic" - ``` - -5. Edit Tectonic configuration file including the $CLUSTER_NAME - ```sh - $EDITOR examples/aws.yaml - ``` - -6. Prepare a local configuration. - The structure behind the YAML input is described [here][godoc-InstallConfig]. - ```sh - tectonic init --config=examples/aws.yaml - ``` - -7. Install Tectonic cluster - ```sh - tectonic install --dir=$CLUSTER_NAME - ``` +## Quick Start -8. Visit `https://{$CLUSTER_NAME}-api.${BASE_DOMAIN}:6443/console/`. - You may need to ignore a certificate warning if you did not configure a CA known to your browser. - Log in with the admin credentials you configured in `aws.yaml`. - -9. 
Teardown Tectonic cluster - ```sh - tectonic destroy --dir=$CLUSTER_NAME - ``` - -## Managing Dependencies -### Go - -We follow a hard flattening approach; i.e. direct and inherited dependencies are installed in the base `vendor/`. - -Dependencies are managed with [glide](https://glide.sh/) but committed directly to the repository. If you don't have glide, install the latest release from [https://glide.sh/](https://glide.sh/). We require version 0.12 at a minimum. - -The vendor directory is pruned using [glide-vc](https://github.com/sgotti/glide-vc). Follow the [installation instructions](https://github.com/sgotti/glide-vc#install) in the project's README. - -To add a new dependency: -- Edit the `glide.yaml` file to add your dependency. -- Ensure you add a `version` field for the sha or tag you want to pin to. -- Revendor the dependencies: +After cloning this repository, the installer binary will need to be built by running the following: ```sh -rm glide.lock -glide install --strip-vendor -glide-vc --use-lock-file --no-tests --only-code -bazel run //:gazelle +./build.sh ``` -If it worked correctly it should: -- Clone your new dep to the `/vendor` dir and check out the ref you specified. -- Update `glide.lock` to include your new package, add any transitive dependencies and update its hash. -- Regenerate BUILD.bazel files. - -For the sake of your fellow reviewers, commit vendored code separately from any other changes. - -## Tests +This will create `bin/openshift-install`. This binary can then be invoked to create an OpenShift cluster, like so: -See [tests/README.md](tests/README.md). +```sh +bin/openshift-install cluster +``` -[aws-cli-config]: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-quick-configuration -[aws-cli-config-files]: https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html -[aws-iam-console]: https://console.aws.amazon.com/iam/home#/users -[godoc-InstallConfig]: https://godoc.org/github.com/openshift/installer/pkg/types#InstallConfig +The installer will show a series of prompts for user-specific information (e.g. admin password) and use reasonable defaults for everything else. In non-interactive contexts, prompts can be bypassed by providing appropriately-named environment variables. Refer to the [user documentation](docs/user) for more information. diff --git a/Documentation/design/assetgeneration.md b/docs/design/assetgeneration.md similarity index 67% rename from Documentation/design/assetgeneration.md rename to docs/design/assetgeneration.md index 4eb923b5f52..cf4b8a7dcb1 100644 --- a/Documentation/design/assetgeneration.md +++ b/docs/design/assetgeneration.md @@ -1,12 +1,12 @@ -# Installer asset generation based on dependency resolution +# Asset generation -## Goal - -Define generation of assets of varied types in installer based on a dependency graph. +The installer internally uses a directed acyclic graph to represent all of the assets it creates as well as their dependencies. This process looks very similar to many build systems (e.g. Bazel, Make). ## Overview -The installer generates assets based on the dependency [graph](./dependency.md). To generate an asset, all its dependencies have to be generated. The installer needs to define how assets declare this dependency and how it resolves the graph when asked to generated specific targets. 
Targets are chosen assets that need to be generated, the installer generates all assets that the target depends on and writes to disk only the targets, consuming any inputs from disk or state from a previous run.
+The installer generates assets based on the [dependency graph](#dependency-graph). Each asset separately defines how it can be generated as well as its dependencies. Targets represent a set of assets that should be generated and written to disk for the user's consumption. When a user invokes the installer for a particular target, each of the assets in the set is generated as well as any dependencies. This eventually results in the user being prompted for any missing information (e.g. administrator password, target platform).
+
+The installer is also able to read assets from disk if they have been provided by the user. In the event that an asset exists on disk, the installer won't generate the asset, but will instead consume the asset from disk (removing the file). This allows the installer to be run multiple times, using the assets generated by the previous invocation. It also allows a user to make modifications to the generated assets before continuing to the next target.
 
 Each asset is individually responsible for declaring its dependencies. Each asset is also responsible for resolving conflicts when combining its input from disk and its state from a previous run. The installer ensures all the dependencies for an asset are generated and provides the asset with the latest state to generate its own output.
 
@@ -122,3 +122,15 @@ A6: (A1, A2) update state
 
 Flush A5 and A6 to disk
 ```
+
+## Dependency graph
+
+The following graph shows the relationship between the various assets that the installer generates:
+
+![Image depicting the resource dependency graph](resource_dep.svg)
+
+This graph is generated from the [source](resource_dep.dot) using the following command:
+
+```sh
+dot -Tsvg resource_dep.dot -o ./resource_dep.svg
+```
diff --git a/docs/design/cluster-bootstrap-flow.md b/docs/design/cluster-bootstrap-flow.md
new file mode 100644
index 00000000000..95b56e467ac
--- /dev/null
+++ b/docs/design/cluster-bootstrap-flow.md
@@ -0,0 +1,19 @@
+# Cluster Bootstrapping Flow
+
+This is a development document which describes the bootstrapping flow for an OpenShift cluster.
+
+## Overview
+
+All nodes in an OpenShift cluster are booted with a small Ignition config which references a larger dynamically-generated Ignition config, served within the cluster itself. This allows new nodes to always boot with the latest configuration. This does make bootstrapping slightly more complex, however, because the master nodes require a cluster which doesn't yet exist in order to boot. This dependency loop is broken by a bootstrap node which temporarily hosts the control plane for the cluster.
+
+On this bootstrap node, the following steps happen on boot:
+
+1. `bootkube.service` is started after `kubelet.service` starts
+2. a static bootstrapping control-plane is deployed
+3. a fully self-hosted control-plane starts (scheduled to the master nodes instead of the bootstrap node) and takes over from the previous one
+4. `bootkube.service` completes successfully
+5. `tectonic.service` is started
+6. a self-hosted tectonic control-plane is deployed
+7. `tectonic.service` completes successfully
+
+The result of this process is a fully running cluster. At this point, it is safe to remove the bootstrap node.
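+
+As an illustration only (shell access to the bootstrap node and the exact timing are assumptions, not requirements), the two oneshot units can be observed directly on the bootstrap node while this flow runs:
+
+```sh
+# Run on the bootstrap node. Both units are oneshot, so each runs once during
+# bootstrap and then exits; this shows their current state and result.
+systemctl status bootkube.service tectonic.service
+
+# Follow the bootstrap control-plane logs while bootkube.service is running.
+journalctl -u bootkube.service -f
+```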
diff --git a/Documentation/design/resource_dep.dot b/docs/design/resource_dep.dot
similarity index 100%
rename from Documentation/design/resource_dep.dot
rename to docs/design/resource_dep.dot
diff --git a/Documentation/design/resource_dep.svg b/docs/design/resource_dep.svg
similarity index 100%
rename from Documentation/design/resource_dep.svg
rename to docs/design/resource_dep.svg
diff --git a/docs/dev/dependencies.md b/docs/dev/dependencies.md
new file mode 100644
index 00000000000..05be7a3f593
--- /dev/null
+++ b/docs/dev/dependencies.md
@@ -0,0 +1,30 @@
+# Managing Dependencies
+
+## Go
+
+We follow a hard flattening approach; i.e. direct and inherited dependencies are installed in the base `vendor/`.
+
+Dependencies are managed with [glide](https://glide.sh/) but committed directly to the repository. If you don't have glide, install the latest release from [https://glide.sh/](https://glide.sh/). We require version 0.12 at a minimum.
+
+The vendor directory is pruned using [glide-vc](https://github.com/sgotti/glide-vc). Follow the [installation instructions](https://github.com/sgotti/glide-vc#install) in the project's README.
+
+To add a new dependency:
+- Edit the `glide.yaml` file to add your dependency.
+- Ensure you add a `version` field for the sha or tag you want to pin to.
+- Revendor the dependencies:
+
+```sh
+rm glide.lock
+glide install --strip-vendor
+glide-vc --use-lock-file --no-tests --only-code
+```
+
+If it worked correctly, it should:
+- Clone your new dep to the `/vendor` dir and check out the ref you specified.
+- Update `glide.lock` to include your new package, add any transitive dependencies and update its hash.
+
+For the sake of your fellow reviewers, commit vendored code separately from any other changes.
+
+## Tests
+
+See [tests/README.md](../../tests/README.md).
diff --git a/Documentation/dev/libvirt-howto.md b/docs/dev/libvirt-howto.md
similarity index 98%
rename from Documentation/dev/libvirt-howto.md
rename to docs/dev/libvirt-howto.md
index 82539eb80bf..bcdb6e712af 100644
--- a/Documentation/dev/libvirt-howto.md
+++ b/docs/dev/libvirt-howto.md
@@ -177,7 +177,7 @@ When you're done, destroy:
 ```sh
 tectonic destroy --dir=$CLUSTER_NAME
 ```
-Be sure to destroy, or else you will need to manually use virsh to clean up the leaked resources. The [`virsh-cleanup`](../../scripts/maintenance/virsh-cleanup) script may help with this, but note it will currently destroy *all* libvirt resources.
+Be sure to destroy, or else you will need to manually use virsh to clean up the leaked resources. The [`virsh-cleanup`](../../scripts/maintenance/virsh-cleanup.sh) script may help with this, but note it will currently destroy *all* libvirt resources.
 
 With the cluster removed, you no longer need to allow libvirt nodes to reach your `libvirtd`. Restart `firewalld` to remove your temporary changes as follows:
 
diff --git a/docs/user/environment-variables.md b/docs/user/environment-variables.md
new file mode 100644
index 00000000000..0786a09be17
--- /dev/null
+++ b/docs/user/environment-variables.md
@@ -0,0 +1,22 @@
+# Environment Variables
+
+The installer accepts a number of environment variables that allow the interactive prompts to be bypassed. Setting any of the following environment variables to the desired value will cause the installer to use that value instead of prompting.
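+
+For example, the following sketch (all values are illustrative, not defaults) bypasses the general prompts for an AWS installation; see the tables below for the full list of variables:
+
+```sh
+export OPENSHIFT_INSTALL_PLATFORM=aws
+export OPENSHIFT_INSTALL_BASE_DOMAIN=devcluster.example.com   # illustrative value
+export OPENSHIFT_INSTALL_CLUSTER_NAME=mycluster               # illustrative value
+export OPENSHIFT_INSTALL_EMAIL_ADDRESS=admin@example.com      # illustrative value
+export OPENSHIFT_INSTALL_PASSWORD=verysecure                  # illustrative value
+export OPENSHIFT_INSTALL_AWS_REGION=us-east-1                 # illustrative value
+bin/openshift-install cluster
+```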
+ +## General + +| Environment Variable | Description | +|:----------------------------------|:--------------------------------------------------------------------------------------------| +| `OPENSHIFT_INSTALL_BASE_DOMAIN` | The base domain of the cluster. All DNS records will be sub-domains of this base. | +| `OPENSHIFT_INSTALL_CLUSTER_NAME` | The name of the cluster. This will be used when generating sub-domains. | +| `OPENSHIFT_INSTALL_EMAIL_ADDRESS` | The email address of the cluster administrator. This will be used to log in to the console. | +| `OPENSHIFT_INSTALL_PASSWORD` | The password of the cluster administrator. This will be used to log in to the console. | +| `OPENSHIFT_INSTALL_PLATFORM` | The platform onto which the cluster will be installed. | +| `OPENSHIFT_INSTALL_PULL_SECRET` | The container registry pull secret for this cluster. | +| `OPENSHIFT_INSTALL_SSH_PUB_KEY` | The SSH key used to access all nodes within the cluster. This is optional. | + +## Platform-Specific + +| Environment Variable | Description | +|:----------------------------------|:-----------------------------------------------------------------------------------------| +| `OPENSHIFT_INSTALL_AWS_REGION` | The AWS region to be used for installation. | +| `OPENSHIFT_INSTALL_LIBVIRT_URI` | The libvirt connection URI to be used. This must be accessible from the running cluster. | diff --git a/tests/README.md b/tests/README.md index 0b6ddebf5e1..a64f832d3a5 100644 --- a/tests/README.md +++ b/tests/README.md @@ -1,59 +1,8 @@ -# Tectonic Installer Tests - - -## Running basic tests on PRs - -Our basic set of tests includes: -- Code linting -- Backend unit tests - -They are run on **every** PR by default. Successful basic tests are required in -order to merge any PRs. Before starting to test your proposed changes, they are -temporarily merged into the target branch of the pull request. - -### Actions required -- **none** - - -## Running smoke - -In addition to our basic set of tests we have smoke tests which are running on AWS platform only. - -### Actions required -- Add the `run-smoke-tests` GitHub label - -### FAQ -- *I am not able to add labels, what should I do?* - - Please ask one of the [repository](../OWNERS) [maintainers](../OWNERS_ALIASES) to add the - labels. - -- *How do I retrigger the tests?* - - comment with `ok to test` on the PR. - -- *I forgot to add the GitHub labels. Can I add them after creating the PR?* - - Yes, just add the GitHub labels and comment `ok to test` on the PR. - -- *What if I trigger the tests twice in a small time frame?* - - Triggering the tests twice in a small time frame results in two test - executions. The result of the most recent execution will be reported as a PR - status in GitHub. - -- *What can I do in case I run into test flakes continually?* - - 1. Make sure the test failure is in **no** way related to your PR changes. - Test your changes locally thoroughly. - 2. Document the flake in Jira in the `INST` project as *issue type* "bug" with the - `flake` label. - 3. Get the approval of another person. - 4. Merge the PR. +# OpenShift Installer Tests ## Running smoke tests locally -### 1. Expose environment variables +1. Expose environment variables To run the smoke tests locally you need to set the following environment variables: @@ -69,9 +18,7 @@ DOMAIN AWS_REGION ``` -### 2. Launch the tests +2. Launch the tests Once the environment variables are set, run `./tests/run.sh aws`. 
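+
+For example (the values shown are illustrative; every variable listed above must be set):
+
+```sh
+export DOMAIN=devcluster.example.com   # illustrative value
+export AWS_REGION=us-east-1            # illustrative value
+./tests/run.sh aws
+```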
-## Or, if you already have a cluster running
-
-### Follow the [smoke/README.md](./smoke/README.md).
+If you already have a cluster running, follow [smoke/README.md](./smoke/README.md).
diff --git a/tests/smoke/README.md b/tests/smoke/README.md
index 0fc622686f6..62b0b35dfb0 100644
--- a/tests/smoke/README.md
+++ b/tests/smoke/README.md
@@ -1,12 +1,12 @@
-# Tectonic Smoke Tests
+# OpenShift Installer Smoke Tests
 
-This directory contains all smoke tests for Tectonic.
-The smoke tests are a set of Golang test files that perform minimal validation of a running Tectonic cluster.
+This directory contains all smoke tests for the OpenShift Installer.
+The smoke tests are a set of Golang test files that perform minimal validation of a running OpenShift cluster.
 
 ## Getting Started
 
-The smoke tests assume a running Tectonic cluster, so before running any tests:
-1. create a Tectonic cluster; and
+The smoke tests assume a running OpenShift cluster, so before running any tests:
+1. create an OpenShift cluster; and
 2. download the cluster's kubeconfig to a known location.
 
 ## Running