ocs-operator Test Harness for OpenShift Dedicated E2E Test Suite

This test harness is for validating ocs-operator deployed as an add-on in an OpenShift Dedicated (OSD) cluster. This code is NOT run directly.

Compiled docker images of this test harness are pushed to quay. They are fetched by the osde2e tool and run inside the cluster. Running the add-on tests using osde2e in CI is covered in osde2e's add-ons documentation, but that isn't much help while developing the add-on. This README documents the manual workflow, and the repo supplies scripts and configuration files for running osde2e against a cluster manually, optionally with this test harness.

Currently, the ocs-operator add-on needs to be installed manually. The deploy_ocs_on_osd.sh script sets up the cluster environment so that the add-on can then be deployed either via the OSD dashboard or using the script in John Strunk's repo.

This repository contains:

deploy_ocs_on_osd script

NOTE: The script assumes that only one cluster is active. It uses only the first cluster in ocm's cluster list and ignores the rest. It also reads and writes files in $PWD.

Prerequisites:

  • ocm is in $PATH and is logged in. Be sure to log in to the correct OSD environment; for development and testing, it should probably be stage. The cluster should also be created in this environment.
  • auths.json is populated from the auths.json.template present in $PWD if you need custom pull secrets; skip this otherwise (see the sketch after this list).
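
A minimal sketch of these prerequisites, assuming the stage OCM API URL and that auths.json follows the standard docker/podman pull-secret layout (check auths.json.template in the repo for the authoritative format):

# Log in to the stage OSD environment; the token comes from the OCM console.
ocm login --url=https://api.stage.openshift.com --token="${OCM_TOKEN}"

# Optional custom pull secrets to merge into the cluster's pull secret.
# The registry name and credentials below are placeholders.
cat > auths.json <<'EOF'
{
  "auths": {
    "registry.example.com": {
      "auth": "<base64 of user:password>",
      "email": "user@example.com"
    }
  }
}
EOF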

Usage

The script can be used in two modes:

  • --details-only: Fetch the cluster details and populate the files needed by envrc. This will enable osde2e to run.
  • --prepare: Prepare the cluster for ocs-operator deployment. Always re-populates the envrc files.
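
For example, both invocations read ocm's cluster list and write into $PWD:

./deploy_ocs_on_osd.sh --details-only   # fetch details and populate the envrc files only
./deploy_ocs_on_osd.sh --prepare        # additionally prepare the cluster for the add-on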

In either of the modes, the script will write the following files to $PWD:

  • cluster_id
  • cluster_name
  • admin: kubeadmin user's password.
  • kubeconfig: Point $KUBECONFIG at its path to access the cluster with oc (see the example after this list).
  • secret.json: Pull secrets fetched from the cluster.
  • pull-secret-${cluster-id}.json: Pull secrets compiled by merging the auths.json into secret.json; pushed to the cluster during --prepare.
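
A quick sanity check once the script has run, assuming the files were written to the current directory:

export KUBECONFIG="$PWD/kubeconfig"
oc whoami
oc get nodes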

Preparing the cluster and deploying ocs-operator

  • Create an OSD cluster with 9 worker nodes and 4 load balancers.
  • Run the script:
    • Use --details-only if you wish simply to run osde2e against the cluster without deploying the add-on.
    • Use --prepare to also prepare the cluster to be able to deploy the ocs-operator.
  • If the script exits with an error because the desired cluster state was not reached in time, just re-run it. The script is idempotent and, at most, will overwrite the cluster details stored in the files.
  • Pull John Strunk's repo and deploy the ocs-operator add-on manually, or use the OSD console to do so.

Building the container image

Pre-requisites

  • Properly configured go workspace. While $GOPATH is mandatory, I also set GO111MODULE=on and GOROOT=$(go env GOROOT). Check the example .envrc below.
  • Running docker daemon.
  • docker client authenticated to the registry where you wish to push the image.
  • Run docker run hello-world to check that docker is functional.
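
For example, assuming the image will be pushed to quay.io (swap in your registry if it differs):

docker login quay.io          # authenticate to the target registry
docker run --rm hello-world   # verify the daemon is functional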

Don't run go mod tidy. It will fetch the latest versions of some modules and break the suite.

Building the image

# Build from the repository root, tag the image with the registry path, then push.
IMAGE_NAME="ocs-operator-test-harness"
IMAGE_TAG="0.01"
IMAGE_REPO="quay.io/mkarnikredhat/ocs-operator-osde2e-test-harness"
docker build -t "$IMAGE_NAME":"$IMAGE_TAG" .
docker tag "$IMAGE_NAME":"$IMAGE_TAG" "$IMAGE_REPO":"$IMAGE_TAG"
docker push "$IMAGE_REPO":"$IMAGE_TAG"
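
After pushing, point OCS_ADDON_TEST_HARNESS and OCS_ADDON_TEST_HARNESS_TAG in envrc at the new image so that osde2e pulls your build instead of the default one from quay.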

Running osde2e Manually

Pre-requisites

  • Properly configure the go workspace. While $GOPATH is mandatory, I also set GO111MODULE=on and GOROOT=$(go env GOROOT).
  • Check out osde2e and compile the osde2e tool by running make build.
  • Edit envrc to set the correct values for:
    • OSD_PROJECT_DIR: directory where all the cluster details files reside; in case it's not $PWD. Use absolute path.
    • OCS_ADDON_TEST_HARNESS: Container image repository for the test harness. Default should point to quay.
    • OCS_ADDON_TEST_HARNESS_TAG: Image tag to pull. There is currently no latest tag on the repo; once one exists, the default will point to it and this shouldn't need changing.
    • OSD_ENV: For development and testing, this should probably be set to stage. This should match the environment you authenticate to using ocm and create the cluster in.
  • The repo ships with osde2e_addons_config.yaml, which runs only the addons suite by default. Update it to add any other suites to be run. There are example configuration files in the configs directory of the osde2e repo.

Running the test suite

Run the deploy script and re-load envrc each time the cluster is re-deployed.

  • Run ./deploy_ocs_on_osd.sh --details-only to populate the cluster details files.
  • source envrc to load the cluster details into environment variables and configure osde2e.
  • Run ~/git/openshift/osde2e/out/osde2e test --custom-config osde2e_addons_config.yaml, adjusting the paths to match your checkout.
  • The output should be in "$ARTIFACTS/install/junit-ocs-operator.xml".
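
For example, a quick way to inspect the results, assuming $ARTIFACTS is set by envrc and the default output layout:

ls "$ARTIFACTS/install/"
grep "<failure" "$ARTIFACTS/install/junit-ocs-operator.xml" || echo "no failures recorded"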

It is possible to configure the environment using the env section in the osde2e_addons_config.yaml file. An example of this is here. However, I find it much more convenient to run the script to fetch all the cluster details and load them via environment variables.

Custom .envrc for use with direnv

# Enable Go modules and pin GOROOT for building the test harness.
export GO111MODULE=on
export GOROOT="$(go env GOROOT)"
# Load the cluster details produced by deploy_ocs_on_osd.sh, then point oc at the cluster.
source envrc
export KUBECONFIG="$OSD_PROJECT_DIR/kubeconfig"

$KUBECONFIG is set up here because the supplied envrc file is for osde2e, which does not require it; osde2e uses $TEST_KUBECONFIG instead.
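
If you manage the file with direnv, remember to approve it after creating or editing it:

direnv allow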
