This test harness is for validating ocs-operator deployed as an add-on in an OpenShift Dedicated (OSD) cluster. This code is NOT run directly.
Compiled docker images of this test harness are pushed to quay. These are
fetched by the osde2e tool and run inside the cluster. Running the add-on
using osde2e in the CI is covered in osde2e's add-ons documentation.
However, that isn't helpful while developing the add-on. This file covers those details, and this repo supplies additional scripts and configuration files that enable running osde2e against a cluster manually, optionally with this test harness.
Currently, the ocs-operator add-on needs to be installed manually. The `deploy_ocs_on_osd.sh` script sets up the cluster environment so that the add-on can then be deployed either via the OSD dashboard or using the script in John Strunk's repo.
This repository contains:
- The test harness which is compiled into a docker image and run via osde2e.
- `envrc` for configuring the environment variables for running osde2e. I normally use it in a custom `.envrc` file and load/reload it automatically using direnv. There's an example below.
- `deploy_ocs_on_osd.sh` script which can:
  - fetch cluster details into files that are loaded via `envrc`.
  - prepare the cluster for the deployment of the add-on.
- `auths.json.template` which is used with `deploy_ocs_on_osd.sh` to use custom auth tokens for registries.
- `osde2e_addons_config.yaml` which configures the `osde2e` tool to run the add-on test suite.
NOTE: The script assumes that only one cluster is active. It literally uses only the first cluster in the list and ignores the rest. It also reads and writes files in `$PWD`. In addition, it expects that:
- `ocm` is in `$PATH` and is logged in. Be sure to log in to the correct OSD environment. For development and testing, it should probably be `stage`. The cluster should also be created in this environment.
- `auths.json` is populated using `auths.json.template` present in `$PWD` for custom pull secrets. Skip if not needed.
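Before running the script, you can sanity-check your `ocm` session; a small sketch using standard `ocm` subcommands:

```sh
# Confirm you are logged in to the intended OSD environment (probably stage):
ocm whoami
# The script uses the first cluster returned here and ignores the rest:
ocm list clusters
```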
The script can be used in two modes:
- `--details-only`: Fetch the cluster details and populate the files needed by `envrc`. This will enable osde2e to run.
- `--prepare`: Prepare the cluster for ocs-operator deployment. Always re-populates the `envrc` files.
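For example, assuming the script is in the current directory:

```sh
./deploy_ocs_on_osd.sh --details-only   # only fetch cluster details and write the envrc input files
./deploy_ocs_on_osd.sh --prepare        # also prepare the cluster for the ocs-operator add-on
```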
In either mode, the script will write the following files to `$PWD`:
- `cluster_id`
- `cluster_name`
- `admin`: the `kubeadmin` user's password.
- `kubeconfig`: point `$KUBECONFIG` at its path to access the cluster with `oc`.
- `secret.json`: pull secrets fetched from the cluster.
- `pull-secret-${cluster-id}.json`: pull secrets compiled by merging `auths.json` into `secret.json`; pushed to the cluster during `--prepare`.
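Once these files exist, they can be used directly; for instance (a sketch, assuming the script was run in the current directory):

```sh
# Point oc at the fetched kubeconfig:
export KUBECONFIG="$PWD/kubeconfig"
oc whoami
oc get nodes

# The kubeadmin password is stored in the admin file:
cat "$PWD/admin"
```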
- Create an OSD cluster with 9 worker nodes and 4 load balancers.
- Run the script:
  - Use `--details-only` if you simply wish to run osde2e against the cluster without deploying the add-on.
  - Use `--prepare` to also prepare the cluster so that the ocs-operator can be deployed.
- If the script exits with an error because the desired cluster state was not reached in time, just re-run it. The script is idempotent and, at most, will only overwrite the cluster details stored in the files.
- Pull John Strunk's repo and deploy the ocs-operator add-on manually, or use the OSD console to do so.
- Properly configured `go` workspace. While `$GOPATH` is mandatory, I also set `GO111MODULE=on` and `GOROOT=$(go env GOROOT)`. Check the example `.envrc` below.
- Running `docker` daemon.
- `docker` client authenticated to the registry where you wish to push the image.
- Run `docker run hello-world` to check that `docker` is functional.
Don't run `go mod tidy`. It will fetch the latest versions of some modules and break the suite.
```sh
IMAGE_NAME="ocs-operator-test-harness"
IMAGE_TAG="0.01"
IMAGE_REPO="quay.io/mkarnikredhat/ocs-operator-osde2e-test-harness"
docker build -t "$IMAGE_NAME":"$IMAGE_TAG" .
docker tag "$IMAGE_NAME":"$IMAGE_TAG" "$IMAGE_REPO":"$IMAGE_TAG"
docker push "$IMAGE_REPO":"$IMAGE_TAG"
```
- Properly configure the `go` workspace. While `$GOPATH` is mandatory, I also set `GO111MODULE=on` and `GOROOT=$(go env GOROOT)`.
- Checkout osde2e and compile the `osde2e` tool by running `make build`.
- Edit `envrc` to set the correct values for:
  - OSD_PROJECT_DIR: directory where all the cluster details files reside, in case it's not `$PWD`. Use an absolute path.
  - OCS_ADDON_TEST_HARNESS: container image repository for the test harness. The default should point to quay.
  - OCS_ADDON_TEST_HARNESS_TAG: image tag to pull. There is currently no `latest` tag on the repo, but this will be updated to point to it as soon as there is one, and it shouldn't need changing.
  - OSD_ENV: for development and testing, this should probably be set to `stage`. It should match the environment you authenticate to using `ocm` and create the cluster in.
- The repo ships with `osde2e_addons_config.yaml`, which runs only the `addons` suite by default. Update it to add any other suites to be run. There are example configuration files in the osde2e repo in the `configs` directory.
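For illustration, the values set in `envrc` might end up looking something like this (a sketch; the project directory is a placeholder, and the image repository and tag mirror the build example above):

```sh
export OSD_PROJECT_DIR="/home/user/osd-cluster-files"      # absolute path to the cluster details files
export OCS_ADDON_TEST_HARNESS="quay.io/mkarnikredhat/ocs-operator-osde2e-test-harness"
export OCS_ADDON_TEST_HARNESS_TAG="0.01"                   # no latest tag yet, so pin the pushed tag
export OSD_ENV="stage"                                     # must match the ocm environment the cluster lives in
```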
Run the deploy script and re-load envrc each time the cluster is re-deployed.
- Run `./deploy_ocs_on_osd.sh --details-only` to populate the cluster details files.
- Run `source envrc` to load the cluster details into environment variables and configure `osde2e`.
- Run `~/git/openshift/osde2e/out/osde2e test --custom-config osde2e_addons_config.yaml`. Obviously, use the correct paths.
- The output should be in `"$ARTIFACTS/install/junit-ocs-operator.xml"`.
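Putting the steps together, a typical run looks roughly like this (a sketch; the osde2e path depends on where you checked it out):

```sh
./deploy_ocs_on_osd.sh --details-only     # fetch or refresh the cluster details
source envrc                              # load the details into environment variables
~/git/openshift/osde2e/out/osde2e test --custom-config osde2e_addons_config.yaml
ls "$ARTIFACTS/install/junit-ocs-operator.xml"   # test results land here
```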
It is possible to configure the environment using the `env` section in the `osde2e_addons_config.yaml` file. An example of this is here. However, I find it much more convenient to run the script to fetch all the cluster details and load them via environment variables.
An example `.envrc`:

```sh
export GO111MODULE=on
export GOROOT="$(go env GOROOT)"
source envrc
export KUBECONFIG="$OSD_PROJECT_DIR/kubeconfig"
```
`$KUBECONFIG` is set up here because the supplied `envrc` file is meant for `osde2e`, which does not require it; osde2e uses `$TEST_KUBECONFIG` instead.