add osa origin conformance e2e test via base images #1374
Merged: openshift-merge-robot merged 2 commits into openshift:master from mjudeikis:feature/osa-e2e-test-boilerplate on Sep 10, 2018.
New file, 2 additions:

```yaml
# These templates are being used for testing OpenShift on Azure.
# At the moment it cannot be reused to launch cluster for Origin tests
```
ci-operator/templates/openshift-azure/cluster-launch-e2e-azure-conformance.yaml (242 additions, 0 deletions)
```yaml
kind: Template
apiVersion: template.openshift.io/v1

parameters:
- name: JOB_NAME_SAFE
  required: true
- name: JOB_NAME_HASH
  required: true
- name: LOCAL_IMAGE_BIN
  required: true
- name: IMAGE_TESTS
  required: true
- name: IMAGE_SYNC
  required: true
- name: NAMESPACE
  required: true
- name: CLUSTER_TYPE
  value: "azure"
  required: true
- name: TEST_FOCUS
- name: TEST_SKIP
  value: "\\[local\\]"
- name: TEST_FOCUS_SERIAL
- name: TEST_SKIP_SERIAL
  value: "\\[local\\]"

objects:

# We want the cluster to be able to access these images
- kind: RoleBinding
  apiVersion: authorization.openshift.io/v1
  metadata:
    name: ${JOB_NAME_SAFE}-image-puller
    namespace: ${NAMESPACE}
  roleRef:
    name: system:image-puller
  subjects:
  - kind: SystemGroup
    name: system:unauthenticated

- kind: Pod
  apiVersion: v1
  metadata:
    name: ${JOB_NAME_SAFE}
    namespace: ${NAMESPACE}
    annotations:
      # we want to gather the teardown logs no matter what
      ci-operator.openshift.io/wait-for-container-artifacts: teardown
  spec:
    restartPolicy: Never
    activeDeadlineSeconds: 10800
    terminationGracePeriodSeconds: 600
    volumes:
    - name: artifacts
      emptyDir: {}
    - name: shared-tmp
      emptyDir: {}
    - name: openshift-tmp
      emptyDir: {}
    - name: cluster-secrets-azure
      secret:
        secretName: e2e-azure-secret

    containers:

    # Executes the Origin conformance tests
    - name: test
      image: ${IMAGE_TESTS}
      resources:
        requests:
          cpu: 1
          memory: 300Mi
        limits:
          cpu: 3
          memory: 4Gi
      volumeMounts:
      - name: shared-tmp
        mountPath: /tmp/shared
      - name: openshift-tmp
        mountPath: /tmp/openshift
      - name: artifacts
        mountPath: /tmp/artifacts
      env:
      - name: HOME
        value: /tmp/shared/home
      command:
      - /bin/bash
      - -c
      - |
        #!/bin/bash
        set -euo pipefail

        trap 'touch /tmp/shared/exit' EXIT
        trap 'kill $(jobs -p); exit 0' TERM

        cp "$(which oc)" /tmp/shared/

        mkdir -p "${HOME}"

        # wait until the setup job creates admin.kubeconfig
        while true; do
          if [[ ! -f /tmp/shared/_data/_out/admin.kubeconfig ]]; then
            sleep 15 & wait
            continue
          fi
          # if the exit marker is present alongside the kubeconfig, teardown
          # is already in progress, so bail out
          if [[ -f /tmp/shared/exit ]]; then
            exit 1
          fi
          break
        done
        echo "Found shared kubeconfig"

        # don't let clients impact the global kubeconfig
        cp -r /tmp/shared/_data /tmp/openshift/
        export KUBECONFIG=/tmp/openshift/_data/_out/admin.kubeconfig

        PATH=/usr/libexec/origin:$PATH

        # TODO: the test binary should really be a more structured command - most of
        # these flags should be autodetected from the running cluster.
        # TODO: bump nodes up to 40 again
        set -x
        if [[ -n "${TEST_FOCUS}" ]]; then
          ginkgo -v -noColor -nodes=30 $( which extended.test ) -- \
            -ginkgo.focus="${TEST_FOCUS}" -ginkgo.skip="${TEST_SKIP}" \
            -e2e-output-dir /tmp/artifacts -report-dir /tmp/artifacts/junit \
            -test.timeout=2h || rc=$?
        fi
        if [[ -n "${TEST_FOCUS_SERIAL}" ]]; then
          ginkgo -v -noColor -nodes=1 $( which extended.test ) -- \
            -ginkgo.focus="${TEST_FOCUS_SERIAL}" -ginkgo.skip="${TEST_SKIP_SERIAL}" \
            -e2e-output-dir /tmp/artifacts -report-dir /tmp/artifacts/junit/serial \
            -test.timeout=2h || rc=$?
        fi
        exit ${rc:-0}

    # Runs an install
    - name: setup
      image: ${LOCAL_IMAGE_BIN}
      volumeMounts:
      - name: shared-tmp
        mountPath: /tmp/shared
      - name: cluster-secrets-azure
        mountPath: /etc/azure/credentials
      env:
      - name: INSTANCE_PREFIX
        value: ${NAMESPACE}-${JOB_NAME_HASH}
      - name: TYPE
        value: ${CLUSTER_TYPE}
      - name: HOME
        value: /tmp/shared/home
      - name: SYNC_IMAGE
        value: ${IMAGE_SYNC}
      command:
      - /bin/bash
      - -c
      - |
        #!/bin/bash
        set -euo pipefail

        # on failure, leave an exit marker for the other containers; always copy
        # out the _data directory and preserve the original exit code
        trap 'rc=$?; if [[ $rc -ne 0 ]]; then
          touch /tmp/shared/exit;
        fi;
        cp -r /go/src/github.com/openshift/openshift-azure/_data /tmp/shared &>/dev/null
        exit $rc' EXIT
        trap 'kill $(jobs -p); exit 0' TERM

        # Cluster creation specific configuration.
        mkdir -p "${HOME}"
        source /etc/azure/credentials/secret
        az login --service-principal -u ${AZURE_CLIENT_ID} -p ${AZURE_CLIENT_SECRET} --tenant ${AZURE_TENANT_ID} &>/dev/null
        # AAD integration configuration - we don't test AAD, so populate with dummy values
        export AZURE_AAD_CLIENT_ID=$AZURE_CLIENT_ID
        export AZURE_AAD_CLIENT_SECRET=$AZURE_CLIENT_SECRET
        echo "Using sync image ${SYNC_IMAGE}"
        export DNS_DOMAIN=osadev.cloud
        export DNS_RESOURCEGROUP=dns
        export DEPLOY_VERSION=v3.10
        export RUN_SYNC_LOCAL=true
        export IMAGE_RESOURCEGROUP=images
        export IMAGE_RESOURCENAME=$(az image list -g $IMAGE_RESOURCEGROUP -o json --query "[?starts_with(name, '${DEPLOY_OS:-rhel7}-${DEPLOY_VERSION//v}') && tags.valid=='true'].name | sort(@) | [-1]" | tr -d '"')
        # create cluster for test
        cd /go/src/github.com/openshift/openshift-azure/
        ./hack/create.sh ${INSTANCE_PREFIX}

    # Performs cleanup of all created resources
    - name: teardown
      image: ${LOCAL_IMAGE_BIN}
      volumeMounts:
      - name: shared-tmp
        mountPath: /tmp/shared
      - name: cluster-secrets-azure
        mountPath: /etc/azure/credentials
      - name: artifacts
        mountPath: /tmp/artifacts
      env:
      - name: INSTANCE_PREFIX
        value: ${NAMESPACE}-${JOB_NAME_HASH}
      - name: TYPE
        value: ${CLUSTER_TYPE}
      - name: HOME
        value: /tmp/shared/home
      command:
      - /bin/bash
      - -c
      - |
        #!/bin/bash

        # teardown collects debug data and then deletes all created resources
        function teardown() {
          set +e
          mkdir -p "${HOME}"
          export HOME=/tmp/shared
          export DNS_DOMAIN=osadev.cloud
          export DNS_RESOURCEGROUP=dns
          export KUBECONFIG=/tmp/shared/_data/_out/admin.kubeconfig

          cp -r /tmp/shared/_data /go/src/github.com/openshift/openshift-azure/
          cd /go/src/github.com/openshift/openshift-azure/
          source /etc/azure/credentials/secret
          az login --service-principal -u ${AZURE_CLIENT_ID} -p ${AZURE_CLIENT_SECRET} --tenant ${AZURE_TENANT_ID} &>/dev/null
          oc get po --all-namespaces -o wide > /tmp/artifacts/pods
          oc get no -o wide > /tmp/artifacts/nodes
          oc get events --all-namespaces > /tmp/artifacts/events
          ./hack/delete.sh ${INSTANCE_PREFIX}
        }

        trap 'teardown' EXIT
        trap 'kill $(jobs -p); exit 0' TERM

        # wait for the exit marker, which triggers teardown via the EXIT trap
        for i in `seq 1 120`; do
          if [[ -f /tmp/shared/exit ]]; then
            exit 0
          fi
          sleep 60 & wait
        done
```
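The three containers in the template never talk to each other directly; they coordinate entirely through marker files on the shared emptyDir volume. The sketch below is illustrative only and not part of the PR (the paths and timings are made up): a background "setup" process uses an EXIT trap to drop an exit marker on failure, while the "test" loop polls for the kubeconfig and gives up if the marker appears.

```shell
#!/bin/bash
# Illustrative sketch of the template's marker-file handshake (not part of
# the PR; paths and delays are invented for the demo).
set -uo pipefail

shared=$(mktemp -d)   # stands in for the shared-tmp emptyDir volume

# "setup" stand-in: the EXIT trap preserves the exit code and creates the
# exit marker only on failure, mirroring the trap in the setup container.
(
  trap 'rc=$?; if [[ $rc -ne 0 ]]; then touch "$shared/exit"; fi; exit $rc' EXIT
  sleep 1
  mkdir -p "$shared/_data/_out"
  touch "$shared/_data/_out/admin.kubeconfig"
) &

# "test" stand-in: poll for admin.kubeconfig; once it exists, check whether
# the exit marker also appeared, which would mean teardown is in progress.
while true; do
  if [[ ! -f "$shared/_data/_out/admin.kubeconfig" ]]; then
    sleep 0.2
    continue
  fi
  if [[ -f "$shared/exit" ]]; then
    echo "setup failed, teardown in progress"
    exit 1
  fi
  break
done
echo "Found shared kubeconfig"
wait
```

In the real template the same marker also tells the teardown container to start cleaning up, which is why every script either creates or watches /tmp/shared/exit.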
Review question: Why do you have cluster-launch-e2e-azure and cluster-launch-e2e-azure-conformance?

Answer: The first is meant to run OpenShift-on-Azure-specific e2e tests; the second runs the Origin conformance suite against a cluster deployed on Azure.