Merged

@@ -254,31 +254,23 @@ objects:
value: ${CLUSTER_TYPE}
- name: AWS_SHARED_CREDENTIALS_FILE
value: /etc/openshift-installer/.awscred
- name: OPENSHIFT_INSTALL_CLUSTER_NAME
- name: AWS_REGION
value: us-east-1
- name: CLUSTER_NAME
value: ${NAMESPACE}-${JOB_NAME_HASH}
- name: OPENSHIFT_INSTALL_BASE_DOMAIN
- name: BASE_DOMAIN
value: origin-ci-int-aws.dev.rhcloud.com
- name: OPENSHIFT_INSTALL_EMAIL_ADDRESS
value: test@ci.openshift.io
- name: OPENSHIFT_INSTALL_PASSWORD
value: verysecure
- name: OPENSHIFT_INSTALL_SSH_PUB_KEY_PATH
- name: SSH_PUB_KEY_PATH
value: /etc/openshift-installer/ssh-publickey
- name: OPENSHIFT_INSTALL_PULL_SECRET_PATH
- name: PULL_SECRET_PATH
value: /etc/openshift-installer/pull-secret
- name: OPENSHIFT_INSTALL_PLATFORM
value: ${CLUSTER_TYPE}
- name: OPENSHIFT_INSTALL_AWS_REGION
value: us-east-1
- name: OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE
Contributor: This is now silently dropped, so CI would use the pinned release instead of the latest.

Contributor (author): No, that variable is still respected, even after my installer PR.
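
A minimal sketch, not part of this PR: if the concern is that the override could one day be dropped silently, the CI step could assert it is set before invoking the installer, so a regression fails loudly instead of quietly installing the pinned release.

    # Hypothetical guard; the variable name is the one used in this template.
    if [[ -z "${OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE:-}" ]]; then
      echo "OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE is empty; refusing to fall back to the pinned release"
      exit 1
    fi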

value: ${RELEASE_IMAGE_LATEST}
- name: OPENSHIFT_INSTALL_OPENSTACK_IMAGE
- name: OPENSTACK_IMAGE
value: rhcos
- name: OPENSHIFT_INSTALL_OPENSTACK_CLOUD
value: openstack-cloud
- name: OPENSHIFT_INSTALL_OPENSTACK_REGION
- name: OPENSTACK_REGION
value: RegionOne
- name: OPENSHIFT_INSTALL_OPENSTACK_EXTERNAL_NETWORK
- name: OPENSTACK_EXTERNAL_NETWORK
value: public
- name: OS_CLOUD
value: openstack-cloud
@@ -299,7 +291,73 @@ objects:
mkdir /tmp/artifacts/installer &&
/bin/openshift-install version >/tmp/artifacts/installer/version

export _CI_ONLY_STAY_AWAY_OPENSHIFT_INSTALL_AWS_USER_TAGS="{\"expirationDate\": \"$(date -d '4 hours' --iso=minutes --utc)\"}"
export EXPIRATION_DATE=$(date -d '4 hours' --iso=minutes --utc)
export CLUSTER_ID=$(uuidgen --random)
export SSH_PUB_KEY=$(cat "${SSH_PUB_KEY_PATH}")
export PULL_SECRET=$(cat "${PULL_SECRET_PATH}")

if [[ "${CLUSTER_TYPE}" == "aws" ]]; then
cat > /tmp/artifacts/installer/install-config.yml << EOF
baseDomain: ${BASE_DOMAIN}
clusterID: ${CLUSTER_ID}
machines:
- name: master
replicas: 3
- name: worker
replicas: 3
metadata:
name: ${CLUSTER_NAME}
networking:
clusterNetworks:
- cidr: 10.128.0.0/14
hostSubnetLength: 9
serviceCIDR: 172.30.0.0/16
type: OpenshiftSDN
platform:
aws:
region: ${AWS_REGION}
vpcCIDRBlock: 10.0.0.0/16
userTags:
expirationDate: ${EXPIRATION_DATE}
pullSecret: |
${PULL_SECRET}
sshKey: |
${SSH_PUB_KEY}
EOF
elif [[ "${CLUSTER_TYPE}" == "openstack" ]]; then
cat > /tmp/artifacts/installer/install-config.yml << EOF
baseDomain: ${BASE_DOMAIN}
clusterID: ${CLUSTER_ID}
machines:
- name: master
replicas: 3
- name: worker
replicas: 3
metadata:
name: ${CLUSTER_NAME}
networking:
clusterNetworks:
- cidr: 10.128.0.0/14
hostSubnetLength: 9
serviceCIDR: 172.30.0.0/16
type: OpenshiftSDN
platform:
openstack:
NetworkCIDRBlock: 10.0.0.0/16
baseImage: ${OPENSTACK_IMAGE}
cloud: ${OS_CLOUD}
externalNetwork: ${OPENSTACK_EXTERNAL_NETWORK}
region: ${OPENSTACK_REGION}
pullSecret: |
${PULL_SECRET}
sshKey: |
${SSH_PUB_KEY}
EOF
else
echo "Unsupported cluster type '${CLUSTER_NAME}'"
exit 1
fi

/bin/openshift-install --dir=/tmp/artifacts/installer --log-level=debug create cluster &
wait "$!"

@@ -226,31 +226,23 @@ objects:
value: ${CLUSTER_TYPE}
- name: AWS_SHARED_CREDENTIALS_FILE
value: /etc/openshift-installer/.awscred
- name: OPENSHIFT_INSTALL_CLUSTER_NAME
- name: AWS_REGION
value: us-east-1
- name: CLUSTER_NAME
value: ${NAMESPACE}-${JOB_NAME_HASH}
- name: OPENSHIFT_INSTALL_BASE_DOMAIN
- name: BASE_DOMAIN
value: origin-ci-int-aws.dev.rhcloud.com
- name: OPENSHIFT_INSTALL_EMAIL_ADDRESS
value: test@ci.openshift.io
- name: OPENSHIFT_INSTALL_PASSWORD
value: verysecure
- name: OPENSHIFT_INSTALL_SSH_PUB_KEY_PATH
- name: SSH_PUB_KEY_PATH
value: /etc/openshift-installer/ssh-publickey
- name: OPENSHIFT_INSTALL_PULL_SECRET_PATH
- name: PULL_SECRET_PATH
value: /etc/openshift-installer/pull-secret
- name: OPENSHIFT_INSTALL_PLATFORM
value: ${CLUSTER_TYPE}
- name: OPENSHIFT_INSTALL_AWS_REGION
value: us-east-1
- name: OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE
value: ${RELEASE_IMAGE_LATEST}
- name: OPENSHIFT_INSTALL_OPENSTACK_IMAGE
- name: OPENSTACK_IMAGE
value: rhcos
- name: OPENSHIFT_INSTALL_OPENSTACK_CLOUD
value: openstack-cloud
- name: OPENSHIFT_INSTALL_OPENSTACK_REGION
- name: OPENSTACK_REGION
value: RegionOne
- name: OPENSHIFT_INSTALL_OPENSTACK_EXTERNAL_NETWORK
- name: OPENSTACK_EXTERNAL_NETWORK
value: public
- name: OS_CLOUD
value: openstack-cloud
@@ -271,7 +263,73 @@ objects:
mkdir /tmp/artifacts/installer &&
/bin/openshift-install version >/tmp/artifacts/installer/version

export _CI_ONLY_STAY_AWAY_OPENSHIFT_INSTALL_AWS_USER_TAGS="{\"expirationDate\": \"$(date -d '4 hours' --iso=minutes --utc)\"}"
export EXPIRATION_DATE=$(date -d '4 hours' --iso=minutes --utc)
export CLUSTER_ID=$(uuidgen --random)
export SSH_PUB_KEY=$(cat "${SSH_PUB_KEY_PATH}")
export PULL_SECRET=$(cat "${PULL_SECRET_PATH}")

if [[ "${CLUSTER_TYPE}" == "aws" ]]; then
cat > /tmp/artifacts/installer/install-config.yml << EOF
baseDomain: ${BASE_DOMAIN}
clusterID: ${CLUSTER_ID}
machines:
- name: master
replicas: 3
- name: worker
replicas: 3
metadata:
name: ${CLUSTER_NAME}
networking:
clusterNetworks:
- cidr: 10.128.0.0/14
hostSubnetLength: 9
serviceCIDR: 172.30.0.0/16
type: OpenshiftSDN
platform:
aws:
region: ${AWS_REGION}
vpcCIDRBlock: 10.0.0.0/16
userTags:
expirationDate: ${EXPIRATION_DATE}
pullSecret: |
${PULL_SECRET}
sshKey: |
${SSH_PUB_KEY}
EOF
elif [[ "${CLUSTER_TYPE}" == "openstack" ]]; then
cat > /tmp/artifacts/installer/install-config.yml << EOF
baseDomain: ${BASE_DOMAIN}
clusterID: ${CLUSTER_ID}
machines:
- name: master
replicas: 3
- name: worker
replicas: 3
metadata:
name: ${CLUSTER_NAME}
networking:
clusterNetworks:
- cidr: 10.128.0.0/14
hostSubnetLength: 9
serviceCIDR: 172.30.0.0/16
type: OpenshiftSDN
platform:
openstack:
NetworkCIDRBlock: 10.0.0.0/16
baseImage: ${OPENSTACK_IMAGE}
cloud: ${OS_CLOUD}
externalNetwork: ${OPENSTACK_EXTERNAL_NETWORK}
region: ${OPENSTACK_REGION}
pullSecret: |
${PULL_SECRET}
sshKey: |
${SSH_PUB_KEY}
EOF
else
echo "Unsupported cluster type '${CLUSTER_NAME}'"
exit 1
fi

/bin/openshift-install --dir=/tmp/artifacts/installer --log-level=debug create cluster &
wait "$!"

@@ -233,10 +233,6 @@ objects:
value: ${CLUSTER_TYPE}
- name: OPENSHIFT_INSTALL_CLUSTER_NAME
value: ${NAMESPACE}-${JOB_NAME_HASH}
- name: OPENSHIFT_INSTALL_EMAIL_ADDRESS
value: test@ci.openshift.io
- name: OPENSHIFT_INSTALL_PASSWORD
value: verysecure
- name: OPENSHIFT_INSTALL_SSH_PUB_KEY_PATH
value: /etc/openshift-installer/ssh-publickey
- name: OPENSHIFT_INSTALL_PULL_SECRET_PATH
@@ -255,34 +251,80 @@ objects:
trap 'rc=$?; if test "${rc}" -eq 0; then touch /tmp/config-success; else touch /tmp/exit; fi; exit "${rc}"' EXIT
trap 'CHILDREN=$(jobs -p); if test -n "${CHILDREN}"; then kill ${CHILDREN}; fi' TERM

if [[ ${TYPE} == 'gcp' ]]; then
export OPENSHIFT_INSTALL_PLATFORM=libvirt
export OPENSHIFT_INSTALL_BASE_DOMAIN=origin-ci-int-gce.dev.rhcloud.com
export OPENSHIFT_INSTALL_LIBVIRT_URI="qemu+tcp://192.168.122.1/system"
export OPENSHIFT_INSTALL_LIBVIRT_IMAGE="file:///unused"
fi
if [[ ${TYPE} == 'aws' ]]; then
export OPENSHIFT_INSTALL_PLATFORM="aws"
export OPENSHIFT_INSTALL_BASE_DOMAIN="test.ose"
export AWS_SHARED_CREDENTIALS_FILE="/etc/openshift-installer/.awscred"
export OPENSHIFT_INSTALL_AWS_REGION="us-east-1"
fi
mkdir /tmp/artifacts/installer &&
/bin/openshift-install version >/tmp/artifacts/installer/version

export _CI_ONLY_STAY_AWAY_OPENSHIFT_INSTALL_AWS_USER_TAGS="{\"expirationDate\": \"$(date -d '4 hours' --iso=minutes --utc)\"}"
/bin/openshift-install --dir=/tmp/artifacts/installer --log-level=debug create install-config
export CLUSTER_ID=$(uuidgen --random)
Contributor: Why not use JOB_NAME_HASH? That would help us identify which resources were created by a particular CI job.

Contributor (author): The cluster ID has to be a version 4 UUID. If JOB_NAME_HASH satisfies that constraint, we can use it. Do you know whether it does?

Contributor: Ah, right, it doesn't. It seems this value is logged in the container logs anyway, so the resources can still be tracked down to the CI job.
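
A minimal sketch, not part of this PR, of the constraint discussed above: use JOB_NAME_HASH as the cluster ID only if it already matches the version-4 UUID shape, otherwise fall back to uuidgen.

    # Hypothetical check; JOB_NAME_HASH is assumed to be available in the environment.
    uuid_v4_re='^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$'
    if [[ "${JOB_NAME_HASH,,}" =~ $uuid_v4_re ]]; then
      CLUSTER_ID="${JOB_NAME_HASH}"
    else
      CLUSTER_ID="$(uuidgen --random)"
    fi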


# Update install configs to set desired number of masters and workers
sed -i "/master/{n;s/1/${MASTERS}/}" /tmp/artifacts/installer/.openshift_install_state.json
sed -i "/worker/{n;s/1/${WORKERS}/}" /tmp/artifacts/installer/.openshift_install_state.json
sed -i "/master/{n;n;s/1/${MASTERS}/}" /tmp/artifacts/installer/install-config.yml
sed -i "/worker/{n;n;s/1/${WORKERS}/}" /tmp/artifacts/installer/install-config.yml
if [[ "${CLUSTER_TYPE}" == "gcp" ]]; then
cat > /tmp/artifacts/installer/install-config.yml << EOF
baseDomain: origin-ci-int-gce.dev.rhcloud.com
clusterID: ${CLUSTER_ID}
machines:
- name: master
replicas: ${MASTERS}
- name: worker
replicas: ${WORKERS}
metadata:
name: ${CLUSTER_NAME}
networking:
clusterNetworks:
- cidr: 10.128.0.0/14
hostSubnetLength: 9
serviceCIDR: 172.30.0.0/16
type: OpenshiftSDN
platform:
libvirt:
URI: qemu+tcp://192.168.122.1/system
defaultMachinePlatform:
image: file:///unused
masterIPs: null
network:
if: tt0
ipRange: 192.168.126.0/24
pullSecret: |
${PULL_SECRET}
sshKey: |
${SSH_PUB_KEY}
Contributor: This should be left blank for BYOR; the public key is injected via the playbook.

EOF
elif [[ "${CLUSTER_TYPE}" == "aws" ]]; then
export AWS_SHARED_CREDENTIALS_FILE="/etc/openshift-installer/.awscred"
export EXPIRATION_DATE=$(date -d '4 hours' --iso=minutes --utc)
cat > /tmp/artifacts/installer/install-config.yml << EOF
baseDomain: test.ose
clusterID: ${CLUSTER_ID}
machines:
- name: master
replicas: ${MASTERS}
- name: worker
replicas: ${WORKERS}
metadata:
name: ${CLUSTER_NAME}
networking:
clusterNetworks:
- cidr: 10.128.0.0/14
hostSubnetLength: 9
serviceCIDR: 172.30.0.0/16
type: OpenshiftSDN
platform:
aws:
region: us-east-1
vpcCIDRBlock: 10.0.0.0/16
userTags:
expirationDate: ${EXPIRATION_DATE}
pullSecret: |
${PULL_SECRET}
sshKey: |
${SSH_PUB_KEY}
EOF
else
echo "Unsupported cluster type '${CLUSTER_NAME}'"
exit 1
fi

/bin/openshift-install --dir=/tmp/artifacts/installer --log-level=debug create ignition-configs &
Contributor: create ignition-configs destroys install-config.yml - is this an installer bug? In any case, in CI we should be backing up the install config, since it can no longer be reconstructed from env vars.

Contributor (author): That is core to the design of the installer (so no, not a bug). Why is the install config needed after the Ignition configs are generated?

Contributor: I'd keep the backup to ensure that all the vars got templated correctly; it is not required, though.

Contributor (author): The installer's state file (.openshift_install_state.json) should be archived, so the install config can be pulled out of there if we need to inspect it after the fact.

Contributor (@vrutkovs, Dec 12, 2018): CI stores this file, but it doesn't seem to contain any config - https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_installer/861/pull-ci-openshift-installer-master-e2e-aws/2219/artifacts/e2e-aws/installer/.openshift_install_state.json - a bug in PR 861? Never mind, it does, under installconfig.InstallConfig.
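
A minimal sketch, not part of this PR, of the archival idea above: pull the install config back out of the installer state file after the fact, assuming jq is available in the image and the config is stored under a key containing "installconfig.InstallConfig".

    # Hypothetical extraction for post-mortem inspection.
    jq 'with_entries(select(.key | contains("installconfig.InstallConfig")))' \
      /tmp/artifacts/installer/.openshift_install_state.json \
      > /tmp/artifacts/installer/install-config-backup.json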

wait "$!"


# Runs an install
- name: setup
image: ${IMAGE_ANSIBLE}