
Conversation

michaelgugino (Contributor) commented Jan 19, 2018:

This commit builds off #6784

Enables openshift-master/scaleup.yml to call prerequisites.yml as required. Also modifies some failure conditions around how the scaleup plays are called, so that inventories are better aligned with playbook behavior.
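
The shape of the change is roughly the following; this is a sketch of the playbook composition, not the exact diff, and the l_scale_up_hosts value shown is an illustrative assumption:

# playbooks/openshift-master/scaleup.yml (sketch, not the exact diff)
# Evaluate host groups first so new_masters/new_nodes are resolved.
- import_playbook: ../init/evaluate_groups.yml

# Run prerequisites (repos, packages, docker, etc.) on the hosts being
# added before configuring them. The variable below is an assumption.
- import_playbook: ../prerequisites.yml
  vars:
    l_scale_up_hosts: "oo_nodes_to_config:oo_masters_to_config"

# Hand off to the private play that configures the new masters and nodes.
- import_playbook: private/scaleup.yml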

openshift-ci-robot added the size/M label (Denotes a PR that changes 30-99 lines, ignoring generated files) on Jan 19, 2018
michaelgugino (Contributor, Author) commented:

This seems to work well. I deployed 1 master + 1 node, then added 1 new master and 1 node (simultaneously) and ran openshift-master/scaleup.yml; a sketch of the kind of inventory this exercises follows the output below.

Output:

PLAY RECAP ***********************************
master1 : ok=79   changed=4    unreachable=0    failed=0   
node2 : ok=167  changed=57   unreachable=0    failed=0   
master2 : ok=343  changed=132  unreachable=0    failed=0   
node1 : ok=3    changed=0    unreachable=0    failed=0   
localhost                  : ok=23   changed=0    unreachable=0    failed=0   


INSTALLER STATUS ***********************
Initialization             : Complete (0:00:16)
Load balancer Install      : Complete (0:00:00)
Master Install             : Complete (0:03:19)
Node Install               : Complete (0:02:08)
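
For reference, the inventory shape this exercises is roughly the following (a sketch using the hostnames from the recap above, not the actual inventory used); note that the new master also appears in new_nodes, which is exactly what the fail check discussed below enforces:

# Illustrative scaleup inventory (sketch, not the actual file used)
[OSEv3:children]
masters
nodes
etcd
new_masters
new_nodes

[masters]
master1

[etcd]
master1

[nodes]
master1
node1

[new_masters]
master2

[new_nodes]
master2
node2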

michaelgugino (Contributor, Author) commented:

I confirmed with oc get pods and oc get nodes that everything looks to be in order.

Allow playbooks/openshift-master/scaleup.yml to call
prerequisites.yml at the proper time.

Related-to: openshift#6784
- fail:
    # new_masters must be part of new_nodes as well
    msg: >
      Each host in new_masters must also appear in new_nodes

Contributor:

I'm not sure why scaleup should disallow scaling up dedicated masters.

Member:

Our masters must be nodes for various reasons: mainly they need to be on the SDN, and in 3.9 masters will actually be expected to run the console pods.

Member:

Do we actually run the node scaleup playbook to complete the node-specific tasks, or do we need to document the need to run node scaleup too?

michaelgugino (Contributor, Author):

@sdodson openshift-master/private/scaleup.yml and openshift-node/private/scaleup.yml both call openshift-node/private/config.yml, as sketched below. There is no need to run openshift-node/scaleup.yml in addition to this play if you're adding both masters and nodes; the nodes were already being configured at the same time as the masters, according to our plays.

If one is adding only nodes and no masters, then they can run openshift-node/scaleup.yml.
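
A rough sketch of that composition (assuming import_playbook is used; the surrounding plays are elided):

# playbooks/openshift-master/private/scaleup.yml (sketch; master plays elided)
- import_playbook: ../../openshift-node/private/config.yml  # configures the new nodes, including the new masters-as-nodes

# playbooks/openshift-node/private/scaleup.yml (sketch)
- import_playbook: config.yml  # the same openshift-node/private/config.yml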

Member:

Sounds good to me, thanks.

Contributor:

> in 3.9 masters will actually be expected to run the console pods.

Oh, I see, that makes sense.

> Do we actually run the node scaleup playbook to complete the node-specific tasks

Yes, the node is being set up there.

sdodson (Member) commented Jan 22, 2018:

/lgtm

openshift-ci-robot added the lgtm label (Indicates that a PR is ready to be merged) on Jan 22, 2018
openshift-merge-robot (Contributor) commented:

/test all [submit-queue is verifying that this PR is safe to merge]

openshift-merge-robot (Contributor) commented:

Automatic merge from submit-queue.

openshift-merge-robot merged commit 9dc31f1 into openshift:master on Jan 22, 2018
openshift-ci-robot commented:

@michaelgugino: The following tests failed, say /retest to rerun them all:

Test name                                                Commit   Rerun command
ci/openshift-jenkins/gcp                                 682ddb8  /test gcp
ci/openshift-jenkins/extended_conformance_install_crio   682ddb8  /test crio

