From c59b918377d0e2fd74acadd52a459da65bde2528 Mon Sep 17 00:00:00 2001 From: Russell Teague Date: Tue, 23 Jan 2018 15:15:48 -0500 Subject: [PATCH] [enterprise-3.9] Update 'byo' references based on playbook refactoring (cherry picked from commit ad443a02059483b7439f33a348ee270fa3c008c7) xref:https://github.com/openshift/openshift-docs/pull/7274 --- admin_solutions/master_node_config.adoc | 56 ++++++++++++++++++- install_config/cluster_metrics.adoc | 8 +-- install_config/install/advanced_install.adoc | 54 +++++++++--------- .../install/stand_alone_registry.adoc | 2 +- install_config/redeploying_certificates.adoc | 16 +++--- ...nerized_heketi_with_dedicated_gluster.adoc | 8 ++- .../upgrading/automated_upgrades.adoc | 2 +- 7 files changed, 102 insertions(+), 44 deletions(-) diff --git a/admin_solutions/master_node_config.adoc b/admin_solutions/master_node_config.adoc index a4ded3d6d589..1766ef38a298 100644 --- a/admin_solutions/master_node_config.adoc +++ b/admin_solutions/master_node_config.adoc @@ -69,7 +69,7 @@ xref:../admin_solutions/master_node_config.adoc#master-node-config-manual[manual For this section, familiarity with Ansible is assumed. Only a portion of the available host configuration options are -https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example[exposed to Ansible]. +https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example[exposed to Ansible]. After an {product-title} install, Ansible creates an inventory file with some substituted values. Modifying this inventory file and re-running the Ansible installer playbook is how you customize your {product-title} cluster. @@ -145,7 +145,7 @@ openshift_master_htpasswd_users={'jsmith': '$apr1$wIwXkFLI$bAygtKGmPOqaJftB', 'b . 
Re-run the Ansible playbook for these modifications to take effect: + ---- -$ ansible-playbook -b -i ./hosts ~/src/openshift-ansible/playbooks/byo/config.yml +$ ansible-playbook -b -i ./hosts ~/src/openshift-ansible/playbooks/deploy_cluster.yml ---- + The playbook updates the configuration, and restarts the OpenShift master service to apply the changes. @@ -153,7 +153,7 @@ The playbook updates the configuration, and restarts the OpenShift master servic You have now modified the master and node configuration files using Ansible, but this is just a simple use case. From here you can see which xref:../admin_solutions/master_node_config.adoc#master-config-options[master] and xref:../admin_solutions/master_node_config.adoc#node-config-options[node configuration] options are -https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example[exposed to Ansible] and customize your own Ansible inventory. +https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example[exposed to Ansible] and customize your own Ansible inventory. [[htpasswd]] ==== Using the `htpasswd` command @@ -395,6 +395,56 @@ etcdConfig: storageDirectory: /var/lib/origin/openshift.local.etcd ---- +|`*etcdStorageConfig*` +|Contains information about how API resources are stored in etcd. These values are only relevant when etcd is the backing store for the cluster. + +|`*imageConfig*` +a|Holds options that describe how to build image names for system components: + +- `*Format*` (string): Describes how to determine image names for system components. +- `*Latest*` (boolean): Defines whether to attempt to use the latest system component images or the latest release. + +|`*imagePolicyConfig*` +a|Controls limits and behavior for importing images: + +- `*MaxImagesBulkImportedPerRepository*` (integer): Controls the number of images that are imported when a user does a bulk import of a Docker repository. 
This number is set low to prevent users from importing large numbers of images accidentally. This can be set to `-1` for no limit. +- `*DisableScheduledImport*` (boolean): Allows scheduled background import of images to be disabled. +- `*ScheduledImageImportMinimumIntervalSeconds*` (integer): The minimum number of seconds that can elapse between when image streams scheduled for background import are checked against the upstream repository. The default value is `900` (15 minutes). +- `*MaxScheduledImageImportsPerMinute*` (integer): The maximum number of image streams that can be imported in the background, per minute. The default value is `60`. This can be set to `-1` for unlimited imports. + +https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example[This can be controlled with the Ansible inventory]. + +|`*kubernetesMasterConfig*` +|Contains the Kubernetes master configuration, including how to connect to kubelets. If present, the Kubernetes master is started with this configuration. + +|`*masterClients*` +a|Holds all the client connection information for controllers and other system components: + +- `*OpenShiftLoopbackKubeConfig*` (string): the .kubeconfig filename for system components to loop back to this master. +- `*ExternalKubernetesKubeConfig*` (string): the .kubeconfig filename for proxying to Kubernetes. + +|`*masterPublicURL*` +|The URL that clients use to access the {product-title} API server. + +|`*networkConfig*` +a|To be passed to the compiled-in network plug-in. Many of the options here can be controlled in the Ansible inventory. + +- `*NetworkPluginName*` (string) +- `*ClusterNetworkCIDR*` (string) +- `*HostSubnetLength*` (unsigned integer) +- `*ServiceNetworkCIDR*` (string) +- `*ExternalIPNetworkCIDRs*` (string array): Controls which values are acceptable for the service external IP field. If empty, no external IP may be set. It can contain a list of CIDRs which are checked for access. 
If a CIDR is prefixed with `!`, then IPs in that CIDR are rejected. Rejections are applied first, then the IP is checked against one of the allowed CIDRs. For security purposes, you should ensure this range does not overlap with your nodes, pods, or service CIDRs. + +For example: +---- +networkConfig: + clusterNetworkCIDR: 10.3.0.0/16 + hostSubnetLength: 8 + networkPluginName: example/openshift-ovs-subnet +# serviceNetworkCIDR must match kubernetesMasterConfig.servicesSubnet + serviceNetworkCIDR: 179.29.0.0/16 +---- + |`*oauthConfig*` a|If present, then the /oauth endpoint starts based on the defined parameters. For example: ---- diff --git a/install_config/cluster_metrics.adoc b/install_config/cluster_metrics.adoc index f206459186e1..8ff75b369caa 100644 --- a/install_config/cluster_metrics.adoc +++ b/install_config/cluster_metrics.adoc @@ -918,7 +918,7 @@ openshift_prometheus_node_selector={"region":"infra"} Run the playbook: ---- -$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/byo/openshift-cluster/openshift-prometheus.yml +$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/openshift-prometheus/config.yml ---- [[openshift-prometheus-additional-deploy]] @@ -942,7 +942,7 @@ openshift_prometheus_node_selector={"${KEY}":"${VALUE}"} Run the playbook: ---- -$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/byo/openshift-cluster/openshift-prometheus.yml +$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/openshift-prometheus/config.yml ---- *Deploy Using a Non-default Namespace* @@ -958,7 +958,7 @@ openshift_prometheus_namespace=${USER_PROJECT} Run the playbook: ---- -$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/byo/openshift-cluster/openshift-prometheus.yml +$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/openshift-prometheus/config.yml ---- [[openshift-prometheus-web]] @@ -1093,5 +1093,5 @@ gathered from the `http://${POD_IP}:7575/metrics` endpoint. 
To undeploy Prometheus, run: ---- -$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/byo/openshift-cluster/openshift-prometheus.yml -e openshift_prometheus_state=absent +$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/openshift-prometheus/config.yml -e openshift_prometheus_state=absent ---- diff --git a/install_config/install/advanced_install.adoc b/install_config/install/advanced_install.adoc index b04a129c3167..4e9d25539bec 100644 --- a/install_config/install/advanced_install.adoc +++ b/install_config/install/advanced_install.adoc @@ -2288,18 +2288,17 @@ If you are not using a proxy, you can skip this step. ==== In {product-title}: -ifdef::openshift-enterprise[] + ---- +ifdef::openshift-enterprise[] # ansible-playbook [-i /path/to/inventory] \ - /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml ----- + /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml endif::[] ifdef::openshift-origin[] ----- # ansible-playbook [-i /path/to/inventory] \ - ~/openshift-ansible/playbooks/byo/config.yml ----- + ~/openshift-ansible/playbooks/deploy_cluster.yml endif::[] +---- If for any reason the installation fails, before re-running the installer, see xref:installer-known-issues[Known Issues] to check for any specific @@ -2363,7 +2362,7 @@ or workarounds. You can use the `PLAYBOOK_FILE` environment variable to specify other playbooks you want to run by using the containerized installer. The default value of the `PLAYBOOK_FILE` is -*_/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml_*, which is the +*_/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml_*, which is the main cluster installation playbook, but you can set it to the path of another playbook inside the container. 
@@ -2375,7 +2374,7 @@ installation, use the following command: # atomic install --system \ --storage=ostree \ --set INVENTORY_FILE=/path/to/inventory \ - --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \ <1> + --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/openshift-checks/pre-install.yml \ <1> --set OPTS="-v" \ <2> ifdef::openshift-enterprise[] registry.access.redhat.com/openshift3/ose-ansible:v3.7 @@ -2420,7 +2419,7 @@ $ docker run -t -u `id -u` \ <1> -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ <2> -v $HOME/ansible/hosts:/tmp/inventory:Z \ <3> -e INVENTORY_FILE=/tmp/inventory \ <3> - -e PLAYBOOK_FILE=playbooks/byo/config.yml \ <4> + -e PLAYBOOK_FILE=playbooks/deploy_cluster.yml \ <4> -e OPTS="-v" \ <5> ifdef::openshift-enterprise[] registry.access.redhat.com/openshift3/ose-ansible:v3.7 @@ -2465,8 +2464,8 @@ The inventory file can also be downloaded from a web server if you specify the `INVENTORY_URL` environment variable, or generated dynamically using `DYNAMIC_SCRIPT_URL` to specify an executable script that provides a dynamic inventory. -<4> `-e PLAYBOOK_FILE=playbooks/byo/config.yml` specifies the playbook -to run (in this example, the BYO installer) as a relative path from the +<4> `-e PLAYBOOK_FILE=playbooks/deploy_cluster.yml` specifies the playbook +to run (in this example, the default installer) as a relative path from the top level directory of *openshift-ansible* content. The full path from the RPM can also be used, as well as the path to any other playbook file in the container. @@ -2477,7 +2476,7 @@ inside the container. 
[[running-the-advanced-installation-individual-components]] === Running Individual Component Playbooks -The main installation playbook *_{pb-prefix}playbooks/byo/config.yml_* runs a +The main installation playbook *_{pb-prefix}playbooks/deploy_cluster.yml_* runs a set of individual component playbooks in a specific order, and the installer reports back at the end what phases you have gone through. If the installation fails during a phase, you are notified on the screen along with the errors from @@ -2500,46 +2499,49 @@ playbook is run: |Playbook Name |File Location |Health Check -|*_{pb-prefix}playbooks/byo/openshift-checks/pre-install.yml_* +|*_{pb-prefix}playbooks/openshift-checks/pre-install.yml_* |etcd Install -|*_{pb-prefix}playbooks/byo/openshift-etcd/config.yml_* +|*_{pb-prefix}playbooks/openshift-etcd/config.yml_* |NFS Install -|*_{pb-prefix}playbooks/byo/openshift-nfs/config.yml_* +|*_{pb-prefix}playbooks/openshift-nfs/config.yml_* |Load Balancer Install -|*_{pb-prefix}playbooks/byo/openshift-loadbalancer/config.yml_* +|*_{pb-prefix}playbooks/openshift-loadbalancer/config.yml_* |Master Install -|*_{pb-prefix}playbooks/byo/openshift-master/config.yml_* +|*_{pb-prefix}playbooks/openshift-master/config.yml_* |Master Additional Install -|*_{pb-prefix}playbooks/byo/openshift-master/additional_config.yml_* +|*_{pb-prefix}playbooks/openshift-master/additional_config.yml_* |Node Install -|*_{pb-prefix}playbooks/byo/openshift-node/config.yml_* +|*_{pb-prefix}playbooks/openshift-node/config.yml_* |GlusterFS Install -|*_{pb-prefix}playbooks/byo/openshift-glusterfs/config.yml_* +|*_{pb-prefix}playbooks/openshift-glusterfs/config.yml_* |Hosted Install -|*_{pb-prefix}playbooks/byo/openshift-cluster/openshift-hosted.yml_* +|*_{pb-prefix}playbooks/openshift-hosted/config.yml_* + +|Web Console Install +|*_{pb-prefix}playbooks/openshift-web-console/config.yml_* |Metrics Install -|*_{pb-prefix}playbooks/byo/openshift-cluster/openshift-metrics.yml_* 
+|*_{pb-prefix}playbooks/openshift-metrics/config.yml_* |Logging Install -|*_{pb-prefix}playbooks/byo/openshift-cluster/openshift-logging.yml_* +|*_{pb-prefix}playbooks/openshift-logging/config.yml_* |Prometheus Install -|*_{pb-prefix}playbooks/byo/openshift-cluster/openshift-prometheus.yml_* +|*_{pb-prefix}playbooks/openshift-prometheus/config.yml_* |Service Catalog Install -|*_{pb-prefix}playbooks/byo/openshift-cluster/service-catalog.yml_* +|*_{pb-prefix}playbooks/openshift-service-catalog/config.yml_* |Management Install -|*_{pb-prefix}playbooks/byo/openshift-management/config.yml_* +|*_{pb-prefix}playbooks/openshift-management/config.yml_* |=== [[advanced-verifying-the-installation]] diff --git a/install_config/install/stand_alone_registry.adoc b/install_config/install/stand_alone_registry.adoc index f807f1e8e91e..b709f8ca7aac 100644 --- a/install_config/install/stand_alone_registry.adoc +++ b/install_config/install/stand_alone_registry.adoc @@ -275,7 +275,7 @@ After you have configured Ansible by defining an inventory file in following playbook: ---- -# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml +# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml ---- [NOTE] diff --git a/install_config/redeploying_certificates.adoc b/install_config/redeploying_certificates.adoc index 22a5e46105af..a01f2d8332b8 100644 --- a/install_config/redeploying_certificates.adoc +++ b/install_config/redeploying_certificates.adoc @@ -289,7 +289,7 @@ To redeploy master, etcd, and node certificates using the current ---- $ ansible-playbook -i \ - /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-certificates.yml + /usr/share/ansible/openshift-ansible/playbooks/redeploy-certificates.yml ---- [[redeploying-new-custom-ca]] @@ -336,7 +336,7 @@ step. 
+ ---- $ ansible-playbook -i \ - /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-openshift-ca.yml + /usr/share/ansible/openshift-ansible/playbooks/openshift-master/redeploy-openshift-ca.yml ---- With the new {product-title} CA in place, you can then use the @@ -366,7 +366,7 @@ To redeploy a newly generated etcd CA: + ---- $ ansible-playbook -i \ - /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-etcd-ca.yml + /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/redeploy-ca.yml ---- With the new etcd CA in place, you can then use the @@ -385,7 +385,7 @@ file: ---- $ ansible-playbook -i \ - /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-master-certificates.yml + /usr/share/ansible/openshift-ansible/playbooks/openshift-master/redeploy-certificates.yml ---- [[redeploying-etcd-certificates]] @@ -404,7 +404,7 @@ file: ---- $ ansible-playbook -i \ - /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-etcd-certificates.yml + /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/redeploy-certificates.yml ---- [[redeploying-node-certificates]] @@ -418,7 +418,7 @@ file: ---- $ ansible-playbook -i \ - /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-node-certificates.yml + /usr/share/ansible/openshift-ansible/playbooks/openshift-node/redeploy-certificates.yml ---- [[redeploying-registry-router-certificates]] @@ -439,7 +439,7 @@ inventory file: ---- $ ansible-playbook -i \ - /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-registry-certificates.yml + /usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/redeploy-registry-certificates.yml ---- [[redeploying-router-certificates]] @@ -450,7 +450,7 @@ inventory file: ---- $ ansible-playbook -i \ - /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-router-certificates.yml + 
/usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/redeploy-router-certificates.yml ---- [[redeploying-custom-registry-or-router-certificates]] diff --git a/install_config/storage_examples/containerized_heketi_with_dedicated_gluster.adoc b/install_config/storage_examples/containerized_heketi_with_dedicated_gluster.adoc index 3139d50632ae..da3d0ffb259b 100644 --- a/install_config/storage_examples/containerized_heketi_with_dedicated_gluster.adoc +++ b/install_config/storage_examples/containerized_heketi_with_dedicated_gluster.adoc @@ -224,7 +224,13 @@ $ gluster volume info == Dynamically Provision a Volume [NOTE] ==== -If you installed {product-title} by using the link:https://github.com/openshift/openshift-ansible/tree/master/inventory/byo[BYO (Bring your own) OpenShift Ansible inventory configuration files] for either link:https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.byo.glusterfs.native.example[native] or link:https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.byo.glusterfs.external.example[external] GlusterFS instance, the GlusterFS StorageClass automatically get created during the installation. For such cases you can skip the following storage class creation steps and directly proceed with creating persistent volume claim instruction. +If you installed {product-title} by using the +link:https://github.com/openshift/openshift-ansible/tree/master/inventory/[OpenShift Ansible example inventory configuration files] for either +link:https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.glusterfs.native.example[native] or +link:https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.glusterfs.external.example[external] +GlusterFS instance, the GlusterFS StorageClass automatically gets created during +the installation. 
For such cases, you can skip the following storage class creation +steps and proceed directly to the persistent volume claim creation instructions. ==== . Create a `StorageClass` object definition. The following definition is based on the diff --git a/install_config/upgrading/automated_upgrades.adoc b/install_config/upgrading/automated_upgrades.adoc index 9f50a9b2c01c..a76f71ce816b 100644 --- a/install_config/upgrading/automated_upgrades.adoc +++ b/install_config/upgrading/automated_upgrades.adoc @@ -558,7 +558,7 @@ xref:../../install_config/install/advanced_install.adoc#install-config-install-a + ---- # ansible-playbook -i \ - /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/service-catalog.yml + /usr/share/ansible/openshift-ansible/playbooks/openshift-service-catalog/config.yml ---- // end::automated-service-catalog-upgrade-steps[]
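As a side note (not part of the patch itself): the renames applied above follow a small, consistent pattern, which can be handy when grepping other documents for stale `byo` paths. A quick sketch of the old-to-new mapping for some of the playbooks touched in this patch, with a hypothetical helper function (the dict and helper are ours, not part of openshift-ansible):

```python
# Illustrative only: pre-refactor 'byo' playbook paths mapped to the
# refactored locations recorded in the hunks above.
BYO_TO_REFACTORED = {
    "playbooks/byo/config.yml": "playbooks/deploy_cluster.yml",
    "playbooks/byo/openshift-checks/pre-install.yml": "playbooks/openshift-checks/pre-install.yml",
    "playbooks/byo/openshift-cluster/openshift-prometheus.yml": "playbooks/openshift-prometheus/config.yml",
    "playbooks/byo/openshift-cluster/openshift-metrics.yml": "playbooks/openshift-metrics/config.yml",
    "playbooks/byo/openshift-cluster/openshift-logging.yml": "playbooks/openshift-logging/config.yml",
    "playbooks/byo/openshift-cluster/service-catalog.yml": "playbooks/openshift-service-catalog/config.yml",
    "playbooks/byo/openshift-cluster/redeploy-certificates.yml": "playbooks/redeploy-certificates.yml",
}

def new_playbook_path(old_path: str) -> str:
    """Return the refactored path for a known 'byo' playbook; unknown paths pass through."""
    return BYO_TO_REFACTORED.get(old_path, old_path)
```

Most component playbooks moved from `playbooks/byo/openshift-cluster/openshift-<component>.yml` to `playbooks/openshift-<component>/config.yml`, while the main installer became `playbooks/deploy_cluster.yml`.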