56 changes: 53 additions & 3 deletions admin_solutions/master_node_config.adoc
@@ -69,7 +69,7 @@ xref:../admin_solutions/master_node_config.adoc#master-node-config-manual[manual
For this section, familiarity with Ansible is assumed.

Only a portion of the available host configuration options are
https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example[exposed to Ansible].
https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example[exposed to Ansible].
After an {product-title} installation, Ansible creates an
inventory file with some substituted values. To customize your {product-title} cluster, modify this inventory file and re-run the Ansible installer playbook.
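
For example, a minimal sketch of such an inventory edit (the hashed password is a placeholder; the variable shown is the one used in the htpasswd example later in this section):

----
# Excerpt from the generated inventory file
[OSEv3:vars]
# Override an exposed option; any other exposed option can be set the same way
openshift_master_htpasswd_users={'jsmith': '<htpasswd_hash>'}
----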

@@ -145,15 +145,15 @@ openshift_master_htpasswd_users={'jsmith': '$apr1$wIwXkFLI$bAygtKGmPOqaJftB', 'b
. Re-run the Ansible playbook for these modifications to take effect:
+
----
$ ansible-playbook -b -i ./hosts ~/src/openshift-ansible/playbooks/byo/config.yml
$ ansible-playbook -b -i ./hosts ~/src/openshift-ansible/playbooks/deploy_cluster.yml
----
+
The playbook updates the configuration and restarts the OpenShift master service to apply the changes.

You have now modified the master and node configuration files using Ansible, but this is just a simple use case. From here you can see which
xref:../admin_solutions/master_node_config.adoc#master-config-options[master] and
xref:../admin_solutions/master_node_config.adoc#node-config-options[node configuration] options are
https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example[exposed to Ansible] and customize your own Ansible inventory.
https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example[exposed to Ansible] and customize your own Ansible inventory.
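
As a sketch of that customization (the variable names below appear in the linked example inventory, but verify them there before use; the values are placeholders):

----
[OSEv3:vars]
# Extra arguments passed to the master API server
osm_api_server_args={'max-requests-inflight': ['400']}
# Extra arguments passed to the node kubelet
openshift_node_kubelet_args={'max-pods': ['250']}
----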

[[htpasswd]]
==== Using the `htpasswd` command
@@ -395,6 +395,56 @@ etcdConfig:
storageDirectory: /var/lib/origin/openshift.local.etcd
----

|`*etcdStorageConfig*`
|Contains information about how API resources are stored in etcd. These values are only relevant when etcd is the backing store for the cluster.

|`*imageConfig*`
a|Holds options that describe how to build image names for system components:

- `*Format*` (string): Describes how to determine image names for system components.
- `*Latest*` (boolean): Defines whether to attempt to use the latest system component images rather than the latest release.
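
For example, a minimal *_master-config.yaml_* stanza might look like the following sketch (the image format string is illustrative; use the value appropriate for your deployment):

----
imageConfig:
  # Template used to resolve system component image names
  format: openshift/origin-${component}:${version}
  latest: false
----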

|`*imagePolicyConfig*`
a|Controls limits and behavior for importing images:

- `*MaxImagesBulkImportedPerRepository*` (integer): Controls the number of images that are imported when a user does a bulk import of a Docker repository. This number is set low to prevent users from importing large numbers of images accidentally. This can be set to `-1` for no limit.
- `*DisableScheduledImport*` (boolean): Allows scheduled background import of images to be disabled.
- `*ScheduledImageImportMinimumIntervalSeconds*` (integer): The minimum number of seconds that can elapse between when image streams scheduled for background import are checked against the upstream repository. The default value is `900` (15 minutes).
- `*MaxScheduledImageImportsPerMinute*` (integer): The maximum number of image streams that can be imported in the background, per minute. The default value is `60`. This can be set to `-1` for unlimited imports.

https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example[This can be controlled with the Ansible inventory].
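
For example, a sketch of the corresponding *_master-config.yaml_* stanza (field names follow the option names above in camelCase; the bulk-import limit shown is illustrative, while the interval and per-minute values are the defaults stated above):

----
imagePolicyConfig:
  disableScheduledImport: false
  maxImagesBulkImportedPerRepository: 5
  maxScheduledImageImportsPerMinute: 60
  scheduledImageImportMinimumIntervalSeconds: 900
----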

|`*kubernetesMasterConfig*`
|Contains information about how to connect to the kubelet's KubernetesMasterConfig. If present, the Kubernetes master is started with this process.

|`*masterClients*`
a|Holds all the client connection information for controllers and other system components:

- `*OpenShiftLoopbackKubeConfig*` (string): The .kubeconfig file name for system components to loop back to this master.
- `*ExternalKubernetesKubeConfig*` (string): The .kubeconfig file name for proxying to Kubernetes.
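
For example, a sketch of the corresponding stanza (the file names are placeholders):

----
masterClients:
  # .kubeconfig used by system components to loop back to this master
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
  # Leave empty when not proxying to an external Kubernetes cluster
  externalKubernetesKubeConfig: ""
----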

|`*masterPublicURL*`
|The URL that clients use to access the {product-title} API server.

|`*networkConfig*`
a|Options to be passed to the compiled-in network plug-in. Many of the options here can be controlled in the Ansible inventory.

- `*NetworkPluginName*` (string)
- `*ClusterNetworkCIDR*` (string)
- `*HostSubnetLength*` (unsigned integer)
- `*ServiceNetworkCIDR*` (string)
- `*ExternalIPNetworkCIDRs*` (string array): Controls which values are acceptable for the service external IP field. If empty, no external IP may be set. It can contain a list of CIDRs that are checked for access. If a CIDR is prefixed with `!`, IPs in that CIDR are rejected. Rejections are applied first; then the IP is checked against one of the allowed CIDRs. For security purposes, ensure that this range does not overlap with your node, pod, or service CIDRs.

For example:
----
networkConfig:
clusterNetworkCIDR: 10.3.0.0/16
hostSubnetLength: 8
networkPluginName: example/openshift-ovs-subnet
# serviceNetworkCIDR must match kubernetesMasterConfig.servicesSubnet
serviceNetworkCIDR: 179.29.0.0/16
----
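
To illustrate the `*ExternalIPNetworkCIDRs*` rejection syntax described above, a sketch with placeholder CIDR values:

----
networkConfig:
  externalIPNetworkCIDRs:
  # Reject external IPs from this range
  - "!10.0.0.0/8"
  # Allow external IPs from this range
  - "172.46.0.0/16"
----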

|`*oauthConfig*`
a|If present, then the /oauth endpoint starts based on the defined parameters. For example:
----
8 changes: 4 additions & 4 deletions install_config/cluster_metrics.adoc
@@ -918,7 +918,7 @@ openshift_prometheus_node_selector={"region":"infra"}

Run the playbook:
----
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/byo/openshift-cluster/openshift-prometheus.yml
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/openshift-prometheus/config.yml
----

[[openshift-prometheus-additional-deploy]]
@@ -942,7 +942,7 @@ openshift_prometheus_node_selector={"${KEY}":"${VALUE}"}

Run the playbook:
----
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/byo/openshift-cluster/openshift-prometheus.yml
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/openshift-prometheus/config.yml
----

*Deploy Using a Non-default Namespace*
@@ -958,7 +958,7 @@ openshift_prometheus_namespace=${USER_PROJECT}

Run the playbook:
----
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/byo/openshift-cluster/openshift-prometheus.yml
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/openshift-prometheus/config.yml
----
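
Putting these options together, a sketch of the relevant inventory section for a non-default namespace and a custom node selector (the namespace and label values are placeholders):

----
[OSEv3:vars]
openshift_prometheus_namespace=prometheus-project
openshift_prometheus_node_selector={"region":"infra"}
----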

[[openshift-prometheus-web]]
@@ -1093,5 +1093,5 @@ gathered from the `http://${POD_IP}:7575/metrics` endpoint.
To undeploy Prometheus, run:

----
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/byo/openshift-cluster/openshift-prometheus.yml -e openshift_prometheus_state=absent
$ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/openshift-prometheus/config.yml -e openshift_prometheus_state=absent
----
54 changes: 28 additions & 26 deletions install_config/install/advanced_install.adoc
@@ -2288,18 +2288,17 @@ If you are not using a proxy, you can skip this step.
====

In {product-title}:
ifdef::openshift-enterprise[]

----
ifdef::openshift-enterprise[]
# ansible-playbook [-i /path/to/inventory] \
/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
----
/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
endif::[]
ifdef::openshift-origin[]
----
# ansible-playbook [-i /path/to/inventory] \
~/openshift-ansible/playbooks/byo/config.yml
----
~/openshift-ansible/playbooks/deploy_cluster.yml
endif::[]
----

If for any reason the installation fails, before re-running the installer, see
xref:installer-known-issues[Known Issues] to check for any specific
@@ -2363,7 +2362,7 @@ or workarounds.

You can use the `PLAYBOOK_FILE` environment variable to specify other playbooks
you want to run by using the containerized installer. The default value of the `PLAYBOOK_FILE` is
*_/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml_*, which is the
*_/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml_*, which is the
main cluster installation playbook, but you can set it to the path of another
playbook inside the container.

@@ -2375,7 +2374,7 @@ installation, use the following command:
# atomic install --system \
--storage=ostree \
--set INVENTORY_FILE=/path/to/inventory \
--set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \ <1>
--set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/openshift-checks/pre-install.yml \ <1>
--set OPTS="-v" \ <2>
ifdef::openshift-enterprise[]
registry.access.redhat.com/openshift3/ose-ansible:v3.7
@@ -2420,7 +2419,7 @@ $ docker run -t -u `id -u` \ <1>
-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ <2>
-v $HOME/ansible/hosts:/tmp/inventory:Z \ <3>
-e INVENTORY_FILE=/tmp/inventory \ <3>
-e PLAYBOOK_FILE=playbooks/byo/config.yml \ <4>
-e PLAYBOOK_FILE=playbooks/deploy_cluster.yml \ <4>
-e OPTS="-v" \ <5>
ifdef::openshift-enterprise[]
registry.access.redhat.com/openshift3/ose-ansible:v3.7
@@ -2465,8 +2464,8 @@ The inventory file can also be downloaded from a web server if you specify
the `INVENTORY_URL` environment variable, or generated dynamically using
`DYNAMIC_SCRIPT_URL` to specify an executable script that provides a
dynamic inventory.
<4> `-e PLAYBOOK_FILE=playbooks/byo/config.yml` specifies the playbook
to run (in this example, the BYO installer) as a relative path from the
<4> `-e PLAYBOOK_FILE=playbooks/deploy_cluster.yml` specifies the playbook
to run (in this example, the default installer) as a relative path from the
top level directory of *openshift-ansible* content. The full path from the
RPM can also be used, as well as the path to any other playbook file in
the container.
@@ -2477,7 +2476,7 @@ inside the container.
[[running-the-advanced-installation-individual-components]]
=== Running Individual Component Playbooks

The main installation playbook *_{pb-prefix}playbooks/byo/config.yml_* runs a
The main installation playbook *_{pb-prefix}playbooks/deploy_cluster.yml_* runs a
set of individual component playbooks in a specific order, and the installer
reports back at the end what phases you have gone through. If the installation
fails during a phase, you are notified on the screen along with the errors from
@@ -2500,46 +2499,49 @@ playbook is run:
|Playbook Name |File Location

|Health Check
|*_{pb-prefix}playbooks/byo/openshift-checks/pre-install.yml_*
|*_{pb-prefix}playbooks/openshift-checks/pre-install.yml_*

|etcd Install
|*_{pb-prefix}playbooks/byo/openshift-etcd/config.yml_*
|*_{pb-prefix}playbooks/openshift-etcd/config.yml_*

|NFS Install
|*_{pb-prefix}playbooks/byo/openshift-nfs/config.yml_*
|*_{pb-prefix}playbooks/openshift-nfs/config.yml_*

|Load Balancer Install
|*_{pb-prefix}playbooks/byo/openshift-loadbalancer/config.yml_*
|*_{pb-prefix}playbooks/openshift-loadbalancer/config.yml_*

|Master Install
|*_{pb-prefix}playbooks/byo/openshift-master/config.yml_*
|*_{pb-prefix}playbooks/openshift-master/config.yml_*

|Master Additional Install
|*_{pb-prefix}playbooks/byo/openshift-master/additional_config.yml_*
|*_{pb-prefix}playbooks/openshift-master/additional_config.yml_*

|Node Install
|*_{pb-prefix}playbooks/byo/openshift-node/config.yml_*
|*_{pb-prefix}playbooks/openshift-node/config.yml_*

|GlusterFS Install
|*_{pb-prefix}playbooks/byo/openshift-glusterfs/config.yml_*
|*_{pb-prefix}playbooks/openshift-glusterfs/config.yml_*

|Hosted Install
|*_{pb-prefix}playbooks/byo/openshift-cluster/openshift-hosted.yml_*
|*_{pb-prefix}playbooks/openshift-hosted/config.yml_*

|Web Console Install
|*_{pb-prefix}playbooks/openshift-web-console/config.yml_*

|Metrics Install
|*_{pb-prefix}playbooks/byo/openshift-cluster/openshift-metrics.yml_*
|*_{pb-prefix}playbooks/openshift-metrics/config.yml_*

|Logging Install
|*_{pb-prefix}playbooks/byo/openshift-cluster/openshift-logging.yml_*
|*_{pb-prefix}playbooks/openshift-logging/config.yml_*

|Prometheus Install
|*_{pb-prefix}playbooks/byo/openshift-cluster/openshift-prometheus.yml_*
|*_{pb-prefix}playbooks/openshift-prometheus/config.yml_*

|Service Catalog Install
|*_{pb-prefix}playbooks/byo/openshift-cluster/service-catalog.yml_*
|*_{pb-prefix}playbooks/openshift-service-catalog/config.yml_*

|Management Install
|*_{pb-prefix}playbooks/byo/openshift-management/config.yml_*
|*_{pb-prefix}playbooks/openshift-management/config.yml_*
|===
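
For example, to run only one phase from the table above against an existing inventory, invoke its playbook directly (a sketch using the RPM installation path; substitute the playbook and inventory paths for your environment):

----
# ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
----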

[[advanced-verifying-the-installation]]
2 changes: 1 addition & 1 deletion install_config/install/stand_alone_registry.adoc
@@ -275,7 +275,7 @@ After you have configured Ansible by defining an inventory file in
following playbook:

----
# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
----
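
After the playbook completes, a quick sanity check might look like the following sketch (the `default` project is an assumption; the registry components can be deployed to a different project):

----
$ oc get pods -n default
$ oc get svc docker-registry -n default
----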

[NOTE]
16 changes: 8 additions & 8 deletions install_config/redeploying_certificates.adoc
@@ -289,7 +289,7 @@ To redeploy master, etcd, and node certificates using the current

----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-certificates.yml
/usr/share/ansible/openshift-ansible/playbooks/redeploy-certificates.yml
----
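
After the playbook finishes, a spot check of one redeployed certificate might look like this sketch (the path shown is the usual master server certificate location and is an assumption here):

----
# openssl x509 -noout -enddate -in /etc/origin/master/master.server.crt
----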

[[redeploying-new-custom-ca]]
Expand Down Expand Up @@ -336,7 +336,7 @@ step.
+
----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-openshift-ca.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-master/redeploy-openshift-ca.yml
----

With the new {product-title} CA in place, you can then use the
Expand Down Expand Up @@ -366,7 +366,7 @@ To redeploy a newly generated etcd CA:
+
----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-etcd-ca.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/redeploy-ca.yml
----

With the new etcd CA in place, you can then use the
Expand All @@ -385,7 +385,7 @@ file:

----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-master-certificates.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-master/redeploy-certificates.yml
----

[[redeploying-etcd-certificates]]
Expand All @@ -404,7 +404,7 @@ file:

----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-etcd-certificates.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/redeploy-certificates.yml
----

[[redeploying-node-certificates]]
Expand All @@ -418,7 +418,7 @@ file:

----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-node-certificates.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-node/redeploy-certificates.yml
----

[[redeploying-registry-router-certificates]]
Expand All @@ -439,7 +439,7 @@ inventory file:

----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-registry-certificates.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/redeploy-registry-certificates.yml
----

[[redeploying-router-certificates]]
Expand All @@ -450,7 +450,7 @@ inventory file:

----
$ ansible-playbook -i <inventory_file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-router-certificates.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/redeploy-router-certificates.yml
----

[[redeploying-custom-registry-or-router-certificates]]
@@ -224,7 +224,13 @@ $ gluster volume info
== Dynamically Provision a Volume
[NOTE]
====
If you installed {product-title} by using the link:https://github.com/openshift/openshift-ansible/tree/master/inventory/byo[BYO (Bring your own) OpenShift Ansible inventory configuration files] for either link:https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.byo.glusterfs.native.example[native] or link:https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.byo.glusterfs.external.example[external] GlusterFS instance, the GlusterFS StorageClass automatically get created during the installation. For such cases you can skip the following storage class creation steps and directly proceed with creating persistent volume claim instruction.
If you installed {product-title} by using the
link:https://github.com/openshift/openshift-ansible/tree/master/inventory/[OpenShift Ansible example inventory configuration files] for either
link:https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.glusterfs.native.example[native] or
link:https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.glusterfs.external.example[external]
GlusterFS instance, the GlusterFS StorageClass is created automatically during
the installation. In such cases, you can skip the following storage class creation
steps and proceed directly to the persistent volume claim instructions.
====
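
To confirm that the automatically created StorageClass is present before skipping ahead, a check such as the following sketch can be used (the resulting StorageClass name depends on your inventory settings):

----
$ oc get storageclass
----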

. Create a `StorageClass` object definition. The following definition is based on the
2 changes: 1 addition & 1 deletion install_config/upgrading/automated_upgrades.adoc
@@ -558,7 +558,7 @@ xref:../../install_config/install/advanced_install.adoc#install-config-install-a
+
----
# ansible-playbook -i </path/to/inventory/file> \
/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/service-catalog.yml
/usr/share/ansible/openshift-ansible/playbooks/openshift-service-catalog/config.yml
----
// end::automated-service-catalog-upgrade-steps[]
