diff --git a/modules/efk-logging-deploy-subscription.adoc b/modules/efk-logging-deploy-subscription.adoc
index 699997223556..101e2c366105 100644
--- a/modules/efk-logging-deploy-subscription.adoc
+++ b/modules/efk-logging-deploy-subscription.adoc
@@ -15,7 +15,7 @@ creates and manages the Elasticsearch cluster used by cluster logging.
 The {product-title} cluster logging solution requires that you install both the
 Cluster Logging Operator and Elasticsearch Operator. There is no use case in
 {product-title} for installing the operators individually.
-You *must* install the Elasticsearch Operator using the CLI following the directions below.
+You *must* install the Elasticsearch Operator using the CLI following the directions below. You can install the Cluster Logging Operator using the web console or CLI.
 ====
@@ -31,7 +31,7 @@ memory setting though this is not recommended for production deployments.
 [NOTE]
 ====
-You *must* install the Elasticsearch Operator using the CLI following the directions below.
+You *must* install the Elasticsearch Operator using the CLI following the directions below. You can install the Cluster Logging Operator using the web console or CLI.
 ====
@@ -71,7 +71,7 @@ For example:
 ----
 $ oc create -f eo-namespace.yaml
 ----
- 
+
 .. Create a Namespace for the Cluster Logging Operator (for example, `clo-namespace.yaml`):
 +
 [source,yaml]
@@ -109,7 +109,7 @@ $ oc create -f clo-namespace.yaml
 apiVersion: operators.coreos.com/v1
 kind: OperatorGroup
 metadata:
-  name: openshift-operators-redhat 
+  name: openshift-operators-redhat
   namespace: openshift-operators-redhat <1>
 spec: {}
 ----
@@ -121,7 +121,7 @@ spec: {}
 $ oc create -f eo-og.yaml
 ----
 
-.. Create a CatalogSourceConfig object YAML file (for example, `eo-csc.yaml`) to enable the Elasticsearch Operator on the cluster. 
+.. Create a CatalogSourceConfig object YAML file (for example, `eo-csc.yaml`) to enable the Elasticsearch Operator on the cluster.
 +
 .Example CatalogSourceConfig
 [source,yaml]
@@ -146,20 +146,20 @@ namespace specified in `targetNamespace`.
 $ oc create -f eo-csc.yaml
 ----
 
-.. Use the following commands to get the `channel` and `currentCSV` values required for the next step. 
+.. Use the following commands to get the `channel` and `currentCSV` values required for the next step.
 +
 ----
 $ oc get packagemanifest elasticsearch-operator -n openshift-marketplace -o jsonpath='{.status.channels[].name}'
 
 preview
 
-$ oc get packagemanifest elasticsearch-operator -n openshift-marketplace -o jsonpath='{.status.channels[].currentCSV}' 
+$ oc get packagemanifest elasticsearch-operator -n openshift-marketplace -o jsonpath='{.status.channels[].currentCSV}'
 
 elasticsearch-operator.v4.1.0
 ----
 
 .. Create a Subscription object YAML file (for example, `eo-sub.yaml`) to
-subscribe a Namespace to an Operator. 
+subscribe a Namespace to an Operator.
 +
 .Example Subscription
 [source,yaml]
@@ -175,7 +175,6 @@ spec:
   source: "elasticsearch"
   sourceNamespace: "openshift-operators-redhat" <1>
   name: "elasticsearch-operator"
-  startingCSV: "elasticsearch-operator.v4.1.0" <3>
 ----
 <1> You must specify the `openshift-operators-redhat` namespace for `namespace` and `sourceNameSpace`.
 <2> Specify the `.status.channels[].name` value from the previous step.
@@ -240,7 +239,7 @@ $ oc create -f eo-rbac.yaml
 
 The Elasticsearch operator is installed to each project in the cluster.
 
-. Install the Cluster Logging Operator using the {product-title} web console for best results: 
+. Install the Cluster Logging Operator using the {product-title} web console for best results:
 
 .. In the {product-title} web console, click *Catalog* -> *OperatorHub*.
@@ -255,7 +254,7 @@ Then, click *Subscribe*.
 
 .. Ensure that *Cluster Logging* is listed in the *openshift-logging* project with a *Status* of *InstallSucceeded*.
 
-.. Ensure that *Elasticsearch Operator* is listed in the *openshift-operator-redhat* project with a *Status* of *InstallSucceeded*. 
+.. Ensure that *Elasticsearch Operator* is listed in the *openshift-operator-redhat* project with a *Status* of *InstallSucceeded*. The Elasticsearch Operator is copied to all other projects.
 +
 [NOTE]
@@ -334,7 +333,7 @@ However, an unmanaged deployment does not receive updates until the cluster logg
 [NOTE]
 +
 ====
-The maximum number of Elasticsearch master nodes is three. If you specify a `nodeCount` greater than `3`, {product-title} creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded. 
+The maximum number of Elasticsearch master nodes is three. If you specify a `nodeCount` greater than `3`, {product-title} creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded.
 
 For example, if `nodeCount=4`, the following nodes are created:
@@ -372,4 +371,3 @@ You should see several pods for cluster logging, Elasticsearch, Fluentd, and Kib
 * fluentd-zqgqx
 * kibana-7fb4fd4cc9-bvt4p
 +
-
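
The NOTE in the last hunk explains how `nodeCount` in the ClusterLogging custom resource maps to master-eligible and data-only Elasticsearch nodes. As a point of reference only, here is a minimal sketch of where that field is set; it is not taken from this diff, and the instance name `instance` and the trimmed-down `logStore` stanza are assumptions for illustration.

.Example ClusterLogging custom resource (sketch, not part of this diff)
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance" # assumed conventional instance name
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 4 # per the NOTE: three master-eligible nodes plus one data-only node
      redundancyPolicy: "SingleRedundancy"
----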