16 changes: 7 additions & 9 deletions logging/cluster-logging-external.adoc
@@ -5,23 +5,22 @@ include::modules/common-attributes.adoc[]

toc::[]

By default, OpenShift Logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because that store does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Log Forwarding API.

By default, OpenShift Logging sends logs to the default internal Elasticsearch log store, defined in the `ClusterLogging` custom resource. If you want to forward logs to other log aggregators, you can use the {product-title} Log Forwarding API to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. You can send different types of logs to different systems, allowing you to control who in your organization can access each type. Optional TLS support ensures that you can send logs using secure communication as required by your organization.

When you forward logs externally, the Cluster Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.

If you want to forward logs to only the internal {product-title} Elasticsearch instance, do not configure the Log Forwarding API.
To send logs to other log aggregators, you use the {product-title} Log Forwarding API. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. You can send different types of logs to various systems, so different individuals can access each type. You can also enable TLS support to send logs securely, as required by your organization.

[NOTE]
====
Because the internal {product-title} Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the internal log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-store[Forward audit logs to the log store].
To send audit logs to the internal log store, use the Log Forwarding API as described in xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-store[Forward audit logs to the log store].
====

Alternatively, you can create a ConfigMap to use the xref:../logging/cluster-logging-external.html#cluster-logging-collector-legacy-fluentd_cluster-logging-external[Fluentd *forward* protocol] or the xref:../logging/cluster-logging-external.html#cluster-logging-collector-legacy-syslog_cluster-logging-external[syslog protocol] to send logs to external systems. However, these methods for forwarding logs are deprecated in {product-title} and will be removed in a future release.
When you forward logs externally, the Cluster Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.
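
For orientation, the following is a minimal sketch of a `ClusterLogForwarder` custom resource that forwards application logs over the Fluentd *forward* protocol. The output name and endpoint URL are hypothetical placeholders:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance # by convention, the CR is named instance in the openshift-logging namespace
  namespace: openshift-logging
spec:
  outputs:
  - name: fluentd-remote # hypothetical output name
    type: fluentdForward
    url: tls://fluentd.example.com:24224 # hypothetical external Fluentd endpoint
  pipelines:
  - name: app-logs-to-remote
    inputRefs:
    - application
    outputRefs:
    - fluentd-remote
----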

Alternatively, you can create a config map to use the xref:../logging/cluster-logging-external.html#cluster-logging-collector-legacy-fluentd_cluster-logging-external[Fluentd *forward* protocol] or the xref:../logging/cluster-logging-external.html#cluster-logging-collector-legacy-syslog_cluster-logging-external[syslog protocol] to send logs to external systems. However, these methods for forwarding logs are deprecated in {product-title} and will be removed in a future release.

[IMPORTANT]
====
You cannot use the ConfigMap methods and the Log Forwarding API in the same cluster.
You cannot use the config map methods and the Log Forwarding API in the same cluster.
====
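
As a reference for the deprecated methods, a legacy Fluentd *forward* configuration is supplied through a config map rather than a custom resource. The following sketch assumes the legacy `secure-forward` config map name and `secure-forward.conf` key; the receiver host and port are hypothetical:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: secure-forward # assumed name used by the deprecated Fluentd forward method
  namespace: openshift-logging
data:
  secure-forward.conf: | # Fluentd configuration embedded as a string
    <store>
      @type forward
      <server>
        name receiver # hypothetical external Fluentd receiver
        host fluentd.example.com
        port 24224
      </server>
    </store>
----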

// The following include statements pull in the module files that comprise
@@ -38,4 +37,3 @@ include::modules/cluster-logging-collector-log-forward-project.adoc[leveloffset=
include::modules/cluster-logging-collector-legacy-fluentd.adoc[leveloffset=+2]
include::modules/cluster-logging-collector-legacy-syslog.adoc[leveloffset=+2]
// modules/cluster-logging-collector-log-forward-update.adoc[leveloffset=+2]

2 changes: 1 addition & 1 deletion logging/cluster-logging.adoc
@@ -16,7 +16,7 @@ OpenShift Logging aggregates the following types of logs:

* `application` - Container logs generated by user applications running in the cluster, except infrastructure container applications.
* `infrastructure` - Logs generated by infrastructure components running in the cluster and {product-title} nodes, such as journal logs. Infrastructure components are pods that run in the `openshift*`, `kube*`, or `default` projects.
* `audit` - Logs generated by the node audit system (auditd), which are stored in the */var/log/audit/audit.log* file, and the audit logs from the Kubernetes apiserver and the OpenShift apiserver.
* `audit` - Logs generated by auditd, the node audit system, which are stored in the */var/log/audit/audit.log* file, and the audit logs from the Kubernetes apiserver and the OpenShift apiserver.

[NOTE]
====
15 changes: 7 additions & 8 deletions modules/cluster-logging-collector-log-forwarding-about.adoc
@@ -5,7 +5,7 @@
[id="cluster-logging-collector-log-forwarding-about_{context}"]
= About forwarding logs to third-party systems

Forwarding cluster logs to external third-party systems requires a combination of _outputs_ and _pipelines_ specified in a `ClusterLogForwarder` custom resource (CR) to send logs to specific endpoints inside and outside of your {product-title} cluster. You can also use _inputs_ to forward the application logs associated with a specific project to an endpoint.
Forwarding cluster logs to external third-party systems requires a combination of _outputs_ and _pipelines_ specified in a `ClusterLogForwarder` custom resource (CR) to send logs to specific endpoints inside and outside of your {product-title} cluster. You can also use _inputs_ to forward the application logs associated with a specific project to an endpoint.

* An _output_ is the destination for log data that you define, or where you want the logs sent. An output can be one of the following types:
+
@@ -30,7 +30,7 @@ If the output URL scheme requires TLS (HTTPS, TLS, or UDPS), then TLS server-sid

* `infrastructure`. Container logs from pods that run in the `openshift*`, `kube*`, or `default` projects and journal logs sourced from the node file system.

* `audit`. Logs generated by the node audit system (auditd) and the audit logs from the Kubernetes API server and the OpenShift API server.
* `audit`. Logs generated by auditd, the node audit system, and the audit logs from the Kubernetes API server and the OpenShift API server.
--
+
You can add labels to outbound log messages by using `key:value` pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message.
@@ -43,17 +43,17 @@ Note the following:

* If a `ClusterLogForwarder` object exists, logs are not forwarded to the default Elasticsearch instance unless there is a pipeline with the `default` output, as in the minimal sketch after this list.

* If you want to forward all logs to only the internal {product-title} Elasticsearch instance, do not configure the Log Forwarding API.
* By default, OpenShift Logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because that store does not provide secure storage. If this default configuration meets your needs, do not configure the Log Forwarding API.

* If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the `application` and `audit` types, but do not specify a pipeline for the `infrastructure` type, `infrastructure` logs are dropped.

* You can use multiple types of outputs in the `ClusterLogForwarder` custom resource (CR) to send logs to servers that support different protocols.
* You can use multiple types of outputs in the `ClusterLogForwarder` custom resource (CR) to send logs to servers that support different protocols.

* The internal {product-title} Elasticsearch instance does not provide secure storage for audit logs. We recommend that you ensure the system to which you forward audit logs complies with your organizational and governmental regulations and is properly secured. OpenShift Logging does not comply with those regulations.

* You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration.
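
As a minimal sketch of the first point in the list above, the following hypothetical pipeline keeps all logs flowing to the internal log store by referencing the predefined `default` output:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: all-to-default # hypothetical pipeline name
    inputRefs: # forward all three log types
    - application
    - infrastructure
    - audit
    outputRefs:
    - default # predefined output for the internal Elasticsearch log store
----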

The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the `my-apps-logs` project to the internal Elasticsearch instance.
The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the `my-apps-logs` project to the internal Elasticsearch instance.

.Sample log forwarding outputs and pipelines
[source,yaml]
@@ -77,7 +77,7 @@ spec:
type: "kafka"
url: tls://kafka.secure.com:9093/app-topic
inputs: <6>
- name: my-app-logs
- name: my-app-logs
application:
namespaces:
- my-project
@@ -104,7 +104,7 @@ spec:
outputRefs:
- default
- inputRefs: <10>
- application
- application
outputRefs:
- kafka-app
labels:
@@ -147,4 +147,3 @@
== Fluentd log handling when the external log aggregator is unavailable

If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. {product-title} rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods.

2 changes: 1 addition & 1 deletion modules/cluster-logging-manual-rollout-rolling.adoc
@@ -5,7 +5,7 @@
[id="cluster-logging-manual-rollout-rolling_{context}"]
= Performing an Elasticsearch rolling cluster restart

Perform a rolling restart when you change the `elasticsearch` configmap
Perform a rolling restart when you change the `elasticsearch` config map
or any of the `elasticsearch-*` deployment configurations.

Also, a rolling restart is recommended if the nodes on which an Elasticsearch pod