1 change: 1 addition & 0 deletions docs/reference/logstash-settings-file.md
@@ -48,6 +48,7 @@ The `logstash.yml` file includes these settings.
| `pipeline.workers` | The number of workers that will, in parallel, execute the filter and output stages of the pipeline. This setting uses the [`java.lang.Runtime.getRuntime.availableProcessors`](https://docs.oracle.com/javase/7/docs/api/java/lang/Runtime.md#availableProcessors()) value as a default if not overridden by `pipeline.workers` in `pipelines.yml` or `pipeline.workers` from `logstash.yml`. If you have modified this setting and see that events are backing up, or that the CPU is not saturated, consider increasing this number to better utilize machine processing power. | Number of the host’s CPU cores |
| `pipeline.batch.size` | The maximum number of events an individual worker thread will collect from inputs before attempting to execute its filters and outputs. Larger batch sizes are generally more efficient, but come at the cost of increased memory overhead. You may need to increase JVM heap space in the `jvm.options` config file. See [Logstash Configuration Files](/reference/config-setting-files.md) for more info. | `125` |
| `pipeline.batch.delay` | When creating pipeline event batches, how long in milliseconds to wait for each event before dispatching an undersized batch to pipeline workers. | `50` |
| `pipeline.batch.metrics.sampling_mode` | Controls how metrics about batch size are collected. These metrics measure the number of events in each batch to show whether batch fill reaches the configured `pipeline.batch.size`. They also provide an estimate of the memory size of each event. An example configuration appears after this table.<br><br>Note: This feature is in **technical preview** and may change in the future.<br><br>Current options are:<br><br>* `disabled`: disables collection.<br>* `minimal`: collects measurements only on a subset of batches (default).<br>* `full`: collects measurements on every processed batch.<br> | `minimal` |
| `pipeline.unsafe_shutdown` | When set to `true`, forces Logstash to exit during shutdown even if there are still inflight events in memory. By default, Logstash will refuse to quit until all received events have been pushed to the outputs. Enabling this option can lead to data loss during shutdown. | `false` |
| `pipeline.plugin_classloaders` | (Beta) Load Java plugins in independent classloaders to isolate their dependencies. | `false` |
| `pipeline.ordered` | Set the pipeline event ordering. Valid options are:<br><br>* `auto`. Automatically enables ordering if the `pipeline.workers` setting is `1`, and disables otherwise.<br>* `true`. Enforces ordering on the pipeline and prevents Logstash from starting if there are multiple workers.<br>* `false`. Disables the processing required to preserve order. Ordering will not be guaranteed, but you save the processing cost of preserving order.<br> | `auto` |
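For context beyond the diff, here is a minimal sketch of how the new setting sits in `logstash.yml` alongside the existing batch settings; the values are the illustrative defaults from the table above, not recommendations.

```yaml
# logstash.yml -- illustrative sketch, values taken from the defaults in the table above
pipeline.batch.size: 125                 # maximum events a worker collects per batch
pipeline.batch.delay: 50                 # ms to wait before dispatching an undersized batch
# Technical preview: measure every batch; "minimal" (default) samples a subset,
# and "disabled" turns collection off.
pipeline.batch.metrics.sampling_mode: full
```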
1 change: 1 addition & 0 deletions docs/reference/tuning-logstash.md
@@ -50,6 +50,7 @@ Make sure you’ve read the [Performance troubleshooting](/reference/performance
If you plan to modify the default pipeline settings, take into account the following suggestions:

* The total number of inflight events is determined by the product of the `pipeline.workers` and `pipeline.batch.size` settings. This product is referred to as the *inflight count*. Keep the value of the inflight count in mind as you adjust the `pipeline.workers` and `pipeline.batch.size` settings. Pipelines that intermittently receive large events at irregular intervals require sufficient memory to handle these spikes. Set the JVM heap space accordingly in the `jvm.options` config file (See [Logstash Configuration Files](/reference/config-setting-files.md) for more info).
* Consider enabling batch size metrics with the `pipeline.batch.metrics.sampling_mode` setting to help you understand the actual batch sizes being processed by your pipeline. These metrics can be useful when tuning the `pipeline.batch.size` setting; a sizing sketch follows this list. For more details, see [logstash.yml](/reference/logstash-settings-file.md).
* Measure each change to make sure it increases, rather than decreases, performance.
* Ensure that you leave enough memory available to cope with a sudden increase in event size. For example, an application that generates exceptions that are represented as large blobs of text.
* The number of workers may be set higher than the number of CPU cores since outputs often spend idle time in I/O wait conditions.
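To make the inflight-count arithmetic from the first bullet concrete, here is a hypothetical sizing sketch; the worker and batch values are examples only, not tuning advice.

```yaml
# logstash.yml -- hypothetical values, for illustration only
pipeline.workers: 8           # e.g. one worker per CPU core
pipeline.batch.size: 250      # raised from the default of 125
# Inflight count = pipeline.workers * pipeline.batch.size = 8 * 250 = 2000 events
# held in memory at once, so size the JVM heap in jvm.options to absorb 2000
# copies of your largest expected event.
pipeline.batch.metrics.sampling_mode: minimal   # technical preview; reports how full batches actually are
```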
15 changes: 15 additions & 0 deletions docs/static/spec/openapi/logstash-api.yaml
@@ -810,6 +810,7 @@ paths:
- stats for each configured filter or output stage
- info about config reload successes and failures (when [config reload](https://www.elastic.co/guide/en/logstash/current/reloading-config.html) is enabled)
- info about the persistent queue (when [persistent queues](https://www.elastic.co/guide/en/logstash/current/persistent-queues.html) are enabled)
- info about batch structure, in terms of event count and memory size (when the [pipeline.batch.metrics](https://www.elastic.co/docs/reference/logstash/logstash-settings-file.html) setting is enabled)

content:
application/json:
@@ -821,6 +822,13 @@
example:
pipelines:
beats-es:
batch:
event_count:
average:
lifetime: 115
byte_size:
average:
lifetime: 14820
events:
duration_in_millis: 365495
in: 216610
@@ -1095,6 +1103,13 @@ paths:
value:
pipelines:
heartbeat-ruby-stdout:
batch:
event_count:
average:
lifetime: 115
byte_size:
average:
lifetime: 14820
events:
queue_push_duration_in_millis: 159
in: 45
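The `batch` block added to the spec examples above is the same fragment in both endpoints. A commented copy follows; the field descriptions are my reading of the new setting's documentation and the field names, not wording taken from the spec.

```yaml
batch:
  event_count:
    average:
      lifetime: 115      # average number of events per batch over the pipeline's lifetime
  byte_size:
    average:
      lifetime: 14820    # estimated average batch memory size (bytes) over the same period
```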