Merged
19 changes: 7 additions & 12 deletions docs/release-notes/index.md
@@ -25,11 +25,9 @@ To check for security updates, go to [Security announcements for the Elastic sta

### Features and enhancements [logstash-9.2.0-features-enhancements]

#### Highlights
#### Persistent queue (PQ) compression [logstash-9.2.0-pq-compression]

##### Logstash 9.2 introduces PQ compression
Comment on lines 26 to -30

**Contributor Author:** Three heading levels with no content in between can signal a structural issue. I restructured the content to [hopefully] achieve the emphasis you were going for, @robbavey, but eliminate a heading level. LMKWYT

**Member:** Much better!


In Logstash 9.2, we’ve added support for compression to the [Persisted Queue](https://www.elastic.co/docs/reference/logstash/persistent-queues), allowing you to spend some CPU in exchange for reduced disk IO. This can help reduce cost and increase throughput in situations where your hardware is rate-limited or metered.
We’ve added support for compression to the [Persistent Queue (PQ)](https://www.elastic.co/docs/reference/logstash/persistent-queues), allowing you to spend some CPU in exchange for reduced disk IO. This can help reduce cost and increase throughput in situations where your hardware is rate-limited or metered.

PQ compression is implemented using the industry-standard, highly efficient ZSTD (Zstandard) algorithm, and can be activated at one of three levels:

@@ -39,16 +37,16 @@ PQ compression is implemented using the industry-standard highly-efficient ZSTD

The effects of these settings will depend on the shape and size of each pipeline’s events. To help you tune your configuration to meet your own requirements, we have added [queue metrics](https://www.elastic.co/docs/api/doc/logstash/operation/operation-nodestatspipelines) exposing the effective compression ratio and the amount of CPU that is being spent to achieve it.
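Reading those queue metrics from the Node Stats API might look like the following sketch. The response field names used here (`compression`, `ratio`, `encode_millis`) are illustrative assumptions, not the confirmed schema; check the queue metrics documentation for the real response shape.

```python
# Sketch: extracting PQ compression metrics from a Node Stats response.
# NOTE: the field names under "queue.compression" are assumptions for
# illustration only -- verify them against the documented API response.
import json

sample_response = """
{
  "pipelines": {
    "main": {
      "queue": {
        "type": "persisted",
        "compression": {
          "ratio": 3.2,
          "encode_millis": 1450
        }
      }
    }
  }
}
"""

stats = json.loads(sample_response)
for name, pipeline in stats["pipelines"].items():
    compression = pipeline.get("queue", {}).get("compression", {})
    if compression:
        # Effective ratio and the CPU time spent achieving it
        print(f"{name}: ratio={compression['ratio']}, cpu_ms={compression['encode_millis']}")
```

A higher ratio at acceptable CPU cost suggests compression is paying for itself on that pipeline's event shapes.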

PQ Compression has been introduced as an opt-in feature in 9.2 because a PQ that contains one or more compressed events cannot be read by previous versions of Logstash, making the feature a rollback-barrier. We recommend validating your pipelines with Logstash 9.2+ before enabling PQ compression, so that you have the freedom to roll back if you encounter any issues with your pipelines.
PQ Compression has been introduced as an opt-in feature in 9.2 because a PQ that contains one or more compressed events cannot be read by previous versions of Logstash, making the feature a rollback-barrier. We recommend validating your pipelines with Logstash 9.2 (or later) before enabling PQ compression so that you have the freedom to roll back if you encounter any issues with your pipelines.

Related:
* Persisted Queue: improved serialization to be more compact by default (note: queues containing these compact events can be processed by Logstash v8.10.0 and later) [#17849](https://github.com/elastic/logstash/pull/17849)
* Support for user defined metrics [#18218](https://github.com/elastic/logstash/pull/18218)
* PQ: Add support for event-level compression using ZStandard (ZSTD) [#18121](https://github.com/elastic/logstash/pull/18121)
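Since the feature is opt-in, enabling it would be a `logstash.yml` change along these lines. This is a sketch only: the setting name `queue.compression` and the level value shown are assumptions, so confirm them against the persistent queue documentation before use.

```yaml
# logstash.yml -- illustrative sketch, not confirmed setting names.
queue.type: persisted
queue.compression: balanced   # hypothetical: one of the three opt-in levels
```

Remember the rollback barrier: once any compressed event lands in the queue, older Logstash versions cannot read it.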

##### Batch size metrics
#### Batch size metrics [logstash-9.2.0-batch-size-metrics]

In Logstash 9.2, we have added metrics to help you track the size of batches processed by Logstash pipelines.
We've added metrics to help you track the size of batches processed by Logstash pipelines.

The [Node API pipelines endpoint](https://www.elastic.co/docs/api/doc/logstash/operation/operation-nodestatspipelines) now includes the average number of events processed per batch and the average byte size of those batches for each pipeline. This information can help you size Logstash instances and optimize `pipeline.batch.size` settings based on real observations of your data.

@@ -57,18 +55,15 @@ Related:
* Implements average batch event count and byte size metrics. Collection of these metrics can be disabled, enabled for every batch, or performed on a sample of the total batches [#18000](https://github.com/elastic/logstash/pull/18000)
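One way to act on these metrics is to compare the observed average batch size against the configured `pipeline.batch.size`. The arithmetic below is a sketch; the metric names in the comments are assumptions about the Node Stats response, not the confirmed schema.

```python
# Sketch: using the new batch metrics to sanity-check pipeline.batch.size.
# The metric values below stand in for fields read from the Node Stats API
# (assumed names: average event count and average byte size per batch).
configured_batch_size = 125        # pipeline.batch.size in effect
average_event_count = 38.5         # observed average events per batch (assumed field)
average_byte_size = 1_572_864      # observed average bytes per batch (assumed field)

# A fill ratio well below 1.0 means batches rarely reach the configured cap,
# so inputs, not the cap, are the limiting factor; a ratio near 1.0 suggests
# raising pipeline.batch.size may be worth testing.
fill_ratio = average_event_count / configured_batch_size
print(f"batch fill ratio: {fill_ratio:.0%}, avg batch: {average_byte_size / 1024:.0f} KiB")
```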


#### Other Features and Enhancements
#### Additional features and enhancements [logstash-9.2.0-more-features]

* Dropped the persistent queue setting queue.checkpoint.interval [#17759](https://github.com/elastic/logstash/pull/17759)
* Reimplements BufferedTokenizer to leverage pure Java classes instead of JRuby runtime classes [#17229](https://github.com/elastic/logstash/pull/17229)
* Improved logging when handling exceptions in the pipeline, ensuring that chained exceptions propagate enough information to be actionable [#17935](https://github.com/elastic/logstash/pull/17935)
* [Support for using ES|QL queries](https://github.com/logstash-plugins/logstash-filter-elasticsearch/pull/194) in the Elasticsearch filter to add improved flexibility when ingesting data from Elasticsearch is now in Technical Preview.
* Gauge type metrics, such as current and peak connection counts of Elastic Agent, are now available in the `_node/stats` API response when the `vertices=true` parameter is included. These metrics are particularly useful for monitoring {{ls}} plugin activity on the {{ls}} Integration dashboards [#18090](https://github.com/elastic/logstash/pull/18090)
* Improve logstash release artifacts file metadata: mtime is preserved when buiilding tar archives [#18091](https://github.com/elastic/logstash/pull/18091)

### Known issues
* Improve Logstash release artifacts file metadata: mtime is preserved when building tar archives [#18091](https://github.com/elastic/logstash/pull/18091)

* The `decode_size_limit_bytes` setting for plugins that use the `json_lines` codec behaves differently in 9.2.0. Specifically when the size limit exceeds the limit without a separator in the data the size will grow beyond the limit. The details for this issue and the details for the future behavior are being tracked in [#18321](https://github.com/elastic/logstash/issues/18321)

### Plugins [logstash-plugin-9.2.0-changes]

18 changes: 18 additions & 0 deletions docs/release-notes/known-issues.md
@@ -8,6 +8,24 @@ Known issues are significant defects or limitations that may impact your impleme
These issues are actively being worked on and will be addressed in a future release.
Review known issues to help you make informed decisions, such as upgrading to a new version.

## 9.2.0

**BufferedTokenizer may silently drop data when oversize input has no delimiters**

Applies to: {{ls}} 9.2.0
**@mashhurs (Contributor), Oct 24, 2025:** Can we move `Applies to` to an upper level for better visibility? The reader only figures out the version when actually toggling the dropdown, but I am okay with this if it is a standard.

**@karenzone (Contributor, Author), Oct 24, 2025:** I totally agree, @mashhurs. I was following the new guidelines for Known Issues used for Kibana, Beats, solutions, etc., BUT I noticed that the ES format is different. I believe that, to your point, the ES model elevates relevant info and better serves the user. Stay tuned for a quick rework.

**@karenzone (Contributor, Author):** OK, I tried something different based on the ES format. I did keep the `Applies to: {{ls}} 9.2.0` line even though it appears redundant with the 9.2.0 heading, thinking that we can update the line with future versions affected, or the version in which it was resolved. LMKWYT


::::{dropdown} Details

The `decode_size_limit_bytes` setting for {{ls}} plugins that use the `json_lines` codec behaves differently than expected. When accumulated input exceeds the specified limit without containing a separator, the buffer grows beyond the limit.
This occurs because size validation happens only after a token is fully accumulated, which never happens if no trailing separator is detected.

{{ls}} plugins that use the `json_lines` codec include `input-stdin`, `input-http`, `input-tcp`, `integration-logstash`, and `input-elastic_serverless_forwarder`.

Best practice: Do not set `decode_size_limit_bytes` manually.

Details for this issue and the planned future behavior are being tracked in [#18321](https://github.com/elastic/logstash/issues/18321).
::::
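Following the best practice above, an affected input would simply omit the setting and rely on codec defaults. This pipeline fragment is an illustrative sketch (the port value is arbitrary):

```ruby
# Sketch: while issue #18321 is open, do not override decode_size_limit_bytes.
input {
  tcp {
    port  => 5044          # arbitrary example port
    codec => json_lines    # no decode_size_limit_bytes override
  }
}
```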


## 9.0.0
