diff --git a/docs/release-notes/index.md b/docs/release-notes/index.md index 4b188cacdfe..d9edece5d0e 100644 --- a/docs/release-notes/index.md +++ b/docs/release-notes/index.md @@ -25,11 +25,9 @@ To check for security updates, go to [Security announcements for the Elastic sta ### Features and enhancements [logstash-9.2.0-features-enhancements] -#### Highlights +#### Persistent queue (PQ) compression [logstash-9.2.0-pq-compression] -##### Logstash 9.2 introduces PQ compression - -In Logstash 9.2, we’ve added support for compression to the [Persisted Queue](https://www.elastic.co/docs/reference/logstash/persistent-queues), allowing you to spend some CPU in exchange for reduced disk IO. This can help reduce cost and increase throughput in situations where your hardware is rate-limited or metered. +We’ve added support for compression to the [Persistent Queue (PQ)](https://www.elastic.co/docs/reference/logstash/persistent-queues), allowing you to spend some CPU in exchange for reduced disk IO. This can help reduce cost and increase throughput in situations where your hardware is rate-limited or metered. PQ compression is implemented using the industry-standard, highly efficient ZSTD algorithm, and can be activated at one of three levels: @@ -39,16 +37,16 @@ The effects of these settings will depend on the shape and size of each pipeline’s events. To help you tune your configuration to meet your own requirements, we have added [queue metrics](https://www.elastic.co/docs/api/doc/logstash/operation/operation-nodestatspipelines) exposing the effective compression ratio and the amount of CPU that is being spent to achieve it. -PQ Compression has been introduced as an opt-in feature in 9.2 because a PQ that contains one or more compressed events cannot be read by previous versions of Logstash, making the feature a rollback-barrier. 
We recommend validating your pipelines with Logstash 9.2+ before enabling PQ compression, so that you have the freedom to roll back if you encounter any issues with your pipelines. +PQ compression has been introduced as an opt-in feature in 9.2 because a PQ that contains one or more compressed events cannot be read by previous versions of Logstash, making the feature a rollback barrier. We recommend validating your pipelines with Logstash 9.2 (or later) before enabling PQ compression so that you have the freedom to roll back if you encounter any issues with your pipelines. Related: * Persisted Queue: improved serialization to be more compact by default (note: queues containing these compact events can be processed by Logstash v8.10.0 and later) [#17849](https://github.com/elastic/logstash/pull/17849) * Support for user-defined metrics [#18218](https://github.com/elastic/logstash/pull/18218) * PQ: Add support for event-level compression using ZStandard (ZSTD) [#18121](https://github.com/elastic/logstash/pull/18121) -##### Batch size metrics +#### Batch size metrics [logstash-9.2.0-batch-size-metrics] -In Logstash 9.2, we have added metrics to help you track the size of batches processed by Logstash pipelines. +We've added metrics to help you track the size of batches processed by Logstash pipelines. The [Node API pipelines endpoint](https://www.elastic.co/docs/api/doc/logstash/operation/operation-nodestatspipelines) now includes information showing the average number of events processed per batch and the average byte size of those batches for each pipeline. This information can be used to help size Logstash instances and to optimize `pipeline.batch.size` settings based on real observations of your data. @@ -57,18 +55,15 @@ Related: * Implements average batch event count and byte size metrics. 
Collection of these metrics can be disabled, enabled for each batch, or performed on a sample of the total batches [#18000](https://github.com/elastic/logstash/pull/18000) -#### Other Features and Enhancements +#### Additional features and enhancements [logstash-9.2.0-more-features] * Dropped the persistent queue setting `queue.checkpoint.interval` [#17759](https://github.com/elastic/logstash/pull/17759) * Reimplements BufferedTokenizer to leverage pure Java classes instead of using the JRuby runtime's classes [#17229](https://github.com/elastic/logstash/pull/17229) * Improved logging when handling exceptions in the pipeline, ensuring that chained exceptions propagate enough information to be actionable. [#17935](https://github.com/elastic/logstash/pull/17935) * [Support for using ES|QL queries](https://github.com/logstash-plugins/logstash-filter-elasticsearch/pull/194) in the Elasticsearch filter, which adds flexibility when ingesting data from Elasticsearch, is now in Technical Preview. * Gauge-type metrics, such as current and peak connection counts of Elastic Agent, are now available in the `_node/stats` API response when the `vertices=true` parameter is included. These metrics are particularly useful for monitoring {{ls}} plugin activity on the {{ls}} Integration dashboards [#18090](https://github.com/elastic/logstash/pull/18090) -* Improve logstash release artifacts file metadata: mtime is preserved when buiilding tar archives [#18091](https://github.com/elastic/logstash/pull/18091) - -### Known issues +* Improve Logstash release artifacts file metadata: mtime is preserved when building tar archives [#18091](https://github.com/elastic/logstash/pull/18091) -* The `decode_size_limit_bytes` setting for plugins that use the `json_lines` codec behaves differently in 9.2.0. Specifically when the size limit exceeds the limit without a separator in the data the size will grow beyond the limit. 
The details for this issue and the details for the future behavior are being tracked in [#18321](https://github.com/elastic/logstash/issues/18321) ### Plugins [logstash-plugin-9.2.0-changes] diff --git a/docs/release-notes/known-issues.md b/docs/release-notes/known-issues.md index ccc59c38e59..fbc65af49e0 100644 --- a/docs/release-notes/known-issues.md +++ b/docs/release-notes/known-issues.md @@ -8,6 +8,24 @@ Known issues are significant defects or limitations that may impact your impleme These issues are actively being worked on and will be addressed in a future release. Review known issues to help you make informed decisions, such as upgrading to a new version. +## 9.2.0 + +**BufferedTokenizer may silently drop data when oversize input has no delimiters** + +Applies to: {{ls}} 9.2.0 + +::::{dropdown} Details + +The `decode_size_limit_bytes` setting for {{ls}} plugins that use the `json_lines` codec is behaving differently than expected. When incoming data contains no separator, the accumulated data can grow beyond the specified limit. +This occurs because size validation happens only after a token is fully accumulated, which never happens if no trailing separator is detected. +{{ls}} plugins that use the `json_lines` codec include `input-stdin`, `input-http`, `input-tcp`, `integration-logstash`, and `input-elastic_serverless_forwarder`. + +Best practice: Do not set `decode_size_limit_bytes` manually. + +This issue and the planned future behavior are being tracked in [#18321](https://github.com/elastic/logstash/issues/18321). +:::: + ## 9.0.0
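The failure mode behind the 9.2.0 BufferedTokenizer known issue above can be modeled in a few lines: because the size check only runs when a delimiter completes a token, delimiter-free input accumulates unchecked. The following is an illustrative Python sketch of that mechanism, not Logstash's actual Java `BufferedTokenizer` implementation:

```python
class BufferedTokenizer:
    """Toy model of delimiter-based tokenization with a size limit.

    The limit is enforced only when a delimiter completes a token, so
    input that never contains a delimiter accumulates past the limit.
    """

    def __init__(self, delimiter: bytes = b"\n", size_limit: int = 16):
        self.delimiter = delimiter
        self.size_limit = size_limit
        self.buffer = b""

    def extract(self, data: bytes) -> list:
        """Append data and return any complete tokens found so far."""
        self.buffer += data
        tokens = []
        while self.delimiter in self.buffer:
            token, self.buffer = self.buffer.split(self.delimiter, 1)
            # The size check happens here, only once a token is complete.
            if len(token) > self.size_limit:
                raise ValueError(f"token of {len(token)} bytes exceeds limit")
            tokens.append(token)
        return tokens

tok = BufferedTokenizer(size_limit=16)
tok.extract(b"x" * 64)   # no delimiter yet: no size check, nothing raised
print(len(tok.buffer))   # 64 — already well past the 16-byte limit
```

Only when a delimiter finally arrives is the oversize token rejected, by which point the buffer has already grown far beyond the configured limit.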