
Commit 6e9dd49

mergify[bot] and kvch authored
Adjust the documentation of backoff options in filestream input (#30552) (#30556)
(cherry picked from commit dc8ed37) Co-authored-by: Noémi Ványi <[email protected]>
1 parent d94627b commit 6e9dd49

2 files changed (+16, -26 lines)

filebeat/docs/inputs/input-filestream-file-options.asciidoc

Lines changed: 15 additions & 25 deletions
@@ -319,7 +319,7 @@ the `close.reader.after_interval` period has elapsed. This option can be useful
 files when you want to spend only a predefined amount of time on the files.
 While `close.reader.after_interval` will close the file after the predefined timeout, if the
 file is still being updated, {beatname_uc} will start a new harvester again per
-the defined `scan_frequency`. And the close.reader.after_interval for this harvester will
+the defined `prospector.scanner.check_interval`. And the close.reader.after_interval for this harvester will
 start again with the countdown for the timeout.
 
 This option is particularly useful in case the output is blocked, which makes
@@ -358,7 +358,7 @@ When this option is enabled, {beatname_uc} removes the state of a file after the
 specified period of inactivity has elapsed. The state can only be removed if
 the file is already ignored by {beatname_uc} (the file is older than
 `ignore_older`). The `clean_inactive` setting must be greater than `ignore_older +
-scan_frequency` to make sure that no states are removed while a file is still
+prospector.scanner.check_interval` to make sure that no states are removed while a file is still
 being harvested. Otherwise, the setting could result in {beatname_uc} resending
 the full content constantly because `clean_inactive` removes state for files
 that are still detected by {beatname_uc}. If a file is updated or appears
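The rule in the hunk above (`clean_inactive` must be greater than `ignore_older + prospector.scanner.check_interval`) is easy to get wrong, so here is a minimal Go sketch of the arithmetic. The helper and the example values (24h, 15s, 25h) are illustrative assumptions, not part of Filebeat.

package main

import (
	"fmt"
	"time"
)

// cleanInactiveIsSafe mirrors the documented rule: clean_inactive must be
// strictly greater than ignore_older + prospector.scanner.check_interval,
// otherwise state may be removed for a file that is still being harvested.
func cleanInactiveIsSafe(cleanInactive, ignoreOlder, checkInterval time.Duration) bool {
	return cleanInactive > ignoreOlder+checkInterval
}

func main() {
	// Example values chosen purely for illustration.
	ignoreOlder := 24 * time.Hour
	checkInterval := 15 * time.Second
	cleanInactive := 25 * time.Hour

	fmt.Println("minimum safe clean_inactive:", ignoreOlder+checkInterval) // 24h0m15s
	fmt.Println("clean_inactive of 25h is safe:", cleanInactiveIsSafe(cleanInactive, ignoreOlder, checkInterval))
}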
@@ -403,42 +403,32 @@ You must disable this option if you also disable `close_removed`.
 The backoff options specify how aggressively {beatname_uc} crawls open files for
 updates. You can use the default values in most cases.
 
-The `backoff` option defines how long {beatname_uc} waits before checking a file
-again after EOF is reached. The default is 1s, which means the file is checked
-every second if new lines were added. This enables near real-time crawling.
-Every time a new line appears in the file, the `backoff` value is reset to the
-initial value. The default is 1s.
 
 [float]
 ===== `backoff.init`
 
-The maximum time for {beatname_uc} to wait before checking a file again after
-EOF is reached. After having backed off multiple times from checking the file,
-the wait time will never exceed `max_backoff` regardless of what is specified
-for `backoff_factor`. Because it takes a maximum of 10s to read a new line,
-specifying 10s for `max_backoff` means that, at the worst, a new line could be
-added to the log file if {beatname_uc} has backed off multiple times. The
-default is 10s.
-
-Requirement: Set `max_backoff` to be greater than or equal to `backoff` and
-less than or equal to `scan_frequency` (`backoff <= max_backoff <= scan_frequency`).
-If `max_backoff` needs to be higher, it is recommended to close the file handler
-instead and let {beatname_uc} pick up the file again.
+The `backoff.init` option defines how long {beatname_uc} waits for the first time
+before checking a file again after EOF is reached. The backoff intervals increase exponentially.
+The default is 2s. Thus, the file is checked after 2 seconds, then 4 seconds,
+then 8 seconds and so on until it reaches the limit defined in `backoff.max`.
+Every time a new line appears in the file, the `backoff.init` value is reset to the
+initial value.
 
 [float]
 ===== `backoff.max`
 
 The maximum time for {beatname_uc} to wait before checking a file again after
 EOF is reached. After having backed off multiple times from checking the file,
-the wait time will never exceed `max_backoff` regardless of what is specified
-for `backoff_factor`. Because it takes a maximum of 10s to read a new line,
-specifying 10s for `max_backoff` means that, at the worst, a new line could be
+the wait time will never exceed `backoff.max`.
+Because it takes a maximum of 10s to read a new line,
+specifying 10s for `backoff.max` means that, at the worst, a new line could be
 added to the log file if {beatname_uc} has backed off multiple times. The
 default is 10s.
 
-Requirement: Set `max_backoff` to be greater than or equal to `backoff` and
-less than or equal to `scan_frequency` (`backoff <= max_backoff <= scan_frequency`).
-If `max_backoff` needs to be higher, it is recommended to close the file handler
+Requirement: Set `backoff.max` to be greater than or equal to `backoff.init` and
+less than or equal to `prospector.scanner.check_interval`
+(`backoff.init <= backoff.max <= prospector.scanner.check_interval`).
+If `backoff.max` needs to be higher, it is recommended to close the file handler
 instead and let {beatname_uc} pick up the file again.
 
 [float]
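To make the updated `backoff.init`/`backoff.max` wording concrete, here is a minimal Go sketch of the documented behaviour: the wait starts at `backoff.init`, doubles after each check that finds no new data, is capped at `backoff.max`, and resets once a new line is read. This only illustrates the documented semantics; it is not the actual filestream reader implementation.

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the current wait and caps it at the limit, matching the
// documented exponential growth from backoff.init up to backoff.max.
func nextBackoff(current, limit time.Duration) time.Duration {
	current *= 2
	if current > limit {
		return limit
	}
	return current
}

func main() {
	initial := 2 * time.Second // backoff.init (new default in this commit)
	limit := 10 * time.Second  // backoff.max (default)

	wait := initial
	for i := 0; i < 5; i++ {
		fmt.Println("no new data, waiting", wait)
		wait = nextBackoff(wait, limit) // 2s -> 4s -> 8s -> 10s -> 10s
	}

	// As soon as a new line is read, the interval resets to backoff.init.
	wait = initial
	fmt.Println("new line read, backoff reset to", wait)
}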

filebeat/input/filestream/config.go

Lines changed: 1 addition & 1 deletion
@@ -121,7 +121,7 @@ func defaultCloserConfig() closerConfig {
 func defaultReaderConfig() readerConfig {
 	return readerConfig{
 		Backoff: backoffConfig{
-			Init: 1 * time.Second,
+			Init: 2 * time.Second,
 			Max: 10 * time.Second,
 		},
 		BufferSize: 16 * humanize.KiByte,
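With this change the code default for `Init` matches the 2s now documented above. As a hedged sketch (the `validateBackoff` helper and the 10s check interval are assumptions for illustration, not Filebeat code), the new defaults still satisfy the documented ordering `backoff.init <= backoff.max <= prospector.scanner.check_interval`:

package main

import (
	"fmt"
	"time"
)

// validateBackoff enforces the documented ordering between the three
// intervals: backoff.init <= backoff.max <= prospector.scanner.check_interval.
func validateBackoff(initWait, maxWait, checkInterval time.Duration) error {
	if initWait > maxWait {
		return fmt.Errorf("backoff.init (%v) must not exceed backoff.max (%v)", initWait, maxWait)
	}
	if maxWait > checkInterval {
		return fmt.Errorf("backoff.max (%v) must not exceed prospector.scanner.check_interval (%v)", maxWait, checkInterval)
	}
	return nil
}

func main() {
	// Defaults after this commit: Init 2s, Max 10s. The 10s check interval
	// is an assumed example value, not necessarily the shipped default.
	if err := validateBackoff(2*time.Second, 10*time.Second, 10*time.Second); err != nil {
		fmt.Println("invalid configuration:", err)
		return
	}
	fmt.Println("backoff.init <= backoff.max <= check_interval holds for the defaults")
}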
