
Commit 73150c2

[Elastic Log Driver] Create a config shim between libbeat and the user (#18605)
* init commit of config shim
* update docs
* make check
* add timeout
* move config system to use typeconv
* add rest of backoff settings
* make fmt
* some cleanup
* use uint64 hash for structs
* make fmt
* create custom index manager, remove ILM support
* add support for multiple endpoints
* update tests
* update docs
* remove setup options
* remove old tests
* try to update asciidoc
* change 'endpoint' to 'hosts'
* trying to fix CI
* update docs
* fix backticks
1 parent 600998b commit 73150c2

File tree

12 files changed: +264 additions, -269 deletions


x-pack/dockerlogbeat/docs/configuration.asciidoc

Lines changed: 22 additions & 205 deletions
@@ -14,9 +14,6 @@ you can set them in the `daemon.json` file for all containers.
 
 * <<cloud-options>>
 * <<es-output-options>>
-* <<ls-output-options>>
-* <<kafka-output-options>>
-* <<redis-output-options>>
 
 [float]
 === Usage examples

@@ -39,11 +36,11 @@ For more examples, see <<log-driver-usage-examples>>.
 |=====
 |Option | Description
 
-|`cloud.id`
+|`cloud_id`
 |The Cloud ID found in the Elastic Cloud web console. This ID is
 used to resolve the {stack} URLs when connecting to {ess} on {ecloud}.
 
-|`cloud.auth`
+|`cloud_auth`
 |The username and password combination for connecting to {ess} on {ecloud}. The
 format is `"username:password"`.
 |=====

@@ -61,242 +58,62 @@ format is `"username:password"`.
 |=====
 |Option |Default |Description
 
-|`output.elasticsearch.hosts`
+|`hosts`
 |`"localhost:9200"`
 |The list of {es} nodes to connect to. Specify each node as a `URL` or
 `IP:PORT`. For example: `http://192.0.2.0`, `https://myhost:9230` or
 `192.0.2.0:9300`. If no port is specified, the default is `9200`.
 
-|`output.elasticsearch.protocol`
-|`http`
-|The protocol (`http` or `https`) that {es} is reachable on. If you specify a
-URL for `hosts`, the value of `protocol` is overridden by whatever scheme you
-specify in the URL.
-
-|`output.elasticsearch.username`
+|`user`
 |
 |The basic authentication username for connecting to {es}.
 
-|`output.elasticsearch.password`
+|`password`
 |
 |The basic authentication password for connecting to {es}.
 
-|`output.elasticsearch.index`
+|`index`
 |
 |A {beats-ref}/config-file-format-type.html#_format_string_sprintf[format string]
 value that specifies the index to write events to when you're using daily
 indices. For example: +"dockerlogs-%{+yyyy.MM.dd}"+.
 
 3+|*Advanced:*
 
-|`output.elasticsearch.backoff.init`
+|`backoff_init`
 |`1s`
 |The number of seconds to wait before trying to reconnect to {es} after
 a network error. After waiting `backoff.init` seconds, the {log-driver}
 tries to reconnect. If the attempt fails, the backoff timer is increased
 exponentially up to `backoff.max`. After a successful connection, the backoff
 timer is reset.
 
-|`output.elasticsearch.backoff.max`
+|`backoff_max`
 |`60s`
 |The maximum number of seconds to wait before attempting to connect to
 {es} after a network error.
 
-|`output.elasticsearch.bulk_max_size`
-|`50`
-|The maximum number of events to bulk in a single {es} bulk API index request.
-Specify 0 to allow the queue to determine the batch size.
-
-|`output.elasticsearch.compression_level`
-|`0`
-|The gzip compression level. Valid compression levels range from 1 (best speed)
-to 9 (best compression). Specify 0 to disable compression. Higher compression
-levels reduce network usage, but increase CPU usage.
-
-|`output.elasticsearch.escape_html`
-|`false`
-|Whether to escape HTML in strings.
-
-|`output.elasticsearch.headers`
-|
-|Custom HTTP headers to add to each request created by the {es} output. Specify
-multiple header values for the same header name by separating them with a comma.
-
-|`output.elasticsearch.loadbalance`
-|`false`
-|Whether to load balance when sending events to multiple hosts. The load
-balancer also supports multiple workers per host (see
-`output.elasticsearch.worker`.)
-
-|`output.elasticsearch.max_retries`
-|`3`
-|The number of times to retry publishing an event after a publishing failure.
-After the specified number of retries, the events are typically dropped. Specify
-0 to retry indefinitely.
-
-|`output.elasticsearch.parameters`
-|
-| A dictionary of HTTP parameters to pass within the URL with index operations.
-
-|`output.elasticsearch.path`
+|`api_key`
 |
-|An HTTP path prefix that is prepended to the HTTP API calls. This is useful for
-cases where {es} listens behind an HTTP reverse proxy that exports the API under
-a custom prefix.
+|Instead of using usernames and passwords,
+you can use API keys to secure communication with {es}.
 
-|`output.elasticsearch.pipeline`
+|`pipeline`
 |
-|A {beats-ref}/config-file-format-type.html#_format_string_sprintf[format string]
-value that specifies the {ref}/ingest.html[ingest node pipeline] to write events
-to.
+|A format string value that specifies the ingest node pipeline to write events to.
 
-|`output.elasticsearch.proxy_url`
-|
-|The URL of the proxy to use when connecting to the {es} servers. Specify a
-`URL` or `IP:PORT`.
-
-|`output.elasticsearch.timeout`
+|`timeout`
 |`90`
-|The HTTP request timeout in seconds for the {es} request.
-
-|`output.elasticsearch.worker`
-|`1`
-|The number of workers per configured host publishing events to {es}. Use with
-load balancing mode (`output.elasticsearch.loadbalance`) set to `true`. Example:
-If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
-host).
-
-|=====
-
-
-[float]
-[[ls-output-options]]
-=== {ls} output options
-
-[options="header"]
-|=====
-|Option | Default | Description
-
-|`output.logstash.hosts`
-|`"localhost:5044"`
-|The list of known {ls} servers to connect to. If load balancing is
-disabled, but multiple hosts are configured, one host is selected randomly
-(there is no precedence). If one host becomes unreachable, another one is
-selected randomly. If no port is specified, the default is `5044`.
-
-|`output.logstash.index`
-|
-|The index root name to write events to. For example +"dockerlogs"+ generates
-+"dockerlogs-{version}"+ indices.
-
-3+|*Advanced:*
-
-|`output.logstash.backoff.init`
-|`1s`
-|The number of seconds to wait before trying to reconnect to {ls} after
-a network error. After waiting `backoff.init` seconds, the {log-driver}
-tries to reconnect. If the attempt fails, the backoff timer is increased
-exponentially up to `backoff.max`. After a successful connection, the backoff
-timer is reset.
-
-|`output.logstash.backoff.max`
-|`60s`
-|The maximum number of seconds to wait before attempting to connect to
-{ls} after a network error.
-
-|`output.logstash.bulk_max_size`
-|`2048`
-|The maximum number of events to bulk in a single {ls} request. Specify 0 to
-allow the queue to determine the batch size.
-
-|`output.logstash.compression_level`
-|`0`
-|The gzip compression level. Valid compression levels range from 1 (best speed)
-to 9 (best compression). Specify 0 to disable compression. Higher compression
-levels reduce network usage, but increase CPU usage.
-
-|`output.logstash.escape_html`
-|`false`
-|Whether to escape HTML in strings.
+|The http request timeout in seconds for the Elasticsearch request.
 
-|`output.logstash.loadbalance`
-|`false`
-|Whether to load balance when sending events to multiple {ls} hosts. If set to
-`false`, the driver sends all events to only one host (determined at random) and
-switches to another host if the selected one becomes unresponsive.
-
-|`output.logstash.pipelining`
-|`2`
-|The number of batches to send asynchronously to {ls} while waiting for an ACK
-from {ls}. Specify 0 to disable pipelining.
-
-|`output.logstash.proxy_url`
+|`proxy_url`
 |
-|The URL of the SOCKS5 proxy to use when connecting to the {ls} servers. The
-value must be a URL with a scheme of `socks5://`. You can embed a
-username and password in the URL (for example,
-`socks5://user:password@socks5-proxy:2233`).
-
-|`output.logstash.proxy_use_local_resolver`
-|`false`
-|Whether to resolve {ls} hostnames locally when using a proxy. If `false`,
-name resolution occurs on the proxy server.
+|The URL of the proxy to use when connecting to the Elasticsearch servers. The
+value may be either a complete URL or a `host[:port]`, in which case the `http`
+scheme is assumed. If a value is not specified through the configuration file
+then proxy environment variables are used. See the
+https://golang.org/pkg/net/http/#ProxyFromEnvironment[Go documentation]
+for more information about the environment variables.
 
-|`output.logstash.slow_start`
-|`false`
-|When enabled, only a subset of events in a batch are transferred per
-transaction. If there are no errors, the number of events per transaction
-is increased up to the bulk max size (see `output.logstash.bulk_max_size`).
-On error, the number of events per transaction is reduced again.
-
-|`output.logstash.timeout`
-|`30`
-|The number of seconds to wait for responses from the {ls} server before
-timing out.
-
-|`output.logstash.ttl`
-|`0`
-|Time to live for a connection to {ls} after which the connection will be
-re-established. Useful when {ls} hosts represent load balancers. Because
-connections to {ls} hosts are sticky, operating behind load balancers can lead
-to uneven load distribution across instances. Specify a TTL on the connection
-to distribute connections across instances. Specify 0 to disable this feature.
-This option is not supported if `output.logstash.pipelining` is set.
-
-|`output.logstash.worker`
-|`1`
-|The number of workers per configured host publishing events to {ls}. Use with
-load balancing mode (`output.logstash.loadbalance`) set to `true`. Example:
-If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
-host).
 
 |=====
-
-[float]
-[[kafka-output-options]]
-=== Kafka output options
-
-// TODO: Add kafka output options here.
-
-// NOTE: The following annotation renders as: "Coming in a future update. This
-// documentation is a work in progress."
-
-coming[a future update. This documentation is a work in progress]
-
-Need the docs now? See the
-{filebeat-ref}/kafka-output.html[Kafka output docs] for {filebeat}.
-The {log-driver} supports most of the same options, just make sure you use
-the fully qualified setting names.
-
-[float]
-[[redis-output-options]]
-=== Redis output options
-
-// TODO: Add Redis output options here.
-
-coming[a future update. This documentation is a work in progress]
-
-Need the docs now? See the
-{filebeat-ref}/redis-output.html[Redis output docs] for {filebeat}.
-The {log-driver} supports most of the same options, just make sure you use
-the fully qualified setting names.
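The `backoff_init`/`backoff_max` rows in the table above describe exponential reconnect backoff: wait `backoff_init` seconds after a network error, double the wait on each failed retry, and cap it at `backoff_max`. As a minimal sketch of that delay sequence (a hypothetical helper for illustration, not the driver's actual code):

```python
def backoff_delays(init=1.0, maximum=60.0, attempts=8):
    """Yield reconnect delays in seconds: start at `init`, double after
    each failed attempt, and cap at `maximum` -- mirroring the documented
    backoff_init (1s) and backoff_max (60s) defaults."""
    delay = init
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, maximum)

# First failures wait 1s, 2s, 4s, ... until the 60s cap is reached;
# a successful connection would reset the sequence back to `init`.
print(list(backoff_delays()))
```

With the defaults this yields 1, 2, 4, 8, 16, 32 and then holds at 60 seconds, which is the behavior the table describes.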

x-pack/dockerlogbeat/docs/install.asciidoc

Lines changed: 6 additions & 8 deletions
@@ -81,10 +81,9 @@ example:
 ["source","sh",subs="attributes"]
 ----
 docker run --log-driver=elastic/{log-driver-alias}:{version} \
---log-opt output.elasticsearch.hosts="https://myhost:9200" \
---log-opt output.elasticsearch.username="myusername" \
---log-opt output.elasticsearch.password="mypassword" \
---log-opt output.elasticsearch.index="elastic-log-driver-%{+yyyy.MM.dd}" \
+--log-opt endpoint="https://myhost:9200" \
+--log-opt user="myusername" \
+--log-opt password="mypassword" \
 -it debian:jessie /bin/bash
 ----
 // end::log-driver-run[]

@@ -100,10 +99,9 @@ example:
 {
   "log-driver" : "elastic/{log-driver-alias}:{version}",
   "log-opts" : {
-    "output.elasticsearch.hosts" : "https://myhost:9200",
-    "output.elasticsearch.username" : "myusername",
-    "output.elasticsearch.password" : "mypassword",
-    "output.elasticsearch.index" : "elastic-log-driver-%{+yyyy.MM.dd}"
+    "endpoint" : "https://myhost:9200",
+    "user" : "myusername",
+    "password" : "mypassword"
   }
 }
 ----
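Beyond the basic-auth example in the diff, the renamed options from the configuration table (such as `cloud_id`, `cloud_auth`, `index`, and the backoff settings) should compose the same way in `daemon.json`. A sketch, with placeholder values and assuming the `elastic/{log-driver-alias}:{version}` name from the install docs:

```json
{
  "log-driver" : "elastic/{log-driver-alias}:{version}",
  "log-opts" : {
    "cloud_id" : "MyDeployment:placeholder-cloud-id",
    "cloud_auth" : "myusername:mypassword",
    "index" : "dockerlogs-%{+yyyy.MM.dd}",
    "backoff_init" : "1s",
    "backoff_max" : "60s"
  }
}
```

Note that every value under `log-opts` is a flat string key, which is exactly what the flattened option names in this commit make possible.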

x-pack/dockerlogbeat/docs/limitations.asciidoc

Lines changed: 0 additions & 1 deletion
@@ -8,6 +8,5 @@ This release of the {log-driver} has the following known problems and
 limitations:
 
 * Spool to disk (beta) is not supported.
-* Complex config options can't be easily represented via `--log-opts`.
 * Mapping templates and other assets that are normally installed by the
 {beats} setup are not available.

0 commit comments
