diff --git a/changelogs/7.15.asciidoc b/changelogs/7.15.asciidoc index 64adbbfe59d..cfd6e6fd0fd 100644 --- a/changelogs/7.15.asciidoc +++ b/changelogs/7.15.asciidoc @@ -37,7 +37,7 @@ https://github.com/elastic/apm-server/compare/v7.14.2\...v7.15.0[View commits] - `network.connection_type` is now `network.connection.type` {pull}5671[5671] - `transaction.page` and `error.page` no longer recorded {pull}5872[5872] - experimental:["This breaking change applies to the experimental tail-based sampling feature."] `apm-server.sampling.tail` now requires `apm-server.data_streams.enabled` {pull}5952[5952] -- beta:["This breaking change applies to the beta <>."] The `traces-sampled-*` data stream is now `traces-apm.sampled-*` {pull}5952[5952] +- beta:["This breaking change applies to the beta APM integration."] The `traces-sampled-*` data stream is now `traces-apm.sampled-*` {pull}5952[5952] [float] ==== Bug fixes diff --git a/changelogs/8.0.asciidoc b/changelogs/8.0.asciidoc index 1afd7b8547e..6e17fa4daca 100644 --- a/changelogs/8.0.asciidoc +++ b/changelogs/8.0.asciidoc @@ -177,7 +177,7 @@ No significant changes. 
==== Breaking Changes * APM Server now responds with 403 (HTTP) and PermissionDenied (gRPC) for authenticated but unauthorized requests {pull}5545[5545] * `sourcemap.error` and `sourcemap.updated` are no longer set due to failing to find a matching source map {pull}5631[5631] -* experimental:["This breaking change applies to the experimental <>."] Removed `service.name` from dataset {pull}5451[5451] +* experimental:["This breaking change applies to the experimental APM integration."] Removed `service.name` from dataset {pull}5451[5451] // [float] // ==== Bug fixes diff --git a/docs/a.yml b/docs/a.yml deleted file mode 100644 index 535c6dc31d7..00000000000 --- a/docs/a.yml +++ /dev/null @@ -1 +0,0 @@ -# delete me \ No newline at end of file diff --git a/docs/anonymous-auth.asciidoc b/docs/anonymous-auth.asciidoc new file mode 100644 index 00000000000..643738e8699 --- /dev/null +++ b/docs/anonymous-auth.asciidoc @@ -0,0 +1,63 @@ +[[anonymous-auth]] +=== Anonymous authentication + +Elastic APM agents can send unauthenticated (anonymous) events to the APM Server. +An event is considered to be anonymous if no authentication token can be extracted from the incoming request. +The APM Server's default response to these requests depends on its configuration: + +[options="header"] +|==== +|Configuration |Default +|An <> or <> is configured | Anonymous requests are rejected and an authentication error is returned. +|No API key or secret token is configured | Anonymous requests are accepted by the APM Server. +|==== + +In some cases, however, it makes sense to allow both authenticated and anonymous requests. +For example, it isn't possible to authenticate requests from front-end services as +the secret token or API key can't be protected. This is the case with the Real User Monitoring (RUM) +agent running in a browser, or the iOS/Swift agent running in a user application. +However, you still likely want to authenticate requests from back-end services.
+To solve this problem, you can enable anonymous authentication in the APM Server to allow the +ingestion of unauthenticated client-side APM data while still requiring authentication for server-side services. + +[float] +[[anonymous-auth-config]] +=== Configuring anonymous auth for client-side services + +[NOTE] +==== +You can only enable and configure anonymous authentication if an <> or +<> is configured. If neither are configured, these settings will be ignored. +==== + +include::./tab-widgets/anonymous-auth-widget.asciidoc[] + +[float] +[[derive-client-ip]] +=== Deriving an incoming request's `client.ip` address + +The remote IP address of an incoming request might be different +from the end-user's actual IP address, for example, because of a proxy. For this reason, +the APM Server attempts to derive the IP address of an incoming request from HTTP headers. +The supported headers are parsed in the following order: + +1. `Forwarded` +2. `X-Real-Ip` +3. `X-Forwarded-For` + +If none of these headers are present, the remote address for the incoming request is used. + +[float] +[[derive-client-ip-concerns]] +==== Using a reverse proxy or load balancer + +HTTP headers are easily modified; +it's possible for anyone to spoof the derived `client.ip` value by changing or setting, +for example, the value of the `X-Forwarded-For` header. +For this reason, if any of your clients are not trusted, +we recommend setting up a reverse proxy or load balancer in front of the APM Server. + +Using a proxy allows you to clear any existing IP-forwarding HTTP headers, +and replace them with one set by the proxy. +This prevents malicious users from cycling spoofed IP addresses to bypass the +APM Server's rate limiting feature. 
diff --git a/docs/api-keys.asciidoc b/docs/api-keys.asciidoc new file mode 100644 index 00000000000..33f5266b360 --- /dev/null +++ b/docs/api-keys.asciidoc @@ -0,0 +1,321 @@ +[[api-key]] +=== API keys + +IMPORTANT: API keys are sent as plain text, +so they only provide security when used in combination with <>. + +When enabled, API keys are used to authorize requests to the APM Server. +API keys are not applicable for APM agents running on clients, like the RUM agent, +as there is no way to prevent them from being publicly exposed. + +You can assign one or more unique privileges to each API key: + +* *Agent configuration* (`config_agent:read`): Required for agents to read +{kibana-ref}/agent-configuration.html[Agent configuration remotely]. +* *Ingest* (`event:write`): Required for ingesting agent events. + +To secure the communication between APM Agents and the APM Server with API keys, +make sure <> is enabled, then complete these steps: + +. <> +. <> +. <> +. <> + +[[enable-api-key]] +[float] +=== Enable API keys + +include::./tab-widgets/api-key-widget.asciidoc[] + +[[create-api-key-user]] +[float] +=== Create an API key user in {kib} + +API keys can have only the same or lower access rights as the user that creates them. +Instead of using a superuser account to create API keys, you can create a role with the minimum required +privileges. + +The user creating an {apm-agent} API key must have at least the `manage_own_api_key` cluster privilege +and the APM application-level privileges that it wishes to grant. +In addition, when creating an API key from the {apm-app}, +you'll need the appropriate {kib} Space and Feature privileges. + +The example below uses the {kib} {kibana-ref}/role-management-api.html[role management API] +to create a role named `apm_agent_key_role`.
+ +[source,js] +---- +POST /_security/role/apm_agent_key_role +{ + "cluster": [ "manage_own_api_key" ], + "applications": [ + { + "application":"apm", + "privileges":[ + "event:write", + "config_agent:read" + ], + "resources":[ "*" ] + }, + { + "application":"kibana-.kibana", + "privileges":[ "feature_apm.all" ], + "resources":[ "space:default" ] <1> + } + ] +} +---- +<1> This example assigns privileges for the default space. + +Assign the newly created `apm_agent_key_role` role to any user that wishes to create {apm-agent} API keys. + +[[create-an-api-key]] +[float] +=== Create an API key in the {apm-app} + +The {apm-app} has a built-in workflow that you can use to easily create and view {apm-agent} API keys. +Only API keys created in the {apm-app} will show up here. + +Using a superuser account, or a user with the role created in the previous step, +open {kib} and navigate to **{observability}** > **APM** > **Settings** > **Agent keys**. +Enter a name for your API key and select at least one privilege. + +For example, to create an API key that can be used to ingest APM events +and read agent central configuration, select `config_agent:read` and `event:write`. + +// lint ignore apm-agent +Click **Create APM Agent key** and copy the Base64 encoded API key. +You will need this for the next step, and you will not be able to view it again. + +[role="screenshot"] +image::images/apm-ui-api-key.png[{apm-app} API key] + +[[agent-api-key]] +[float] +=== Set the API key in your APM agents + +You can now apply your newly created API keys in the configuration of each of your APM agents. 
+See the relevant agent documentation for additional information: + +// Not relevant for RUM and iOS +* *Go agent*: {apm-go-ref}/configuration.html#config-api-key[`ELASTIC_APM_API_KEY`] +* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-api-key[`ApiKey`] +* *Java agent*: {apm-java-ref}/config-reporter.html#config-api-key[`api_key`] +* *Node.js agent*: {apm-node-ref}/configuration.html#api-key[`apiKey`] +* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-api-key[`api_key`] +* *Python agent*: {apm-py-ref}/configuration.html#config-api-key[`api_key`] +* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-api-key[`api_key`] + +[[configure-api-key-alternative]] +[float] +=== Alternate API key creation methods + +API keys can also be created and validated outside of {kib}: + +* <> +* <> + +[[create-api-key-workflow-apm-server]] +[float] +==== APM Server API key workflow + +This API creation method only works with the APM Server binary. + +deprecated::[8.6.0, Users should create API Keys through {kib} or the {es} REST API] + +APM Server provides a command line interface for creating, retrieving, invalidating, and verifying API keys. +Keys created using this method can only be used for communication with APM Server. + +[[create-api-key-subcommands]] +[float] +===== `apikey` subcommands + +include::{libbeat-dir}/command-reference.asciidoc[tag=apikey-subcommands] + +[[create-api-key-privileges]] +[float] +===== Privileges + +If privileges are not specified at creation time, the created key will have all privileges. + +* `--agent-config` grants the `config_agent:read` privilege +* `--ingest` grants the `event:write` privilege +* `--sourcemap` grants the `sourcemap:write` privilege + +[[create-api-key-workflow]] +[float] +===== Create an API key + +Create an API key with the `create` subcommand. + +The following example creates an API key with a `name` of `java-001`, +and gives the "agent configuration" and "ingest" privileges. 
+ +["source","sh",subs="attributes"] +----- +{beatname_lc} apikey create --ingest --agent-config --name java-001 +----- + +The response will look similar to this: + +[source,console-result] +-------------------------------------------------- +Name ........... java-001 +Expiration ..... never +Id ............. qT4tz28B1g59zC3uAXfW +API Key ........ rH55zKd5QT6wvs3UbbkxOA (won't be shown again) +Credentials .... cVQ0dHoyOEIxZzU5ekMzdUFYZlc6ckg1NXpLZDVRVDZ3dnMzVWJia3hPQQ== (won't be shown again) +-------------------------------------------------- + +You should always verify the privileges of an API key after creating it. +Verification can be done using the `verify` subcommand. + +The following example verifies that the `java-001` API key has the "agent configuration" and "ingest" privileges. + +["source","sh",subs="attributes"] +----- +{beatname_lc} apikey verify --agent-config --ingest --credentials cVQ0dHoyOEIxZzU5ekMzdUFYZlc6ckg1NXpLZDVRVDZ3dnMzVWJia3hPQQ== +----- + +If the API key has the requested privileges, the response will look similar to this: + +[source,console-result] +-------------------------------------------------- +Authorized for privilege "event:write"...: Yes +Authorized for privilege "config_agent:read"...: Yes +-------------------------------------------------- + +To invalidate an API key, use the `invalidate` subcommand. +Due to {es} caching, there may be a delay between when this subcommand is executed and when it takes effect. + +The following example invalidates the `java-001` API key. + +["source","sh",subs="attributes"] +----- +{beatname_lc} apikey invalidate --name java-001 +----- + +The response will look similar to this: + +[source,console-result] +-------------------------------------------------- +Invalidated keys ... qT4tz28B1g59zC3uAXfW +Error count ........ 0 +-------------------------------------------------- + +A full list of `apikey` subcommands and flags is available in the <>. 
+ +[[create-api-key-workflow-es]] +[float] +==== {es} API key workflow + +It is also possible to create API keys using the {es} +{ref}/security-api-create-api-key.html[create API key API]. + +This example creates an API key named `java-002`: + +[source,kibana] +---- +POST /_security/api_key +{ + "name": "java-002", <1> + "expiration": "1d", <2> + "role_descriptors": { + "apm": { + "applications": [ + { + "application": "apm", + "privileges": ["sourcemap:write", "event:write", "config_agent:read"], <3> + "resources": ["*"] + } + ] + } + } +} +---- +<1> The name of the API key +<2> The expiration time of the API key +<3> Any assigned privileges + +The response will look similar to this: + +[source,console-result] +---- +{ + "id" : "GnrUT3QB7yZbSNxKET6d", + "name" : "java-002", + "expiration" : 1599153532262, + "api_key" : "RhHKisTmQ1aPCHC_TPwOvw" +} +---- + +The `credential` string, which is what agents use to communicate with APM Server, +is a base64 encoded representation of the API key's `id:api_key`. 
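To make the `id:api_key` relationship concrete, here is a small Go sketch — illustrative only, not part of any Elastic library — that builds the credentials string from the example `java-002` response above:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// apiKeyCredentials base64-encodes "<id>:<api_key>", which is the
// credentials format agents send in the Authorization header.
func apiKeyCredentials(id, apiKey string) string {
	return base64.StdEncoding.EncodeToString([]byte(id + ":" + apiKey))
}

func main() {
	// id and api_key taken from the example create API key response above.
	cred := apiKeyCredentials("GnrUT3QB7yZbSNxKET6d", "RhHKisTmQ1aPCHC_TPwOvw")
	fmt.Println("Authorization: ApiKey " + cred)
}
```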
+It can be created like this: + +[source,sh] +-------------------------------------------------- +echo -n GnrUT3QB7yZbSNxKET6d:RhHKisTmQ1aPCHC_TPwOvw | base64 +-------------------------------------------------- + +You can verify your API key has been base64-encoded correctly with the +{ref}/security-api-authenticate.html[Authenticate API]: + +["source","sh",subs="attributes"] +----- +curl -H "Authorization: ApiKey R0gzRWIzUUI3eVpiU054S3pYSy06bXQyQWl4TlZUeEcyUjd4cUZDS0NlUQ==" localhost:9200/_security/_authenticate +----- + +If the API key has been encoded correctly, you'll see a response similar to the following: + +[source,console-result] +---- +{ + "username":"1325298603", + "roles":[], + "full_name":null, + "email":null, + "metadata":{ + "saml_nameid_format":"urn:oasis:names:tc:SAML:2.0:nameid-format:transient", + "saml(http://saml.elastic-cloud.com/attributes/principal)":[ + "1325298603" + ], + "saml_roles":[ + "superuser" + ], + "saml_principal":[ + "1325298603" + ], + "saml_nameid":"_7b0ab93bbdbc21d825edf7dca9879bd8d44c0be2", + "saml(http://saml.elastic-cloud.com/attributes/roles)":[ + "superuser" + ] + }, + "enabled":true, + "authentication_realm":{ + "name":"_es_api_key", + "type":"_es_api_key" + }, + "lookup_realm":{ + "name":"_es_api_key", + "type":"_es_api_key" + } +} +---- + +You can then use the APM Server CLI to verify that the API key has the requested privileges: + +["source","sh",subs="attributes"] +----- +{beatname_lc} apikey verify --credentials R25yVVQzUUI3eVpiU054S0VUNmQ6UmhIS2lzVG1RMWFQQ0hDX1RQd092dw== +----- + +If the API key has the requested privileges, the response will look similar to this: + +[source,console-result] +---- +Authorized for privilege "config_agent:read"...: Yes +Authorized for privilege "event:write"...: Yes +Authorized for privilege "sourcemap:write"...: Yes +---- diff --git a/docs/apm-components.asciidoc b/docs/apm-components.asciidoc deleted file mode 100644 index 42ca088b7a5..00000000000 ---
a/docs/apm-components.asciidoc +++ /dev/null @@ -1,93 +0,0 @@ -[[apm-components]] -== Components and documentation - -**** -There are two ways to install, run, and manage Elastic APM: - -* With the Elastic APM integration -* With the standalone (legacy) APM Server binary - -This documentation focuses on option one: the **Elastic APM integration**. -For standalone APM Server (legacy) documentation, please see the <> -and <>. -**** - -Elastic APM consists of four components: *APM agents*, the *Elastic APM integration*, *{es}*, and *{kib}*. -Generally, there are two ways that these four components can work together: - -APM agents on edge machines send data to a centrally hosted APM integration: - -[subs=attributes+] -include::./diagrams/apm-architecture-central.asciidoc[Elastic APM architecture with edge APM integrations] - -Or, APM agents and the APM integration live on edge machines and enroll via a centrally hosted {agent}: - -[subs=attributes+] -include::./diagrams/apm-architecture-edge.asciidoc[Elastic APM architecture with central APM integration] - -In addition, Elastic supports OpenTelemetry: - -[subs=attributes+] -include::./diagrams/apm-otel-architecture.asciidoc[Architecture of Elastic APM with OpenTelemetry] - -// Not sure which to choose? See the [blog post] - -[float] -=== APM Agents - -APM agents are open source libraries written in the same language as your service. -You may only need one, or you might use all of them. -You install them into your service as you would install any other library. -They instrument your code and collect performance data and errors at runtime. -This data is buffered for a short period and sent on to APM Server. 
- -Each agent has its own documentation: - -* {apm-go-ref-v}/introduction.html[Go agent] -* {apm-ios-ref-v}/intro.html[iOS agent] -* {apm-java-ref-v}/intro.html[Java agent] -* {apm-dotnet-ref-v}/intro.html[.NET agent] -* {apm-node-ref-v}/intro.html[Node.js agent] -* {apm-php-ref-v}/intro.html[PHP agent] -* {apm-py-ref-v}/getting-started.html[Python agent] -* {apm-ruby-ref-v}/introduction.html[Ruby agent] -* {apm-rum-ref-v}/intro.html[JavaScript Real User Monitoring (RUM) agent] - -[float] -[[apm-integration]] -=== Elastic APM integration - -The APM integration receives performance data from your APM agents, -validates and processes it, and then transforms the data into {es} documents. -Removing this logic from APM agents help keeps them light, prevents certain security risks, -and improves compatibility across the {stack}. - -The Elastic integration runs on {fleet-guide}[{agent}]. {agent} is a single, unified way to add monitoring for logs, -metrics, traces, and other types of data to each host. -A single agent makes it easier and faster to deploy monitoring across your infrastructure. -The agent's single, unified policy makes it easier to add integrations for new data sources. - -[float] -=== {es} - -{ref}/index.html[{es}] is a highly scalable free and open full-text search and analytics engine. -It allows you to store, search, and analyze large volumes of data quickly and in near real time. -{es} is used to store APM performance metrics and make use of its aggregations. - -[float] -=== {kib} {apm-app} - -{kibana-ref}/index.html[{kib}] is a free and open analytics and visualization platform designed to work with {es}. -You use {kib} to search, view, and interact with data stored in {es}. - -Since application performance monitoring is all about visualizing data and detecting bottlenecks, -it's crucial you understand how to use the {kibana-ref}/xpack-apm.html[{apm-app}] in {kib}. 
-The following sections will help you get started: - -* {apm-app-ref}/apm-ui.html[Set up] -* {apm-app-ref}/apm-getting-started.html[Get started] -* {apm-app-ref}/apm-how-to.html[How-to guides] - -APM also has built-in integrations with {ml-cap}. To learn more about this feature, -or the {anomaly-detect} feature that's built on top of it, -refer to {kibana-ref}/machine-learning-integration.html[{ml-cap} integration]. diff --git a/docs/apm-data-security.asciidoc b/docs/apm-data-security.asciidoc index 545d512c09c..c7d02efafd2 100644 --- a/docs/apm-data-security.asciidoc +++ b/docs/apm-data-security.asciidoc @@ -218,7 +218,7 @@ Pipelines are a flexible and easy way to filter or obfuscate Elastic APM data. [[filters-ingest-pipeline-tutorial]] ===== Tutorial: redact sensitive information -Say you decide to <> +Say you decide to <> but quickly notice that sensitive information is being collected in the `http.request.body.original` field: diff --git a/docs/apm-input-settings.asciidoc b/docs/apm-input-settings.asciidoc deleted file mode 100644 index 8c9f6537db8..00000000000 --- a/docs/apm-input-settings.asciidoc +++ /dev/null @@ -1,552 +0,0 @@ -// tag::NAME-setting[] -| -[id="input-{input-type}-NAME-setting"] -`NAME` - -| (TYPE) DESCRIPTION. - -*Default:* `DEFAULT` - -OPTIONAL INFO AND EXAMPLE -// end::NAME-setting[] - -// ============================================================================= - -// These settings are shared across the docs for multiple inputs. Copy and use -// the above template to add a shared setting. Replace values in all caps. -// Use an include statement // to pull the tagged region into your source file: -// include::input-shared-settings.asciidoc[tag=NAME-setting] - -// tag::host-setting[] -| -[id="input-{input-type}-host-setting"] -Host - -| (text) Defines the host and port the server is listening on. - -Use `"unix:/path/to.sock"` to listen on a Unix domain socket. 
- -*Default:* `127.0.0.1:8200` -// end::host-setting[] - -// ============================================================================= - -// tag::url-setting[] -| -[id="input-{input-type}-url-setting"] -URL - -| The publicly reachable server URL. For deployments on {ecloud} or ECK, the default is unchangeable. -// end::url-setting[] - -// ============================================================================= - -// tag::max_header_bytes-setting[] -| -[id="input-{input-type}-max_header_bytes-setting"] -Maximum size of a request's header - -| (int) Maximum permitted size of a request's header accepted by the server to be processed (in Bytes). - -*Default:* `1048576` Bytes -// end::max_header_bytes-setting[] - -// ============================================================================= - -// tag::idle_timeout-setting[] -| -[id="input-{input-type}-idle_timeout-setting"] -Idle time before underlying connection is closed - -| (text) Maximum amount of time to wait for the next incoming request before underlying connection is closed. - -*Default:* `45s` (45 seconds) -// end::idle_timeout-setting[] - -// ============================================================================= - -// tag::read_timeout-setting[] -| -[id="input-{input-type}-read_timeout-setting"] -Maximum duration for reading an entire request - -| (text) Maximum permitted duration for reading an entire request. - -*Default:* `3600s` (3600 seconds) -// end::read_timeout-setting[] - -// ============================================================================= - -// tag::shutdown_timeout-setting[] -| -[id="input-{input-type}-shutdown_timeout-setting"] -Maximum duration before releasing resources when shutting down - -| (text) Maximum duration in seconds before releasing resources when shutting down the server. 
- -*Default:* `30s` (30 seconds) -// end::shutdown_timeout-setting[] - -// ============================================================================= - -// tag::write_timeout-setting[] -| -[id="input-{input-type}-write_timeout-setting"] -Maximum duration for writing a response - -| (text) Maximum permitted duration for writing a response. - -*Default:* `30s` (30 seconds) -// end::write_timeout-setting[] - -// ============================================================================= - -// tag::max_event_bytes-setting[] -| -[id="input-{input-type}-max_event_bytes-setting"] -Maximum size per event - -| (int) Maximum permitted size of an event accepted by the server to be processed (in Bytes). - -*Default:* `307200` Bytes -// end::max_event_bytes-setting[] - -// ============================================================================= - -// tag::max_connections-setting[] -| -[id="input-{input-type}-max_connections-setting"] -Simultaneously accepted connections - -| (int) Maximum number of TCP connections to accept simultaneously. `0` means unlimited. - -*Default:* `0` (unlimited) -// end::max_connections-setting[] - -// ============================================================================= - -// tag::response_headers-setting[] -| -[id="input-{input-type}-response_headers-setting"] -Custom HTTP response headers - -| (text) Custom HTTP headers to add to HTTP responses. Useful for security policy compliance. - -// end::response_headers-setting[] - -// ============================================================================= - -// tag::capture_personal_data-setting[] -| -[id="input-{input-type}-capture_personal_data-setting"] -Capture personal data - -| (bool) Capture personal data such as IP or User Agent. -If true, APM Server captures the IP of the instrumented service and its User Agent if any. 
- -*Default:* `true` -// end::capture_personal_data-setting[] - -// ============================================================================= - -// tag::default_service_environment-setting[] -| -[id="input-{input-type}-default_service_environment-setting"] -Default Service Environment - -| (text) The default service environment for events without a defined service environment. - -*Default:* none - -// end::default_service_environment-setting[] - -// ============================================================================= - -// tag::golang_xpvar-setting[] -| -[id="input-{input-type}-golang_xpvar-setting"] -Enable APM Server Golang expvar support - -| (bool) When set to `true`, the server exposes https://golang.org/pkg/expvar/[Golang expvar] under `/debug/vars`. - -*Default:* `false` - -// end::golang_xpvar-setting[] - -// ============================================================================= - -// tag::enable_rum-setting[] -| -[id="input-{input-type}-enable_rum-setting"] -Enable RUM - -| (bool) Enables and disables Real User Monitoring (RUM). - -*Default:* `false` (disabled) -// end::enable_rum-setting[] - -// ============================================================================= - -// tag::rum_allow_origins-setting[] -| -[id="input-{input-type}-rum_allow_origins-setting"] -Allowed Origins - -| (text) A list of permitted origins for RUM support. -User-agents send an Origin header that will be validated against this list. -This is done automatically by modern browsers as part of the https://www.w3.org/TR/cors/[CORS specification]. -An origin is made of a protocol scheme, host and port, without the URL path. 
- -*Default:* `["*"]` (allows everything) -// end::rum_allow_origins-setting[] - -// ============================================================================= - -// tag::rum_allow_headers-setting[] -| -[id="input-{input-type}-rum_allow_headers-setting"] -Access-Control-Allow-Headers - -| (text) By default, HTTP requests made from the RUM agent to the APM integration are limited in the HTTP headers they are allowed to have. -If any other headers are added, the request will be rejected by the browser due to Cross-Origin Resource Sharing (CORS) restrictions. -If you need to add extra headers to these requests, use this configuration to allow additional headers. - -The default list of values includes `"Content-Type"`, `"Content-Encoding"`, and `"Accept"`. -Configured values are appended to the default list and used as the value for the -`Access-Control-Allow-Headers` header. -// end::rum_allow_headers-setting[] - -// ============================================================================= - -// tag::rum_response_headers-setting[] -| -[id="input-{input-type}-rum_response_headers-setting"] -Custom HTTP response headers - -| (text) Custom HTTP headers to add to RUM responses. For example, for security policy compliance. Headers set here are in addition to those set in the "Custom HTTP response headers", but only apply to RUM responses. - -*Default:* none -// end::rum_response_headers-setting[] - -// ============================================================================= - -// tag::rum_library_frame_pattern-setting[] -| -[id="input-{input-type}-rum_library_frame_pattern-setting"] -Library Frame Pattern - -| (text) RegExp to be matched against a stack trace frame's `file_name` and `abs_path` attributes. -If the RegExp matches, the stack trace frame is considered to be a library frame. -When source mapping is applied, the `error.culprit` is set to reflect the _function_ and the _filename_ -of the first non-library frame. 
-This aims to provide an entry point for identifying issues. - -*Default:* `"node_modules\|bower_components\|~"` -// end::rum_library_frame_pattern-setting[] - -// ============================================================================= - -// tag::rum_exclude_from_grouping-setting[] -| -[id="input-{input-type}-rum_exclude_from_grouping-setting"] -Exclude from grouping - -| (text) RegExp to be matched against a stack trace frame's `file_name`. -If the RegExp matches, the stack trace frame is excluded from being used for calculating error groups. - -*Default:* `"^/webpack"` (excludes stack trace frames that have a filename starting with `/webpack`) -// end::rum_exclude_from_grouping-setting[] - -// ============================================================================= - -// tag::tls_enabled-setting[] -| -[id="input-{input-type}-tls_enabled-setting"] -Enable TLS - -| (bool) Enable TLS. - -*Default:* `false` -// end::tls_enabled-setting[] - -// ============================================================================= - -// tag::tls_certificate-setting[] -| -[id="input-{input-type}-tls_certificate-setting"] -File path to server certificate - -| (text) The path to the file containing the certificate for server authentication. Required when TLS is enabled. - -*Default:* none -// end::tls_certificate-setting[] - -// ============================================================================= - -// tag::tls_key-setting[] -| -[id="input-{input-type}-tls_key-setting"] -File path to server certificate key - -| (text) The path to the file containing the Server certificate key. Required when TLS is enabled. - -*Default:* none -// end::tls_key-setting[] - -// ============================================================================= - -// tag::tls_supported_protocols-setting[] -| -[id="input-{input-type}-tls_supported_protocols-setting"] -Supported protocol versions - -| (array) A list of allowed TLS protocol versions. 
- -*Default:* `["TLSv1.1", "TLSv1.2", "TLSv1.3"]` -// end::tls_supported_protocols-setting[] - -// ============================================================================= - -// tag::tls_cipher_suites-setting[] -| -[id="input-{input-type}-tls_cipher_suites-setting"] -Cipher suites for TLS connections - -| (text) The list of cipher suites to use. The first entry has the highest priority. -If this option is omitted, the Go crypto library’s https://golang.org/pkg/crypto/tls/[default suites] are used (recommended). -Note that TLS 1.3 cipher suites are not individually configurable in Go, so they are not included in this list. -// end::tls_cipher_suites-setting[] - -// ============================================================================= - -// tag::tls_curve_types-setting[] -| -[id="input-{input-type}-tls_curve_types-setting"] -Curve types for ECDHE based cipher suites - -| (text) The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange). - -*Default:* none -// end::tls_curve_types-setting[] - -// ============================================================================= - -// tag::api_key_enabled-setting[] -| -[id="input-{input-type}-api_key_enabled-setting"] -API key for agent authentication - -| (bool) Enable or disable API key authorization between APM Server and APM agents. - -*Default:* `false` (disabled) -// end::api_key_enabled-setting[] - -// ============================================================================= - -// tag::api_key_limit-setting[] -| -[id="input-{input-type}-api_key_limit-setting"] -Number of keys - -| (int) Each unique API key triggers one request to {es}. -This setting restricts the number of unique API keys are allowed per minute. -The minimum value for this setting should be the number of API keys configured in your monitored services. 
- -*Default:* `100` -// end::api_key_limit-setting[] - -// ============================================================================= - -// tag::secret_token-setting[] -| -[id="input-{input-type}-secret_token-setting"] -Secret token - -| (text) Authorization token for sending APM data. -The same token must also be set in each {apm-agent}. -This token is not used for RUM endpoints. - -*Default:* No secret token set -// end::secret_token-setting[] - -// ============================================================================= - -// tag::anonymous_enabled-setting[] -| -[id="input-{input-type}-anonymous_enabled-setting"] -Anonymous Agent access - -| (bool) Enable or disable anonymous authentication. RUM agents do not support authentication, so disabling anonymous access will effectively disable RUM agents. - -*Default:* `true` (enabled) -// end::anonymous_enabled-setting[] - -// ============================================================================= - -// tag::anonymous_allow_agent-setting[] -| -[id="input-{input-type}-anonymous_allow_agent-setting"] -Allowed Anonymous agents - -| (array) A list of permitted {apm-agent} names for anonymous authentication. -Names in this list must match the agent's `agent.name`. - -*Default:* `[rum-js, js-base, iOS/swift]` (only RUM and iOS/Swift agent events are accepted) -// end::anonymous_allow_agent-setting[] - -// ============================================================================= - -// tag::anonymous_allow_service-setting[] -| -[id="input-{input-type}-anonymous_allow_service-setting"] -Allowed Anonymous services - -| (array) A list of permitted service names for anonymous authentication. -Names in this list must match the agent's `service.name`. -This can be used to limit the number of service-specific indices or data streams created. 
- -*Default:* Not set (any service name is accepted) -// end::anonymous_allow_service-setting[] - -// ============================================================================= - -// tag::anonymous_rate_limit_ip_limit-setting[] -| -[id="input-{input-type}-anonymous_rate_limit_ip_limit-setting"] -Anonymous Rate limit (IP limit) - -| (int) The number of unique IP addresses to track in a least recently used (LRU) cache. -IP addresses in the cache will be rate limited according to the `anonymous_rate_limit_event_limit` setting. -Consider increasing this default if your application has many concurrent clients. - -*Default:* `10000` -// end::anonymous_rate_limit_ip_limit-setting[] - -// ============================================================================= - -// tag::anonymous_rate_limit_event_limit-setting[] -| -[id="input-{input-type}-anonymous_rate_limit_event_limit-setting"] -Anonymous Event rate limit (event limit) - -| (int) The maximum amount of events allowed to be sent to the APM Server anonymous auth endpoint per IP per second. - -*Default:* `10` -// end::anonymous_rate_limit_event_limit-setting[] - -// ============================================================================= - -// tag::tail_sampling_enabled-setting[] -| -[id="input-{input-type}-tail_sampling_enabled"] -Enable Tail-based sampling - -| (bool) Enable and disable tail-based sampling. - -*Default:* `false` -// end::tail_sampling_enabled-setting[] - -// ============================================================================= - -// tag::tail_sampling_interval-setting[] -| -[id="input-{input-type}-tail_sampling_interval"] -Interval - -| (duration) Synchronization interval for multiple APM Servers. -Should be in the order of tens of seconds or low minutes. 
- -*Default:* `1m` -// end::tail_sampling_interval-setting[] - -// ============================================================================= - -// tag::tail_sampling_policies-setting[] -| -[id="input-{input-type}-tail_sampling_policies"] -Policies - -| (`[]policy`) Criteria used to match a root transaction to a sample rate. -Order is important; the first policy on the list that an event matches is the winner. -Each policy list must conclude with a default policy that only specifies a sample rate. -The default policy is used to catch remaining trace events that don’t match a stricter policy. - -Required when tail-based sampling is enabled. - -// end::tail_sampling_policies-setting[] - -// ============================================================================= - -// tag::sample_rate-setting[] -| -[id="input-{input-type}-sample_rate"] -Sample rate - -`sample_rate` - -| (int) The sample rate to apply to trace events matching this policy. -Required in each policy. - -The sample rate must be greater than `0` and less than or equal to `1`. -For example, a `sample_rate` of `0.01` means that 1% of trace events matching the policy will be sampled. -A `sample_rate` of `1` means that 100% of trace events matching the policy will be sampled. - -// end::sample_rate-setting[] - -// ============================================================================= - -// tag::trace_name-setting[] -| -[id="input-{input-type}-trace_name"] -Trace name - -`trace.name` - -| (string) The trace name for events to match a policy. -A match occurs when the configured `trace.name` matches the `transaction.name` of the root transaction of a trace. -A root transaction is any transaction without a `parent.id`. 
- -// end::trace_name-setting[] - -// ============================================================================= - -// tag::trace_outcome-setting[] -| -[id="input-{input-type}-trace_outcome"] -Trace outcome - -`trace.outcome` - -| (string) The trace outcome for events to match a policy. -A match occurs when the configured `trace.outcome` matches a trace's `event.outcome` field. -Trace outcome can be `success`, `failure`, or `unknown`. - -// end::trace_outcome-setting[] - -// ============================================================================= - -// tag::service_name-setting[] -| -[id="input-{input-type}-service_name"] -Service name - -`service.name` - -| (string) The service name for events to match a policy. - -// end::service_name-setting[] - -// ============================================================================= - -// tag::service_env-setting[] -| -[id="input-{input-type}-service_env"] -Service Environment - -`service.environment` - -| (string) The service environment for events to match a policy. - -// end::service_env-setting[] - -// ============================================================================= diff --git a/docs/apm-overview.asciidoc b/docs/apm-overview.asciidoc index 299726c529c..3ffa3ebb380 100644 --- a/docs/apm-overview.asciidoc +++ b/docs/apm-overview.asciidoc @@ -22,6 +22,5 @@ like JVM metrics in the Java Agent, and Go runtime metrics in the Go Agent. [float] === Give Elastic APM a try -Learn more about the <> that make up Elastic APM -// , -// or jump right into the <>. +Use the <> to quickly spin up an APM deployment. +Want to host everything yourself instead? See <>. 
\ No newline at end of file diff --git a/docs/apm-quick-start.asciidoc b/docs/apm-quick-start.asciidoc index 8b6c1e20e4f..62cf252e561 100644 --- a/docs/apm-quick-start.asciidoc +++ b/docs/apm-quick-start.asciidoc @@ -1,9 +1,23 @@ [[apm-quick-start]] -== Quick start +== Quick start with {ecloud} -// * Point to EA APT/YUM -// * Point to EA for running on Docker -// * Point to EA for directory layout -// * Point to EA for systemd +The easiest way to get started with Elastic APM is by using our +{ess-product}[hosted {es} Service] on {ecloud}. +The {es} Service is available on AWS, GCP, and Azure. +The {es} Service provisions the following components of the {stack}: + +* *{es}* -- A highly scalable free and open full-text search and analytics engine. +* *{kib}* -- An analytics and visualization platform designed to work with {es}. +* *Integrations Server* -- A combined *APM Server* and *Fleet-managed {agent}*. +** *APM Server* -- An application that receives, processes, and validates performance data from your APM agents. +** *Fleet-managed {agent}* -- A server that runs Fleet Server and provides a control plane for easily configuring and updating APM and other integrations. + +Don't worry--in order to get started, +you don't need to understand how all of these pieces work together! +When you use our hosted {es} Service, +simply spin up your instance and point your *APM agents* towards it. + +[float] +== What will I learn in this guide? include::{obs-repo-dir}/observability/ingest-traces.asciidoc[tag=apm-quick-start] diff --git a/docs/apm-response-codes.asciidoc b/docs/apm-response-codes.asciidoc new file mode 100644 index 00000000000..fdbbfc31e0c --- /dev/null +++ b/docs/apm-response-codes.asciidoc @@ -0,0 +1,43 @@ +[[common-response-codes]] +=== APM Server response codes + +[[bad-request]] +[float] +==== HTTP 400: Data decoding error / Data validation error + +The most likely cause for this error is using incompatible versions of {apm-agent} and APM Server.
+See the <> to verify compatibility. + +[[event-too-large]] +[float] +==== HTTP 400: Event too large + +APM agents communicate with the APM Server by sending events in an HTTP request. Each event is sent as its own line in the HTTP request body. If events are too large, you should consider increasing the <> +setting in the APM integration, and adjusting relevant settings in the agent. + +[[unauthorized]] +[float] +==== HTTP 401: Invalid token + +Either the <> in the request header doesn't match the secret token configured in the APM integration, +or the <> is invalid. + +[[forbidden]] +[float] +==== HTTP 403: Forbidden request + +Either you are sending requests to a <> endpoint without RUM enabled, or a request +is coming from an origin not specified in the APM integration settings. +See the <> setting for more information. + +[[request-timed-out]] +[float] +==== HTTP 503: Request timed out waiting to be processed + +This happens when APM Server exceeds the maximum number of requests that it can process concurrently. +To alleviate this problem, you can try to reduce the sample rate and/or the collected stack trace information. +See <> for more information. + +Another option is to increase processing power. +This can be done by either migrating your {agent} to a more powerful machine +or adding more APM Server instances. \ No newline at end of file diff --git a/docs/apm-server-down.asciidoc b/docs/apm-server-down.asciidoc new file mode 100644 index 00000000000..89c89999a7d --- /dev/null +++ b/docs/apm-server-down.asciidoc @@ -0,0 +1,29 @@ +[[server-es-down]] +=== What happens when APM Server or {es} is down? + +*If {es} is down* + +APM Server does not have an internal queue to buffer requests, +but instead leverages an HTTP request timeout to act as back-pressure. +If {es} goes down, the APM Server will eventually deny incoming requests. +Both the APM Server and {apm-agent}(s) will issue logs accordingly.
+ +*If APM Server is down* + +Some agents have internal queues or buffers that will temporarily store data if the APM Server goes down. +As a general rule of thumb, queues fill up quickly. Assume data will be lost if APM Server goes down. +Adjusting these queues/buffers can increase the agent's overhead, so use caution when updating default values. + +* **Go agent** - Circular buffer with configurable size: +{apm-go-ref}/configuration.html#config-api-buffer-size[`ELASTIC_APM_BUFFER_SIZE`]. +// * **iOS agent** - ?? +* **Java agent** - Internal buffer with configurable size: +{apm-java-ref}/config-reporter.html#config-max-queue-size[`max_queue_size`]. +* **Node.js agent** - No internal queue. Data is lost. +* **PHP agent** - No internal queue. Data is lost. +* **Python agent** - Internal {apm-py-ref}/tuning-and-overhead.html#tuning-queue[Transaction queue] +with configurable size and time between flushes. +* **Ruby agent** - Internal queue with configurable size: +{apm-ruby-ref}/configuration.html#config-api-buffer-size[`api_buffer_size`]. +* **RUM agent** - No internal queue. Data is lost. +* **.NET agent** - No internal queue. Data is lost. \ No newline at end of file diff --git a/docs/apm-tune-elasticsearch.asciidoc b/docs/apm-tune-elasticsearch.asciidoc deleted file mode 100644 index 518fbdec244..00000000000 --- a/docs/apm-tune-elasticsearch.asciidoc +++ /dev/null @@ -1,22 +0,0 @@ -[[apm-tune-elasticsearch]] -=== Tune {es} for data ingestion - -++++ -Tune {es} -++++ - -The {es} Reference provides insight on tuning {es}. 
- -{ref}/tune-for-indexing-speed.html[Tune for indexing speed] provides information on: - -* Refresh interval -* Disabling swapping -* Optimizing file system cache -* Considerations regarding faster hardware -* Setting the indexing buffer size - -{ref}/tune-for-disk-usage.html[Tune for disk usage] provides information on: - -* Disabling unneeded features -* Shard size -* Shrink index diff --git a/docs/common-problems.asciidoc b/docs/common-problems.asciidoc index 01616dbda6a..ba0eb7ba8f7 100644 --- a/docs/common-problems.asciidoc +++ b/docs/common-problems.asciidoc @@ -1,15 +1,13 @@ [[common-problems]] === Common problems -This section describes common problems for users running {agent} and the APM integration. -If you're using the standalone (legacy) APM Server binary, see -<> instead. +This section describes common problems you might encounter when using a Fleet-managed APM Server. * <> * <> * <> * <> -* <> +* <> [float] [[no-data-indexed]] @@ -17,83 +15,58 @@ If you're using the standalone (legacy) APM Server binary, see If no data shows up in {es}, first make sure that your APM components are properly connected. -**Is {agent} healthy?** - -In {kib} open **{fleet}** and find the host that is running the APM integration; -confirm that its status is **Healthy**. -If it isn't, check the {agent} logs to diagnose potential causes. -See {fleet-guide}/monitor-elastic-agent.html[Monitor {agent}s] to learn more. - -**Is APM Server happy?** - -In {kib}, open **{fleet}** and select the host that is running the APM integration. -Open the **Logs** tab and select the `elastic_agent.apm_server` dataset. -Look for any APM Server errors that could help diagnose the problem. - -**Can the {apm-agent} connect to APM Server** - -To determine if the {apm-agent} can connect to the APM Server, send requests to the instrumented service and look for lines -containing `[request]` in the APM Server logs. - -If no requests are logged, confirm that: - -. SSL isn't <>. -. The host is correct. 
For example, if you're using Docker, ensure a bind to the right interface (for example, set -`apm-server.host = 0.0.0.0:8200` to match any IP) and set the `SERVER_URL` setting in the {apm-agent} accordingly. - -If you see requests coming through the APM Server but they are not accepted (a response code other than `202`), -see <> to narrow down the possible causes. - -**Instrumentation gaps** - -APM agents provide auto-instrumentation for many popular frameworks and libraries. -If the {apm-agent} is not auto-instrumenting something that you were expecting, data won't be sent to the {stack}. -Reference the relevant {apm-agents-ref}/index.html[{apm-agent} documentation] for details on what is automatically instrumented. +include::./tab-widgets/no-data-indexed-widget.asciidoc[] +[[data-indexed-no-apm-legacy]] [float] -[[common-response-codes]] -=== APM Server response codes +=== Data is indexed but doesn't appear in the APM app -[[bad-request]] -[float] -==== HTTP 400: Data decoding error / Data validation error +The {apm-app} relies on index mappings to query and display data. +If your APM data isn't showing up in the {apm-app}, but is elsewhere in {kib}, like the Discover app, +you may have a missing index mapping. -The most likely cause for this error is using incompatible versions of {apm-agent} and APM Server. -See the <> to verify compatibility. +You can determine if a field was mapped correctly with the `_mapping` API. +For example, run the following command in the {kib} {kibana-ref}/console-kibana.html[console]. +This will display the field data type of the `service.name` field. -[[event-too-large]] -[float] -==== HTTP 400: Event too large - -APM agents communicate with the APM server by sending events in an HTTP request. Each event is sent as its own line in the HTTP request body. If events are too large, you should consider increasing the <> -setting in the APM integration, and adjusting relevant settings in the agent. 
+[source,curl] +---- +GET *apm*/_mapping/field/service.name +---- -[[unauthorized]] -[float] -==== HTTP 401: Invalid token +If the `mapping.name.type` is `"text"`, your APM indices were not set up correctly. -Either the <> in the request header doesn't match the secret token configured in the APM integration, -or the <> is invalid. +[source,yml] +---- +".ds-metrics-apm.transaction.1m-default-2023.04.12-000038": { + "mappings": { + "service.name": { + "full_name": "service.name", + "mapping": { + "name": { + "type": "text" <1> + } + } + } + } +} +---- +<1> The `service.name` `mapping.name.type` would be `"keyword"` if this field had been set up correctly. -[[forbidden]] -[float] -==== HTTP 403: Forbidden request +To fix this problem, install the APM integration by following these steps: -Either you are sending requests to a <> endpoint without RUM enabled, or a request -is coming from an origin not specified in the APM integration settings. -See the <> setting for more information. +-- +include::./legacy/getting-started-apm-server.asciidoc[tag=install-apm-integration] +-- -[[request-timed-out]] -[float] -==== HTTP 503: Request timed out waiting to be processed +This will reinstall the APM index templates and trigger a data stream index rollover. -This happens when APM Server exceeds the maximum number of requests that it can process concurrently. -To alleviate this problem, you can try to: reduce the sample rate and/or reduce the collected stack trace information. -See <> for more information. +You can verify the correct index templates were installed by running the following command in the {kib} console: -Another option is to increase processing power. -This can be done by either migrating your {agent} to a more powerful machine -or adding more APM Server instances. 
+[source,curl] +---- +GET /_index_template/traces-apm +---- [float] [[common-ssl-problems]] @@ -189,33 +162,28 @@ APM agent --> Load Balancer --> APM Server ---- The APM Server timeout can be configured by updating the -<>. +<>. -[[server-es-down]] +[[field-limit-exceeded-legacy]] [float] -=== What happens when APM Server or {es} is down? - -APM Server does not have an internal queue to buffer requests, -but instead leverages an HTTP request timeout to act as back-pressure. -If {es} goes down, the APM Server will eventually deny incoming requests. -Both the APM Server and {apm-agent}(s) will issue logs accordingly. - -If either {es} or the APM Server goes down, -some APM agents have internal queues or buffers that will temporarily store data. -As a general rule of thumb, queues fill up quickly. Assume data will be lost if APM Server or {es} goes down. - -Adjusting {apm-agent} queues/buffers can increase the agent's overhead, so use caution when updating default values. - -* **Go agent** - Circular buffer with configurable size: -{apm-go-ref}/configuration.html#config-api-buffer-size[`ELASTIC_APM_BUFFER_SIZE`]. -// * **iOS agent** - -* **Java agent** - Internal buffer with configurable size: -{apm-java-ref}/config-reporter.html#config-max-queue-size[`max_queue_size`]. -* **Node.js agent** - No internal queue. Data is lost. -* **PHP agent** - No internal queue. Data is lost. -* **Python agent** - Internal {apm-py-ref}/tuning-and-overhead.html#tuning-queue[Transaction queue] -with configurable size and time between flushes. -* **Ruby agent** - Internal queue with configurable size: -{apm-ruby-ref}/configuration.html#config-api-buffer-size[`api_buffer_size`]. -* **RUM agent** - No internal queue. Data is lost. -* **.NET agent** - No internal queue. Data is lost. +=== Field limit exceeded + +When adding too many distinct tag keys on a transaction or span, +you risk creating a link:{ref}/mapping.html#mapping-limit-settings[mapping explosion]. 
+ +For example, you should avoid using user-specified data, +like URL parameters, as tag keys. +Likewise, using the current timestamp or a user ID as a tag key is not a good idea. +However, tag *values* with a high cardinality are not a problem. +Just try to keep the number of distinct tag keys at a minimum. + +The symptom of a mapping explosion is that transactions and spans are not indexed anymore after a certain time. Usually, on the next day, +the spans and transactions will be indexed again because a new index is created each day. +But as soon as the field limit is reached, indexing stops again. + +In the agent logs, you won't see a sign of failures as the APM Server asynchronously sends the data it received from the agents to {es}. However, the APM Server and {es} log a warning like this: + +[source,logs] +---- +{\"type\":\"illegal_argument_exception\",\"reason\":\"Limit of total fields [1000] in [INDEX_NAME] has been exceeded\"} +---- diff --git a/docs/legacy/configuration-agent-config.asciidoc b/docs/configure/agent-config.asciidoc similarity index 78% rename from docs/legacy/configuration-agent-config.asciidoc rename to docs/configure/agent-config.asciidoc index 8b9b7d11cb5..d6e162ac63c 100644 --- a/docs/legacy/configuration-agent-config.asciidoc +++ b/docs/configure/agent-config.asciidoc @@ -1,11 +1,15 @@ [[configure-agent-config]] -== Configure APM agent configuration += Configure APM agent configuration ++++ APM agent configuration ++++ -IMPORTANT: {deprecation-notice-config} +**** +image:./binary-yes-fm-yes.svg[supported deployment methods] + +APM agent configuration is supported by all APM Server deployment methods. +**** APM agent configuration allows you to fine-tune your APM agents from within the APM app. Changes are automatically propagated to your APM agents, so there's no need to redeploy your applications.
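For APM Server binary users, the `apm-server.agent.config` options covered below can be combined into a single `apm-server.yml` fragment. This is a minimal sketch; the API key value is a placeholder, and `30s` is the documented default cache expiration:

[source,yaml]
----
apm-server:
  agent.config:
    # How long agent configuration fetched from {es} is cached in memory.
    cache.expiration: 30s
    # Placeholder credentials; only needed if output.elasticsearch is unset
    # or lacks sufficient privileges.
    elasticsearch.api_key: "id:api_key_value"
----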
@@ -21,23 +25,26 @@ apm-server.agent.config.elasticsearch.api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSo ---- [float] -=== APM agent configuration options += APM agent configuration options -You can specify the following options in the `apm-server.agent.config` section of the +The following options are only supported for APM Server binary users. +You can specify these options in the `apm-server.agent.config` section of the +{beatname_lc}.yml+ config file: [float] [[agent-config-cache]] -==== `apm-server.agent.config.cache.expiration` +== `apm-server.agent.config.cache.expiration` When using APM agent configuration, information fetched from {es} will be cached in memory for some time. Specify the cache expiration time via this setting. Defaults to `30s` (30 seconds). [float] -[[agent-config-authentication]] -=== Authentication credentials +[[agent-config-elasticsearch]] +== `apm-server.agent.config.elasticsearch` + +Takes the same options as <>. -For APM Server legacy users and Elastic Agent standalone-managed APM Server, +For APM Server binary users and Elastic Agent standalone-managed APM Server, APM agent configuration is automatically fetched from {es} using the `output.elasticsearch` configuration. If `output.elasticsearch` isn't set or doesn't have sufficient privileges, use these authentication configuration variables to provide {es} access. @@ -58,7 +65,7 @@ The basic authentication password for connecting to {es}. Authentication with an API key.
Formatted as `id:api_key` [float] -=== Common problems +== Common problems You may see either of the following HTTP 403 errors from APM Server when it attempts to fetch APM agent configuration: diff --git a/docs/legacy/configuration-anonymous.asciidoc b/docs/configure/anonymous-auth.asciidoc similarity index 70% rename from docs/legacy/configuration-anonymous.asciidoc rename to docs/configure/anonymous-auth.asciidoc index b6023709540..ec67a8571f3 100644 --- a/docs/legacy/configuration-anonymous.asciidoc +++ b/docs/configure/anonymous-auth.asciidoc @@ -1,12 +1,15 @@ [[configuration-anonymous]] -== Anonymous auth configuration options += Configure anonymous authentication ++++ Anonymous authentication ++++ -IMPORTANT: {deprecation-notice-config} -If you're using {fleet} and the Elastic APM integration, please see <> instead. +**** +image:./binary-yes-fm-yes.svg[supported deployment methods] + +Most options on this page are supported by all APM Server deployment methods. +**** Elastic APM agents can send unauthenticated (anonymous) events to the APM Server. An event is considered to be anonymous if no authentication token can be extracted from the incoming request. @@ -16,34 +19,27 @@ agent running in a browser, or the iOS/Swift agent running in a user application Enable anonymous authentication in the APM Server to allow the ingestion of unauthenticated client-side APM data while still requiring authentication for server-side services. -Example configuration: +include::./tab-widgets/anon-auth-widget.asciidoc[] + -["source","yaml"] ----- -apm-server.auth.anonymous.enabled: true -apm-server.auth.anonymous.allow_agent: [rum-js] -apm-server.auth.anonymous.allow_service: [my_service_name] -apm-server.auth.anonymous.rate_limit.event_limit: 300 -apm-server.auth.anonymous.rate_limit.ip_limit: 1000 ----- IMPORTANT: All anonymous access configuration is ignored if -<> is disabled. +<> is disabled. 
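For APM Server binary users, these options are set under the `apm-server.auth.anonymous` namespace in `apm-server.yml`. A sketch with illustrative values:

[source,yaml]
----
apm-server.auth.anonymous.enabled: true
apm-server.auth.anonymous.allow_agent: [rum-js]
apm-server.auth.anonymous.allow_service: [my_service_name]
apm-server.auth.anonymous.rate_limit.event_limit: 300
apm-server.auth.anonymous.rate_limit.ip_limit: 1000
----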
[float] [[config-auth-anon-rum]] -=== Real User Monitoring (RUM) += Real User Monitoring (RUM) -If an <> or <> is configured, +If an <> or <> is configured, then anonymous authentication must be enabled to collect RUM data. -For this reason, anonymous auth will be enabled automatically if <> -is set to `true`, and <> is not explicitly defined. +For this reason, anonymous auth will be enabled automatically if <> +is set to `true`, and <> is not explicitly defined. See <> for additional RUM configuration options. [float] [[config-auth-anon-mitigating]] -=== Mitigating malicious requests +== Mitigating malicious requests There are a few configuration variables that can mitigate the impact of malicious requests to an unauthenticated APM Server endpoint. @@ -57,7 +53,7 @@ This allows you to specify the maximum number of requests allowed per unique IP [float] [[config-auth-anon-client-ip]] -==== Deriving an incoming request's `client.ip` address +== Deriving an incoming request's `client.ip` address The remote IP address of an incoming request might be different from the end-user's actual IP address, for example, because of a proxy. For this reason, @@ -72,7 +68,7 @@ If none of these headers are present, the remote address for the incoming reques [float] [[config-auth-anon-client-ip-concerns]] -==== Using a reverse proxy or load balancer +== Using a reverse proxy or load balancer HTTP headers are easily modified; it's possible for anyone to spoof the derived `client.ip` value by changing or setting, @@ -87,47 +83,65 @@ APM Server's rate limiting feature. [float] [[config-auth-anon]] -=== Configuration reference - -Specify the following options in the `apm-server.auth.anonymous` section of the `apm-server.yml` config file: += Configuration reference [float] [[config-auth-anon-enabled]] -==== `enabled` +== Anonymous Agent access Enable or disable anonymous authentication. +Default: `false` (disabled). 
(bool) -Default: `false` (disabled) +|==== +| APM Server binary | `apm-server.auth.anonymous.enabled` +| Fleet-managed | `Anonymous Agent access` +|==== [float] [[config-auth-anon-allow-agent]] -==== `allow_agent` +== Allowed anonymous agents A list of permitted {apm-agent} names for anonymous authentication. Names in this list must match the agent's `agent.name`. +Default: `[rum-js, js-base]` (only RUM agent events are accepted). (array) -Default: `[rum-js, js-base]` (only RUM agent events are accepted) +|==== +| APM Server binary | `apm-server.auth.anonymous.allow_agent` +| Fleet-managed | `Allowed Anonymous agents` +|==== [float] [[config-auth-anon-allow-service]] -==== `allow_service` +== Allowed services A list of permitted service names for anonymous authentication. Names in this list must match the agent's `service.name`. This can be used to limit the number of service-specific indices or data streams created. +Default: Not set (any service name is accepted). (array) -Default: Not set (any service name is accepted) +|==== +| APM Server binary | `apm-server.auth.anonymous.allow_service` +| Fleet-managed | `Allowed Anonymous services` +|==== [float] [[config-auth-anon-ip-limit]] -==== `rate_limit.ip_limit` +== IP limit The number of unique IP addresses to track in an LRU cache. IP addresses in the cache will be rate limited according to the <> setting. Consider increasing this default if your application has many concurrent clients. +Default: `1000`. (int) -Default: `1000` +|==== +| APM Server binary | `apm-server.auth.anonymous.rate_limit.ip_limit` +| Fleet-managed | `Anonymous Rate limit (IP limit)` +|==== [float] [[config-auth-anon-event-limit]] -==== `rate_limit.event_limit` +== Event limit The maximum number of events allowed per second, per agent IP address. +Default: `300`. 
+ (int) -Default: `300` +|==== +| APM Server binary | `apm-server.auth.anonymous.rate_limit.event_limit` +| Fleet-managed | `Anonymous Event rate limit (event limit)` +|==== diff --git a/docs/configure/auth.asciidoc b/docs/configure/auth.asciidoc new file mode 100644 index 00000000000..490c0e73108 --- /dev/null +++ b/docs/configure/auth.asciidoc @@ -0,0 +1,172 @@ +[[apm-agent-auth]] += APM agent authorization + +**** +image:./binary-yes-fm-yes.svg[supported deployment methods] + +Most options in this section are supported by all APM Server deployment methods. +**** + +Agent authorization APM Server configuration options. + +include::./tab-widgets/auth-config-widget.asciidoc[] + +[float] +[[api-key-auth-settings]] += API key authentication options + +These settings apply to API key communication between the APM Server and APM agents. + +NOTE: These settings are different from the API key settings used for {es} output and monitoring. + +[float] +== API key for agent authentication + +Enable API key authorization by setting `enabled` to `true`. +By default, `enabled` is set to `false`, and API key support is disabled. (bool) + +|==== +| APM Server binary | `auth.api_key.enabled` +| Fleet-managed | `API key for agent authentication` +|==== + +TIP: Not using Elastic APM agents? +When enabled, third-party APM agents must include a valid API key in the following format: +`Authorization: ApiKey `. The key must be the base64 encoded representation of the API key's `id:name`. + +[float] +== API key limit + +Each unique API key triggers one request to {es}. +This setting restricts the number of unique API keys that are allowed per minute. +The minimum value for this setting should be the number of API keys configured in your monitored services. +The default `limit` is `100`. (int) + +|==== +| APM Server binary | `auth.api_key.limit` +| Fleet-managed | `Number of keys` +|==== + +[float] +== Secret token + +Authorization token for sending APM data.
+The same token must also be set in each {apm-agent}. +This token is not used for RUM endpoints. (text) + +|==== +| APM Server binary | `auth.secret_token` +| Fleet-managed | `Secret token` +|==== + +[float] += `auth.api_key.elasticsearch.*` configuration options + +**** +image:./binary-yes-fm-no.svg[supported deployment methods] + +The below options are only supported by the APM Server binary. + +All of the `auth.api_key.elasticsearch.*` configurations are optional. +If none are set, configuration settings from the `apm-server.output` section will be reused. +**** + +[float] +== `elasticsearch.hosts` + +API keys are fetched from {es}. +This configuration needs to point to a secured {es} cluster that is able to serve API key requests. + + +[float] +== `elasticsearch.protocol` + +The name of the protocol {es} is reachable on. +The options are: `http` or `https`. The default is `http`. +If nothing is configured, configuration settings from the `output` section will be reused. + +[float] +== `elasticsearch.path` + +An optional HTTP path prefix that is prepended to the HTTP API calls. +If nothing is configured, configuration settings from the `output` section will be reused. + +[float] +== `elasticsearch.proxy_url` + +The URL of the proxy to use when connecting to the {es} servers. +The value may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed. +If nothing is configured, configuration settings from the `output` section will be reused. + +[float] +== `elasticsearch.timeout` + +The HTTP request timeout in seconds for the {es} request. +If nothing is configured, configuration settings from the `output` section will be reused. + +[float] += `auth.api_key.elasticsearch.ssl.*` configuration options + +SSL is off by default. Set `elasticsearch.protocol` to `https` if you want to enable `https`. + +[float] +== `elasticsearch.ssl.enabled` + +Enable custom SSL settings. +Set to `false` to ignore custom SSL settings for secure communication.
+ +[float] +== `elasticsearch.ssl.verification_mode` + +Configure SSL verification mode. +If `none` is configured, all server hosts and certificates will be accepted. +In this mode, SSL based connections are susceptible to man-in-the-middle attacks. +**Use only for testing**. Default is `full`. + +[float] +== `elasticsearch.ssl.supported_protocols` + +List of supported/valid TLS versions. +By default, all TLS versions from 1.0 to 1.2 are enabled. + +[float] +== `elasticsearch.ssl.certificate_authorities` + +List of root certificates for HTTPS server verifications. + +[float] +== `elasticsearch.ssl.certificate` + +The path to the certificate for SSL client authentication. + +[float] +== `elasticsearch.ssl.key` + +The client certificate key used for client authentication. +This option is required if certificate is specified. + +[float] +== `elasticsearch.ssl.key_passphrase` + +An optional passphrase used to decrypt an encrypted key stored in the configured key file. + +[float] +== `elasticsearch.ssl.cipher_suites` + +The list of cipher suites to use. The first entry has the highest priority. +If this option is omitted, the Go crypto library’s default suites are used (recommended). + +[float] +== `elasticsearch.ssl.curve_types` + +The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange). + +[float] +== `elasticsearch.ssl.renegotiation` + +Configure what types of renegotiation are supported. +Valid options are `never`, `once`, and `freely`. Default is `never`. + +* `never` - Disables renegotiation. +* `once` - Allows a remote server to request renegotiation once per connection. +* `freely` - Allows a remote server to repeatedly request renegotiation. 
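Assuming the binary-only options above live under the `apm-server.auth.api_key` namespace in `apm-server.yml`, a sketch of API key authentication with a dedicated, TLS-secured {es} connection might look like this (host and certificate path are placeholders):

[source,yaml]
----
apm-server.auth.api_key:
  enabled: true
  limit: 100  # documented default
  elasticsearch:
    hosts: ["elasticsearch.example.com:9200"]  # placeholder host
    protocol: https
    ssl.enabled: true
    ssl.verification_mode: full  # documented default
    ssl.certificate_authorities: ["/path/to/ca.crt"]  # placeholder path
----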
diff --git a/docs/configure/binary-no-fm-yes.svg b/docs/configure/binary-no-fm-yes.svg new file mode 100644 index 00000000000..b8b3120f2fc --- /dev/null +++ b/docs/configure/binary-no-fm-yes.svg @@ -0,0 +1,13 @@ + + + + + + + + + + + + + diff --git a/docs/configure/binary-yes-fm-no.svg b/docs/configure/binary-yes-fm-no.svg new file mode 100644 index 00000000000..db26e2fc39b --- /dev/null +++ b/docs/configure/binary-yes-fm-no.svg @@ -0,0 +1,13 @@ + + + + + + + + + + + + + diff --git a/docs/configure/binary-yes-fm-yes.svg b/docs/configure/binary-yes-fm-yes.svg new file mode 100644 index 00000000000..07c0a2705f8 --- /dev/null +++ b/docs/configure/binary-yes-fm-yes.svg @@ -0,0 +1,12 @@ + + + + + + + + + + + + diff --git a/docs/legacy/copied-from-beats/docs/shared-env-vars.asciidoc b/docs/configure/env.asciidoc similarity index 74% rename from docs/legacy/copied-from-beats/docs/shared-env-vars.asciidoc rename to docs/configure/env.asciidoc index 92bdeca5e13..86742ca7577 100644 --- a/docs/legacy/copied-from-beats/docs/shared-env-vars.asciidoc +++ b/docs/configure/env.asciidoc @@ -1,26 +1,11 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// :standalone: -//// include::../../libbeat/docs/shared-env-vars.asciidoc[] -//// Specify :standalone: when this file is pulled into and index. 
When -//// the file is embedded in another file, do no specify :standalone: -////////////////////////////////////////////////////////////////////////// - -ifdef::standalone[] - -[[using-environ-vars]] -== Use environment variables in the configuration - -endif::[] - -IMPORTANT: {deprecation-notice-config} -If you're using {fleet} and the Elastic APM integration, please see the {fleet-guide}[{fleet} User Guide] instead. +[[config-env]] += Use environment variables in the configuration + +**** +image:./binary-yes-fm-no.svg[supported deployment methods] + +This documentation is only relevant for APM Server binary users. +**** You can use environment variable references in the config file to set values that need to be configurable during deployment. To do this, use: @@ -64,7 +49,7 @@ setting from the command line by using the `-E` option. For example: ================================== [float] -=== Examples +== Examples Here are some examples of configurations that use environment variables and what each configuration looks like after replacement: @@ -81,7 +66,7 @@ and what each configuration looks like after replacement: |================================== [float] -=== Specify complex objects in environment variables +== Specify complex objects in environment variables You can specify complex objects, such as lists or dictionaries, in environment variables by using a JSON-like syntax. diff --git a/docs/configure/general.asciidoc b/docs/configure/general.asciidoc new file mode 100644 index 00000000000..92507f8ab72 --- /dev/null +++ b/docs/configure/general.asciidoc @@ -0,0 +1,183 @@ +[[configuration-process]] += General configuration options + +**** +image:./binary-yes-fm-yes.svg[supported deployment methods] + +Most options on this page are supported by all APM Server deployment methods. +**** + +General APM Server configuration options. 
+
+include::./tab-widgets/general-config-widget.asciidoc[]
+
+[float]
+[[configuration-apm-server]]
+= Configuration options
+
+[[host]]
+[float]
+== Host
+Defines the host and port the server is listening on.
+Use `"unix:/path/to.sock"` to listen on a Unix domain socket.
+Defaults to `localhost:8200`. (text)
+
+|====
+| APM Server binary | `apm-server.host`
+| Fleet-managed | `Host`
+|====
+
+[float]
+== URL
+The publicly reachable server URL. For deployments on Elastic Cloud or ECK, the default cannot be changed.
+
+|====
+| APM Server binary | N/A
+| Fleet-managed | `URL`
+|====
+
+[[max_header_size]]
+[float]
+== Max header size
+Maximum permitted size of a request's header accepted by the server to be processed (in bytes).
+Defaults to 1048576 bytes (1 MB). (int)
+
+|====
+| APM Server binary | `apm-server.max_header_size`
+| Fleet-managed | `Maximum size of a request's header`
+|====
+
+[[idle_timeout]]
+[float]
+== Idle timeout
+Maximum amount of time to wait for the next incoming request before the underlying connection is closed.
+Defaults to `45s` (45 seconds). (text)
+
+|====
+| APM Server binary | `apm-server.idle_timeout`
+| Fleet-managed | `Idle time before underlying connection is closed`
+|====
+
+[[read_timeout]]
+[float]
+== Read timeout
+Maximum permitted duration for reading an entire request.
+Defaults to `3600s` (3600 seconds). (text)
+
+|====
+| APM Server binary | `apm-server.read_timeout`
+| Fleet-managed | `Maximum duration for reading an entire request`
+|====
+
+[[write_timeout]]
+[float]
+== Write timeout
+Maximum permitted duration for writing a response.
+Defaults to `30s` (30 seconds). (text)
+
+|====
+| APM Server binary | `apm-server.write_timeout`
+| Fleet-managed | `Maximum duration for writing a response`
+|====
+
+[[shutdown_timeout]]
+[float]
+== Shutdown timeout
+Maximum duration in seconds before releasing resources when shutting down the server.
+Defaults to `30s` (30 seconds).
(text)
+
+|====
+| APM Server binary | `apm-server.shutdown_timeout`
+| Fleet-managed | `Maximum duration before releasing resources when shutting down`
+|====
+
+[[max_event_size]]
+[float]
+== Max event size
+Maximum permitted size of an event accepted by the server to be processed (in bytes).
+Defaults to `307200` bytes. (int)
+
+|====
+| APM Server binary | `apm-server.max_event_size`
+| Fleet-managed | `Maximum size per event`
+|====
+
+[[max_connections]]
+[float]
+== Max connections
+Maximum number of TCP connections to accept simultaneously.
+Default value is 0, which means _unlimited_. (int)
+
+|====
+| APM Server binary | `apm-server.max_connections`
+| Fleet-managed | `Simultaneously accepted connections`
+|====
+
+[[custom_http_headers]]
+[float]
+== Custom HTTP response headers
+Custom HTTP headers to add to HTTP responses. Useful for security policy compliance. (text)
+
+|====
+| APM Server binary | `apm-server.response_headers`
+| Fleet-managed | `Custom HTTP response headers`
+|====
+
+[[capture_personal_data]]
+[float]
+== Capture personal data
+If `true`,
+APM Server captures the IP of the instrumented service and its User Agent, if any.
+Enabled by default. (bool)
+
+|====
+| APM Server binary | `apm-server.capture_personal_data`
+| Fleet-managed | `Capture personal data`
+|====
+
+
+[[default_service_environment]]
+[float]
+== Default service environment
+Sets the default service environment to associate with data and requests received from agents which have no service environment defined. Default: none. (text)
+
+|====
+| APM Server binary | `apm-server.default_service_environment`
+| Fleet-managed | `Default Service Environment`
+|====
+
+[[expvar.enabled]]
+[float]
+== expvar support
+When set to `true`, APM Server exposes https://golang.org/pkg/expvar/[golang expvar] under `/debug/vars`.
+Disabled by default.
+ +|==== +| APM Server binary | `apm-server.expvar.enabled` +| Fleet-managed | `Enable APM Server Golang expvar support` +|==== + +[[expvar.url]] +[float] +== expvar URL +Configure the URL to expose expvar. +Defaults to `debug/vars`. + +|==== +| APM Server binary | `apm-server.expvar.url` +| Fleet-managed | N/A +|==== + +[[data_streams.namespace]] +[float] +== Data stream namespace + +Change the default namespace. +This setting changes the name of the integration's data stream. + +For {fleet}-managed users, the namespace is inherited from the selected {agent} policy. + +|==== +| APM Server binary | `apm-server.data_streams.namespace` +| Fleet-managed | `Namespace` (Integration settings > Advanced options) +|==== diff --git a/docs/configure/index.asciidoc b/docs/configure/index.asciidoc new file mode 100644 index 00000000000..2b13d43e632 --- /dev/null +++ b/docs/configure/index.asciidoc @@ -0,0 +1,52 @@ +[[configuring-howto-apm-server]] += Configure + +How you configure the APM Server depends on your deployment method. + +APM Server binary users need to edit the `apm-server.yml` configuration file. +The location of the file varies by platform. To locate the file, see <>. + +Fleet-managed users configure the APM Server directly in {kib}. +Each configuration page describes the specific location. 
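For APM Server binary users, a minimal `apm-server.yml` sketch looks like the following. The host and output values are illustrative defaults, not a recommended production setup:

```yaml
apm-server:
  # Address the server listens on for agent events.
  host: "localhost:8200"

output.elasticsearch:
  # Where APM Server publishes events. Placeholder address.
  hosts: ["localhost:9200"]
```

Each configuration page that follows lists the equivalent setting name for Fleet-managed users.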
+ +The following topics describe how to configure APM Server: + +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> + +include::general.asciidoc[leveloffset=+1] + +include::anonymous-auth.asciidoc[leveloffset=+1] + +include::auth.asciidoc[leveloffset=+1] + +include::agent-config.asciidoc[leveloffset=+1] + +include::instrumentation.asciidoc[leveloffset=+1] + +include::kibana.asciidoc[leveloffset=+1] + +include::logging.asciidoc[leveloffset=+1] + +include::output.asciidoc[leveloffset=+1] + +include::path.asciidoc[leveloffset=+1] + +include::rum.asciidoc[leveloffset=+1] + +include::tls.asciidoc[leveloffset=+1] + +include::sampling.asciidoc[leveloffset=+1] + +include::env.asciidoc[leveloffset=+1] \ No newline at end of file diff --git a/docs/configure/instrumentation.asciidoc b/docs/configure/instrumentation.asciidoc new file mode 100644 index 00000000000..2f381001e1d --- /dev/null +++ b/docs/configure/instrumentation.asciidoc @@ -0,0 +1,62 @@ +[[configuration-instrumentation]] += Configure APM instrumentation + +++++ +Instrumentation +++++ + +**** +image:./binary-yes-fm-no.svg[supported deployment methods] + +Instrumentation of APM Server is not yet supported for Fleet-managed APM. +**** + +APM Server uses the Elastic APM Go Agent to instrument its publishing pipeline. +To gain insight into the performance of {beatname_uc}, you can enable this instrumentation and send trace data to APM Server. +Currently, only the {es} output is instrumented. + +Example configuration with instrumentation enabled: + +["source","yaml"] +---- +instrumentation: + enabled: true + environment: production + hosts: + - "http://localhost:8200" + api_key: L5ER6FEvjkmlfalBealQ3f3fLqf03fazfOV +---- + +[float] +== Configuration options + +You can specify the following options in the `instrumentation` section of the +{beatname_lc}.yml+ config file: + +[float] +=== `enabled` + +Set to `true` to enable instrumentation of {beatname_uc}. +Defaults to `false`. 
+ +[float] +=== `environment` + +Set the environment in which {beatname_uc} is running, for example, `staging`, `production`, `dev`, etc. +Environments can be filtered in the {kibana-ref}/xpack-apm.html[{apm-app}]. + +[float] +=== `hosts` + +The {apm-guide-ref}/getting-started-apm-server.html[APM Server] hosts to report instrumentation data to. +Defaults to `http://localhost:8200`. + +[float] +=== `api_key` + +{apm-guide-ref}/api-key.html[API key] used to secure communication with the APM Server(s). +If `api_key` is set then `secret_token` will be ignored. + +[float] +=== `secret_token` + +{apm-guide-ref}/secret-token.html[Secret token] used to secure communication with the APM Server(s). diff --git a/docs/legacy/configure-kibana-endpoint.asciidoc b/docs/configure/kibana.asciidoc similarity index 76% rename from docs/legacy/configure-kibana-endpoint.asciidoc rename to docs/configure/kibana.asciidoc index 0e3a9c9ce76..fb070965ad3 100644 --- a/docs/legacy/configure-kibana-endpoint.asciidoc +++ b/docs/configure/kibana.asciidoc @@ -1,19 +1,22 @@ [[setup-kibana-endpoint]] -== Configure the {kib} endpoint += Configure the {kib} endpoint ++++ {kib} endpoint ++++ -IMPORTANT: {deprecation-notice-config} +**** -You must configure the {kib} endpoint if running APM Server standalone--this allows APM Server to verify that -the APM package has been installed. The {kib} endpoint is also equired for APM agent configuration when using +image:./binary-yes-fm-no.svg[supported deployment methods] + +You must configure the {kib} endpoint when running the APM Server binary with a non-{es} output. +Configuring the {kib} endpoint allows the APM Server to communicate with {kib} and ensure that the APM integration was properly set up. It is also required for APM agent configuration when using an output other than {es}. -For all other use-cases, starting in version 8.7.0, APM agent configurations will be fetched directly from {es}. 
+For all other use-cases, starting in version 8.7.0, APM agent configurations are fetched directly from {es}. When configured, the {kib} endpoint is used only as a fallback. Please see <> instead.
+****

Here's a sample configuration:

@@ -24,20 +27,20 @@ apm-server.kibana.host: "http://localhost:5601"
----

[float]
-=== {kib} endpoint configuration options
+== {kib} endpoint configuration options

You can specify the following options in the `apm-server.kibana` section of the
-+{beatname_lc}.yml+ config file:
++{beatname_lc}.yml+ config file. These options are not required for a Fleet-managed APM Server.

[float]
[[kibana-enabled]]
-==== `apm-server.kibana.enabled`
+=== `apm-server.kibana.enabled`

Defaults to `false`. Must be `true` to use APM Agent configuration.

[float]
[[kibana-host]]
-==== `apm-server.kibana.host`
+=== `apm-server.kibana.host`

The {kib} host that APM Server will communicate with. The default is
`127.0.0.1:5601`. The value of `host` can be a `URL` or `IP:PORT`. For example:
`http://192.15.3.2`, `192.15.3.2:5601` or `http://192.15.3.2:6701/path`. If no
@@ -52,7 +55,7 @@ IPv6 addresses must be defined using the following format:

[float]
[[kibana-protocol-option]]
-==== `apm-server.kibana.protocol`
+=== `apm-server.kibana.protocol`

The name of the protocol {kib} is reachable on. The options are: `http` or
`https`. The default is `http`. However, if you specify a URL for host, the
@@ -69,30 +72,30 @@ apm-server.kibana.path: /kibana

[float]
-==== `apm-server.kibana.username`
+=== `apm-server.kibana.username`

The basic authentication username for connecting to {kib}.

[float]
-==== `apm-server.kibana.password`
+=== `apm-server.kibana.password`

The basic authentication password for connecting to {kib}.

[float]
-==== `apm-server.kibana.api_key`
+=== `apm-server.kibana.api_key`

Authentication with an API key.
Formatted as `id:api_key` [float] [[kibana-path-option]] -==== `apm-server.kibana.path` +=== `apm-server.kibana.path` An HTTP path prefix that is prepended to the HTTP API calls. This is useful for the cases where {kib} listens behind an HTTP reverse proxy that exports the API under a custom prefix. [float] -==== `apm-server.kibana.ssl.enabled` +=== `apm-server.kibana.ssl.enabled` Enables {beatname_uc} to use SSL settings when connecting to {kib} via HTTPS. If you configure {beatname_uc} to connect over HTTPS, this setting defaults to diff --git a/docs/legacy/copied-from-beats/docs/loggingconfig.asciidoc b/docs/configure/logging.asciidoc similarity index 74% rename from docs/legacy/copied-from-beats/docs/loggingconfig.asciidoc rename to docs/configure/logging.asciidoc index a018135dd54..9e11b5fbf40 100644 --- a/docs/legacy/copied-from-beats/docs/loggingconfig.asciidoc +++ b/docs/configure/logging.asciidoc @@ -1,31 +1,25 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/loggingconfig.asciidoc[] -//// Make sure this content appears below a level 2 heading. -////////////////////////////////////////////////////////////////////////// - [[configuration-logging]] -== Configure logging += Configure logging ++++ Logging ++++ -IMPORTANT: {deprecation-notice-config} +**** + +image:./binary-yes-fm-no.svg[supported deployment methods] + +These configuration options are only relevant to APM Server binary users. 
+Fleet-managed users should see {fleet-guide}/monitor-elastic-agent.html[View {agent} logs]
+to learn how to view logs and change the logging level of {agent}.
+****

The `logging` section of the +{beatname_lc}.yml+ config file contains options
for configuring the logging output.
-ifndef::serverless[]
+
The logging system can write logs to the syslog or rotate log files. If logging
is not explicitly configured, the file output is used.

-ifndef::win_only[]
["source","yaml",subs="attributes"]
----
logging.level: info
@@ -36,80 +30,45 @@ logging.files:
  keepfiles: 7
  permissions: 0640
----
-endif::win_only[]
-ifdef::win_only[]
-["source","yaml",subs="attributes"]
----
-logging.level: info
-logging.to_files: true
-logging.files:
-  path: C:{backslash}ProgramData{backslash}{beatname_lc}{backslash}Logs
-  name: {beatname_lc}
-  keepfiles: 7
-  permissions: 0640
----
-endif::win_only[]
+
TIP: In addition to setting logging options in the config file, you can modify
the logging output configuration from the command line. See <>.

-ifndef::win_only[]
WARNING: When {beatname_uc} is running on a Linux system with systemd, by default it uses the `-e` command line option, which makes it write all logging output to stderr so it can be captured by journald. Other outputs are disabled. See <> to learn more and how to change this.
-endif::win_only[]
-endif::serverless[]
-
-ifdef::serverless[]
-For example, the following options configure {beatname_uc} to log all the debug
-messages related to event publishing:
-
-["source","yaml",subs="attributes"]
----
-logging.level: debug
-logging.selectors: ["publisher"]
----
-
-The logs generated by {beatname_uc} are written to the CloudWatch log group for
-the function running on Amazon Web Services (AWS). To view the logs, go to the
-monitoring area of the AWS Lambda console and view the CloudWatch log group
-for the function.
- -// TODO: When we add support for other cloud providers, we will need to modify -// this statement and possibly have a different attribute for each provider to -// show the correct text. -endif::serverless[] [float] -=== Configuration options +== Configuration options You can specify the following options in the `logging` section of the +{beatname_lc}.yml+ config file: ifndef::serverless[] [float] -==== `logging.to_stderr` +=== `logging.to_stderr` When true, writes all logging output to standard error output. This is equivalent to using the `-e` command line option. [float] -==== `logging.to_syslog` +=== `logging.to_syslog` When true, writes all logging output to the syslog. NOTE: This option is not supported on Windows. [float] -==== `logging.to_eventlog` +=== `logging.to_eventlog` When true, writes all logging output to the Windows Event Log. [float] -==== `logging.to_files` +=== `logging.to_files` When true, writes all logging output to files. The log files are automatically rotated when the log file size limit is reached. @@ -121,7 +80,7 @@ endif::serverless[] [float] [[level]] -==== `logging.level` +=== `logging.level` Minimum log level. One of `debug`, `info`, `warning`, or `error`. The default log level is `info`. @@ -142,7 +101,7 @@ published. Also logs any warnings, errors, or critical errors. [float] [[selectors]] -==== `logging.selectors` +=== `logging.selectors` The list of debugging-only selector tags used by different {beatname_uc} components. Use `*` to enable debug output for all components. Use `publisher` to display @@ -170,7 +129,7 @@ sets the debug log level). For more information, see <>. endif::serverless[] [float] -==== `logging.metrics.enabled` +=== `logging.metrics.enabled` By default, {beatname_uc} periodically logs its internal metrics that have changed in the last period. 
For each metric that changed, the delta from the
@@ -189,37 +148,37 @@ Note that we currently offer no backwards compatible guarantees for the internal
metrics and for this reason they are also not documented.

[float]
-==== `logging.metrics.period`
+=== `logging.metrics.period`

The period after which to log the internal metrics. The default is `30s`.

ifndef::serverless[]
[float]
-==== `logging.files.path`
+=== `logging.files.path`

The directory that log files are written to. The default is the logs path.
See the <> section for details.

[float]
-==== `logging.files.name`
+=== `logging.files.name`

The name of the file that logs are written to. The default is '{beatname_lc}'.

[float]
-==== `logging.files.rotateeverybytes`
+=== `logging.files.rotateeverybytes`

The maximum size of a log file. If the limit is reached, a new log file is generated.
The default size limit is 10485760 (10 MB).

[float]
-==== `logging.files.keepfiles`
+=== `logging.files.keepfiles`

The number of most recent rotated log files to keep on disk. Older files are
deleted during log rotation. The default value is 7. The `keepfiles` option
has to be in the range of 2 to 1024 files.

[float]
-==== `logging.files.permissions`
+=== `logging.files.permissions`

The permissions mask to apply when rotating log files. The default value is
0600. The `permissions` option must be a valid Unix-style file permissions mask
@@ -235,7 +194,7 @@ Examples:

* 0600: give read and write access to the file owner, and no access to all others.

[float]
-==== `logging.files.interval`
+=== `logging.files.interval`

Enable log file rotation on time intervals in addition to size-based rotation.
Intervals must be at least `1s`. Values of `1m`, `1h`, `24h`, `7*24h`, `30*24h`, and `365*24h`
@@ -245,26 +204,15 @@ Unix epoch. Defaults to disabled.
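As a sketch, the file-rotation options above might be combined like this; the path and values are illustrative, not recommendations:

```yaml
logging.level: info
logging.to_files: true
logging.files:
  # Placeholder log directory.
  path: /var/log/apm-server
  name: apm-server
  # Rotate at 10 MB or every 24 hours, keeping the 7 newest files.
  rotateeverybytes: 10485760
  interval: 24h
  keepfiles: 7
  permissions: 0640
```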
endif::serverless[]

[float]
-==== `logging.files.rotateonstartup`
+=== `logging.files.rotateonstartup`

If the log file already exists on startup, immediately rotate it and start
writing to a new file instead of appending to the existing one. Defaults to
true.

-[float]
-==== `logging.json`
-
-When true, logs messages in JSON format. The default is false.
-
-[float]
-==== `logging.ecs`
-
-When true, logs messages with minimal required Elastic Common Schema (ECS)
-information.
-
ifndef::serverless[]
[float]
-==== `logging.files.redirect_stderr` experimental[]
+=== `logging.files.redirect_stderr` experimental[]

When true, diagnostic messages printed to {beatname_uc}'s standard error output
will also be logged to the log file. This can be helpful in situations where
@@ -275,7 +223,7 @@ Disabled by default.

endif::serverless[]

[float]
-=== Logging format
+== Logging format

The logging format is generally the same for each logging output. The one
exception is with the syslog output where the timestamp is not included in the
diff --git a/docs/legacy/configuring-output-after.asciidoc b/docs/configure/output.asciidoc
similarity index 68%
rename from docs/legacy/configuring-output-after.asciidoc
rename to docs/configure/output.asciidoc
index 36ffabd711a..fe8b99422d1 100644
--- a/docs/legacy/configuring-output-after.asciidoc
+++ b/docs/configure/output.asciidoc
@@ -1,7 +1,26 @@
+[[configuring-output]]
+= Configure the output
+
+++++
+Output
+++++
+
+Output configuration options.
+
+// You configure {beatname_uc} to write to a specific output by setting options
+// in the Outputs section of the +{beatname_lc}.yml+ config file. Only a single
+// output may be defined.
+
+// The following topics describe how to configure each supported output. If you've
+// secured the {stack}, also read <> for more about
+// security-related configuration options.
+ +include::outputs/outputs-list.asciidoc[tag=outputs-list] + [[sourcemap-output]] [float] -=== Source maps +== Source maps Source maps can be uploaded through all outputs but must eventually be stored in {es}. When using outputs other than {es}, `source_mapping.elasticsearch` must be set for source maps to be applied. @@ -10,7 +29,7 @@ See <> for more details. [[libbeat-configuration-fields]] [float] -=== `fields` +== `fields` Fields are optional tags that can be added to the documents that APM Server outputs. They are defined at the top-level in your configuration file, and will apply to any configured output. @@ -29,3 +48,6 @@ output.elasticsearch: To store the custom fields as top-level fields, set the `fields_under_root` option to true. This is not recommended as when new fields are added to APM documents backward compatibility cannot be ensured. + + +include::outputs/outputs-list.asciidoc[tag=outputs-include] \ No newline at end of file diff --git a/docs/legacy/copied-from-beats/outputs/codec/docs/codec.asciidoc b/docs/configure/outputs/codec.asciidoc similarity index 94% rename from docs/legacy/copied-from-beats/outputs/codec/docs/codec.asciidoc rename to docs/configure/outputs/codec.asciidoc index 2f95ad7a9d3..b6045b798b0 100644 --- a/docs/legacy/copied-from-beats/outputs/codec/docs/codec.asciidoc +++ b/docs/configure/outputs/codec.asciidoc @@ -1,7 +1,5 @@ [[configuration-output-codec]] -=== Change the output codec - -IMPORTANT: {deprecation-notice-config} +== Change the output codec For outputs that do not require a specific encoding, you can change the encoding by using the codec configuration. 
You can specify either the `json` or `format` diff --git a/docs/legacy/copied-from-beats/outputs/console/docs/console.asciidoc b/docs/configure/outputs/console.asciidoc similarity index 80% rename from docs/legacy/copied-from-beats/outputs/console/docs/console.asciidoc rename to docs/configure/outputs/console.asciidoc index 16548231f78..c50c4825d58 100644 --- a/docs/legacy/copied-from-beats/outputs/console/docs/console.asciidoc +++ b/docs/configure/outputs/console.asciidoc @@ -1,11 +1,15 @@ [[console-output]] -=== Configure the Console output +== Configure the Console output ++++ Console ++++ -IMPORTANT: {deprecation-notice-config} +**** +image:./binary-yes-fm-no.svg[supported deployment methods] + +The Console output is not yet supported by {fleet}-managed APM Server. +**** The Console output writes events in JSON format to stdout. @@ -24,33 +28,33 @@ output.console: ifdef::apm-server[] [float] -==== Configure the {kib} output +=== {kib} configuration -include::../../../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] +include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] endif::[] -==== Configuration options +=== Configuration options You can specify the following `output.console` options in the +{beatname_lc}.yml+ config file: -===== `enabled` +==== `enabled` The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled. The default value is `true`. -===== `pretty` +==== `pretty` If `pretty` is set to true, events written to stdout will be nicely formatted. The default is false. -===== `codec` +==== `codec` Output codec configuration. If the `codec` section is missing, events will be JSON encoded using the `pretty` option. See <> for more information. -===== `bulk_max_size` +==== `bulk_max_size` The maximum number of events to buffer internally during publishing. The default is 2048. @@ -60,3 +64,5 @@ setting does not affect how events are published. 
Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch. + +include::codec.asciidoc[leveloffset=+1] \ No newline at end of file diff --git a/docs/legacy/copied-from-beats/outputs/elasticsearch/docs/elasticsearch.asciidoc b/docs/configure/outputs/elasticsearch.asciidoc similarity index 94% rename from docs/legacy/copied-from-beats/outputs/elasticsearch/docs/elasticsearch.asciidoc rename to docs/configure/outputs/elasticsearch.asciidoc index fb60b42995a..6b3ec539a15 100644 --- a/docs/legacy/copied-from-beats/outputs/elasticsearch/docs/elasticsearch.asciidoc +++ b/docs/configure/outputs/elasticsearch.asciidoc @@ -1,11 +1,16 @@ [[elasticsearch-output]] -=== Configure the {es} output +== Configure the {es} output ++++ {es} ++++ -IMPORTANT: {deprecation-notice-config} +**** +image:./binary-yes-fm-no.svg[supported deployment methods] + +This documentation only applies to APM Server binary users. +Fleet-managed users should see {fleet-guide}/elasticsearch-output.html[Configure the {es} output]. +**** The {es} output sends events directly to {es} using the {es} HTTP API. @@ -56,17 +61,17 @@ output.elasticsearch: See <> for details on each authentication method. -==== Compatibility +=== Compatibility This output works with all compatible versions of {es}. See the https://www.elastic.co/support/matrix#matrix_compatibility[Elastic Support Matrix]. -==== Configuration options +=== Configuration options You can specify the following options in the `elasticsearch` section of the +{beatname_lc}.yml+ config file: -===== `enabled` +==== `enabled` The enabled config is a boolean setting to enable or disable the output. If set to `false`, the output is disabled. @@ -75,7 +80,7 @@ The default value is `true`. [[hosts-option]] -===== `hosts` +==== `hosts` The list of {es} nodes to connect to. The events are distributed to these nodes in round robin order. 
If one node becomes unreachable, the event is @@ -97,7 +102,7 @@ output.elasticsearch: In the previous example, the {es} nodes are available at `https://10.45.3.2:9220/elasticsearch` and `https://10.45.3.1:9230/elasticsearch`. -===== `compression_level` +==== `compression_level` The gzip compression level. Setting this value to `0` disables compression. The compression level must be in the range of `1` (best speed) to `9` (best compression). @@ -106,36 +111,36 @@ Increasing the compression level will reduce the network usage but will increase The default value is `0`. -===== `escape_html` +==== `escape_html` Configure escaping of HTML in strings. Set to `true` to enable escaping. The default value is `false`. -===== `api_key` +==== `api_key` Instead of using a username and password, you can use API keys to secure communication with {es}. The value must be the ID of the API key and the API key joined by a colon: `id:api_key`. See <> for more information. -===== `username` +==== `username` The basic authentication username for connecting to {es}. This user needs the privileges required to publish events to {es}. To create a user like this, see <>. -===== `password` +==== `password` The basic authentication password for connecting to {es}. -===== `parameters` +==== `parameters` Dictionary of HTTP parameters to pass within the URL with index operations. [[protocol-option]] -===== `protocol` +==== `protocol` The name of the protocol {es} is reachable on. The options are: `http` or `https`. The default is `http`. However, if you specify a URL for @@ -143,13 +148,13 @@ The name of the protocol {es} is reachable on. The options are: specify in the URL. [[path-option]] -===== `path` +==== `path` An HTTP path prefix that is prepended to the HTTP API calls. This is useful for the cases where {es} listens behind an HTTP reverse proxy that exports the API under a custom prefix. 
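For example, the `protocol` and `path` options can be combined when {es} sits behind a reverse proxy. The hostname, prefix, and credentials below are placeholders:

```yaml
output.elasticsearch:
  hosts: ["proxy.example.com:9200"]
  protocol: "https"
  # The reverse proxy exports the Elasticsearch API under this prefix.
  path: "/elasticsearch"
  # Hypothetical user; the password is read from an environment variable.
  username: "apm_writer"
  password: "${ES_PASSWORD}"
```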
-===== `headers` +==== `headers` Custom HTTP headers to add to each request created by the {es} output. Example: @@ -163,7 +168,7 @@ output.elasticsearch.headers: It is possible to specify multiple header values for the same header name by separating them with a comma. -===== `proxy_url` +==== `proxy_url` The URL of the proxy to use when connecting to the {es} servers. The value may be either a complete URL or a "host[:port]", in which case the "http" @@ -176,7 +181,7 @@ for more information about the environment variables. ifndef::apm-server[] [[index-option-es]] -===== `index` +==== `index` The index name to write events to when you're using daily indices. The default is +"{beatname_lc}-%{[{beat_version_key}]}-%{+yyyy.MM.dd}"+, for example, @@ -228,7 +233,7 @@ endif::apm-server[] ifndef::apm-server[] [[indices-option-es]] -===== `indices` +==== `indices` An array of index selector rules. Each rule specifies the index to use for events that match the rule. During publishing, {beatname_uc} uses the first @@ -302,7 +307,7 @@ endif::apm-server[] ifndef::no_ilm[] [[ilm-es]] -===== `ilm` +==== `ilm` Configuration options for {ilm}. @@ -311,7 +316,7 @@ endif::no_ilm[] ifndef::no-pipeline[] [[pipeline-option-es]] -===== `pipeline` +==== `pipeline` A format string value that specifies the ingest node pipeline to write events to. @@ -322,7 +327,7 @@ output.elasticsearch: pipeline: my_pipeline_id ------------------------------------------------------------------------------ -For more information, see <>. +For more information, see <>. You can set the ingest node pipeline dynamically by using a format string to access any event field. For example, this configuration uses a custom field, @@ -346,7 +351,7 @@ See the <> setting for other ways to set the ingest node pipeline dynamically. [[pipelines-option-es]] -===== `pipelines` +==== `pipelines` An array of pipeline selector rules. Each rule specifies the ingest node pipeline to use for events that match the rule. 
During publishing, {beatname_uc} @@ -412,11 +417,11 @@ With this configuration, all events with `log_type: critical` are sent to `sev2_pipeline`, and all other events are sent to `sev3_pipeline`. For more information about ingest node pipelines, see -<>. +<>. endif::[] -===== `max_retries` +==== `max_retries` ifdef::ignores_max_retries[] {beatname_uc} ignores the `max_retries` setting and retries indefinitely. @@ -431,17 +436,17 @@ Set `max_retries` to a value less than 0 to retry until all events are published The default is 3. endif::[] -===== `flush_bytes` +==== `flush_bytes` The bulk request size threshold, in bytes, before flushing to {es}. The value must have a suffix, e.g. `"2MB"`. The default is `1MB`. -===== `flush_interval` +==== `flush_interval` The maximum duration to accumulate events for a bulk request before being flushed to {es}. The value must have a duration suffix, e.g. `"5s"`. The default is `1s`. -===== `backoff.init` +==== `backoff.init` The number of seconds to wait before trying to reconnect to {es} after a network error. After waiting `backoff.init` seconds, {beatname_uc} tries to @@ -450,16 +455,16 @@ to `backoff.max`. After a successful connection, the backoff timer is reset. The default is `1s`. -===== `backoff.max` +==== `backoff.max` The maximum number of seconds to wait before attempting to connect to {es} after a network error. The default is `60s`. -===== `timeout` +==== `timeout` The HTTP request timeout in seconds for the {es} request. The default is 90. -===== `ssl` +==== `ssl` Configuration options for SSL parameters like the certificate authority to use for HTTPS-based connections. If the `ssl` section is missing, the host CAs are used for HTTPS connections to @@ -468,8 +473,5 @@ for HTTPS-based connections. If the `ssl` section is missing, the host CAs are u See the <> guide or <> for more information. -===== `kerberos` - -Configuration options for Kerberos authentication. - -See <> for more information. 
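Putting several of the options above together, here is a sketch of an {es} output using the documented default values for flushing, retries, and timeouts (the host is illustrative):

["source","yaml"]
------------------------------------------------------------------------------
output.elasticsearch:
  hosts: ["localhost:9200"]
  # Bulk request flushing thresholds
  flush_bytes: "1MB"
  flush_interval: "1s"
  # Retry behavior after network errors
  max_retries: 3
  backoff.init: "1s"
  backoff.max: "60s"
  timeout: 90
------------------------------------------------------------------------------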
+// Elasticsearch security +include::../../legacy/copied-from-beats/docs/https.asciidoc[] \ No newline at end of file diff --git a/docs/legacy/copied-from-beats/outputs/kafka/docs/kafka.asciidoc b/docs/configure/outputs/kafka.asciidoc similarity index 90% rename from docs/legacy/copied-from-beats/outputs/kafka/docs/kafka.asciidoc rename to docs/configure/outputs/kafka.asciidoc index 2f8839f0cc1..9c62c89154b 100644 --- a/docs/legacy/copied-from-beats/outputs/kafka/docs/kafka.asciidoc +++ b/docs/configure/outputs/kafka.asciidoc @@ -1,11 +1,15 @@ [[kafka-output]] -=== Configure the Kafka output +== Configure the Kafka output ++++ Kafka ++++ -IMPORTANT: {deprecation-notice-config} +**** +image:./binary-yes-fm-no.svg[supported deployment methods] + +The Kafka output is not yet supported by {fleet}-managed APM Server. +**** The Kafka output sends events to Apache Kafka. @@ -35,22 +39,22 @@ NOTE: Events bigger than <> will be ifdef::apm-server[] [float] -==== Configure the {kib} output +=== {kib} configuration -include::../../../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] +include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] endif::[] [[kafka-compatibility]] -==== Compatibility +=== Compatibility This output works with all Kafka versions in between 0.11 and 2.2.2. Older versions might work as well, but are not supported. -==== Configuration options +=== Configuration options You can specify the following options in the `kafka` section of the +{beatname_lc}.yml+ config file: -===== `enabled` +==== `enabled` The `enabled` config is a boolean setting to enable or disable the output. If set to false, the output is disabled. @@ -62,12 +66,12 @@ ifdef::apm-server[] The default value is `false`. endif::[] -===== `hosts` +==== `hosts` The list of Kafka broker addresses from where to fetch the cluster metadata. The cluster metadata contain the actual Kafka brokers events are published to. 
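A minimal Kafka output sketch using the `enabled` and `hosts` options described above (broker addresses are illustrative):

["source","yaml"]
------------------------------------------------------------------------------
output.kafka:
  enabled: true
  hosts: ["kafka1:9092", "kafka2:9092"]
------------------------------------------------------------------------------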
-===== `version` +==== `version` Kafka version {beatname_lc} is assumed to run against. Defaults to 1.0.0. @@ -77,16 +81,16 @@ Valid values are all Kafka releases in between `0.8.2.0` and `2.0.0`. See <> for information on supported versions. -===== `username` +==== `username` The username for connecting to Kafka. If username is configured, the password must be configured as well. -===== `password` +==== `password` The password for connecting to Kafka. -===== `sasl.mechanism` +==== `sasl.mechanism` beta[] @@ -99,12 +103,9 @@ The SASL mechanism to use when connecting to Kafka. It can be one of: If `sasl.mechanism` is not set, `PLAIN` is used if `username` and `password` are provided. Otherwise, SASL authentication is disabled. -To use `GSSAPI` mechanism to authenticate with Kerberos, you must leave this -field empty, and use the <> options. - [[topic-option-kafka]] -===== `topic` +==== `topic` The Kafka topic used for produced events. @@ -124,7 +125,7 @@ See the <> setting for other ways to set the topic dynamically. [[topics-option-kafka]] -===== `topics` +==== `topics` An array of topic selector rules. Each rule specifies the `topic` to use for events that match the rule. During publishing, {beatname_uc} sets the `topic` @@ -172,7 +173,7 @@ output.kafka: This configuration results in topics named +critical-{version}+, +error-{version}+, and +logs-{version}+. -===== `key` +==== `key` Optional formatted string specifying the Kafka event key. If configured, the event key can be extracted from the event using a format string. @@ -180,7 +181,7 @@ event key can be extracted from the event using a format string. See the Kafka documentation for the implications of a particular choice of key; by default, the key is chosen by the Kafka cluster. -===== `partition` +==== `partition` Kafka output broker event partitioning strategy. Must be one of `random`, `round_robin`, or `hash`. By default the `hash` partitioner is used. @@ -206,21 +207,21 @@ available partitions only. 
NOTE: Publishing to a subset of available partitions potentially increases resource usage because events may become unevenly distributed. -===== `client_id` +==== `client_id` The configurable client ID used for logging, debugging, and auditing purposes. The default is "beats". -===== `worker` +==== `worker` The number of concurrent load-balanced Kafka output workers. -===== `codec` +==== `codec` Output codec configuration. If the `codec` section is missing, events will be JSON encoded. See <> for more information. -===== `metadata` +==== `metadata` Kafka metadata update settings. The metadata contain information about brokers, topics, partitions, and active leaders to use for publishing. @@ -235,7 +236,7 @@ metadata for the configured topics. The default is false. *`retry.backoff`*:: Waiting time between retries during leader elections. Default is `250ms`. -===== `max_retries` +==== `max_retries` ifdef::ignores_max_retries[] {beatname_uc} ignores the `max_retries` setting and retries indefinitely. @@ -250,7 +251,7 @@ Set `max_retries` to a value less than 0 to retry until all events are published The default is 3. endif::[] -===== `backoff.init` +==== `backoff.init` The number of seconds to wait before trying to republish to Kafka after a network error. After waiting `backoff.init` seconds, {beatname_uc} @@ -258,37 +259,37 @@ tries to republish. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful publish, the backoff timer is reset. The default is `1s`. -===== `backoff.max` +==== `backoff.max` The maximum number of seconds to wait before attempting to republish to Kafka after a network error. The default is `60s`. -===== `bulk_max_size` +==== `bulk_max_size` The maximum number of events to bulk in a single Kafka request. The default is 2048. -===== `bulk_flush_frequency` +==== `bulk_flush_frequency` Duration to wait before sending a bulk Kafka request. 0 is no delay. The default is 0.
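The retry and batching options above can be combined; the values shown here are the documented defaults (the broker address is illustrative):

["source","yaml"]
------------------------------------------------------------------------------
output.kafka:
  hosts: ["kafka1:9092"]
  max_retries: 3
  backoff.init: "1s"
  backoff.max: "60s"
  bulk_max_size: 2048
  bulk_flush_frequency: 0
------------------------------------------------------------------------------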
-===== `timeout` +==== `timeout` The number of seconds to wait for responses from the Kafka brokers before timing out. The default is 30 (seconds). -===== `broker_timeout` +==== `broker_timeout` The maximum duration a broker will wait for number of required ACKs. The default is `10s`. -===== `channel_buffer_size` +==== `channel_buffer_size` Per Kafka broker number of messages buffered in output pipeline. The default is 256. -===== `keep_alive` +==== `keep_alive` The keep-alive period for an active network connection. If `0s`, keep-alives are disabled. The default is `0s`. -===== `compression` +==== `compression` Sets the output compression codec. Must be one of `none`, `snappy`, `lz4` and `gzip`. The default is `gzip`. @@ -298,7 +299,7 @@ Sets the output compression codec. Must be one of `none`, `snappy`, `lz4` and `g When targeting Azure Event Hub for Kafka, set `compression` to `none` as the provided codecs are not supported. ==== -===== `compression_level` +==== `compression_level` Sets the compression level used by gzip. Setting this value to 0 disables compression. The compression level must be in the range of 1 (best speed) to 9 (best compression). @@ -308,35 +309,26 @@ Increasing the compression level will reduce the network usage but will increase The default value is 4. [[kafka-max_message_bytes]] -===== `max_message_bytes` +==== `max_message_bytes` The maximum permitted size of JSON-encoded messages. Bigger messages will be dropped. The default value is 1000000 (bytes). This value should be equal to or less than the broker's `message.max.bytes`. -===== `required_acks` +==== `required_acks` The ACK reliability level required from broker. 0=no response, 1=wait for local commit, -1=wait for all replicas to commit. The default is 1. Note: If set to 0, no ACKs are returned by Kafka. Messages might be lost silently on error. -===== `enable_krb5_fast` +==== `enable_krb5_fast` beta[] Enable Kerberos FAST authentication. 
This may conflict with some Active Directory installations. It is separate from the standard Kerberos settings because this flag only applies to the Kafka output. The default is `false`. -===== `ssl` +==== `ssl` Configuration options for SSL parameters like the root CA for Kafka connections. The Kafka host keystore should be created with the `-keyalg RSA` argument to ensure it uses a cipher supported by https://github.com/Shopify/sarama/wiki/Frequently-Asked-Questions#why-cant-sarama-connect-to-my-kafka-cluster-using-ssl[{filebeat}'s Kafka library]. See <> for more information. - -[[kerberos-option-kafka]] -===== `kerberos` - -beta[] - -Configuration options for Kerberos authentication. - -See <> for more information. diff --git a/docs/legacy/copied-from-beats/outputs/logstash/docs/logstash.asciidoc b/docs/configure/outputs/logstash.asciidoc similarity index 93% rename from docs/legacy/copied-from-beats/outputs/logstash/docs/logstash.asciidoc rename to docs/configure/outputs/logstash.asciidoc index d89f84ce4b2..e338e6d4db3 100644 --- a/docs/legacy/copied-from-beats/outputs/logstash/docs/logstash.asciidoc +++ b/docs/configure/outputs/logstash.asciidoc @@ -1,11 +1,15 @@ [[logstash-output]] -=== Configure the {ls} output +== Configure the {ls} output ++++ {ls} ++++ -IMPORTANT: {deprecation-notice-config} +**** +image:./binary-yes-fm-no.svg[supported deployment methods] + +The {ls} output is not yet supported by {fleet}-managed APM Server. +**** The {ls} output sends events directly to {ls} by using the lumberjack protocol, which runs over TCP. 
{ls} allows for additional processing and routing of @@ -41,9 +45,9 @@ The `hosts` option specifies the {ls} server and the port (`5044`) where {ls} is ifdef::apm-server[] [float] -==== Configure the {kib} output +=== {kib} configuration -include::../../../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] +include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] endif::[] ifeval::["{beatname_lc}"=="filebeat"] @@ -54,7 +58,7 @@ endif::[] // end::shared-logstash-config[] -==== Accessing metadata fields +=== Accessing metadata fields Every event sent to {ls} contains the following metadata fields that you can use in {ls} for indexing and filtering: @@ -103,7 +107,7 @@ with a {logstash-ref}/use-ingest-pipelines.html[{ls} pipeline config]. <4> The current version of {beatname_uc}. In addition to metadata, {beatname_uc} provides the `processor.event` field, which -can be used to separate {apm-overview-ref-v}/apm-data-model.html[event types] into different indices. +can be used to separate {apm-guide-ref}/data-model.html[event types] into different indices. endif::[] ifndef::apm-server[] @@ -184,9 +188,9 @@ NOTE: If {ilm-init} is not being used, set `index` to `%{[@metadata][beat]}-%{[@ endif::[] ifdef::apm-server[] -==== {ls} and {ilm-init} +=== {ls} and {ilm-init} -When used with {apm-server-ref}/ilm.html[{ilm-cap}], {ls} does not need to create a new index each day. +When used with {apm-guide-ref}/ilm-how-to.html[{ilm-cap}], {ls} does not need to create a new index each day. Here's a sample {ls} configuration file that would accomplish this: [source,logstash] @@ -211,18 +215,18 @@ output { For example: +{beat_default_index_prefix}-{version}-sourcemap+. endif::[] -==== Compatibility +=== Compatibility This output works with all compatible versions of {ls}. See the https://www.elastic.co/support/matrix#matrix_compatibility[Elastic Support Matrix]. 
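A minimal {ls} output sketch; the port follows the default of `5044` mentioned above, and the hostname is illustrative:

["source","yaml"]
------------------------------------------------------------------------------
output.logstash:
  enabled: true
  hosts: ["localhost:5044"]
------------------------------------------------------------------------------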
-==== Configuration options +=== Configuration options You can specify the following options in the `logstash` section of the +{beatname_lc}.yml+ config file: -===== `enabled` +==== `enabled` The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled. @@ -235,7 +239,7 @@ The default value is `false`. endif::[] [[hosts]] -===== `hosts` +==== `hosts` The list of known {ls} servers to connect to. If load balancing is disabled, but multiple hosts are configured, one host is selected randomly (there is no precedence). @@ -243,7 +247,7 @@ If one host becomes unreachable, another one is selected randomly. All entries in this list can contain a port number. The default port number 5044 will be used if no number is given. -===== `compression_level` +==== `compression_level` The gzip compression level. Setting this value to 0 disables compression. The compression level must be in the range of 1 (best speed) to 9 (best compression). @@ -252,20 +256,20 @@ Increasing the compression level will reduce the network usage but will increase The default value is 3. -===== `escape_html` +==== `escape_html` Configure escaping of HTML in strings. Set to `true` to enable escaping. The default value is `false`. -===== `worker` +==== `worker` The number of workers per configured host publishing events to {ls}. This is best used with load balancing mode enabled. Example: If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host). [[loadbalance]] -===== `loadbalance` +==== `loadbalance` If set to true and multiple {ls} hosts are configured, the output plugin load balances published events onto all {ls} hosts. If set to false, @@ -280,7 +284,7 @@ output.logstash: index: {beatname_lc} ------------------------------------------------------------------------------ -===== `ttl` +==== `ttl` Time to live for a connection to {ls} after which the connection will be re-established. 
Useful when {ls} hosts represent load balancers. Since the connections to {ls} hosts @@ -292,14 +296,14 @@ The default value is 0. NOTE: The "ttl" option is not yet supported on an asynchronous {ls} client (one with the "pipelining" option set). -===== `pipelining` +==== `pipelining` Configures the number of batches to be sent asynchronously to {ls} while waiting for ACK from {ls}. Output only becomes blocking once number of `pipelining` batches have been written. Pipelining is disabled if a value of 0 is configured. The default value is 2. -===== `proxy_url` +==== `proxy_url` The URL of the SOCKS5 proxy to use when connecting to the {ls} servers. The value must be a URL with a scheme of `socks5://`. The protocol used to @@ -320,14 +324,14 @@ output.logstash: ------------------------------------------------------------------------------ [[logstash-proxy-use-local-resolver]] -===== `proxy_use_local_resolver` +==== `proxy_use_local_resolver` The `proxy_use_local_resolver` option determines if {ls} hostnames are resolved locally when using a proxy. The default value is false, which means that when a proxy is used the name resolution occurs on the proxy server. [[logstash-index]] -===== `index` +==== `index` The index root name to write events to. The default is the Beat name. For example +"{beat_default_index_prefix}"+ generates +"[{beat_default_index_prefix}-]{version}-YYYY.MM.DD"+ @@ -336,17 +340,17 @@ indices (for example, +"{beat_default_index_prefix}-{version}-2017.04.26"+). NOTE: This parameter's value will be assigned to the `metadata.beat` field. It can then be accessed in {ls}'s output section as `%{[@metadata][beat]}`. -===== `ssl` +==== `ssl` Configuration options for SSL parameters like the root CA for {ls} connections. See <> for more information. To use SSL, you must also configure the https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html[{beats} input plugin for {ls}] to use SSL/TLS. 
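As noted above, `ttl` is not supported on an asynchronous client, so a configuration that uses `ttl` behind a load balancer would also disable pipelining (the hostname and `ttl` value are illustrative):

["source","yaml"]
------------------------------------------------------------------------------
output.logstash:
  hosts: ["ls.example.com:5044"]
  ttl: 60s        # re-establish the connection periodically
  pipelining: 0   # ttl requires the synchronous client, so disable pipelining
------------------------------------------------------------------------------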
-===== `timeout` +==== `timeout` The number of seconds to wait for responses from the {ls} server before timing out. The default is 30 (seconds). -===== `max_retries` +==== `max_retries` ifdef::ignores_max_retries[] {beatname_uc} ignores the `max_retries` setting and retries indefinitely. @@ -361,7 +365,7 @@ Set `max_retries` to a value less than 0 to retry until all events are published The default is 3. endif::[] -===== `bulk_max_size` +==== `bulk_max_size` The maximum number of events to bulk in a single {ls} request. The default is 2048. @@ -379,7 +383,7 @@ splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch. -===== `slow_start` +==== `slow_start` If enabled, only a subset of events in a batch of events is transferred per transaction. The number of events to be sent increases up to `bulk_max_size` if no error is encountered. @@ -387,7 +391,7 @@ On error, the number of events per transaction is reduced again. The default is `false`. -===== `backoff.init` +==== `backoff.init` The number of seconds to wait before trying to reconnect to {ls} after a network error. After waiting `backoff.init` seconds, {beatname_uc} tries to @@ -395,7 +399,10 @@ reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is `1s`. -===== `backoff.max` +==== `backoff.max` The maximum number of seconds to wait before attempting to connect to {ls} after a network error. The default is `60s`. 
+ +// Logstash security +include::../../legacy/copied-from-beats/docs/shared-ssl-logstash-config.asciidoc[] \ No newline at end of file diff --git a/docs/legacy/copied-from-beats/docs/output-cloud.asciidoc b/docs/configure/outputs/output-cloud.asciidoc similarity index 90% rename from docs/legacy/copied-from-beats/docs/output-cloud.asciidoc rename to docs/configure/outputs/output-cloud.asciidoc index 380672c14ed..28bbf5a4618 100644 --- a/docs/legacy/copied-from-beats/docs/output-cloud.asciidoc +++ b/docs/configure/outputs/output-cloud.asciidoc @@ -1,12 +1,16 @@ [[configure-cloud-id]] -=== Configure the output for {ess} on {ecloud} +== Configure the output for {ess} on {ecloud} [subs="attributes"] ++++ {ess} ++++ -IMPORTANT: {deprecation-notice-config} +**** +image:./binary-yes-fm-no.svg[supported deployment methods] + +This documentation only applies to APM Server binary users. +**** ifdef::apm-server[] NOTE: This page refers to using a separate instance of APM Server with an existing @@ -37,7 +41,7 @@ These settings can be also specified at the command line, like this: ------------------------------------------------------------------------------ -==== `cloud.id` +=== `cloud.id` The Cloud ID, which can be found in the {ess} web console, is used by {beatname_uc} to resolve the {es} and {kib} URLs. This setting @@ -45,7 +49,7 @@ overwrites the `output.elasticsearch.hosts` and `setup.kibana.host` settings. NOTE: The base64 encoded `cloud.id` found in the {ess} web console does not explicitly specify a port. This means that {beatname_uc} will default to using port 443 when using `cloud.id`, not the commonly configured cloud endpoint port 9243. -==== `cloud.auth` +=== `cloud.auth` When specified, the `cloud.auth` overwrites the `output.elasticsearch.username` and `output.elasticsearch.password` settings. 
Because the {kib} settings inherit diff --git a/docs/configure/outputs/outputs-list.asciidoc b/docs/configure/outputs/outputs-list.asciidoc new file mode 100644 index 00000000000..b0b925c0de5 --- /dev/null +++ b/docs/configure/outputs/outputs-list.asciidoc @@ -0,0 +1,23 @@ +//# tag::outputs-list[] +* <> +* <> +* <> +* <> +* <> +* <> +//# end::outputs-list[] + +//# tag::outputs-include[] +include::output-cloud.asciidoc[] + +include::elasticsearch.asciidoc[] + +include::logstash.asciidoc[] + +include::kafka.asciidoc[] + +include::redis.asciidoc[] + +include::console.asciidoc[] + +//# end::outputs-include[] diff --git a/docs/legacy/copied-from-beats/outputs/redis/docs/redis.asciidoc b/docs/configure/outputs/redis.asciidoc similarity index 93% rename from docs/legacy/copied-from-beats/outputs/redis/docs/redis.asciidoc rename to docs/configure/outputs/redis.asciidoc index 758afe67426..d7b7dbe3152 100644 --- a/docs/legacy/copied-from-beats/outputs/redis/docs/redis.asciidoc +++ b/docs/configure/outputs/redis.asciidoc @@ -1,11 +1,15 @@ [[redis-output]] -=== Configure the Redis output +== Configure the Redis output ++++ Redis ++++ -IMPORTANT: {deprecation-notice-config} +**** +image:./binary-yes-fm-no.svg[supported deployment methods] + +The Redis output is not yet supported by {fleet}-managed APM Server. +**** The Redis output inserts the events into a Redis list or a Redis channel. This output plugin is compatible with @@ -28,28 +32,28 @@ output.redis: ifdef::apm-server[] [float] -==== Configure the {kib} output +=== {kib} configuration -include::../../../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] +include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] endif::[] -==== Compatibility +=== Compatibility This output is expected to work with all Redis versions between 3.2.4 and 5.0.8. Other versions might work as well, but are not supported. 
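A minimal Redis output sketch (the host and `key` values are illustrative):

["source","yaml"]
------------------------------------------------------------------------------
output.redis:
  enabled: true
  hosts: ["localhost:6379"]
  key: "apm"
------------------------------------------------------------------------------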
-==== Configuration options +=== Configuration options You can specify the following `output.redis` options in the +{beatname_lc}.yml+ config file: -===== `enabled` +==== `enabled` The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled. The default value is `true`. -===== `hosts` +==== `hosts` The list of Redis servers to connect to. If load balancing is enabled, the events are distributed to the servers in the list. If one server becomes unreachable, the events are @@ -63,12 +67,12 @@ The `redis` scheme will disable the `ssl` settings for the host, while `rediss` will enforce TLS. If `rediss` is specified and no `ssl` settings are configured, the output uses the system certificate store. -===== `index` +==== `index` The index name added to the events metadata for use by {ls}. The default is "{beatname_lc}". [[key-option-redis]] -===== `key` +==== `key` The name of the Redis list or channel the events are published to. If not configured, the value of the `index` setting is used. @@ -92,7 +96,7 @@ See the <> setting for other ways to set the key dynamically. [[keys-option-redis]] -===== `keys` +==== `keys` An array of key selector rules. Each rule specifies the `key` to use for events that match the rule. During publishing, {beatname_uc} uses the first matching @@ -138,15 +142,15 @@ output.redis: mysql: "backend_list" ------------------------------------------------------------------------------ -===== `password` +==== `password` The password to authenticate with. The default is no authentication. -===== `db` +==== `db` The Redis database number where the events are published. The default is 0. -===== `datatype` +==== `datatype` The Redis data type to use for publishing events. If the data type is `list`, the Redis RPUSH command is used and all events are added to the list with the key defined under `key`.
@@ -154,28 +158,28 @@ If the data type `channel` is used, the Redis `PUBLISH` command is used and means that events are pushed to the pub/sub mechanism of Redis. The name of the channel is the one defined under `key`. The default value is `list`. -===== `codec` +==== `codec` Output codec configuration. If the `codec` section is missing, events will be JSON encoded. See <> for more information. -===== `worker` +==== `worker` The number of workers to use for each host configured to publish events to Redis. Use this setting along with the `loadbalance` option. For example, if you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host). -===== `loadbalance` +==== `loadbalance` If set to true and multiple hosts or workers are configured, the output plugin load balances published events onto all Redis hosts. If set to false, the output plugin sends all events to only one host (determined at random) and will switch to another host if the currently selected one becomes unreachable. The default value is true. -===== `timeout` +==== `timeout` The Redis connection timeout in seconds. The default is 5 seconds. -===== `backoff.init` +==== `backoff.init` The number of seconds to wait before trying to reconnect to Redis after a network error. After waiting `backoff.init` seconds, {beatname_uc} tries to @@ -183,12 +187,12 @@ reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is `1s`. -===== `backoff.max` +==== `backoff.max` The maximum number of seconds to wait before attempting to connect to Redis after a network error. The default is `60s`. -===== `max_retries` +==== `max_retries` ifdef::ignores_max_retries[] {beatname_uc} ignores the `max_retries` setting and retries indefinitely. @@ -204,7 +208,7 @@ The default is 3. endif::[] -===== `bulk_max_size` +==== `bulk_max_size` The maximum number of events to bulk in a single Redis request or pipeline.
The default is 2048. @@ -221,13 +225,13 @@ Setting `bulk_max_size` to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch. -===== `ssl` +==== `ssl` Configuration options for SSL parameters like the root CA for Redis connections guarded by SSL proxies (for example https://www.stunnel.org[stunnel]). See <> for more information. -===== `proxy_url` +==== `proxy_url` The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The value must be a URL with a scheme of `socks5://`. You cannot use a web proxy @@ -241,7 +245,7 @@ client. You can change this behavior by setting the <> option. [[redis-proxy-use-local-resolver]] -===== `proxy_use_local_resolver` +==== `proxy_use_local_resolver` This option determines whether Redis hostnames are resolved locally when using a proxy. The default value is false, which means that name resolution occurs on the proxy server. diff --git a/docs/legacy/copied-from-beats/docs/shared-path-config.asciidoc b/docs/configure/path.asciidoc similarity index 80% rename from docs/legacy/copied-from-beats/docs/shared-path-config.asciidoc rename to docs/configure/path.asciidoc index 9ec0860e004..27c720ab6ee 100644 --- a/docs/legacy/copied-from-beats/docs/shared-path-config.asciidoc +++ b/docs/configure/path.asciidoc @@ -1,23 +1,16 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. 
-//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/shared-path-config.asciidoc[] -//// Make sure this content appears below a level 2 heading. -////////////////////////////////////////////////////////////////////////// - [[configuration-path]] -== Configure project paths += Configure project paths ++++ Project paths ++++ -IMPORTANT: {deprecation-notice-config} +**** +image:./binary-yes-fm-no.svg[supported deployment methods] + +This documentation is only relevant for APM Server binary users. +Fleet-managed paths are defined in <>. +**** The `path` section of the +{beatname_lc}.yml+ config file contains configuration options that define where {beatname_uc} looks for its files. For example, {beatname_uc} @@ -42,12 +35,12 @@ path.logs: /var/log/ Note that it is possible to override these options by using command line flags. [float] -=== Configuration options +== Configuration options You can specify the following options in the `path` section of the +{beatname_lc}.yml+ config file: [float] -==== `home` +=== `home` The home path for the {beatname_uc} installation. This is the default base path for all other path settings and for miscellaneous files that come with the distribution (for example, the @@ -62,7 +55,7 @@ path.home: /usr/share/beats ------------------------------------------------------------------------------ [float] -==== `config` +=== `config` The configuration path for the {beatname_uc} installation. This is the default base path for configuration files, including the main YAML configuration file and the @@ -77,7 +70,7 @@ path.config: /usr/share/beats/config ------------------------------------------------------------------------------ [float] -==== `data` +=== `data` The data path for the {beatname_uc} installation. This is the default base path for all the files in which {beatname_uc} needs to store its data. 
If not set by a CLI @@ -96,7 +89,7 @@ TIP: When running multiple {beatname_uc} instances on the same host, make sure t each have a distinct `path.data` value. [float] -==== `logs` +=== `logs` The logs path for a {beatname_uc} installation. This is the default location for {beatname_uc}'s log files. If not set by a CLI flag or in the configuration file, the default @@ -110,7 +103,7 @@ path.logs: /var/log/beats ------------------------------------------------------------------------------ [float] -==== `system.hostfs` +=== `system.hostfs` Specifies the mount point of the host's file system for use in monitoring a host. This can either be set in the config, or with the `--system.hostfs` CLI flag. This is used for cgroup self-monitoring. diff --git a/docs/legacy/configuration-rum.asciidoc b/docs/configure/rum.asciidoc similarity index 65% rename from docs/legacy/configuration-rum.asciidoc rename to docs/configure/rum.asciidoc index b0db2dd8731..7d8cc7b59ae 100644 --- a/docs/legacy/configuration-rum.asciidoc +++ b/docs/configure/rum.asciidoc @@ -1,143 +1,138 @@ [[configuration-rum]] -== Configure Real User Monitoring (RUM) += Configure Real User Monitoring (RUM) ++++ Real User Monitoring (RUM) ++++ -IMPORTANT: {deprecation-notice-config} -If you're using {fleet} and the Elastic APM integration, please see <> instead. +**** +image:./binary-yes-fm-yes.svg[supported deployment methods] + +Most options in this section are supported by all APM Server deployment methods. +**** The {apm-rum-ref-v}/index.html[Real User Monitoring (RUM) agent] captures user interactions with clients such as web browsers. These interactions are sent as events to the APM Server. Because the RUM agent runs on the client side, the connection between agent and server is unauthenticated. As a security precaution, RUM is therefore disabled by default. -To enable it, set `apm-server.rum.enabled` to `true` in your APM Server configuration file. 
+ +include::./tab-widgets/rum-config-widget.asciidoc[] In addition, if APM Server is deployed in an origin different than the page’s origin, you will need to configure {apm-rum-ref-v}/configuring-cors.html[Cross-Origin Resource Sharing (CORS)] in the Agent. -Example config with RUM enabled: - -["source","yaml"] ---- -apm-server.rum.enabled: true -apm-server.auth.anonymous.rate_limit.event_limit: 300 -apm-server.auth.anonymous.rate_limit.ip_limit: 1000 -apm-server.auth.anonymous.allow_service: [your_service_name] -apm-server.rum.allow_origins: ['*'] -apm-server.rum.allow_headers: ["header1", "header2"] -apm-server.rum.library_pattern: "node_modules|bower_components|~" -apm-server.rum.exclude_from_grouping: "^/webpack" -apm-server.rum.source_mapping.enabled: true -apm-server.rum.source_mapping.cache.expiration: 5m ---- - [float] [[enable-rum-support]] -=== Configuration reference - -Specify the following options in the `apm-server.rum` section of the `apm-server.yml` config file: += Configuration reference [[rum-enable]] [float] -==== `enabled` -To enable RUM support, set `apm-server.rum.enabled` to `true`. -By default this is disabled. +== Enable RUM +To enable RUM support, set this option to `true`. +By default this is disabled. (bool) + +|==== +| APM Server binary | `apm-server.rum.enabled` +| Fleet-managed | `Enable RUM` +|==== [NOTE] ==== -If an <> or <> is configured, -then enabling RUM support will automatically enable <>. +If an <> or <> is configured, +enabling RUM support will automatically enable <>. Anonymous authentication is required as the RUM agent runs in the browser. ==== -[float] -[[event_rate.limit]] -==== `event_rate.limit` - -deprecated::[7.15.0, Replaced by <>.] - -The maximum number of events allowed per second, per agent IP address. - -Default: `300` - -[float] -==== `event_rate.lru_size` - -deprecated::[7.15.0, Replaced by <>.] - -The number of unique IP addresses to track in an LRU cache.
-IP addresses in the cache will be rate limited according to the <> setting. -Consider increasing this default if your site has many concurrent clients. - -Default: `1000` - -[float] -[[rum-allow-service-names]] -==== `allow_service_names` - -deprecated::[7.15.0, Replaced by <>.] -A list of permitted service names for RUM support. -Names in this list must match the agent's `service.name`. -This can be set to restrict RUM events to those with one of a set of known service names, -in order to limit the number of service-specific indices or data streams created. - -Default: Not set (any service name is accepted) - [float] [[rum-allow-origins]] -==== `allow_origins` +== Allowed Origins A list of permitted origins for RUM support. User-agents send an Origin header that will be validated against this list. This is done automatically by modern browsers as part of the https://www.w3.org/TR/cors/[CORS specification]. An origin is made of a protocol scheme, host and port, without the URL path. -Default: `['*']` (allows everything) +Default: `['*']` (allows everything). (text) + +|==== +| APM Server binary | `apm-server.rum.allow_origins` +| Fleet-managed | `Allowed Origins` +|==== [float] [[rum-allow-headers]] -==== `allow_headers` +== Access-Control-Allow-Headers HTTP requests made from the RUM agent to the APM Server are limited in the HTTP headers they are allowed to have. If any other headers are added, the request will be rejected by the browser due to Cross-Origin Resource Sharing (CORS) restrictions. Use this setting to allow additional headers. The default list of allowed headers includes "Content-Type", "Content-Encoding", and "Accept"; custom values configured here are appended to the default list and used as the value for the `Access-Control-Allow-Headers` header. -Default: `[]` +Default: `[]`. 
(text) + +|==== +| APM Server binary | `apm-server.rum.allow_headers` +| Fleet-managed | `Access-Control-Allow-Headers` +|==== [float] [[rum-response-headers]] -==== `response_headers` +== Custom HTTP response headers Custom HTTP headers to add to RUM responses. This can be useful for security policy compliance. Values set for the same key will be concatenated. -Default: Not set +Default: none. (text) + +|==== +| APM Server binary | `apm-server.rum.response_headers` +| Fleet-managed | `Custom HTTP response headers` +|==== [float] [[rum-library-pattern]] -==== `library_pattern` +== Library Frame Pattern RegExp to be matched against a stack trace frame's `file_name` and `abs_path` attributes. If the RegExp matches, the stack trace frame is considered to be a library frame. When source mapping is applied, the `error.culprit` is set to reflect the _function_ and the _filename_ of the first non library frame. This aims to provide an entry point for identifying issues. -Default: `"node_modules|bower_components|~"` +Default: `"node_modules|bower_components|~"`. (text) + +|==== +| APM Server binary | `apm-server.rum.library_pattern` +| Fleet-managed | `Library Frame Pattern` +|==== [float] -==== `exclude_from_grouping` +== Exclude from grouping RegExp to be matched against a stack trace frame's `file_name`. If the RegExp matches, the stack trace frame is excluded from being used for calculating error groups. -Default: `"^/webpack"` (excludes stack trace frames that have a filename starting with `/webpack`) +Default: `"^/webpack"` (excludes stack trace frames that have a filename starting with `/webpack`). 
(text) + +|==== +| APM Server binary | `apm-server.rum.exclude_from_grouping` +| Fleet-managed | `Exclude from grouping` +|==== + + +[float] +[[rum-source-map]] += Source map configuration options + +**** +image:./binary-yes-fm-no.svg[supported deployment methods] + +Source maps are supported by all APM Server deployment methods; however, +the options in this section are only supported by the APM Server binary. +**** [[config-sourcemapping-enabled]] [float] -==== `source_mapping.enabled` -Used to enable/disable <> for RUM events. +== `source_mapping.enabled` +Used to enable/disable <> for RUM events. When enabled, the APM Server needs additional privileges to read source maps. See <> for more details. @@ -145,30 +140,67 @@ Default: `true` [[config-sourcemapping-elasticsearch]] [float] -==== `source_mapping.elasticsearch` +== `source_mapping.elasticsearch` Configure the {es} source map retrieval location, taking the same options as <>. This must be set when using an output other than {es}, and that output is writing to {es}. Otherwise leave this section empty. [[rum-sourcemap-cache]] [float] -==== `source_mapping.cache.expiration` +== `source_mapping.cache.expiration` If a source map has been uploaded to the APM Server, -<> is automatically applied to documents sent to the RUM endpoint. +<> is automatically applied to documents sent to the RUM endpoint. Source maps are fetched from {es} and then kept in an in-memory cache for the configured time. Values configured without a time unit are treated as seconds. Default: `5m` (5 minutes) [float] -==== `source_mapping.index_pattern` +== `source_mapping.index_pattern` Previous versions of APM Server stored source maps in `apm-%{[observer.version]}-sourcemap` indices. Search source maps stored in an older version with this setting.
Default: `"apm-*-sourcemap*"` [float] -=== Ingest pipelines +[[rum-deprecated]] += Deprecated configuration options + +[float] +[[event_rate.limit]] +== `event_rate.limit` + +deprecated::[7.15.0, Replaced by <>.] + +The maximum number of events allowed per second, per agent IP address. + +Default: `300` + +[float] +== `event_rate.lru_size` + +deprecated::[7.15.0, Replaced by <>.] + +The number of unique IP addresses to track in an LRU cache. +IP addresses in the cache will be rate limited according to the <> setting. +Consider increasing this default if your site has many concurrent clients. + +Default: `1000` + +[float] +[[rum-allow-service-names]] +== `allow_service_names` + +deprecated::[7.15.0, Replaced by <>.] +A list of permitted service names for RUM support. +Names in this list must match the agent's `service.name`. +This can be set to restrict RUM events to those with one of a set of known service names, +in order to limit the number of service-specific indices or data streams created. + +Default: Not set (any service name is accepted) + +[float] += Ingest pipelines The default APM Server pipeline includes processors that enrich RUM data prior to indexing in {es}. -See <> for details on how to locate, edit, or disable this preprocessing. +See <> for details on how to locate, edit, or disable this preprocessing. \ No newline at end of file diff --git a/docs/configure/sampling.asciidoc b/docs/configure/sampling.asciidoc new file mode 100644 index 00000000000..78589387705 --- /dev/null +++ b/docs/configure/sampling.asciidoc @@ -0,0 +1,135 @@ +[[tail-based-samling-config]] += Tail-based sampling + +**** +image:./binary-yes-fm-yes.svg[supported deployment methods] + +Most options on this page are supported by all APM Server deployment methods. +**** + +Tail-based sampling configuration options. + +include::./tab-widgets/sampling-config-widget.asciidoc[] + +[float] +[[configuration-tbs]] += Top-level tail-based sampling settings + +See <> to learn more. 
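+As a reference, the tail-based sampling options described in this section combine in the APM Server binary's `apm-server.yml` as shown below. This is an illustrative sketch only: the service name and sample rates are hypothetical values, not defaults. Note that the final policy specifies only a sample rate, acting as the catch-all for traces that match no stricter policy.
+
+["source","yaml"]
+----
+apm-server.sampling.tail.enabled: true
+apm-server.sampling.tail.interval: 1m
+apm-server.sampling.tail.policies:
+  - service.name: my_service_name
+    sample_rate: 0.5
+  - sample_rate: 0.1
+apm-server.sampling.tail.storage_limit: 3GB
+----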
+ +:input-type: ref +// tag::tbs-top[] + +[float] +[id="sampling-tail-enabled-{input-type}"] +== Enable tail-based sampling +Set to `true` to enable tail-based sampling. +Disabled by default. (bool) + +|==== +| APM Server binary | `sampling.tail.enabled` +| Fleet-managed | `Enable tail-based sampling` +|==== + +[float] +[id="sampling-tail-interval-{input-type}"] +== Interval +Synchronization interval for multiple APM Servers. +Should be in the order of tens of seconds or low minutes. +Default: `1m` (1 minute). (duration) + +|==== +| APM Server binary | `sampling.tail.interval` +| Fleet-managed | `Interval` +|==== + +[float] +[id="sampling-tail-policies-{input-type}"] +== Policies +Criteria used to match a root transaction to a sample rate. + +Policies map trace events to a sample rate. +Each policy must specify a sample rate. +Trace events are matched to policies in the order specified. +All policy conditions must be true for a trace event to match. +Each policy list should conclude with a policy that only specifies a sample rate. +This final policy is used to catch remaining trace events that don't match a stricter policy. +(`[]policy`) + +|==== +| APM Server binary | `sampling.tail.policies` +| Fleet-managed | `Policies` +|==== + +[float] +[id="sampling-tail-storage_limit-{input-type}"] +== Storage limit +The amount of storage space allocated for trace events matching tail sampling policies. Caution: Setting this limit higher than the allowed space may cause APM Server to become unhealthy. +Default: `3GB`. (text) + +|==== +| APM Server binary | `sampling.tail.storage_limit` +| Fleet-managed | `Storage limit` +|==== + +// end::tbs-top[] + +[float] +[[configuration-tbs-policy]] += Policy-level tail-based sampling settings + +See <> to learn more. + +// tag::tbs-policy[] + +[float] +[id="sampling-tail-sample-rate-{input-type}"] +== Sample rate + +**`sample_rate`** + +The sample rate to apply to trace events matching this policy. +Required in each policy.
+ +The sample rate must be greater than `0` and less than or equal to `1`. +For example, a `sample_rate` of `0.01` means that 1% of trace events matching the policy will be sampled. +A `sample_rate` of `1` means that 100% of trace events matching the policy will be sampled. (float) + +[float] +[id="sampling-tail-trace-name-{input-type}"] +== Trace name + +**`trace.name`** + +The trace name for events to match a policy. +A match occurs when the configured `trace.name` matches the `transaction.name` of the root transaction of a trace. +A root transaction is any transaction without a `parent.id`. (string) + +[float] +[id="sampling-tail-trace-outcome-{input-type}"] +== Trace outcome + +**`trace.outcome`** + +The trace outcome for events to match a policy. +A match occurs when the configured `trace.outcome` matches a trace's `event.outcome` field. +Trace outcome can be `success`, `failure`, or `unknown`. (string) + +[float] +[id="sampling-tail-service-name-{input-type}"] +== Service name + +**`service.name`** + +The service name for events to match a policy. (string) + +[float] +[id="sampling-tail-service-environment-{input-type}"] +== Service environment + +**`service.environment`** + +The service environment for events to match a policy. (string) + +// end::tbs-policy[] +:!input-type: \ No newline at end of file diff --git a/docs/configure/shared/input-apm.asciidoc b/docs/configure/shared/input-apm.asciidoc new file mode 100644 index 00000000000..2f3b13904ba --- /dev/null +++ b/docs/configure/shared/input-apm.asciidoc @@ -0,0 +1,8 @@ + +// tag::fleet-managed-settings[] +Configure and customize Fleet-managed APM settings directly in {kib}: + +. Open {kib} and navigate to **{fleet}**. +. Under the **Agent policies** tab, select the policy you would like to configure. +. Find the Elastic APM integration and select **Actions** > **Edit integration**.
+// end::fleet-managed-settings[] diff --git a/docs/configure/tab-widgets/anon-auth-widget.asciidoc b/docs/configure/tab-widgets/anon-auth-widget.asciidoc new file mode 100644 index 00000000000..16746820e42 --- /dev/null +++ b/docs/configure/tab-widgets/anon-auth-widget.asciidoc @@ -0,0 +1,40 @@ +++++ +
+
+ + +
+
+++++ + +include::anon-auth.asciidoc[tag=binary] + +++++ +
+ +
+++++ \ No newline at end of file diff --git a/docs/configure/tab-widgets/anon-auth.asciidoc b/docs/configure/tab-widgets/anon-auth.asciidoc new file mode 100644 index 00000000000..f8f1a29c117 --- /dev/null +++ b/docs/configure/tab-widgets/anon-auth.asciidoc @@ -0,0 +1,18 @@ +// tag::binary[] +Example configuration: + +["source","yaml"] +---- +apm-server.auth.anonymous.enabled: true +apm-server.auth.anonymous.allow_agent: [rum-js] +apm-server.auth.anonymous.allow_service: [my_service_name] +apm-server.auth.anonymous.rate_limit.event_limit: 300 +apm-server.auth.anonymous.rate_limit.ip_limit: 1000 +---- +// end::binary[] + +// tag::fleet-managed[] +include::../shared/input-apm.asciidoc[tag=fleet-managed-settings] ++ +. Look for these settings under **Agent authorization**. +// end::fleet-managed[] diff --git a/docs/configure/tab-widgets/auth-config-widget.asciidoc b/docs/configure/tab-widgets/auth-config-widget.asciidoc new file mode 100644 index 00000000000..d81425414e7 --- /dev/null +++ b/docs/configure/tab-widgets/auth-config-widget.asciidoc @@ -0,0 +1,40 @@ +++++ +
+
+ + +
+
+++++ + +include::auth-config.asciidoc[tag=binary] + +++++ +
+ +
+++++ \ No newline at end of file diff --git a/docs/configure/tab-widgets/auth-config.asciidoc b/docs/configure/tab-widgets/auth-config.asciidoc new file mode 100644 index 00000000000..fd256c3124b --- /dev/null +++ b/docs/configure/tab-widgets/auth-config.asciidoc @@ -0,0 +1,23 @@ +// tag::binary[] +**Example config file:** + +[source,yaml] +---- +apm-server: + host: "localhost:8200" + rum: + enabled: true + +output: + elasticsearch: + hosts: ElasticsearchAddress:9200 + +max_procs: 4 +---- +// end::binary[] + +// tag::fleet-managed[] +include::../shared/input-apm.asciidoc[tag=fleet-managed-settings] ++ +. Look for these settings under **Agent authorization**. +// end::fleet-managed[] diff --git a/docs/configure/tab-widgets/general-config-widget.asciidoc b/docs/configure/tab-widgets/general-config-widget.asciidoc new file mode 100644 index 00000000000..c543b4e77e4 --- /dev/null +++ b/docs/configure/tab-widgets/general-config-widget.asciidoc @@ -0,0 +1,40 @@ +++++ +
+
+ + +
+
+++++ + +include::general-config.asciidoc[tag=binary] + +++++ +
+ +
+++++ \ No newline at end of file diff --git a/docs/configure/tab-widgets/general-config.asciidoc b/docs/configure/tab-widgets/general-config.asciidoc new file mode 100644 index 00000000000..8c34c7eca81 --- /dev/null +++ b/docs/configure/tab-widgets/general-config.asciidoc @@ -0,0 +1,19 @@ +// tag::binary[] +**Example config file:** + +[source,yaml] +---- +apm-server: + host: "localhost:8200" + rum: + enabled: true + +max_procs: 4 +---- +// end::binary[] + +// tag::fleet-managed[] +include::../shared/input-apm.asciidoc[tag=fleet-managed-settings] ++ +. Look for these settings under **General**. +// end::fleet-managed[] diff --git a/docs/configure/tab-widgets/rum-config-widget.asciidoc b/docs/configure/tab-widgets/rum-config-widget.asciidoc new file mode 100644 index 00000000000..192121fc5b7 --- /dev/null +++ b/docs/configure/tab-widgets/rum-config-widget.asciidoc @@ -0,0 +1,40 @@ +++++ +
+
+ + +
+
+++++ + +include::rum-config.asciidoc[tag=binary] + +++++ +
+ +
+++++ \ No newline at end of file diff --git a/docs/configure/tab-widgets/rum-config.asciidoc b/docs/configure/tab-widgets/rum-config.asciidoc new file mode 100644 index 00000000000..9e194624aca --- /dev/null +++ b/docs/configure/tab-widgets/rum-config.asciidoc @@ -0,0 +1,28 @@ +// tag::binary[] +To enable RUM support, set `apm-server.rum.enabled` to `true` in your APM Server configuration file. + +Example config: + +["source","yaml"] +---- +apm-server.rum.enabled: true +apm-server.auth.anonymous.rate_limit.event_limit: 300 +apm-server.auth.anonymous.rate_limit.ip_limit: 1000 +apm-server.auth.anonymous.allow_service: [your_service_name] +apm-server.rum.allow_origins: ['*'] +apm-server.rum.allow_headers: ["header1", "header2"] +apm-server.rum.library_pattern: "node_modules|bower_components|~" +apm-server.rum.exclude_from_grouping: "^/webpack" +apm-server.rum.source_mapping.enabled: true +apm-server.rum.source_mapping.cache.expiration: 5m +apm-server.rum.source_mapping.elasticsearch.api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA +---- +// end::binary[] + +// tag::fleet-managed[] +To enable RUM, set <> to `true`. + +include::../shared/input-apm.asciidoc[tag=fleet-managed-settings] ++ +. Look for these options under **Real User Monitoring**. +// end::fleet-managed[] diff --git a/docs/configure/tab-widgets/sampling-config-widget.asciidoc b/docs/configure/tab-widgets/sampling-config-widget.asciidoc new file mode 100644 index 00000000000..902636efb3d --- /dev/null +++ b/docs/configure/tab-widgets/sampling-config-widget.asciidoc @@ -0,0 +1,40 @@ +++++ +
+
+ + +
+
+++++ + +include::sampling-config.asciidoc[tag=binary] + +++++ +
+ +
+++++ \ No newline at end of file diff --git a/docs/configure/tab-widgets/sampling-config.asciidoc b/docs/configure/tab-widgets/sampling-config.asciidoc new file mode 100644 index 00000000000..2b1a70d0fd4 --- /dev/null +++ b/docs/configure/tab-widgets/sampling-config.asciidoc @@ -0,0 +1,23 @@ +// tag::binary[] +**Example config file:** + +[source,yaml] +---- +apm-server: + host: "localhost:8200" + rum: + enabled: true + +output: + elasticsearch: + hosts: ElasticsearchAddress:9200 + +max_procs: 4 +---- +// end::binary[] + +// tag::fleet-managed[] +include::../shared/input-apm.asciidoc[tag=fleet-managed-settings] ++ +. Look for these options under **Tail-based sampling**. +// end::fleet-managed[] diff --git a/docs/configure/tab-widgets/tls-config-widget.asciidoc b/docs/configure/tab-widgets/tls-config-widget.asciidoc new file mode 100644 index 00000000000..1099f74a207 --- /dev/null +++ b/docs/configure/tab-widgets/tls-config-widget.asciidoc @@ -0,0 +1,40 @@ +++++ +
+
+ + +
+
+++++ + +include::tls-config.asciidoc[tag=binary] + +++++ +
+ +
+++++ \ No newline at end of file diff --git a/docs/configure/tab-widgets/tls-config.asciidoc b/docs/configure/tab-widgets/tls-config.asciidoc new file mode 100644 index 00000000000..423a82002d4 --- /dev/null +++ b/docs/configure/tab-widgets/tls-config.asciidoc @@ -0,0 +1,21 @@ +// tag::binary[] +**Example config file:** + +[source,yaml] +---- +apm-server: + host: "localhost:8200" + rum: + enabled: true + +output: + elasticsearch: + hosts: ElasticsearchAddress:9200 + +max_procs: 4 +---- +// end::binary[] + +// tag::fleet-managed[] +include::../shared/input-apm.asciidoc[tag=fleet-managed-settings] +// end::fleet-managed[] diff --git a/docs/configure/tls.asciidoc b/docs/configure/tls.asciidoc new file mode 100644 index 00000000000..ba6eed8218d --- /dev/null +++ b/docs/configure/tls.asciidoc @@ -0,0 +1,15 @@ +[[configuration-ssl-landing]] += SSL/TLS settings + +SSL/TLS is available for: + +* <> (APM Agents) +* <> that support SSL, like {es}, {ls}, or Kafka. + +Additional information on getting started with SSL/TLS is available in <>. + +// :leveloffset: +2 +include::{libbeat-dir}/shared-ssl-config.asciidoc[] +// :leveloffset: -2 + +include::../legacy/ssl-input-settings.asciidoc[leveloffset=-1] \ No newline at end of file diff --git a/docs/diagrams/apm-decision-tree.asciidoc b/docs/diagrams/apm-decision-tree.asciidoc new file mode 100644 index 00000000000..f169b8d9340 --- /dev/null +++ b/docs/diagrams/apm-decision-tree.asciidoc @@ -0,0 +1,51 @@ +++++ +
+// APM decision tree diagram (inline HTML/SVG markup)
+++++ \ No newline at end of file diff --git a/docs/how-to.asciidoc b/docs/how-to.asciidoc index 4afc717db56..f9e553802ec 100644 --- a/docs/how-to.asciidoc +++ b/docs/how-to.asciidoc @@ -5,7 +5,6 @@ Learn how to perform common APM configuration and management tasks. * <> * <> -* <> * <> * <> @@ -13,8 +12,6 @@ include::./source-map-how-to.asciidoc[] include::./jaeger-integration.asciidoc[] -include::./monitor.asciidoc[] - include::./ingest-pipelines.asciidoc[] include::./custom-index-template.asciidoc[] diff --git a/docs/images/bin-ov.png b/docs/images/bin-ov.png new file mode 100644 index 00000000000..7702dd7d765 Binary files /dev/null and b/docs/images/bin-ov.png differ diff --git a/docs/images/fm-ov.png b/docs/images/fm-ov.png new file mode 100644 index 00000000000..7aace8b2873 Binary files /dev/null and b/docs/images/fm-ov.png differ diff --git a/docs/input-apm.asciidoc b/docs/input-apm.asciidoc deleted file mode 100644 index 0a0cf3ed3e5..00000000000 --- a/docs/input-apm.asciidoc +++ /dev/null @@ -1,108 +0,0 @@ -:input-type: apm - -[[input-apm]] -== APM input settings - -++++ -Input settings -++++ - -Configure and customize APM integration settings directly in {kib}: - -// tag::edit-integration-settings[] -. Open {kib} and navigate to **{fleet}**. -. Under the **Agent policies** tab, select the policy you would like to configure. -. Find the Elastic APM integration and select **Actions** > **Edit integration**. 
-// end::edit-integration-settings[] - -[float] -[[apm-input-general-settings]] -=== General settings - -[cols="2*> -* <> +* <> +* <> [float] [[configure-sampling-central-jaeger]] diff --git a/docs/legacy/agent-configuration.asciidoc b/docs/legacy/agent-configuration.asciidoc deleted file mode 100644 index 60edf109183..00000000000 --- a/docs/legacy/agent-configuration.asciidoc +++ /dev/null @@ -1,103 +0,0 @@ -[[agent-configuration-api]] -== Agent configuration API - -++++ -Agent configuration -++++ - -IMPORTANT: {deprecation-notice-api} -If you've already upgraded, see <>. - -APM Server exposes an API endpoint that allows agents to query the server for configuration changes. -More information on this feature is available in {kibana-ref}/agent-configuration.html[APM Agent configuration in {kib}]. - -Starting with release 7.14, agent configuration can be declared directly within -`apm-server.yml`. Requests to the endpoint are unchanged; `apm-server` responds -directly without querying {kib} for the agent configuration. Refer to the -example in `apm-server.yml` under Agent Configuration. - -[[agent-config-endpoint]] -[float] -=== Agent configuration endpoint - -The Agent configuration endpoint accepts both `HTTP GET` and `HTTP POST` requests. -If an <> or <> has been configured, it will also apply to this endpoint. - -[[agent-config-api-get]] -[float] -==== HTTP GET - -`service.name` is a required query string parameter. - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/config/v1/agents?service.name=SERVICE_NAME ------------------------------------------------------------- - -[[agent-config-api-post]] -[float] -==== HTTP POST - -Encode parameters as a JSON object in the body. -`service.name` is a required parameter. 
- -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/config/v1/agents -{ - "service": { - "name": "test-service", - "environment": "all" - }, - "CAPTURE_BODY": "off" -} ------------------------------------------------------------- - -[[agent-config-api-response]] -[float] -==== Responses - -* Successful - `200` -* {kib} endpoint is disabled - `403` -* {kib} is unreachable - `503` - -[[agent-config-api-example]] -[float] -==== Example request - -Example Agent configuration `GET` request including the service name "test-service": - -["source","sh",subs="attributes"] ---------------------------------------------------------------------------- -curl -i http://127.0.0.1:8200/config/v1/agents?service.name=test-service ---------------------------------------------------------------------------- - -Example Agent configuration `POST` request including the service name "test-service": - -["source","sh",subs="attributes"] ---------------------------------------------------------------------------- -curl -X POST http://127.0.0.1:8200/config/v1/agents \ - -H "Authorization: Bearer secret_token" \ - -H 'content-type: application/json' \ - -d '{"service": {"name": "test-service"}}' ---------------------------------------------------------------------------- - -[[agent-config-api-ex-response]] -[float] -==== Example response - -["source","sh",subs="attributes"] ---------------------------------------------------------------------------- -HTTP/1.1 200 OK -Cache-Control: max-age=30, must-revalidate -Content-Type: application/json -Etag: "7b23d63c448a863fa" -Date: Mon, 24 Feb 2020 20:53:07 GMT -Content-Length: 98 - -{ - "capture_body": "off", - "transaction_max_spans": "500", - "transaction_sample_rate": "0.3" -} ---------------------------------------------------------------------------- diff --git a/docs/legacy/api-keys.asciidoc b/docs/legacy/api-keys.asciidoc index cab1e68dceb..8093843c674 100644 --- 
a/docs/legacy/api-keys.asciidoc +++ b/docs/legacy/api-keys.asciidoc @@ -1,9 +1,6 @@ [role="xpack"] [[beats-api-keys]] -== Grant access using API keys - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, see <>. +=== Grant access using API keys Instead of using usernames and passwords, you can use API keys to grant access to {es} resources. You can set API keys to expire at a certain time, @@ -19,7 +16,7 @@ You can create as many API keys per user as necessary. [float] [[beats-api-key-publish]] -=== Create an API key for writing events +==== Create an API key for writing events In {kib}, navigate to **{stack-manage-app}** > **API keys** and click **Create API key**. @@ -66,7 +63,7 @@ output.elasticsearch: [float] [[beats-api-key-monitor]] -=== Create an API key for monitoring +==== Create an API key for monitoring In {kib}, navigate to **{stack-manage-app}** > **API keys** and click **Create API key**. @@ -110,7 +107,7 @@ monitoring.elasticsearch: [float] [[beats-api-key-es]] -=== Create an API key with {es} APIs +==== Create an API key with {es} APIs You can also use {es}'s {ref}/security-api-create-api-key.html[Create API key API] to create a new API key. For example: @@ -143,7 +140,7 @@ See the {ref}/security-api-create-api-key.html[Create API key] reference for mor [[learn-more-api-keys]] [float] -=== Learn more about API keys +==== Learn more about API keys See the {es} API key documentation for more information: diff --git a/docs/legacy/breaking-changes.asciidoc b/docs/legacy/breaking-changes.asciidoc deleted file mode 100644 index 06630ab2480..00000000000 --- a/docs/legacy/breaking-changes.asciidoc +++ /dev/null @@ -1,132 +0,0 @@ -:issue: https://github.com/elastic/apm-server/issues/ -:pull: https://github.com/elastic/apm-server/pull/ - -[[breaking-changes]] -== Breaking Changes -APM Server is built on top of {beats-ref}/index.html[libbeat]. -As such, any breaking change in libbeat is also considered to be a breaking change in APM Server. 
- -[float] -=== 7.15 - -The following breaking changes were introduced in 7.15: - -- `network.connection_type` is now `network.connection.type` {pull}5671[5671] -- `transaction.page` and `error.page` no longer recorded {pull}5872[5872] -- experimental:["This breaking change applies to the experimental tail-based sampling feature."] `apm-server.sampling.tail` now requires `apm-server.data_streams.enabled` {pull}5952[5952] -- beta:["This breaking change applies to the beta APM integration."] The `traces-sampled-*` data stream is now `traces-apm.sampled-*` {pull}5952[5952] - -[float] -=== 7.14 -There are no breaking changes in APM Server. - -[float] -=== 7.13 -There are no breaking changes in APM Server. - -[float] -=== 7.12 - -There are three breaking changes to be aware of; -these changes only impact users ingesting data with -{apm-server-ref-v}/jaeger.html[Jaeger clients]. - -* Leading zeros are no longer removed from Jaeger client trace/span ids. -+ --- -This change ensures distributed tracing continues to work across platforms by creating -consistent, full trace/span IDs from Jaeger clients, Elastic APM agents, -and OpenTelemetry SDKs. --- - -* Jaeger spans will now have a type of "app" where they previously were "custom". -+ --- -If the Jaeger span type is not inferred, it will now be "app". -This aligns with the OpenTelemetry Collector exporter -and improves the functionality of the _time spent by span type_ charts in the {apm-app}. --- - -* Jaeger spans may now have a more accurate outcome of "unknown". -+ --- -Previously, a "success" outcome was assumed when a span didn't fail. -The new default assigns "unknown", and only sets an outcome of "success" or "failure" when -the outcome is explicitly known. -This change aligns with Elastic APM agents and the OpenTelemetry Collector exporter. --- - -[float] -=== 7.11 -There are no breaking changes in APM Server. - -[float] -=== 7.10 -There are no breaking changes in APM Server. 
- -[float] -=== 7.9 -There are no breaking changes in APM Server. - -[float] -=== 7.8 -There are no breaking changes in APM Server. - -[float] -=== 7.7 -There are no breaking changes in APM Server. -However, a previously hardcoded feature is now configurable. -Failing to follow these {apm-guide-7x}/upgrading-to-77.html[upgrade steps] will result in increased span metadata ingestion when upgrading to version 7.7. - -[float] -=== 7.6 -There are no breaking changes in APM Server. - -[float] -=== 7.5 -The following breaking changes have been introduced in 7.5: - -* Introduced dedicated `apm-server.ilm.setup.*` flags. -This means you can now customize {ilm-init} behavior from within the APM Server configuration. -As a side effect, `setup.template.*` settings will be ignored for {ilm-init} related templates per event type. -See {apm-server-ref}/ilm.html[set up {ilm-init}] for more information. - -* By default, {ilm-init} policies will not longer be versioned. -All event types will switch to the new default policy: rollover after 30 days or when reaching a size of 50 GB. -See {apm-server-ref}/ilm.html[default policy] for more information. - -* To make use of all the new features introduced in 7.5, -you must ensure you are using version 7.5+ of APM Server and version 7.5+ of {kib}. - -[float] -=== 7.0 -The following breaking changes have been introduced in 7.0: - -* Removed deprecated Intake v1 API endpoints. -Upgrade agents to a version that supports APM Server ≥ 6.5. -{apm-guide-ref}/breaking-7.0.0.html#breaking-remove-v1[More information]. -* Moved fields in {es} to be compliant with the Elastic Common Schema (ECS). -{apm-guide-ref}/breaking-7.0.0.html#breaking-ecs[More information and changed fields]. - -[float] -=== 6.5 -There are no breaking changes in APM Server. -Advanced users may find the {apm-guide-7x}/upgrading-to-65.html[upgrading to 6.5 guide] useful. 
- -[float] -=== 6.4 -The following breaking changes have been introduced in 6.4: - -* Indexing the `onboarding` document in it's own index by default. - -[float] -=== 6.3 -The following breaking changes have been introduced in 6.3: - -* Indexing events in separate indices by default. -* {beats-ref-63}/breaking-changes-6.3.html[Breaking changes in libbeat] - -[float] -=== 6.2 - -APM Server is now GA (generally available). diff --git a/docs/legacy/common-problems.asciidoc b/docs/legacy/common-problems.asciidoc deleted file mode 100644 index 3852abb9145..00000000000 --- a/docs/legacy/common-problems.asciidoc +++ /dev/null @@ -1,354 +0,0 @@ -[[common-problems-legacy]] -== Common problems - -IMPORTANT: {deprecation-notice-data} - -This section describes common problems you might encounter with APM Server. - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -[[no-data-indexed-legacy]] -[float] -=== No data is indexed - -If no data shows up in {es}, first check that the APM components are properly connected. - -To ensure that APM Server configuration is valid and it can connect to the configured output, {es} by default, -run the following commands: - -["source","sh"] ------------------------------------------------------------- -apm-server test config -apm-server test output ------------------------------------------------------------- - -To see if the agent can connect to the APM Server, send requests to the instrumented service and look for lines -containing `[request]` in the APM Server logs. - -If no requests are logged, it might be that SSL is <> or that the host is wrong. -Particularly, if you are using Docker, ensure to bind to the right interface (for example, set -`apm-server.host = 0.0.0.0:8200` to match any IP) and set the `SERVER_URL` setting in the agent accordingly. 
- -If you see requests coming through the APM Server but they are not accepted (response code other than `202`), consider -the response code to narrow down the possible causes (see the sections below). - -Another reason for data not showing up is that the agent is not auto-instrumenting something you were expecting. Check -the {apm-agents-ref}/index.html[agent documentation] for details on what is automatically instrumented. - -APM Server currently relies on {es} to create indices that do not exist. -As a result, {es} must be configured to allow {ref}/docs-index_.html#index-creation[automatic index creation] for APM indices. - -[[data-indexed-no-apm-legacy]] -[float] -=== Data is indexed but doesn't appear in the APM UI - -The {apm-app} relies on index mappings to query and display data. -If your APM data isn't showing up in the {apm-app}, but is elsewhere in {kib}, like the Discover app, -you may have a missing index mapping. - -You can determine if a field was mapped correctly with the `_mapping` API. -For example, run the following command in the {kib} {kibana-ref}/console-kibana.html[console]. -This will display the field data type of the `service.name` field. - -[source,curl] ---- -GET apm-*/_mapping/field/service.name ---- - -If the `mapping.name.type` is `"text"`, your APM indices were not set up correctly. - -[source,yml] ---- -"mappings" : { - "service.name" : { - "full_name" : "service.name", - "mapping" : { - "name" : { - "type" : "text", <1> - "fields" : { - "keyword" : { - "type" : "keyword", - "ignore_above" : 256 - } - } - } - } - } -} ---- -<1> The `service.name` `mapping.name.type` would be `"keyword"` if this field had been set up correctly. - -To fix this problem, you must delete and recreate your APM indices, as index templates cannot be applied retroactively. - -. Stop your APM Server(s) so they are not writing any new documents. - -. Delete your existing `apm-*` indices.
-In the {kib} console, run: -+ -[source,curl] ---- -DELETE apm-* ---- -+ -Alternatively, you can use the {ref}/index-mgmt.html[Index Management] page in {kib}. -Select all `apm-*` indices and navigate to **Manage Indices** > **Delete Indices**. - -. Starting in version 8.0.0, {fleet} uses the APM integration to set up and manage APM index templates. -Install the APM integration by following these steps: -+ --- -include::./getting-started-apm-server.asciidoc[tag=install-apm-integration] --- - -. Start APM Server. - -. Verify the correct index templates were installed. In the {kib} console, run: -+ -[source,curl] ---- -GET _template/apm-* ---- -+ -Alternatively, you can use the {ref}/index-mgmt.html[**Index Management**] page in {kib}. -On the **Index Templates** tab, search for `apm` under **Legacy Index Templates**. - - -[[bad-request-legacy]] -[float] -=== HTTP 400: Data decoding error / Data validation error - -The most likely cause for this error is using incompatible versions of {apm-agent} and APM Server. -See the {apm-overview-ref-v}/agent-server-compatibility.html[agent/server compatibility matrix] for more information. - -[[event-too-large-legacy]] -[float] -=== HTTP 400: Event too large - -APM agents communicate with the APM Server by sending events in an HTTP request. Each event is sent as its own line in the HTTP request body. If events are too large, consider increasing the <> -setting in the APM Server and adjusting relevant settings in the agent. - -[[unauthorized-legacy]] -[float] -=== HTTP 401: Invalid token - -The <> in the request header doesn't match the one configured in the APM Server. - -[[forbidden-legacy]] -[float] -=== HTTP 403: Forbidden request - -Either you are sending requests to a <> endpoint without RUM enabled, or a request -is coming from an origin not specified in `apm-server.rum.allow_origins`. See the <>.
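The 403 causes above correspond to two RUM settings; a minimal sketch (the origin shown is only an example) looks like:

[source,yaml]
----
apm-server:
  rum:
    # RUM support is disabled by default and must be enabled explicitly.
    enabled: true
    # Requests from origins not listed here are rejected with HTTP 403.
    allow_origins: ["https://frontend.example.com"]
----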
- -[[request-timed-out-legacy]] -[float] -=== HTTP 503: Request timed out waiting to be processed - -This happens when APM Server exceeds the maximum number of requests that it can process concurrently. - -To alleviate this problem, you can try to: - -* <> -* <> -* <> -* <> - -[float] -[[ssl-client-fails-legacy]] -=== SSL client fails to connect - -The target host might be unreachable, or the certificate may not be valid. To resolve your issue: - -* Make sure that the APM Server process on the target host is running and you can connect to it. -First, try to ping the target host to verify that you can reach it from the host running {beatname_uc}. -Then use either `nc` or `telnet` to make sure that the port is available. For example: -+ -[source,shell] ---------------------------------------------------------------------- -ping -telnet 5044 ---------------------------------------------------------------------- - -* Verify that the certificate is valid and that the hostname and IP match. - -* Use OpenSSL to test connectivity to the target server and diagnose problems. -See the https://www.openssl.org/docs/manmaster/apps/s_client.html[OpenSSL documentation] for more info. - -[float] -==== Common SSL-Related Errors and Resolutions - -Here are some common errors and ways to fix them: - -* <> -* <> -* <> -* <> - -[float] -[[cannot-validate-certificate-legacy]] -===== x509: cannot validate certificate for because it doesn't contain any IP SANs - -This happens because your certificate is only valid for the hostname present in the Subject field. - -To resolve this problem, try one of these solutions: - -* Create a DNS entry for the hostname, mapping it to the server's IP. -* Create an entry in `/etc/hosts` for the hostname. Or, on Windows, add an entry to -`C:\Windows\System32\drivers\etc\hosts`. -* Re-create the server certificate and add a Subject Alternative Name (SAN) for the IP address of the server.
This makes the -server's certificate valid for both the hostname and the IP address. - -[float] -[[getsockopt-no-route-to-host-legacy]] -===== getsockopt: no route to host - -This is not an SSL problem. It's a networking problem. Make sure the two hosts can communicate. - -[float] -[[getsockopt-connection-refused-legacy]] -===== getsockopt: connection refused - -This is not an SSL problem. Make sure that {ls} is running and that there is no firewall blocking the traffic. - -[float] -[[target-machine-refused-connection-legacy]] -===== No connection could be made because the target machine actively refused it - -A firewall is refusing the connection. Check if a firewall is blocking the traffic on the client, the network, or the -destination host. - -[[field-limit-exceeded-legacy]] -[float] -=== Field limit exceeded - -When adding too many distinct tag keys on a transaction or span, -you risk creating a link:{ref}/mapping.html#mapping-limit-settings[mapping explosion]. - -For example, -avoid using user-specified data, -like URL parameters, -as a tag key. -Likewise, -using the current timestamp or a user ID as a tag key is not a good idea. -However, -tag *values* with a high cardinality are not a problem. -Just try to keep the number of distinct tag keys to a minimum. - -The symptom of a mapping explosion is that transactions and spans are no longer indexed after a certain time. -Usually, -on the next day, -the spans and transactions will be indexed again because a new index is created each day. -But as soon as the field limit is reached, -indexing stops again. - -In the agent logs, -you won't see any sign of failure, as the APM Server asynchronously sends the data it receives from the agents to {es}.
-However, -the APM server and {es} log a warning like this: - -[source,logs] ----- -{\"type\":\"illegal_argument_exception\",\"reason\":\"Limit of total fields [1000] in index [apm-7.0.0-transaction-2017.05.30] has been exceeded\"} ----- - -[[io-timeout-legacy]] -[float] -=== I/O Timeout - -I/O Timeouts can occur when your timeout settings across the stack are not configured correctly, -especially when using a load balancer. - -You may see an error like the one below in the agent logs, and/or a similar error on the APM Server side: - -[source,logs] ----------------------------------------------------------------------- -[ElasticAPM] APM Server responded with an error: -"read tcp 123.34.22.313:8200->123.34.22.40:41602: i/o timeout" ----------------------------------------------------------------------- - -To fix this, ensure timeouts are incrementing from the {apm-agents-ref}[{apm-agent}], -through your load balancer, to the <>. - -By default, the agent timeouts are set at 10 seconds, and the server timeout is set at 30 seconds. -Your load balancer should be set somewhere between these numbers. - -For example: - -[source,txt] ----------------------------------------------------------------------- -APM agent --> Load Balancer --> APM Server - 10s 15s 30s ----------------------------------------------------------------------- - -[[server-es-down-legacy]] -[float] -=== What happens when APM Server or {es} is down? - -*If {es} is down* - -APM Server does not have an internal queue to buffer requests, -but instead leverages an HTTP request timeout to act as back-pressure. -If {es} goes down, the APM Server will eventually deny incoming requests. -Both the APM Server and {apm-agent}(s) will issue logs accordingly. - -*If APM Server is down* - -Some agents have internal queues or buffers that will temporarily store data if the APM Server goes down. -As a general rule of thumb, queues fill up quickly. Assume data will be lost if APM Server goes down. 
-Adjusting these queues/buffers can increase the agent's overhead, so use caution when updating default values. - -* **Go agent** - Circular buffer with configurable size: -{apm-go-ref}/configuration.html#config-api-buffer-size[`ELASTIC_APM_BUFFER_SIZE`]. -// * **iOS agent** - ?? -* **Java agent** - Internal buffer with configurable size: -{apm-java-ref}/config-reporter.html#config-max-queue-size[`max_queue_size`]. -* **Node.js agent** - No internal queue. Data is lost. -* **PHP agent** - No internal queue. Data is lost. -* **Python agent** - Internal {apm-py-ref}/tuning-and-overhead.html#tuning-queue[Transaction queue] -with configurable size and time between flushes. -* **Ruby agent** - Internal queue with configurable size: -{apm-ruby-ref}/configuration.html#config-api-buffer-size[`api_buffer_size`]. -* **RUM agent** - No internal queue. Data is lost. -* **.NET agent** - No internal queue. Data is lost. - -[[central-config-troubleshooting-legacy]] -[float] -=== `/api/apm/settings/agent-configuration/search` errors - -If you're instrumenting and starting a lot of services at the same time -or using a very large number of service or environment names, -you may see the following APM Server logs related to {apm-agent} central configuration: - -* `.../api/apm/settings/agent-configuration/search: context canceled` -* `.../api/apm/settings/agent-configuration/search: net/http: TLS handshake timeout` - -There are two possible causes: - -1. {kib} is overwhelmed by the number of requests coming from APM Server. -2. {es} can't reply quickly enough to {kib}. - -For cause #1, try one or more of the following: - -* Increase the <> setting. -* Increase the <>. -* Increase {kib}'s resources so that it is able to manage more requests. -* If you're not using APM central configuration, disable it with <>. -Central configuration can also be disabled at the {apm-agent} level. - -For cause #2, investigate why {es} is not responding in a timely manner. 
-{kib}'s queries to {es} are simple, so it may just be that {es} is unhealthy. -If that's not the problem, you may need to use {ref}/index-modules-slowlog.html[Search Slow Log] to investigate your {es} logs. - -To avoid this problem entirely, -we recommend <>. diff --git a/docs/legacy/configuration-process.asciidoc b/docs/legacy/configuration-process.asciidoc deleted file mode 100644 index f96953616bf..00000000000 --- a/docs/legacy/configuration-process.asciidoc +++ /dev/null @@ -1,159 +0,0 @@ -[[configuration-process]] -== General configuration options - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, please see <> instead. - -Example config file: - -["source","yaml"] ---- -apm-server: - host: "localhost:8200" - rum: - enabled: true - -output: - elasticsearch: - hosts: ElasticsearchAddress:9200 - -max_procs: 4 ---- - -NOTE: If you are using an X-Pack secured version of {stack}, -you need to specify credentials in the config file before you run the commands that set up and start APM Server. -For example: - -[source,yaml] ---- -output.elasticsearch: - hosts: ["ElasticsearchAddress:9200"] - username: "elastic" - password: "elastic" ---- - -[float] -[[configuration-apm-server]] -=== Configuration options: `apm-server.*` - -[[host]] -[float] -==== `host` -Defines the host and port the server is listening on. -Use `"unix:/path/to.sock"` to listen on a Unix domain socket. -Defaults to 'localhost:8200'. - -[[max_header_size]] -[float] -==== `max_header_size` -Maximum permitted size of a request's header accepted by the server to be processed (in Bytes). -Defaults to 1048576 Bytes (1 MB). - -[[idle_timeout]] -[float] -==== `idle_timeout` -Maximum amount of time to wait for the next incoming request before the underlying connection is closed. -Defaults to 45 seconds. - -[[read_timeout]] -[float] -==== `read_timeout` -Maximum permitted duration for reading an entire request. -Defaults to 30 seconds.
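For instance, the connection settings described above can be tuned together in `apm-server.yml`; a sketch with illustrative values:

[source,yaml]
----
apm-server:
  host: "localhost:8200"
  # Close idle client connections after 45 seconds.
  idle_timeout: 45s
  # Abort requests whose bodies take longer than 30 seconds to read.
  read_timeout: 30s
----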
- -[[write_timeout]] -[float] -==== `write_timeout` -Maximum permitted duration for writing a response. -Defaults to 30 seconds. - -[[shutdown_timeout]] -[float] -==== `shutdown_timeout` -Maximum duration in seconds before releasing resources when shutting down the server. -Defaults to 5 seconds. - -[[max_event_size]] -[float] -==== `max_event_size` -Maximum permitted size of an event accepted by the server to be processed (in Bytes). -Defaults to 307200 Bytes. - -[float] -[[configuration-other]] -=== Configuration options: general - -[[max_connections]] -[float] -==== `max_connections` -Maximum number of TCP connections to accept simultaneously. -Default value is 0, which means _unlimited_. - -[[config-secret-token]] -[float] -==== `auth.secret_token` -Authorization token for sending data to the APM Server. -If a token is set, the agents must send it in the following format: -Authorization: Bearer . -The token is not used for RUM endpoints. By default, no authorization token is set. - -We recommend using an authorization token in combination with SSL. -Read more about <> and the <>. - -[[config-secret-token-legacy]] -[float] -==== `secret_token` - -deprecated::[7.14.0, Replaced by `auth.secret_token`. See <>] - -In versions prior to 7.14.0, secret token authorization was known as `apm-server.secret_token`. In 7.14.0 this was renamed to `apm-server.auth.secret_token`. -The old configuration will continue to work until 8.0.0, and the new configuration will take precedence. - -[[capture_personal_data]] -[float] -==== `capture_personal_data` -If true, -APM Server captures the IP address of the instrumented service and its User Agent, if any. -Enabled by default. - -[[default_service_environment]] -[float] -==== `default_service_environment` -Sets the default service environment to associate with data and requests received from agents that have no service environment defined.
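As an illustration, events from agents that send no environment can be grouped under a default (the value shown is an example):

[source,yaml]
----
apm-server:
  # Applied only to events whose agents define no service environment.
  default_service_environment: "production"
----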
- -[[expvar.enabled]] -[float] -==== `expvar.enabled` -When set to true, APM Server exposes https://golang.org/pkg/expvar/[golang expvar]. -Disabled by default. - -[[expvar.url]] -[float] -==== `expvar.url` -Configure the URL to expose expvar. -Defaults to `debug/vars`. - -[[instrumentation.enabled]] -[float] -==== `instrumentation.enabled` -Enables self-instrumentation of the APM Server. -Disabled by default. - -[float] -=== Configuration options: `max_procs` - -[[max_procs]] -[float] -==== `max_procs` -Sets the maximum number of CPUs that can be executing simultaneously. -The default is the number of logical CPUs available in the system. - -[float] -=== Configuration options: `data_streams` - -[[data_streams.wait_for_integration]] -[float] -==== `wait_for_integration` -Wait for the `apm` {fleet} integration to be installed by {kib}. Requires either <> -or the <> to be configured. -Defaults to true. diff --git a/docs/legacy/configuring-ingest.asciidoc b/docs/legacy/configuring-ingest.asciidoc deleted file mode 100644 index fe62a3c6117..00000000000 --- a/docs/legacy/configuring-ingest.asciidoc +++ /dev/null @@ -1,9 +0,0 @@ -[[configuring-ingest-node]] -== Parse data using ingest node pipelines - -deprecated::[7.16.0,Users should now use the <>. See <> if you've already upgraded.] - -// Appends `-legacy` to each section's ID so that they are different from the APM integration IDs -:append-legacy: -legacy - -include::../ingest-pipelines.asciidoc[tag=ingest-pipelines] diff --git a/docs/legacy/configuring.asciidoc b/docs/legacy/configuring.asciidoc deleted file mode 100644 index 9a6865a479d..00000000000 --- a/docs/legacy/configuring.asciidoc +++ /dev/null @@ -1,80 +0,0 @@ -[[configuring-howto-apm-server]] -= Configure APM Server - -++++ -Configure -++++ - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, please see <> instead.
- -include::{libbeat-dir}/shared/configuring-intro.asciidoc[] - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -include::./configuration-process.asciidoc[] - -include::./configuration-agent-config.asciidoc[] - -include::./configuration-anonymous.asciidoc[] - -include::{libbeat-dir}/shared-instrumentation.asciidoc[] - -include::./jaeger-reference.asciidoc[] - -ifndef::no_kerberos[] -include::{libbeat-dir}/shared-kerberos-config.asciidoc[] -endif::[] - -include::./configure-kibana-endpoint.asciidoc[] - -include::{libbeat-dir}/loggingconfig.asciidoc[] - -:no-redis-output: -include::{libbeat-dir}/outputconfig.asciidoc[] - -include::{libbeat-dir}/shared-path-config.asciidoc[] - -include::./configuration-rum.asciidoc[] - -// BEGIN SSL SECTION -------------------------------------------- -[[configuration-ssl-landing]] -== SSL/TLS settings - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, please see <> instead. - -SSL/TLS is available for: - -* <> (APM Agents) -* <> that support SSL, like {es}, {ls}, or Kafka. - -Additional information on getting started with SSL/TLS is available in <>. - -// The leveloffset attribute pushes all headings in the included document down by -// the specified number of levels. It is required here because the shared Beats -// documentation was created as a level 1 heading. In the APM book, this level -// would break the DTD. Using leveloffset +1, we can include this file here. -// It's important to reset the level heading after including a file. 
-:leveloffset: +1 -include::{libbeat-dir}/shared-ssl-config.asciidoc[] -:leveloffset: -1 - -include::ssl-input-settings.asciidoc[] -// END SSL SECTION -------------------------------------------- - -:standalone: -include::{libbeat-dir}/shared-env-vars.asciidoc[] -:standalone!: diff --git a/docs/legacy/copied-from-beats/docs/command-reference.asciidoc b/docs/legacy/copied-from-beats/docs/command-reference.asciidoc index 00740535cd8..e8f7df12f42 100644 --- a/docs/legacy/copied-from-beats/docs/command-reference.asciidoc +++ b/docs/legacy/copied-from-beats/docs/command-reference.asciidoc @@ -34,7 +34,6 @@ ifdef::serverless[] endif::serverless[] :help-command-short-desc: Shows help for any command -:keystore-command-short-desc: Manages the <> :modules-command-short-desc: Manages configured modules :package-command-short-desc: Packages the configuration and executable into a zip file :remove-command-short-desc: Removes the specified function from your serverless environment @@ -65,7 +64,7 @@ endif::[] Command reference ++++ -IMPORTANT: {deprecation-notice-config} +IMPORTANT: These commands only apply to the APM Server binary installation method. ifndef::no_dashboards[] {beatname_uc} provides a command-line interface for starting {beatname_uc} and @@ -107,9 +106,6 @@ ifdef::apm-server[] endif::[] |<> |{export-command-short-desc}. |<> |{help-command-short-desc}. -ifndef::serverless[] -|<> |{keystore-command-short-desc}. -endif::[] ifeval::["{beatname_lc}"=="functionbeat"] |<> |{package-command-short-desc}. |<> |{remove-command-short-desc}. @@ -139,10 +135,10 @@ ifdef::apm-server[] experimental::[] -deprecated::[8.6.0, Users should create API Keys through {kib} or the {es} REST API. See <>.] +deprecated::[8.6.0, Users should create API Keys through {kib} or the {es} REST API. See <>.] Communication between APM agents and APM Server now supports sending an -<>. +<>. 
APM Server provides an `apikey` command that can create, verify, invalidate, and show information about API Keys for agent/server communication. Most operations require the `manage_own_api_key` cluster privilege, @@ -229,7 +225,7 @@ When used with `info`, only returns valid API Keys (not expired or invalidated). {beatname_lc} apikey invalidate --name example-001 ----- -For more information, see <>. +For more information, see <>. endif::[] @@ -434,65 +430,6 @@ Specifies the name of the command to show help for. {beatname_lc} help export ----- -ifndef::serverless[] -[float] -[[keystore-command]] -==== `keystore` command - -{keystore-command-short-desc}. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} keystore SUBCOMMAND [FLAGS] ----- - -*`SUBCOMMAND`* - -*`add KEY`*:: -Adds the specified key to the keystore. Use the `--force` flag to overwrite an -existing key. Use the `--stdin` flag to pass the value through `stdin`. - -*`create`*:: -Creates a keystore to hold secrets. Use the `--force` flag to overwrite the -existing keystore. - -*`list`*:: -Lists the keys in the keystore. - -*`remove KEY`*:: -Removes the specified key from the keystore. - -*FLAGS* - -*`--force`*:: -Valid with the `add` and `create` subcommands. When used with `add`, overwrites -the specified key. When used with `create`, overwrites the keystore. - -*`--stdin`*:: -When used with `add`, uses the stdin as the source of the key's value. - -*`-h, --help`*:: -Shows help for the `keystore` command. - - -{global-flags} - -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} keystore create -{beatname_lc} keystore add ES_PWD -{beatname_lc} keystore remove ES_PWD -{beatname_lc} keystore list ------ - -See <> for more examples. - -endif::[] - ifeval::["{beatname_lc}"=="functionbeat"] [float] [[package-command]] @@ -649,9 +586,6 @@ network. This option is useful only for testing {beatname_uc}. endif::[] *`-N, --N`*:: Disables publishing for testing purposes. 
-ifndef::no_file_output[] -This option disables all outputs except the <>. -endif::[] ifeval::["{beatname_lc}"=="packetbeat"] *`-O, --O`*:: @@ -811,7 +745,7 @@ template, {ilm-init} policy, and write alias (if supported and configured). ifdef::apm-server[] *`--pipelines`*:: -Registers the <> definitions set in `ingest/pipeline/definition.json`. +Registers the <> definitions set in `ingest/pipeline/definition.json`. endif::apm-server[] *`--template`*:: diff --git a/docs/legacy/copied-from-beats/docs/debugging.asciidoc b/docs/legacy/copied-from-beats/docs/debugging.asciidoc index ee564f6eb4b..65d18bcec77 100644 --- a/docs/legacy/copied-from-beats/docs/debugging.asciidoc +++ b/docs/legacy/copied-from-beats/docs/debugging.asciidoc @@ -9,7 +9,15 @@ //// include::../../libbeat/docs/debugging.asciidoc[] ////////////////////////////////////////////////////////////////////////// -IMPORTANT: {deprecation-notice-data} +[[enable-apm-server-debugging]] +=== Enable APM Server binary debugging + +++++ +APM Server binary debugging +++++ + +NOTE: Fleet-managed users should see {fleet-guide}/monitor-elastic-agent.html[View {agent} logs] +to learn how to view logs and change the logging level of {agent}. By default, {beatname_uc} sends all its output to syslog. 
When you run {beatname_uc} in the foreground, you can use the `-e` command line flag to redirect the output to @@ -43,4 +51,4 @@ use `*`, like this: ["source","sh",subs="attributes"] ------------------------------------------------------------ {beatname_lc} -e -d "*" ------------------------------------------------------------- +------------------------------------------------------------ \ No newline at end of file diff --git a/docs/legacy/copied-from-beats/docs/getting-help.asciidoc b/docs/legacy/copied-from-beats/docs/getting-help.asciidoc deleted file mode 100644 index 5dbbec5165c..00000000000 --- a/docs/legacy/copied-from-beats/docs/getting-help.asciidoc +++ /dev/null @@ -1,26 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/getting-help.asciidoc[] -////////////////////////////////////////////////////////////////////////// - -IMPORTANT: {deprecation-notice-data} - -Start by searching the https://discuss.elastic.co/c/{discuss_forum}[{beatname_uc} discussion forum] for your issue. If you can't find a resolution, open a new issue or add a comment to an existing one. Make sure you provide the following information, and we'll help -you troubleshoot the problem: - -* {beatname_uc} version -* Operating System -* Configuration -* Any supporting information, such as debugging output, that will help us diagnose your -problem. See <> for more details. 
- -If you're sure you found a bug, you can open a ticket on -https://github.com/elastic/{github_repo_name}/issues?state=open[GitHub]. Note, however, -that we close GitHub issues containing questions or requests for help if they -don't indicate the presence of a bug. diff --git a/docs/legacy/copied-from-beats/docs/howto/load-index-templates.asciidoc b/docs/legacy/copied-from-beats/docs/howto/load-index-templates.asciidoc deleted file mode 100644 index 5610e913995..00000000000 --- a/docs/legacy/copied-from-beats/docs/howto/load-index-templates.asciidoc +++ /dev/null @@ -1,10 +0,0 @@ -[id="{beatname_lc}-template"] -== View the {es} index template - -// Appends `-legacy` to each section's ID so that they are different from the APM integration IDs -:append-legacy: -legacy - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. - -include::../../../../custom-index-template.asciidoc[tag=index-template-integration] diff --git a/docs/legacy/copied-from-beats/docs/https.asciidoc b/docs/legacy/copied-from-beats/docs/https.asciidoc index 3b2969f1f17..e335e57b957 100644 --- a/docs/legacy/copied-from-beats/docs/https.asciidoc +++ b/docs/legacy/copied-from-beats/docs/https.asciidoc @@ -10,12 +10,10 @@ //// This content is structured to be included as a whole file. ////////////////////////////////////////////////////////////////////////// -[role="xpack"] +[float] [[securing-communication-elasticsearch]] == Secure communication with {es} -IMPORTANT: {deprecation-notice-config} - When sending data to a secured cluster through the `elasticsearch` output, {beatname_uc} can use any of the following authentication methods: @@ -34,18 +32,10 @@ For example: output.elasticsearch: hosts: ["https://myEShost:9200"] username: "{beat_default_index_prefix}_writer" <1> - password: "{pwd}" <2> + password: "{pwd}" ---------------------------------------------------------------------- <1> This user needs the privileges required to publish events to {es}. 
To create a user like this, see <>. -<2> This example shows a hard-coded password, but you should store sensitive -values -ifndef::serverless[] -in the <>. -endif::[] -ifdef::serverless[] -in environment variables. -endif::[] -- * To use token-based *API key authentication*, specify the `api_key` under `output.elasticsearch`. @@ -138,17 +128,9 @@ For example, specify a unique username and password to connect to {kib} like thi setup.kibana: host: "mykibanahost:5601" username: "{beat_default_index_prefix}_kib_setup" <1> - password: "{pwd}" <2> + password: "{pwd}" ---- <1> This user needs privileges required to set up dashboards -<2> This example shows a hard-coded password, but you should store sensitive -values -ifndef::serverless[] -in the <>. -endif::[] -ifdef::serverless[] -in environment variables. -endif::[] endif::no_dashboards[] -- diff --git a/docs/legacy/copied-from-beats/docs/keystore.asciidoc b/docs/legacy/copied-from-beats/docs/keystore.asciidoc deleted file mode 100644 index bb94d88e3e8..00000000000 --- a/docs/legacy/copied-from-beats/docs/keystore.asciidoc +++ /dev/null @@ -1,124 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. 
-//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/keystore.asciidoc[] -////////////////////////////////////////////////////////////////////////// - -[[keystore]] -=== Secrets keystore for secure settings - -IMPORTANT: {deprecation-notice-installation} - -++++ -Secrets keystore -++++ - -When you configure {beatname_uc}, you might need to specify sensitive settings, -such as passwords. Rather than relying on file system permissions to protect -these values, you can use the {beatname_uc} keystore to securely store secret -values for use in configuration settings. - -After adding a key and its secret value to the keystore, you can use the key in -place of the secret value when you configure sensitive settings. - -The syntax for referencing keys is identical to the syntax for environment -variables: - -`${KEY}` - -Where KEY is the name of the key. - -For example, imagine that the keystore contains a key called `ES_PWD` with the -value `yourelasticsearchpassword`: - -* In the configuration file, use `output.elasticsearch.password: "${ES_PWD}"` -* On the command line, use: `-E "output.elasticsearch.password=\${ES_PWD}"` - -When {beatname_uc} unpacks the configuration, it resolves keys before resolving -environment variables and other variables. - -Notice that the {beatname_uc} keystore differs from the {es} keystore. -Whereas the {es} keystore lets you store `elasticsearch.yml` values by -name, the {beatname_uc} keystore lets you specify arbitrary names that you can -reference in the {beatname_uc} configuration. - -To create and manage keys, use the `keystore` command. See the -<> for the full command syntax, including -optional flags. - -NOTE: The `keystore` command must be run by the same user who will run -{beatname_uc}. 
- -[float] -[[creating-keystore]] -=== Create a keystore - -To create a secrets keystore, use: - -["source","sh",subs="attributes"] ----------------------------------------------------------------- -{beatname_lc} keystore create ----------------------------------------------------------------- - - -{beatname_uc} creates the keystore in the directory defined by the `path.data` -configuration setting. - -[float] -[[add-keys-to-keystore]] -=== Add keys - -To store sensitive values, such as authentication credentials for {es}, -use the `keystore add` command: - -["source","sh",subs="attributes"] ----------------------------------------------------------------- -{beatname_lc} keystore add ES_PWD ----------------------------------------------------------------- - - -When prompted, enter a value for the key. - -To overwrite an existing key's value, use the `--force` flag: - -["source","sh",subs="attributes"] ----------------------------------------------------------------- -{beatname_lc} keystore add ES_PWD --force ----------------------------------------------------------------- - -To pass the value through stdin, use the `--stdin` flag. 
You can also use -`--force`: - -["source","sh",subs="attributes"] ----------------------------------------------------------------- -cat /file/containing/setting/value | {beatname_lc} keystore add ES_PWD --stdin --force ----------------------------------------------------------------- - - -[float] -[[list-settings]] -=== List keys - -To list the keys defined in the keystore, use: - -["source","sh",subs="attributes"] ----------------------------------------------------------------- -{beatname_lc} keystore list ----------------------------------------------------------------- - - -[float] -[[remove-settings]] -=== Remove keys - -To remove a key from the keystore, use: - -["source","sh",subs="attributes"] ----------------------------------------------------------------- -{beatname_lc} keystore remove ES_PWD ----------------------------------------------------------------- diff --git a/docs/legacy/copied-from-beats/docs/monitoring/monitoring-beats.asciidoc b/docs/legacy/copied-from-beats/docs/monitoring/monitoring-beats.asciidoc index 7a4638e0c43..b902461970f 100644 --- a/docs/legacy/copied-from-beats/docs/monitoring/monitoring-beats.asciidoc +++ b/docs/legacy/copied-from-beats/docs/monitoring/monitoring-beats.asciidoc @@ -1,24 +1,13 @@ -[role="xpack"] [[monitoring]] -= Monitor {beatname_uc} += Monitor the APM Server binary ++++ -Monitor +APM Server binary ++++ -IMPORTANT: {deprecation-notice-monitor} - -You can use the {stack} {monitor-features} to gain insight into the health of -ifndef::apm-server[] -{beatname_uc} instances running in your environment. -endif::[] -ifdef::apm-server[] -{beatname_uc}. -endif::[] - -To monitor {beatname_uc}, make sure monitoring is enabled on your {es} cluster, -then configure the method used to collect {beatname_uc} metrics. You can use one -of following methods: +There are two methods to monitor the APM Server binary. 
+Make sure monitoring is enabled on your {es} cluster, +then configure one of these methods to collect {beatname_uc} metrics: * <> - Internal collectors send monitoring data directly to your monitoring cluster. @@ -28,11 +17,6 @@ ifndef::serverless[] and sends it directly to your monitoring cluster. endif::[] -//Commenting out this link temporarily until the general monitoring docs can be -//updated. -//To learn about monitoring in general, see -//{ref}/monitor-elasticsearch-cluster.html[Monitor a cluster]. - include::monitoring-internal-collection.asciidoc[] ifndef::serverless[] diff --git a/docs/legacy/copied-from-beats/docs/monitoring/monitoring-internal-collection.asciidoc b/docs/legacy/copied-from-beats/docs/monitoring/monitoring-internal-collection.asciidoc index 14c53505209..430fe49c31e 100644 --- a/docs/legacy/copied-from-beats/docs/monitoring/monitoring-internal-collection.asciidoc +++ b/docs/legacy/copied-from-beats/docs/monitoring/monitoring-internal-collection.asciidoc @@ -16,8 +16,6 @@ Use internal collection ++++ -IMPORTANT: {deprecation-notice-monitor} - Use internal collectors to send {beats} monitoring data directly to your monitoring cluster. ifndef::serverless[] diff --git a/docs/legacy/copied-from-beats/docs/monitoring/monitoring-metricbeat.asciidoc b/docs/legacy/copied-from-beats/docs/monitoring/monitoring-metricbeat.asciidoc index 0386e1f25f7..1f6b15a9403 100644 --- a/docs/legacy/copied-from-beats/docs/monitoring/monitoring-metricbeat.asciidoc +++ b/docs/legacy/copied-from-beats/docs/monitoring/monitoring-metricbeat.asciidoc @@ -6,8 +6,6 @@ Use {metricbeat} collection ++++ -IMPORTANT: {deprecation-notice-monitor} - In 7.3 and later, you can use {metricbeat} to collect data about {beatname_uc} and ship it to the monitoring cluster. 
The benefit of using {metricbeat} instead of internal collection is that the monitoring agent remains active even if the diff --git a/docs/legacy/copied-from-beats/docs/monitoring/shared-monitor-config.asciidoc b/docs/legacy/copied-from-beats/docs/monitoring/shared-monitor-config.asciidoc index 447d7ebc6d4..71825450dc3 100644 --- a/docs/legacy/copied-from-beats/docs/monitoring/shared-monitor-config.asciidoc +++ b/docs/legacy/copied-from-beats/docs/monitoring/shared-monitor-config.asciidoc @@ -10,18 +10,17 @@ //// Make sure this content appears below a level 2 heading. ////////////////////////////////////////////////////////////////////////// -[role="xpack"] +[float] [[configuration-monitor]] === Settings for internal collection -IMPORTANT: {deprecation-notice-monitor} - Use the following settings to configure internal collection when you are not using {metricbeat} to collect monitoring data. You specify these settings in the X-Pack monitoring section of the +{beatname_lc}.yml+ config file: +[float] ==== `monitoring.enabled` The `monitoring.enabled` config is a boolean setting to enable or disable {monitoring}. @@ -29,21 +28,25 @@ If set to `true`, monitoring is enabled. The default value is `false`. +[float] ==== `monitoring.elasticsearch` The {es} instances that you want to ship your {beatname_uc} metrics to. This configuration option contains the following fields: +[float] ===== `api_key` The detail of the API key to be used to send monitoring information to {es}. See <> for more information. +[float] ===== `bulk_max_size` The maximum number of metrics to bulk in a single {es} bulk API index request. The default is `50`. For more information, see <>. +[float] ===== `backoff.init` The number of seconds to wait before trying to reconnect to {es} after @@ -52,11 +55,13 @@ reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. The default is `1s`. 
+[float] ===== `backoff.max` The maximum number of seconds to wait before attempting to connect to {es} after a network error. The default is `60s`. +[float] ===== `compression_level` The gzip compression level. Setting this value to `0` disables compression. The @@ -64,59 +69,70 @@ compression level must be in the range of `1` (best speed) to `9` (best compression). The default value is `0`. Increasing the compression level reduces the network usage but increases the CPU usage. +[float] ===== `headers` Custom HTTP headers to add to each request. For more information, see <>. +[float] ===== `hosts` The list of {es} nodes to connect to. Monitoring metrics are distributed to these nodes in round robin order. For more information, see <>. +[float] ===== `max_retries` The number of times to retry sending the monitoring metrics after a failure. After the specified number of retries, the metrics are typically dropped. The default value is `3`. For more information, see <>. +[float] ===== `parameters` Dictionary of HTTP parameters to pass within the URL with index operations. +[float] ===== `password` The password that {beatname_uc} uses to authenticate with the {es} instances for shipping monitoring data. +[float] ===== `metrics.period` The time interval (in seconds) when metrics are sent to the {es} cluster. A new snapshot of {beatname_uc} metrics is generated and scheduled for publishing each period. The default value is 10 * time.Second. +[float] ===== `state.period` The time interval (in seconds) when state information are sent to the {es} cluster. A new snapshot of {beatname_uc} state is generated and scheduled for publishing each period. The default value is 60 * time.Second. +[float] ===== `protocol` The name of the protocol to use when connecting to the {es} cluster. The options are: `http` or `https`. The default is `http`. If you specify a URL for `hosts`, however, the value of protocol is overridden by the scheme you specify in the URL. 
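Taken together, the `monitoring.*` settings described in this section might be combined in the config file roughly as follows (a sketch only — the host and API key are placeholders, and the numeric values mirror the documented defaults):

```yaml
# Sketch of internal-collection monitoring settings; not a recommendation.
monitoring:
  enabled: true                    # default is false
  elasticsearch:
    hosts: ["https://monitoring-cluster.example:9200"]  # placeholder host
    api_key: "id:api_key_value"    # placeholder; username/password also work
    bulk_max_size: 50
    backoff.init: 1s
    backoff.max: 60s
    compression_level: 0           # 0 disables compression
    max_retries: 3
    metrics.period: 10s
    state.period: 60s
```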
+[float] ===== `proxy_url` The URL of the proxy to use when connecting to the {es} cluster. For more information, see <>. +[float] ===== `timeout` The HTTP request timeout in seconds for the {es} request. The default is `90`. +[float] ===== `ssl` Configuration options for Transport Layer Security (TLS) or Secure Sockets Layer @@ -124,6 +140,7 @@ Configuration options for Transport Layer Security (TLS) or Secure Sockets Layer connections. If the `ssl` section is missing, the host CAs are used for HTTPS connections to {es}. For more information, see <>. +[float] ===== `username` The user ID that {beatname_uc} uses to authenticate with the {es} instances for diff --git a/docs/legacy/copied-from-beats/docs/outputconfig.asciidoc b/docs/legacy/copied-from-beats/docs/outputconfig.asciidoc deleted file mode 100644 index 2580b4f4509..00000000000 --- a/docs/legacy/copied-from-beats/docs/outputconfig.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/outputconfig.asciidoc[] -//// Make sure this content appears below a level 2 heading. -////////////////////////////////////////////////////////////////////////// - - -[[configuring-output]] -== Configure the output - -++++ -Output -++++ - -IMPORTANT: {deprecation-notice-config} - -You configure {beatname_uc} to write to a specific output by setting options -in the Outputs section of the +{beatname_lc}.yml+ config file. 
Only a single -output may be defined. - -The following topics describe how to configure each supported output. If you've -secured the {stack}, also read <> for more about -security-related configuration options. - -include::outputs-list.asciidoc[tag=outputs-list] - -ifdef::beat-specific-output-config[] -include::{beat-specific-output-config}[] -endif::[] - -include::outputs-list.asciidoc[tag=outputs-include] diff --git a/docs/legacy/copied-from-beats/docs/outputs-list.asciidoc b/docs/legacy/copied-from-beats/docs/outputs-list.asciidoc deleted file mode 100644 index 4181c10f64f..00000000000 --- a/docs/legacy/copied-from-beats/docs/outputs-list.asciidoc +++ /dev/null @@ -1,87 +0,0 @@ -// TODO: Create script that generates this file. Conditional coding needs to -// be preserved. - -//# tag::outputs-list[] - -ifndef::no_cloud_id[] -* <> -endif::[] -ifndef::no_es_output[] -* <> -endif::[] -ifndef::no_ls_output[] -* <> -endif::[] -ifndef::no_kafka_output[] -* <> -endif::[] -ifndef::no_redis_output[] -* <> -endif::[] -ifndef::no_file_output[] -* <> -endif::[] -ifndef::no_console_output[] -* <> -endif::[] - -//# end::outputs-list[] - -//# tag::outputs-include[] -ifndef::no_cloud_id[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::output-cloud.asciidoc[] -endif::[] - -ifndef::no_es_output[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::{libbeat-outputs-dir}/elasticsearch/docs/elasticsearch.asciidoc[] -endif::[] - -ifndef::no_ls_output[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::{libbeat-outputs-dir}/logstash/docs/logstash.asciidoc[] -endif::[] - -ifndef::no_kafka_output[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::{libbeat-outputs-dir}/kafka/docs/kafka.asciidoc[] -endif::[] - -ifndef::no_redis_output[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::{libbeat-outputs-dir}/redis/docs/redis.asciidoc[] -endif::[] - -ifndef::no_file_output[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] 
-include::{libbeat-outputs-dir}/fileout/docs/fileout.asciidoc[] -endif::[] - -ifndef::no_console_output[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::{libbeat-outputs-dir}/console/docs/console.asciidoc[] -endif::[] - -ifndef::no_codec[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::{libbeat-outputs-dir}/codec/docs/codec.asciidoc[] -endif::[] - -//# end::outputs-include[] diff --git a/docs/legacy/copied-from-beats/docs/repositories.asciidoc b/docs/legacy/copied-from-beats/docs/repositories.asciidoc index 5b61d434308..5bf4676f9ab 100644 --- a/docs/legacy/copied-from-beats/docs/repositories.asciidoc +++ b/docs/legacy/copied-from-beats/docs/repositories.asciidoc @@ -10,9 +10,7 @@ ////////////////////////////////////////////////////////////////////////// [[setup-repositories]] -=== Repositories for APT and YUM - -IMPORTANT: {deprecation-notice-installation} +==== Repositories for APT and YUM We have repositories available for APT and YUM-based distributions. Note that we provide binary packages, but no source packages. @@ -25,7 +23,7 @@ We use the PGP key https://pgp.mit.edu/pks/lookup?op=vindex&search=0xD27D666CD88 to sign all our packages. It is available from https://pgp.mit.edu. [float] -==== APT +===== APT ifeval::["{release-state}"=="unreleased"] @@ -105,7 +103,7 @@ sudo systemctl enable {beatname_pkg} endif::[] [float] -==== YUM +===== YUM ifeval::["{release-state}"=="unreleased"] diff --git a/docs/legacy/copied-from-beats/docs/security/linux-seccomp.asciidoc b/docs/legacy/copied-from-beats/docs/security/linux-seccomp.asciidoc deleted file mode 100644 index 96773aa8ddd..00000000000 --- a/docs/legacy/copied-from-beats/docs/security/linux-seccomp.asciidoc +++ /dev/null @@ -1,95 +0,0 @@ -[[linux-seccomp]] -== Use Linux Secure Computing Mode (seccomp) - -IMPORTANT: {deprecation-notice-config} - -beta[] - -On Linux 3.17 and later, {beatname_uc} can take advantage of secure computing -mode, also known as seccomp. 
Seccomp restricts the system calls that a process -can issue. Specifically {beatname_uc} can load a seccomp BPF filter at process -start-up that drops the privileges to invoke specific system calls. Once a -filter is loaded by the process it cannot be removed. - -The kernel exposes a large number of system calls that are not used by -{beatname_uc}. By installing a seccomp filter, you can limit the total kernel -surface exposed to {beatname_uc} (principle of least privilege). This minimizes -the impact of unknown vulnerabilities that might be found in the process. - -The filter is expressed as a Berkeley Packet Filter (BPF) program. The BPF -program is generated based on a policy defined by {beatname_uc}. The policy -can be customized through configuration as well. - -A seccomp policy is architecture specific due to the fact that system calls vary -by architecture. {beatname_uc} includes an allowlist seccomp policy for the -AMD64 and 386 architectures. You can view those policies -https://github.com/elastic/beats/tree/{branch}/libbeat/common/seccomp[here]. - -[float] -[[seccomp-policy-config]] -=== Seccomp Policy Configuration - -The seccomp policy can be customized through the configuration policy. This is -an example blocklist policy that prohibits `execve`, `execveat`, `fork`, and -`vfork` syscalls. - -[source,yaml] ----- -seccomp: - default_action: allow <1> - syscalls: - - action: errno <2> - names: <3> - - execve - - execveat - - fork - - vfork ----- -<1> If the system call being invoked by the process does not match one of the -names below then it will be allowed. -<2> If the system call being invoked matches one of the names below then an -error will be returned to caller. This is known as a blocklist policy. -<3> These are system calls being prohibited. - -These are the configuration options for a seccomp policy. - -*`enabled`*:: On Linux, this option is enabled by default. To disable seccomp -filter loading, set this option to `false`. 
- -*`default_action`*:: The default action to take when none of the defined system -calls match. See <> for the full list of -values. This is required. - -*`syscalls`*:: Each object in this list must contain an `action` and a list of -system call `names`. The list must contain at least one item. - -*`names`*:: A list of system call names. The system call name must exist for -the runtime architecture, otherwise an error will be logged and the filter will -not be installed. At least one system call must be defined. - -[[seccomp-policy-config-action]] -*`action`*:: The action to take when any of the system calls listed in `names` -is executed. This is required. These are the available action values. The -actions that are available depend on the kernel version. - -- `errno` - The system call will return `EPERM` (permission denied) to the - caller. -- `trace` - The kernel will notify a `ptrace` tracer. If no tracer is present - then the system call fails with `ENOSYS` (function not implemented). -- `trap` - The kernel will send a `SIGSYS` signal to the calling thread and not - execute the system call. The Go runtime will exit. -- `kill_thread` - The kernel will immediately terminate the thread. Other - threads will continue to execute. -- `kill_process` - The kernel will terminate the process. Available in Linux - 4.14 and later. -- `log` - The kernel will log the system call before executing it. Available in - Linux 4.14 and later. (This does not go to the Beat's log.) -- `allow` - The kernel will allow the system call to execute. - -[float] -=== {auditbeat} Reports Seccomp Violations - -You can use {auditbeat} to report any seccomp violations that occur on the system. -The kernel generates an event for each violation and {auditbeat} reports the -event. The `event.action` value will be `violated-seccomp-policy` and the event -will contain information about the process and system call. 
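The inverse of the blocklist policy shown earlier is an allowlist: set `default_action` to `errno` so every system call is denied unless explicitly allowed. A sketch (the syscall names here are illustrative only, not the actual allowlist the Beat ships with):

```yaml
seccomp:
  default_action: errno  # any syscall not matched below returns EPERM
  syscalls:
    - action: allow
      names:             # illustrative subset; a real policy needs many more
        - read
        - write
        - close
        - exit_group
```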
diff --git a/docs/legacy/copied-from-beats/docs/shared-directory-layout.asciidoc b/docs/legacy/copied-from-beats/docs/shared-directory-layout.asciidoc index 1545ccaebcf..83f4fb6ae13 100644 --- a/docs/legacy/copied-from-beats/docs/shared-directory-layout.asciidoc +++ b/docs/legacy/copied-from-beats/docs/shared-directory-layout.asciidoc @@ -10,105 +10,25 @@ ////////////////////////////////////////////////////////////////////////// [[directory-layout]] -=== Directory layout +=== Installation layout -// lint disable usr - -IMPORTANT: {deprecation-notice-installation} - -The directory layout of an installation is as follows: - -[cols="> in the configuration file. -endif::serverless[] +View the installation layout and default paths for both Fleet-managed APM Server and the APM Server binary. [float] -==== Default paths - -{beatname_uc} uses the following default paths unless you explicitly change them. - -ifdef::deb_os,rpm_os[] -[float] -===== deb and rpm -[cols="> for more details. Add labels to your application Docker containers, and they will be picked up by the {beats} autodiscover feature when they are deployed. Here is an example command for an Apache HTTP Server container with labels to configure the {filebeat} and {metricbeat} modules for the Apache HTTP Server: @@ -289,7 +287,7 @@ The +{beatname_lc}.docker.yml+ downloaded earlier should be customized for your endif::[] [float] -===== Custom image configuration +====== Custom image configuration It's possible to embed your {beatname_uc} configuration in a custom image. 
Here is an example Dockerfile to achieve this: diff --git a/docs/legacy/copied-from-beats/docs/shared-instrumentation.asciidoc b/docs/legacy/copied-from-beats/docs/shared-instrumentation.asciidoc deleted file mode 100644 index cac7084cb48..00000000000 --- a/docs/legacy/copied-from-beats/docs/shared-instrumentation.asciidoc +++ /dev/null @@ -1,93 +0,0 @@ -[[configuration-instrumentation]] -== Configure APM instrumentation - -++++ -Instrumentation -++++ - -IMPORTANT: {deprecation-notice-config} - -Libbeat uses the Elastic APM Go Agent to instrument its publishing pipeline. -Currently, only the {es} output is instrumented. -To gain insight into the performance of {beatname_uc}, you can enable this instrumentation and send trace data to APM Server. - -Example configuration with instrumentation enabled: - -["source","yaml"] ----- -instrumentation: - enabled: true - environment: production - hosts: - - "http://localhost:8200" - api_key: L5ER6FEvjkmlfalBealQ3f3fLqf03fazfOV ----- - -[float] -=== Configuration options - -You can specify the following options in the `instrumentation` section of the +{beatname_lc}.yml+ config file: - -[float] -==== `enabled` - -Set to `true` to enable instrumentation of {beatname_uc}. -Defaults to `false`. - -[float] -==== `environment` - -Set the environment in which {beatname_uc} is running, for example, `staging`, `production`, `dev`, etc. -Environments can be filtered in the {kibana-ref}/xpack-apm.html[{apm-app}]. - -[float] -==== `hosts` - -The {apm-server-ref-v}/getting-started-apm-server.html[APM Server] hosts to report instrumentation data to. -Defaults to `http://localhost:8200`. - -[float] -==== `api_key` - -{apm-server-ref-v}/api-key.html[API key] used to secure communication with the APM Server(s). -If `api_key` is set then `secret_token` will be ignored. - -[float] -==== `secret_token` - -{apm-server-ref-v}/secret-token.html[Secret token] used to secure communication with the APM Server(s). 
- -[float] -==== `profiling.cpu.enabled` - -Set to `true` to enable CPU profiling, where profile samples are recorded as events. - -This feature is experimental. - -[float] -==== `profiling.cpu.interval` - -Configure the CPU profiling interval. Defaults to `60s`. - -This feature is experimental. - -[float] -==== `profiling.cpu.duration` - -Configure the CPU profiling duration. Defaults to `10s`. - -This feature is experimental. - -[float] -==== `profiling.heap.enabled` - -Set to `true` to enable heap profiling. - -This feature is experimental. - -[float] -==== `profiling.heap.interval` - -Configure the heap profiling interval. Defaults to `60s`. - -This feature is experimental. diff --git a/docs/legacy/copied-from-beats/docs/shared-kerberos-config.asciidoc b/docs/legacy/copied-from-beats/docs/shared-kerberos-config.asciidoc deleted file mode 100644 index e05dd1ea7d0..00000000000 --- a/docs/legacy/copied-from-beats/docs/shared-kerberos-config.asciidoc +++ /dev/null @@ -1,91 +0,0 @@ -[[configuration-kerberos]] -== Configure Kerberos - -++++ -Kerberos -++++ - -IMPORTANT: {deprecation-notice-config} - -You can specify Kerberos options with any output or input that supports Kerberos, like {es}. - -The following encryption types are supported: - -// lint ignore -* aes128-cts-hmac-sha1-96 -* aes128-cts-hmac-sha256-128 -* aes256-cts-hmac-sha1-96 -* aes256-cts-hmac-sha384-192 -* des3-cbc-sha1-kd -* rc4-hmac - -Example output config with Kerberos password based authentication: - -[source,yaml] ----- -output.elasticsearch.hosts: ["http://my-elasticsearch.elastic.co:9200"] -output.elasticsearch.kerberos.auth_type: password -output.elasticsearch.kerberos.username: "elastic" -output.elasticsearch.kerberos.password: "changeme" -output.elasticsearch.kerberos.config_path: "/etc/krb5.conf" -output.elasticsearch.kerberos.realm: "ELASTIC.CO" ----- - -The service principal name for the {es} instance is constructed from these options. 
Based on this configuration -it is going to be `HTTP/my-elasticsearch.elastic.co@ELASTIC.CO`. - -[float] -=== Configuration options - -You can specify the following options in the `kerberos` section of the +{beatname_lc}.yml+ config file: - -[float] -==== `enabled` - -The `enabled` setting can be used to enable the `kerberos` configuration by setting -it to `false`. The default value is `true`. - -NOTE: Kerberos settings are disabled if either `enabled` is set to `false` or the -`kerberos` section is missing. - -[float] -==== `auth_type` - -There are two options to authenticate with Kerberos KDC: `password` and `keytab`. - -`password` expects the principal name and its password. When choosing `keytab`, you -have to specify a principal name and a path to a keytab. The keytab must contain -the keys of the selected principal. Otherwise, authentication will fail. - -[float] -==== `config_path` - -You need to set the path to the `krb5.conf`, so +{beatname_lc} can find the Kerberos KDC to -retrieve a ticket. - -[float] -==== `username` - -Name of the principal used to connect to the output. - -[float] -==== `password` - -If you configured `password` for `auth_type`, you have to provide a password -for the selected principal. - -[float] -==== `keytab` - -If you configured `keytab` for `auth_type`, you have to provide the path to the -keytab of the selected principal. - -[float] -==== `service_name` - -This option can only be configured for Kafka. It is the name of the Kafka service, usually `kafka`. - -[float] -==== `realm` - -Name of the realm where the output resides. 
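For comparison with the password-based example earlier in this section, a `keytab`-based configuration would look roughly like this (the keytab path is a placeholder):

```yaml
output.elasticsearch.hosts: ["http://my-elasticsearch.elastic.co:9200"]
output.elasticsearch.kerberos.auth_type: keytab
output.elasticsearch.kerberos.username: "elastic"
# The keytab must contain the keys of the selected principal; placeholder path:
output.elasticsearch.kerberos.keytab: "/etc/security/elastic.keytab"
output.elasticsearch.kerberos.config_path: "/etc/krb5.conf"
output.elasticsearch.kerberos.realm: "ELASTIC.CO"
```

As with password authentication, the service principal name is constructed from these options.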
diff --git a/docs/legacy/copied-from-beats/docs/shared-securing-beat.asciidoc b/docs/legacy/copied-from-beats/docs/shared-securing-beat.asciidoc deleted file mode 100644 index bdad0cb0bf4..00000000000 --- a/docs/legacy/copied-from-beats/docs/shared-securing-beat.asciidoc +++ /dev/null @@ -1,79 +0,0 @@ -[id="securing-{beatname_lc}"] -= Secure {beatname_uc} - -++++ -Secure -++++ - -IMPORTANT: {deprecation-notice-config} -If you're using {fleet} and the Elastic APM integration, please see <> instead. - -The following topics provide information about securing the {beatname_uc} -process and connecting to a cluster that has {security-features} enabled. - -You can use role-based access control and optionally, API keys to grant {beatname_uc} users access to -secured resources. - -* <> -* <>. - -After privileged users have been created, use authentication to connect to a secured Elastic cluster. - -* <> -ifndef::no-output-logstash[] -* <> -endif::[] - -ifdef::apm-server[] -For secure communication between APM Server and APM Agents, see <>. -endif::[] - -ifndef::serverless[] -ifndef::win_only[] -On Linux, {beatname_uc} can take advantage of secure computing mode to restrict the -system calls that a process can issue. 
- -* <> -endif::[] -endif::[] - -// APM HTTPS information -ifdef::beat-specific-security[] -include::{beat-specific-security}[] -endif::[] - - - -ifdef::apm-server[] -// APM privileges -include::{docdir}/legacy/feature-roles.asciidoc[] -// APM API keys -include::{docdir}/legacy/api-keys.asciidoc[] -endif::[] - -ifndef::apm-server[] -// Beat privileges -include::./security/users.asciidoc[] -// Beat API keys -include::./security/api-keys.asciidoc[] -endif::[] - -// APM Agent security -ifdef::apm-server[] -include::{docdir}/legacy/secure-communication-agents.asciidoc[] -endif::[] - -// Elasticsearch security -include::./https.asciidoc[] - -// Logstash security -ifndef::no-output-logstash[] -include::./shared-ssl-logstash-config.asciidoc[] -endif::[] - -// Linux Seccomp -ifndef::serverless[] -ifndef::win_only[] -include::./security/linux-seccomp.asciidoc[] -endif::[] -endif::[] diff --git a/docs/legacy/copied-from-beats/docs/shared-ssl-config.asciidoc b/docs/legacy/copied-from-beats/docs/shared-ssl-config.asciidoc index 2f66aa47077..3be1ecb6fb9 100644 --- a/docs/legacy/copied-from-beats/docs/shared-ssl-config.asciidoc +++ b/docs/legacy/copied-from-beats/docs/shared-ssl-config.asciidoc @@ -9,9 +9,6 @@ endif::apm-server[] ifdef::apm-server[] == SSL output settings -IMPORTANT: {deprecation-notice-config} -If you're using {fleet} and the Elastic APM integration, please see the {fleet-guide}[{fleet} User Guide] instead. - You can specify SSL options with any output that supports SSL, like {es}, {ls}, or Kafka. endif::[] @@ -40,6 +37,9 @@ output.elasticsearch.ssl.certificate: "/etc/pki/client/cert.pem" output.elasticsearch.ssl.key: "/etc/pki/client/cert.key" ---- +// I don't know where to put this +include::../../../configure/tab-widgets/tls-config-widget.asciidoc[] + ifndef::no-output-logstash[] Also see <>. endif::[] @@ -235,8 +235,8 @@ supports SSL. 
[[client-certificate-authorities]] ==== `certificate_authorities` -The list of root certificates for verifications is required. If `certificate_authorities` is empty or not set, the -system keystore is used. If `certificate_authorities` is self-signed, the host system +The list of root certificates for verifications is required. +If `certificate_authorities` is self-signed, the host system needs to trust that CA cert as well. By default you can specify a list of files that +{beatname_lc}+ will read, but you diff --git a/docs/legacy/copied-from-beats/docs/shared-ssl-logstash-config.asciidoc b/docs/legacy/copied-from-beats/docs/shared-ssl-logstash-config.asciidoc index f6ae9294868..056d04a421b 100644 --- a/docs/legacy/copied-from-beats/docs/shared-ssl-logstash-config.asciidoc +++ b/docs/legacy/copied-from-beats/docs/shared-ssl-logstash-config.asciidoc @@ -9,12 +9,10 @@ //// include::../../libbeat/docs/shared-ssl-logstash-config.asciidoc[] ////////////////////////////////////////////////////////////////////////// -[role="xpack"] +[float] [[configuring-ssl-logstash]] == Secure communication with {ls} -IMPORTANT: {deprecation-notice-config} - You can use SSL mutual authentication to secure connections between {beatname_uc} and {ls}. This ensures that {beatname_uc} sends encrypted data to trusted {ls} servers only, and that the {ls} server receives data from trusted {beatname_uc} clients only. diff --git a/docs/legacy/copied-from-beats/docs/shared-systemd.asciidoc b/docs/legacy/copied-from-beats/docs/shared-systemd.asciidoc index b6d649f2965..3c5daaf44df 100644 --- a/docs/legacy/copied-from-beats/docs/shared-systemd.asciidoc +++ b/docs/legacy/copied-from-beats/docs/shared-systemd.asciidoc @@ -1,7 +1,8 @@ [[running-with-systemd]] === {beatname_uc} and systemd -IMPORTANT: {deprecation-notice-config} +IMPORTANT: These commands only apply to the APM Server binary installation method. 
+Fleet-managed users should see {fleet-guide}/start-stop-elastic-agent.html[Start and stop {agent}s on edge hosts]. The DEB and RPM packages include a service unit for Linux systems with systemd. On these systems, you can manage {beatname_uc} by using the usual diff --git a/docs/legacy/copied-from-beats/docs/shared/configuring-intro.asciidoc b/docs/legacy/copied-from-beats/docs/shared/configuring-intro.asciidoc deleted file mode 100644 index 82812c34bd1..00000000000 --- a/docs/legacy/copied-from-beats/docs/shared/configuring-intro.asciidoc +++ /dev/null @@ -1,19 +0,0 @@ - -ifndef::apm-server[] -TIP: To get started quickly, read <<{beatname_lc}-installation-configuration>>. -endif::[] - -To configure {beatname_uc}, edit the configuration file. The default -configuration file is called +{beatname_lc}.yml+. The location of the file -varies by platform. To locate the file, see <>. - -ifndef::apm-server[] -There’s also a full example configuration file called +{beatname_lc}.reference.yml+ -that shows all non-deprecated options. -endif::[] - -TIP: See the -{beats-ref}/config-file-format.html[Config File Format] for more about the -structure of the config file. - -The following topics describe how to configure {beatname_uc}: diff --git a/docs/legacy/copied-from-beats/outputs/fileout/docs/fileout.asciidoc b/docs/legacy/copied-from-beats/outputs/fileout/docs/fileout.asciidoc deleted file mode 100644 index 1d84df95d3f..00000000000 --- a/docs/legacy/copied-from-beats/outputs/fileout/docs/fileout.asciidoc +++ /dev/null @@ -1,82 +0,0 @@ -[[file-output]] -=== Configure the File output - -++++ -File -++++ - -IMPORTANT: {deprecation-notice-config} - -The File output dumps the transactions into a file where each transaction is in a JSON format. -Currently, this output is used for testing, but it can be used as input for -{ls}. 
- -To use this output, edit the {beatname_uc} configuration file to disable the {es} -output by commenting it out, and enable the file output by adding `output.file`. - -Example configuration: - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -output.file: - path: "/tmp/{beatname_lc}" - filename: {beatname_lc} - #rotate_every_kb: 10000 - #number_of_files: 7 - #permissions: 0600 - #rotate_on_startup: true ------------------------------------------------------------------------------- - -ifdef::apm-server[] -[float] -==== Configure the {kib} output - -include::../../../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] -endif::[] - -==== Configuration options - -You can specify the following `output.file` options in the +{beatname_lc}.yml+ config file: - -===== `enabled` - -The enabled config is a boolean setting to enable or disable the output. If set -to false, the output is disabled. - -The default value is `true`. - -[[path]] -===== `path` - -The path to the directory where the generated files will be saved. This option is -mandatory. - -===== `filename` - -The name of the generated files. The default is set to the Beat name. For example, the files -generated by default for {beatname_uc} would be "{beatname_lc}", "{beatname_lc}.1", "{beatname_lc}.2", and so on. - -===== `rotate_every_kb` - -The maximum size in kilobytes of each file. When this size is reached, the files are -rotated. The default value is 10240 KB. - -===== `number_of_files` - -The maximum number of files to save under <>. When this number of files is reached, the -oldest file is deleted, and the rest of the files are shifted from last to first. -The number of files must be between 2 and 1024. The default is 7. - -===== `permissions` - -Permissions to use for file creation. The default is 0600. 
- -===== `rotate_on_startup` - -If the output file already exists on startup, immediately rotate it and start writing to a new file instead of appending to the existing one. Defaults to true. - -===== `codec` - -Output codec configuration. If the `codec` section is missing, events will be JSON encoded. - -See <> for more information. diff --git a/docs/legacy/data-ingestion.asciidoc b/docs/legacy/data-ingestion.asciidoc index e376728fece..cbe7c07cf56 100644 --- a/docs/legacy/data-ingestion.asciidoc +++ b/docs/legacy/data-ingestion.asciidoc @@ -1,38 +1,26 @@ [[tune-data-ingestion]] -== Tune data ingestion - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. +=== Tune data ingestion This section explains how to adapt data ingestion according to your needs. -* <> -* <> - - +[float] [[tune-apm-server]] === Tune APM Server -++++ -APM Server -++++ - -IMPORTANT: {deprecation-notice-data} - * <> * <> * <> [[add-apm-server-instances]] [float] -==== Add APM Server instances +==== Add APM Server or {agent} instances If the APM Server cannot process data quickly enough, you will see request timeouts. - One way to solve this problem is to increase processing power. -This can be done by either migrating your APM Server to a more powerful machine -or adding more APM Server instances. + +Increase processing power by either migrating to a more powerful machine +or adding more APM Server/Elastic Agent instances. Having several instances will also increase <>. [[reduce-payload-size]] @@ -54,21 +42,20 @@ Read more in the {apm-agents-ref}/index.html[agents documentation]. Agents make use of long running requests and flush as many events over a single request as possible. Thus, the rate limiter for anonymous authentication is bound to the number of _events_ sent per second, per IP. -If the event rate limit is hit while events on an established request are sent, the request is not immediately terminated. 
The intake of events is only throttled to <>, which means that events are queued and processed slower. Only when the allowed buffer queue is also full, does the request get terminated with a `429 - rate limit exceeded` HTTP response. If an agent tries to establish a new request, but the rate limit is already hit, a `429` will be sent immediately. - -Increasing the <> default value will help avoid `rate limit exceeded` errors. +If the event rate limit is hit while events on an established request are sent, the request is not immediately terminated. The intake of events is only throttled to the anonymous event rate limit, which means that events are queued and processed slower. Only when the allowed buffer queue is also full, does the request get terminated with a `429 - rate limit exceeded` HTTP response. If an agent tries to establish a new request, but the rate limit is already hit, a `429` will be sent immediately. -[[tune-es]] -=== Tune {es} +Increasing the default value for the following configuration variable will help avoid `rate limit exceeded` errors: -++++ -{es} -++++ +|==== +| APM Server binary | <> +| Fleet-managed | `Anonymous Event rate limit (event limit)` +|==== -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. +[float] +[[apm-tune-elasticsearch]] +=== Tune {es} -The {es} reference provides insight on tuning {es}. +The {es} Reference provides insight on tuning {es}. {ref}/tune-for-indexing-speed.html[Tune for indexing speed] provides information on: diff --git a/docs/legacy/error-api.asciidoc b/docs/legacy/error-api.asciidoc deleted file mode 100644 index 776f2de1bc0..00000000000 --- a/docs/legacy/error-api.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[error-api]] -=== Errors - -An error or a logged error message captured by an agent occurring in a monitored service. - -[float] -[[error-schema]] -==== Error Schema - -APM Server uses JSON Schema to validate requests.
The specification for errors is defined on -{github_repo_link}/docs/spec/v2/error.json[GitHub] and included below: - -[source,json] ----- -include::../spec/v2/error.json[] ----- diff --git a/docs/legacy/error-indices.asciidoc b/docs/legacy/error-indices.asciidoc deleted file mode 100644 index 4b7a74ea5db..00000000000 --- a/docs/legacy/error-indices.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[error-indices]] -== Example error documents - -++++ -Error documents -++++ - -This example shows what error documents can look like when indexed in {es}: - -[source,json] ----- -include::../data/elasticsearch/generated/errors.json[] ----- diff --git a/docs/legacy/events-api.asciidoc b/docs/legacy/events-api.asciidoc deleted file mode 100644 index 7add657c429..00000000000 --- a/docs/legacy/events-api.asciidoc +++ /dev/null @@ -1,130 +0,0 @@ -[[events-api]] -== Events Intake API - -++++ -Events intake -++++ - -IMPORTANT: {deprecation-notice-api} -If you've already upgraded, see <>. - -NOTE: Most users do not need to interact directly with the events intake API. - -The events intake API is what we call the internal protocol that APM agents use to talk to the APM Server. -Agents communicate with the Server by sending events -- captured pieces of information -- in an HTTP request. -Events can be: - -* Transactions -* Spans -* Errors -* Metrics - -Each event is sent as its own line in the HTTP request body. -This is known as http://ndjson.org[newline delimited JSON (NDJSON)]. - -With NDJSON, agents can open an HTTP POST request and use chunked encoding to stream events to the APM Server -as soon as they are recorded in the agent. -This makes it simple for agents to serialize each event to a stream of newline delimited JSON. -The APM Server also treats the HTTP body as a compressed stream and thus reads and handles each event independently. - -See the {apm-overview-ref-v}/apm-data-model.html[APM Data Model] to learn more about the different types of events. 
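The NDJSON framing described above can be sketched as follows. This is an illustration only, assuming a simplified subset of event fields rather than the full intake schema; the convention shown (a metadata object on the first line, then one event object per line) follows the protocol description above.

```python
import json

def build_ndjson_body(metadata, events):
    """Serialize a metadata object followed by events as newline delimited
    JSON (NDJSON): one JSON object per line, metadata first.
    Field contents below are illustrative, not the complete schema."""
    lines = [json.dumps({"metadata": metadata})]
    lines.extend(json.dumps(event) for event in events)
    return "\n".join(lines) + "\n"

body = build_ndjson_body(
    {"service": {"name": "example-app",
                 "agent": {"name": "custom", "version": "0.1"}}},
    [{"transaction": {"id": "945254c567a5417e",
                      "trace_id": "0123456789abcdef0123456789abcdef",
                      "name": "GET /", "type": "request",
                      "duration": 32.5, "span_count": {"started": 0}}}],
)
print(body)
```

Because each event is its own line, an agent can serialize and flush events to the open request as they are recorded, which is what makes the chunked streaming described above straightforward.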
- -[[events-api-endpoint]] -[float] -=== Endpoint - -Send an `HTTP POST` request to the APM Server `intake/v2/events` endpoint: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/intake/v2/events ------------------------------------------------------------- - -For <> send an `HTTP POST` request to the APM Server `intake/v2/rum/events` endpoint instead: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/intake/v2/rum/events ------------------------------------------------------------- - -[[events-api-response]] -[float] -=== Response - -On success, the server will respond with a 202 Accepted status code and no body. - -Keep in mind that events can succeed and fail independently of each other. Only if all events succeed does the server respond with a 202. - -[[events-api-errors]] -[float] -=== Errors - -There are two types of errors that the APM Server may return to an agent: - -* Event related errors (typically validation errors) -* Non-event related errors - -The APM Server processes events one after the other. -If an error is encountered while processing an event, -the error encountered as well as the document causing the error are added to an internal array. -The APM Server will only save 5 event related errors. -If it encounters more than 5 event related errors, -the additional errors will not be returned to agent. -Once all events have been processed, -the error response is sent. - -Some errors, not relating to specific events, -may terminate the request immediately. -For example: IP rate limit reached, wrong metadata, etc. -If at any point one of these errors is encountered, -it is added to the internal array and immediately returned. 
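Non-event errors such as a rate limit terminate the request immediately with a `429`, so a custom client can treat that status as retryable. A minimal sketch of such handling — the server URL, backoff parameters, and `send_events` helper are all illustrative assumptions, not part of any official agent:

```python
import time
import urllib.error
import urllib.request

def backoff_delays(attempts, base=1.0, cap=60.0):
    """Capped exponential backoff delays in seconds (parameters are illustrative)."""
    return [min(cap, base * (2 ** i)) for i in range(attempts)]

def send_events(ndjson_body, url="http://localhost:8200/intake/v2/events",
                retries=4):
    """Hypothetical sender: POST NDJSON events, backing off on 429 responses."""
    for delay in backoff_delays(retries):
        request = urllib.request.Request(
            url,
            data=ndjson_body.encode("utf-8"),
            headers={"Content-Type": "application/x-ndjson"},
            method="POST",
        )
        try:
            with urllib.request.urlopen(request) as response:
                return response.status  # 202 Accepted when all events succeed
        except urllib.error.HTTPError as err:
            if err.code != 429:       # only retry on "rate limit exceeded"
                raise
            time.sleep(delay)         # server-side queue is full; slow down
    raise RuntimeError("rate limit still exceeded after retries")
```

Backing off rather than immediately reconnecting matters here: as noted above, a new request made while the rate limit is still hit is rejected with a `429` right away.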
- -An example error response might look something like this: - -[source,json] ------------------------------------------------------------- -{ - "errors": [ - { - "message": "", <1> - "document": "" <2> - },{ - "message": "", - "document": "" - },{ - "message": "", - "document": "" - },{ - "message": "too many requests" <3> - }, - ], - "accepted": 2320 <4> -} ------------------------------------------------------------- - -<1> An event related error -<2> The document causing the error -<3> An immediately returning non-event related error -<4> The number of accepted events - -If you're developing an agent, these errors can be useful for debugging. - -[[events-api-schema-definition]] -[float] -=== Event API Schemas - -The APM Server uses a collection of JSON Schemas for validating requests to the intake API: - -* <> -* <> -* <> -* <> -* <> -* <> - -include::./metadata-api.asciidoc[] -include::./transaction-api.asciidoc[] -include::./span-api.asciidoc[] -include::./error-api.asciidoc[] -include::./metricset-api.asciidoc[] -include::./example-intake-events.asciidoc[] diff --git a/docs/legacy/example-intake-events.asciidoc b/docs/legacy/example-intake-events.asciidoc deleted file mode 100644 index f7731ae9b3f..00000000000 --- a/docs/legacy/example-intake-events.asciidoc +++ /dev/null @@ -1,9 +0,0 @@ -[[example-intake-events]] -=== Example Request Body - -A request body example containing one event for all currently supported event types. - -[source,json] ----- -include::../data/intake-api/generated/events.ndjson[] ----- diff --git a/docs/legacy/exploring-es-data.asciidoc b/docs/legacy/exploring-es-data.asciidoc index e31088290f9..e06be82a6f9 100644 --- a/docs/legacy/exploring-es-data.asciidoc +++ b/docs/legacy/exploring-es-data.asciidoc @@ -1,61 +1,16 @@ [[exploring-es-data]] = Explore data in {es} -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -Elastic APM data is stored in data streams. 
-include::../data-streams.asciidoc[tag=data-streams] - -The namespace default is `default`. -To configure a custom namespace, set `data_streams.namespace`: - -[source,yaml] ----- -apm-server: - data_streams.namespace: custom_namespace ----- - -[discrete] -[[apm-data-streams-list-standalone]] -== APM data streams - -By type, the APM data streams are: - -Traces:: -Traces are comprised of {apm-guide-ref}/data-model.html[spans and transactions]. -Traces are stored in the following data streams: -+ -include::../data-streams.asciidoc[tag=traces-data-streams] - -Metrics:: -Metrics include application-based metrics, aggregation metrics, and basic system metrics. -Metrics are stored in the following data streams: -+ -include::../data-streams.asciidoc[tag=metrics-data-streams] - -Logs:: -Logs include application error events and application logs. -Logs are stored in the following data streams: -+ -include::../data-streams.asciidoc[tag=logs-data-streams] - -[float] -[[sample-apm-document]] -== Sample APM documents - -Sample documents for each of the APM event types are available on these pages: - -* <> -* <> -* <> +* <> +* <> * <> -* <> [float] [[elasticsearch-query-examples]] == {es} query examples +Elastic APM data is stored in <>. + The following examples enable you to interact with {es}'s REST API. One possible way to do this is using {kib}'s {kibana-ref}/console-kibana.html[{dev-tools-app} console]. 
@@ -93,9 +48,60 @@ GET /_template/your-template-name ---- // CONSOLE +[float] +[[sample-apm-document]] +== Sample APM documents + +Sample documents for each of the APM event types are available below: + +[%collapsible] +.Transaction documents +==== +Example transaction documents indexed in {es}: + +[source,json] +---- +include::../data/elasticsearch/generated/transactions.json[] +---- +==== + +[%collapsible] +.Span documents +==== +Example span documents indexed in {es}: + +[source,json] +---- +include::../data/elasticsearch/generated/spans.json[] +---- +==== + +[%collapsible] +.Error documents +==== +Example error documents indexed in {es}: + +[source,json] +---- +include::../data/elasticsearch/generated/errors.json[] +---- +==== + +[%collapsible] +.Metric document +==== +include::./metricset-indices.asciidoc[tag=example] +==== + +[%collapsible] +.Source map documents +==== +Example source map document indexed in {es}: + +[source,json] +---- +include::../data/intake-api/generated/sourcemap/bundle.js.map[] +---- +==== -include::./transaction-indices.asciidoc[] -include::./span-indices.asciidoc[] -include::./error-indices.asciidoc[] -include::./metricset-indices.asciidoc[] -include::./sourcemap-indices.asciidoc[] +include::./metricset-indices.asciidoc[] \ No newline at end of file diff --git a/docs/legacy/feature-roles.asciidoc b/docs/legacy/feature-roles.asciidoc index f035ae87cce..8b60c1abdb0 100644 --- a/docs/legacy/feature-roles.asciidoc +++ b/docs/legacy/feature-roles.asciidoc @@ -1,8 +1,37 @@ -[role="xpack"] -[[feature-roles]] -== Grant users access to secured resources +[[secure-comms-stack]] +== Secure communication with the {stack} + +++++ +With the {stack} +++++ + +NOTE: This documentation only applies to the APM Server binary. + +Use role-based access control or API keys to grant APM Server users access to secured resources. + +* <> +* <>. + +After privileged users have been created, use authentication to connect to a secured Elastic cluster. 
+ +* <> +* <> -IMPORTANT: {deprecation-notice-config} +For secure communication between APM Server and APM Agents, see <>. + +A reference of all available <> is also available. + +[float] +[[security-overview]] +=== Security Overview + +APM Server exposes an HTTP endpoint, and as with anything that opens ports on your servers, +you should be careful about who can connect to it. +Firewall rules are recommended to ensure only authorized systems can connect. + +[float] +[[feature-roles]] +=== Feature roles You can use role-based access control to grant users access to secured resources. The roles that you set up depend on your organization's security @@ -39,8 +68,6 @@ In general, there are three types of privileges you'll work with: Create a _writer_ user ++++ -IMPORTANT: {deprecation-notice-config} - APM users that publish events to {es} need privileges to write to APM data streams. [float] @@ -82,8 +109,6 @@ Assign these extra privileges to the *general writer role*. Create a _monitoring_ user ++++ -IMPORTANT: {deprecation-notice-config} - {es-security-features} provides built-in users and roles for publishing and viewing monitoring data. The privileges and roles needed to publish monitoring data depend on the method used to collect that data. @@ -225,9 +250,7 @@ need to view monitoring data for {beatname_uc}: Create an _API key_ user ++++ -IMPORTANT: {deprecation-notice-config} - -You can configure <> to authorize requests to APM Server. +You can configure <> to authorize requests to APM Server. To create an APM Server user with the required privileges for creating and managing API keys: . 
Create an **API key role**, called something like `apm_api_key`, @@ -295,8 +318,6 @@ PUT _security/role/apm_api_key <1> Create a _central config_ user ++++ -IMPORTANT: {deprecation-notice-config} - [[privileges-agent-central-config-server]] ==== APM Server central configuration management @@ -332,33 +353,3 @@ See {kibana-ref}/apm-app-central-config-user.html[{apm-app} central configuratio // ++++ // CONTENT - -//// -*********************************** *********************************** -*********************************** *********************************** -//// - -[[more-security-roles]] -=== Additional APM users and roles - -IMPORTANT: {deprecation-notice-config} - -In addition to the {beatname_uc} users described in this documentation, -you'll likely need to create users for other APM tasks: - -* An {kibana-ref}/apm-app-reader.html[APM reader], for {kib} users who need to view the {apm-app}, -or create and edit visualizations that access +{beat_default_index_prefix}-*+ data. -* Various {kibana-ref}/apm-app-api-user.html[{apm-app} API users], -for interacting with the APIs exposed by the {apm-app}. - -[float] -[[learn-more-security]] -=== Learn more about users and roles - -Want to learn more about creating users and roles? See -{ref}/secure-cluster.html[Secure a cluster]. Also see: - -* {ref}/security-privileges.html[Security privileges] for a description of -available privileges -* {ref}/built-in-roles.html[Built-in roles] for a description of roles that -you can assign to users diff --git a/docs/legacy/fields.asciidoc b/docs/legacy/fields.asciidoc deleted file mode 100644 index 9d0bd5fec9e..00000000000 --- a/docs/legacy/fields.asciidoc +++ /dev/null @@ -1,22956 +0,0 @@ - -//// -This file is generated! See _meta/fields.yml and scripts/generate_fields_docs.py -//// - -[[exported-fields]] -= Exported fields - - -This document describes the fields that are exported by Apm-Server. 
They are -grouped in the following categories: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - - -[[exported-fields-apm-application-metrics]] -== APM Application Metrics fields - -APM application metrics. - - -*`histogram`*:: -+ --- -type: histogram - --- - -[[exported-fields-apm-error]] -== APM Error fields - -Error-specific data for APM - - -*`processor.name`*:: -+ --- -Processor name. - -type: keyword - --- - -*`processor.event`*:: -+ --- -Processor event. - -type: keyword - --- - - -*`timestamp.us`*:: -+ --- -Timestamp of the event in microseconds since Unix epoch. - - -type: long - --- - -*`message`*:: -+ --- -The original error message. - -type: text - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== url - -A complete Url, with scheme, host and path. - - - -*`url.scheme`*:: -+ --- -The protocol of the request, e.g. "https:". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.full`*:: -+ --- -The full, possibly agent-assembled URL of the request, e.g https://example.com:443/search?q=elasticsearch#top. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.domain`*:: -+ --- -The hostname of the request, e.g. "example.com". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.port`*:: -+ --- -The port of the request, e.g. 443. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.path`*:: -+ --- -The path of the request, e.g. "/search". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.query`*:: -+ --- -The query string of the request, e.g. "q=elasticsearch". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.fragment`*:: -+ --- -A fragment specifying a location in a web page , e.g. "top". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`http.version`*:: -+ --- -The http version of the request leading to this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - - -*`http.request.method`*:: -+ --- -The http method of the request leading to this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.request.headers`*:: -+ --- -The canonical headers of the monitored HTTP request. - - -type: object - -Object is not enabled. - --- - -*`http.request.referrer`*:: -+ --- -Referrer for this HTTP request. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`http.response.status_code`*:: -+ --- -The status code of the HTTP response. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.response.finished`*:: -+ --- -Used by the Node agent to indicate when in the response life cycle an error has occurred. - - -type: boolean - --- - -*`http.response.headers`*:: -+ --- -The canonical headers of the monitored HTTP response. - - -type: object - -Object is not enabled. - --- - -*`labels`*:: -+ --- -A flat mapping of user-defined labels with string, boolean or number values. - - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== service - -Service fields. - - - -*`service.name`*:: -+ --- -Immutable name of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.version`*:: -+ --- -Version of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.environment`*:: -+ --- -Service environment. - - -type: keyword - --- - - -*`service.node.name`*:: -+ --- -Unique meaningful name of the service node. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`service.language.name`*:: -+ --- -Name of the programming language used. - - -type: keyword - --- - -*`service.language.version`*:: -+ --- -Version of the programming language used. - - -type: keyword - --- - - -*`service.runtime.name`*:: -+ --- -Name of the runtime used. - - -type: keyword - --- - -*`service.runtime.version`*:: -+ --- -Version of the runtime used. 
- - -type: keyword - --- - - -*`service.framework.name`*:: -+ --- -Name of the framework used. - - -type: keyword - --- - -*`service.framework.version`*:: -+ --- -Version of the framework used. - - -type: keyword - --- - - -*`transaction.id`*:: -+ --- -The transaction ID. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`transaction.sampled`*:: -+ --- -Transactions that are 'sampled' will include all available information. Transactions that are not sampled will not have spans or context. - - -type: boolean - --- - -*`transaction.type`*:: -+ --- -Keyword of specific relevance in the service's domain (eg. 'request', 'backgroundjob', etc) - - -type: keyword - --- - -*`transaction.name`*:: -+ --- -Generic designation of a transaction in the scope of a single service (eg. 'GET /users/:id'). - - -type: keyword - --- - -*`transaction.name.text`*:: -+ --- -type: text - --- - - -*`trace.id`*:: -+ --- -The ID of the trace to which the event belongs to. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`parent.id`*:: -+ --- -The ID of the parent event. - - -type: keyword - --- - - -*`agent.name`*:: -+ --- -Name of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.version`*:: -+ --- -Version of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.ephemeral_id`*:: -+ --- -The Ephemeral ID identifies a running process. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== container - -Container fields are used for meta information about the specific container that is the source of information. These fields help correlate data based containers from any runtime. - - - -*`container.id`*:: -+ --- -Unique container id. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -[float] -=== kubernetes - -Kubernetes metadata reported by agents - - - -*`kubernetes.namespace`*:: -+ --- -Kubernetes namespace - - -type: keyword - --- - - -*`kubernetes.node.name`*:: -+ --- -Kubernetes node name - - -type: keyword - --- - - -*`kubernetes.pod.name`*:: -+ --- -Kubernetes pod name - - -type: keyword - --- - -*`kubernetes.pod.uid`*:: -+ --- -Kubernetes Pod UID - - -type: keyword - --- - -[float] -=== network - -Optional network fields - - - -[float] -=== connection - -Network connection details - - - -*`network.connection.type`*:: -+ --- -Network connection type, eg. "wifi", "cell" - - -type: keyword - --- - -*`network.connection.subtype`*:: -+ --- -Detailed network connection sub-type, e.g. "LTE", "CDMA" - - -type: keyword - --- - -[float] -=== carrier - -Network operator - - - -*`network.carrier.name`*:: -+ --- -Carrier name, eg. Vodafone, T-Mobile, etc. - - -type: keyword - --- - -*`network.carrier.mcc`*:: -+ --- -Mobile country code - - -type: keyword - --- - -*`network.carrier.mnc`*:: -+ --- -Mobile network code - - -type: keyword - --- - -*`network.carrier.icc`*:: -+ --- -ISO country code, eg. US - - -type: keyword - --- - -[float] -=== host - -Optional host fields. - - - -*`host.architecture`*:: -+ --- -The architecture of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.hostname`*:: -+ --- -The hostname of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.name`*:: -+ --- -Name of the host the event was recorded on. It can contain same information as host.hostname or a name specified by the user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.ip`*:: -+ --- -IP of the host that records the event. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. 
- - - -*`host.os.platform`*:: -+ --- -The platform of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== process - -Information pertaining to the running process where the data was collected - - - -*`process.args`*:: -+ --- -Process arguments. May be filtered to protect sensitive information. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pid`*:: -+ --- -Numeric process ID of the service process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.ppid`*:: -+ --- -Numeric ID of the service's parent process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title`*:: -+ --- -Service process title. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`observer.listening`*:: -+ --- -Address the server is listening on. - - -type: keyword - --- - -*`observer.hostname`*:: -+ --- -Hostname of the APM Server. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.version`*:: -+ --- -APM Server version. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.type`*:: -+ --- -The type will be set to `apm-server`. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.id`*:: -+ --- -Unique identifier of the APM Server. - - -type: keyword - --- - -*`observer.ephemeral_id`*:: -+ --- -Ephemeral identifier of the APM Server. - - -type: keyword - --- - - -*`user.name`*:: -+ --- -The username of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.domain`*:: -+ --- -Domain of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.id`*:: -+ --- -Identifier of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.email`*:: -+ --- -Email of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`client.domain`*:: -+ --- -Client domain. 
- - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.ip`*:: -+ --- -IP address of the client of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.port`*:: -+ --- -Port of the client. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`source.domain`*:: -+ --- -Source domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.ip`*:: -+ --- -IP address of the source of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.port`*:: -+ --- -Port of the source. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== destination - -Destination fields describe details about the destination of a packet/event. -Destination fields are usually populated in conjunction with source fields. - - -*`destination.address`*:: -+ --- -Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.ip`*:: -+ --- -IP addess of the destination. Can be one of multiple IPv4 or IPv6 addresses. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.port`*:: -+ --- -Port of the destination. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== user_agent - -The user_agent fields normally come from a browser request. They often show up in web service logs coming from the parsed user agent string. 
- - - -*`user_agent.original`*:: -+ --- -Unparsed version of the user_agent. - - -type: keyword - -example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original.text`*:: -+ --- -Software agent acting in behalf of a user, eg. a web browser / OS combination. - - -type: text - --- - -*`user_agent.name`*:: -+ --- -Name of the user agent. - - -type: keyword - -example: Safari - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.version`*:: -+ --- -Version of the user agent. - - -type: keyword - -example: 12.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== device - -Information concerning the device. - - - -*`user_agent.device.name`*:: -+ --- -Name of the device. - - -type: keyword - -example: iPhone - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`user_agent.os.platform`*:: -+ --- -Operating system platform (such centos, ubuntu, windows). - - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.name`*:: -+ --- -Operating system name, without the version. - - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.full`*:: -+ --- -Operating system name, including the version or code name. - - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.version`*:: -+ --- -Operating system version as a raw string. - - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. 
- - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== cloud - -Cloud metadata reported by agents - - - - -*`cloud.account.id`*:: -+ --- -Cloud account ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.account.name`*:: -+ --- -Cloud account name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.availability_zone`*:: -+ --- -Cloud availability zone name - -type: keyword - -example: us-east1-a - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.instance.id`*:: -+ --- -Cloud instance/machine ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.name`*:: -+ --- -Cloud instance/machine name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.machine.type`*:: -+ --- -Cloud instance/machine type - -type: keyword - -example: t2.medium - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.project.id`*:: -+ --- -Cloud project ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.name`*:: -+ --- -Cloud project name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.provider`*:: -+ --- -Cloud provider name - -type: keyword - -example: gcp - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.region`*:: -+ --- -Cloud region name - -type: keyword - -example: us-east1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.service.name`*:: -+ --- -Cloud service name, intended to distinguish services running on different platforms within a provider. - - -type: keyword - --- - -[float] -=== error - -Data captured by an agent representing an event occurring in a monitored service. - - - -*`error.id`*:: -+ --- -The ID of the error. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`error.culprit`*:: -+ --- -Function call which was the primary perpetrator of this event. 
- -type: keyword - --- - -*`error.grouping_key`*:: -+ --- -Hash of select properties of the logged error for grouping purposes. - - -type: keyword - --- - -*`error.grouping_name`*:: -+ --- -Name to associate with an error group. Errors belonging to the same group (same grouping_key) may have differing values for grouping_name. Consumers may choose one arbitrarily. - - -type: keyword - --- - -[float] -=== exception - -Information about the originally thrown error. - - - -*`error.exception.code`*:: -+ --- -The error code set when the error happened, e.g. database error code. - -type: keyword - --- - -*`error.exception.message`*:: -+ --- -The original error message. - -type: text - --- - -*`error.exception.module`*:: -+ --- -The module namespace of the original error. - -type: keyword - --- - -*`error.exception.type`*:: -+ --- -The type of the original error, e.g. the Java exception class name. - -type: keyword - --- - -*`error.exception.handled`*:: -+ --- -Indicator whether the error was caught somewhere in the code or not. - -type: boolean - --- - -[float] -=== log - -Additional information added by logging the error. - - - -*`error.log.level`*:: -+ --- -The severity of the record. - -type: keyword - --- - -*`error.log.logger_name`*:: -+ --- -The name of the logger instance used. - -type: keyword - --- - -*`error.log.message`*:: -+ --- -The additionally logged error message. - -type: text - --- - -*`error.log.param_message`*:: -+ --- -A parametrized message. E.g. 'Could not connect to %s'. The property message is still required, and should be equal to the param_message, but with placeholders replaced. In some situations the param_message is used to group errors together. - - -type: keyword - --- - -[[exported-fields-apm-profile]] -== APM Profile fields - -Profiling-specific data for APM. - - -*`processor.name`*:: -+ --- -Processor name. - -type: keyword - --- - -*`processor.event`*:: -+ --- -Processor event. 
- -type: keyword - --- - - -*`timestamp.us`*:: -+ --- -Timestamp of the event in microseconds since Unix epoch. - - -type: long - --- - -*`labels`*:: -+ --- -A flat mapping of user-defined labels with string, boolean or number values. - - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== service - -Service fields. - - - -*`service.name`*:: -+ --- -Immutable name of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.version`*:: -+ --- -Version of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.environment`*:: -+ --- -Service environment. - - -type: keyword - --- - - -*`service.node.name`*:: -+ --- -Unique meaningful name of the service node. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`service.language.name`*:: -+ --- -Name of the programming language used. - - -type: keyword - --- - -*`service.language.version`*:: -+ --- -Version of the programming language used. - - -type: keyword - --- - - -*`service.runtime.name`*:: -+ --- -Name of the runtime used. - - -type: keyword - --- - -*`service.runtime.version`*:: -+ --- -Version of the runtime used. - - -type: keyword - --- - - -*`service.framework.name`*:: -+ --- -Name of the framework used. - - -type: keyword - --- - -*`service.framework.version`*:: -+ --- -Version of the framework used. - - -type: keyword - --- - - -*`agent.name`*:: -+ --- -Name of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.version`*:: -+ --- -Version of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.ephemeral_id`*:: -+ --- -The Ephemeral ID identifies a running process. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== container - -Container fields are used for meta information about the specific container that is the source of information. 
These fields help correlate data based containers from any runtime. - - - -*`container.id`*:: -+ --- -Unique container id. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== network - -Optional network fields - - - -[float] -=== connection - -Network connection details - - - -*`network.connection.type`*:: -+ --- -Network connection type, eg. "wifi", "cell" - - -type: keyword - --- - -*`network.connection.subtype`*:: -+ --- -Detailed network connection sub-type, e.g. "LTE", "CDMA" - - -type: keyword - --- - -[float] -=== carrier - -Network operator - - - -*`network.carrier.name`*:: -+ --- -Carrier name, eg. Vodafone, T-Mobile, etc. - - -type: keyword - --- - -*`network.carrier.mcc`*:: -+ --- -Mobile country code - - -type: keyword - --- - -*`network.carrier.mnc`*:: -+ --- -Mobile network code - - -type: keyword - --- - -*`network.carrier.icc`*:: -+ --- -ISO country code, eg. US - - -type: keyword - --- - -[float] -=== kubernetes - -Kubernetes metadata reported by agents - - - -*`kubernetes.namespace`*:: -+ --- -Kubernetes namespace - - -type: keyword - --- - - -*`kubernetes.node.name`*:: -+ --- -Kubernetes node name - - -type: keyword - --- - - -*`kubernetes.pod.name`*:: -+ --- -Kubernetes pod name - - -type: keyword - --- - -*`kubernetes.pod.uid`*:: -+ --- -Kubernetes Pod UID - - -type: keyword - --- - -[float] -=== host - -Optional host fields. - - - -*`host.architecture`*:: -+ --- -The architecture of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.hostname`*:: -+ --- -The hostname of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.name`*:: -+ --- -Name of the host the event was recorded on. It can contain same information as host.hostname or a name specified by the user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.ip`*:: -+ --- -IP of the host that records the event. 
- - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`host.os.platform`*:: -+ --- -The platform of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== process - -Information pertaining to the running process where the data was collected - - - -*`process.args`*:: -+ --- -Process arguments. May be filtered to protect sensitive information. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pid`*:: -+ --- -Numeric process ID of the service process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.ppid`*:: -+ --- -Numeric ID of the service's parent process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title`*:: -+ --- -Service process title. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`observer.listening`*:: -+ --- -Address the server is listening on. - - -type: keyword - --- - -*`observer.hostname`*:: -+ --- -Hostname of the APM Server. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.version`*:: -+ --- -APM Server version. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.type`*:: -+ --- -The type will be set to `apm-server`. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.id`*:: -+ --- -Unique identifier of the APM Server. - - -type: keyword - --- - -*`observer.ephemeral_id`*:: -+ --- -Ephemeral identifier of the APM Server. - - -type: keyword - --- - - -*`user.name`*:: -+ --- -The username of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.id`*:: -+ --- -Identifier of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.email`*:: -+ --- -Email of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - - -*`client.domain`*:: -+ --- -Client domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.ip`*:: -+ --- -IP address of the client of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.port`*:: -+ --- -Port of the client. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`source.domain`*:: -+ --- -Source domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.ip`*:: -+ --- -IP address of the source of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.port`*:: -+ --- -Port of the source. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== destination - -Destination fields describe details about the destination of a packet/event. -Destination fields are usually populated in conjunction with source fields. - - -*`destination.address`*:: -+ --- -Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.ip`*:: -+ --- -IP addess of the destination. Can be one of multiple IPv4 or IPv6 addresses. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.port`*:: -+ --- -Port of the destination. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== user_agent - -The user_agent fields normally come from a browser request. 
They often show up in web service logs coming from the parsed user agent string. - - - -*`user_agent.original`*:: -+ --- -Unparsed version of the user_agent. - - -type: keyword - -example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original.text`*:: -+ --- -Software agent acting in behalf of a user, eg. a web browser / OS combination. - - -type: text - --- - -*`user_agent.name`*:: -+ --- -Name of the user agent. - - -type: keyword - -example: Safari - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.version`*:: -+ --- -Version of the user agent. - - -type: keyword - -example: 12.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== device - -Information concerning the device. - - - -*`user_agent.device.name`*:: -+ --- -Name of the device. - - -type: keyword - -example: iPhone - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`user_agent.os.platform`*:: -+ --- -Operating system platform (such centos, ubuntu, windows). - - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.name`*:: -+ --- -Operating system name, without the version. - - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.full`*:: -+ --- -Operating system name, including the version or code name. - - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.version`*:: -+ --- -Operating system version as a raw string. - - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`user_agent.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== cloud - -Cloud metadata reported by agents - - - - -*`cloud.account.id`*:: -+ --- -Cloud account ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.account.name`*:: -+ --- -Cloud account name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.availability_zone`*:: -+ --- -Cloud availability zone name - -type: keyword - -example: us-east1-a - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.instance.id`*:: -+ --- -Cloud instance/machine ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.name`*:: -+ --- -Cloud instance/machine name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.machine.type`*:: -+ --- -Cloud instance/machine type - -type: keyword - -example: t2.medium - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.project.id`*:: -+ --- -Cloud project ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.name`*:: -+ --- -Cloud project name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.provider`*:: -+ --- -Cloud provider name - -type: keyword - -example: gcp - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.region`*:: -+ --- -Cloud region name - -type: keyword - -example: us-east1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.service.name`*:: -+ --- -Cloud service name, intended to distinguish services running on different platforms within a provider. - - -type: keyword - --- - - -*`profile.id`*:: -+ --- -Unique ID for the profile. All samples within a profile will have the same profile ID. - - -type: keyword - --- - -*`profile.duration`*:: -+ --- -Duration of the profile, in nanoseconds. All samples within a profile will have the same duration. 
To aggregate durations, you should first group by the profile ID. - - -type: long - --- - - -*`profile.cpu.ns`*:: -+ --- -Amount of CPU time profiled, in nanoseconds. - - -type: long - --- - - -*`profile.wall.us`*:: -+ --- -Amount of wall time profiled, in microseconds. - - -type: long - --- - - -*`profile.samples.count`*:: -+ --- -Number of profile samples for the profiling period. - - -type: long - --- - - -*`profile.alloc_objects.count`*:: -+ --- -Number of objects allocated since the process started. - - -type: long - --- - - -*`profile.alloc_space.bytes`*:: -+ --- -Amount of memory allocated, in bytes, since the process started. - - -type: long - --- - - -*`profile.inuse_objects.count`*:: -+ --- -Number of objects allocated and currently in use. - - -type: long - --- - - -*`profile.inuse_space.bytes`*:: -+ --- -Amount of memory allocated, in bytes, and currently in use. - - -type: long - --- - - -*`profile.top.id`*:: -+ --- -Unique ID for the top stack frame in the context of its callers. - - -type: keyword - --- - -*`profile.top.function`*:: -+ --- -Function name for the top stack frame. - - -type: keyword - --- - -*`profile.top.filename`*:: -+ --- -Source code filename for the top stack frame. - - -type: keyword - --- - -*`profile.top.line`*:: -+ --- -Source code line number for the top stack frame. - - -type: long - --- - - -*`profile.stack.id`*:: -+ --- -Unique ID for a stack frame in the context of its callers. - - -type: keyword - --- - -*`profile.stack.function`*:: -+ --- -Function name for a stack frame. - - -type: keyword - --- - -*`profile.stack.filename`*:: -+ --- -Source code filename for a stack frame. - - -type: keyword - --- - -*`profile.stack.line`*:: -+ --- -Source code line number for a stack frame. - - -type: long - --- - -[[exported-fields-apm-sourcemap]] -== APM Sourcemap fields - -Sourcemap files enriched with metadata - - - -[float] -=== service - -Service fields. 
- - - -*`sourcemap.service.name`*:: -+ --- -The name of the service this sourcemap belongs to. - - -type: keyword - --- - -*`sourcemap.service.version`*:: -+ --- -Service version. - - -type: keyword - --- - -*`sourcemap.bundle_filepath`*:: -+ --- -Location of the sourcemap relative to the file requesting it. - - -type: keyword - --- - -[[exported-fields-apm-span]] -== APM Span fields - -Span-specific data for APM. - - -*`processor.name`*:: -+ --- -Processor name. - -type: keyword - --- - -*`processor.event`*:: -+ --- -Processor event. - -type: keyword - --- - - -*`timestamp.us`*:: -+ --- -Timestamp of the event in microseconds since Unix epoch. - - -type: long - --- - -*`labels`*:: -+ --- -A flat mapping of user-defined labels with string, boolean or number values. - - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== service - -Service fields. - - - -*`service.name`*:: -+ --- -Immutable name of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.version`*:: -+ --- -Version of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.environment`*:: -+ --- -Service environment. - - -type: keyword - --- - - -*`service.node.name`*:: -+ --- -Unique meaningful name of the service node. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`service.language.name`*:: -+ --- -Name of the programming language used. - - -type: keyword - --- - -*`service.language.version`*:: -+ --- -Version of the programming language used. - - -type: keyword - --- - - -*`service.runtime.name`*:: -+ --- -Name of the runtime used. - - -type: keyword - --- - -*`service.runtime.version`*:: -+ --- -Version of the runtime used. - - -type: keyword - --- - - -*`service.framework.name`*:: -+ --- -Name of the framework used. - - -type: keyword - --- - -*`service.framework.version`*:: -+ --- -Version of the framework used. 
- - -type: keyword - --- - - -*`transaction.id`*:: -+ --- -The transaction ID. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`transaction.sampled`*:: -+ --- -Transactions that are 'sampled' will include all available information. Transactions that are not sampled will not have spans or context. - - -type: boolean - --- - -*`transaction.type`*:: -+ --- -Keyword of specific relevance in the service's domain (eg. 'request', 'backgroundjob', etc) - - -type: keyword - --- - -*`transaction.name`*:: -+ --- -Generic designation of a transaction in the scope of a single service (eg. 'GET /users/:id'). - - -type: keyword - --- - -*`transaction.name.text`*:: -+ --- -type: text - --- - - -*`trace.id`*:: -+ --- -The ID of the trace to which the event belongs to. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`parent.id`*:: -+ --- -The ID of the parent event. - - -type: keyword - --- - - -*`agent.name`*:: -+ --- -Name of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.version`*:: -+ --- -Version of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.ephemeral_id`*:: -+ --- -The Ephemeral ID identifies a running process. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== container - -Container fields are used for meta information about the specific container that is the source of information. These fields help correlate data based containers from any runtime. - - - -*`container.id`*:: -+ --- -Unique container id. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -[float] -=== kubernetes - -Kubernetes metadata reported by agents - - - -*`kubernetes.namespace`*:: -+ --- -Kubernetes namespace - - -type: keyword - --- - - -*`kubernetes.node.name`*:: -+ --- -Kubernetes node name - - -type: keyword - --- - - -*`kubernetes.pod.name`*:: -+ --- -Kubernetes pod name - - -type: keyword - --- - -*`kubernetes.pod.uid`*:: -+ --- -Kubernetes Pod UID - - -type: keyword - --- - -[float] -=== network - -Optional network fields - - - -[float] -=== connection - -Network connection details - - - -*`network.connection.type`*:: -+ --- -Network connection type, eg. "wifi", "cell" - - -type: keyword - --- - -*`network.connection.subtype`*:: -+ --- -Detailed network connection sub-type, e.g. "LTE", "CDMA" - - -type: keyword - --- - -[float] -=== carrier - -Network operator - - - -*`network.carrier.name`*:: -+ --- -Carrier name, eg. Vodafone, T-Mobile, etc. - - -type: keyword - --- - -*`network.carrier.mcc`*:: -+ --- -Mobile country code - - -type: keyword - --- - -*`network.carrier.mnc`*:: -+ --- -Mobile network code - - -type: keyword - --- - -*`network.carrier.icc`*:: -+ --- -ISO country code, eg. US - - -type: keyword - --- - -[float] -=== host - -Optional host fields. - - - -*`host.architecture`*:: -+ --- -The architecture of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.hostname`*:: -+ --- -The hostname of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.name`*:: -+ --- -Name of the host the event was recorded on. It can contain same information as host.hostname or a name specified by the user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.ip`*:: -+ --- -IP of the host that records the event. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. 
- - - -*`host.os.platform`*:: -+ --- -The platform of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== process - -Information pertaining to the running process where the data was collected - - - -*`process.args`*:: -+ --- -Process arguments. May be filtered to protect sensitive information. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pid`*:: -+ --- -Numeric process ID of the service process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.ppid`*:: -+ --- -Numeric ID of the service's parent process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title`*:: -+ --- -Service process title. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`observer.listening`*:: -+ --- -Address the server is listening on. - - -type: keyword - --- - -*`observer.hostname`*:: -+ --- -Hostname of the APM Server. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.version`*:: -+ --- -APM Server version. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.type`*:: -+ --- -The type will be set to `apm-server`. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.id`*:: -+ --- -Unique identifier of the APM Server. - - -type: keyword - --- - -*`observer.ephemeral_id`*:: -+ --- -Ephemeral identifier of the APM Server. - - -type: keyword - --- - - -*`user.name`*:: -+ --- -The username of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.domain`*:: -+ --- -Domain of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.id`*:: -+ --- -Identifier of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.email`*:: -+ --- -Email of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`client.domain`*:: -+ --- -Client domain. 
- - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.ip`*:: -+ --- -IP address of the client of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.port`*:: -+ --- -Port of the client. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`source.domain`*:: -+ --- -Source domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.ip`*:: -+ --- -IP address of the source of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.port`*:: -+ --- -Port of the source. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== destination - -Destination fields describe details about the destination of a packet/event. -Destination fields are usually populated in conjunction with source fields. - - -*`destination.address`*:: -+ --- -Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.ip`*:: -+ --- -IP addess of the destination. Can be one of multiple IPv4 or IPv6 addresses. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.port`*:: -+ --- -Port of the destination. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== user_agent - -The user_agent fields normally come from a browser request. They often show up in web service logs coming from the parsed user agent string. 
- - - -*`user_agent.original`*:: -+ --- -Unparsed version of the user_agent. - - -type: keyword - -example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original.text`*:: -+ --- -Software agent acting in behalf of a user, eg. a web browser / OS combination. - - -type: text - --- - -*`user_agent.name`*:: -+ --- -Name of the user agent. - - -type: keyword - -example: Safari - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.version`*:: -+ --- -Version of the user agent. - - -type: keyword - -example: 12.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== device - -Information concerning the device. - - - -*`user_agent.device.name`*:: -+ --- -Name of the device. - - -type: keyword - -example: iPhone - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`user_agent.os.platform`*:: -+ --- -Operating system platform (such centos, ubuntu, windows). - - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.name`*:: -+ --- -Operating system name, without the version. - - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.full`*:: -+ --- -Operating system name, including the version or code name. - - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.version`*:: -+ --- -Operating system version as a raw string. - - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. 
- - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== cloud - -Cloud metadata reported by agents - - - - -*`cloud.account.id`*:: -+ --- -Cloud account ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.account.name`*:: -+ --- -Cloud account name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.availability_zone`*:: -+ --- -Cloud availability zone name - -type: keyword - -example: us-east1-a - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.instance.id`*:: -+ --- -Cloud instance/machine ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.name`*:: -+ --- -Cloud instance/machine name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.machine.type`*:: -+ --- -Cloud instance/machine type - -type: keyword - -example: t2.medium - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.project.id`*:: -+ --- -Cloud project ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.name`*:: -+ --- -Cloud project name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.provider`*:: -+ --- -Cloud provider name - -type: keyword - -example: gcp - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.region`*:: -+ --- -Cloud region name - -type: keyword - -example: us-east1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.service.name`*:: -+ --- -Cloud service name, intended to distinguish services running on different platforms within a provider. - - -type: keyword - --- - - -*`event.outcome`*:: -+ --- -`event.outcome` simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. - - -type: keyword - -example: success - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`child.id`*:: -+ --- -The ID(s) of the child event(s). 
- - -type: keyword - --- - - -*`span.type`*:: -+ --- -Keyword of specific relevance in the service's domain (eg: 'db.postgresql.query', 'template.erb', 'cache', etc). - - -type: keyword - --- - -*`span.subtype`*:: -+ --- -A further sub-division of the type (e.g. postgresql, elasticsearch) - - -type: keyword - --- - -*`span.id`*:: -+ --- -The ID of the span stored as hex encoded string. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`span.name`*:: -+ --- -Generic designation of a span in the scope of a transaction. - - -type: keyword - --- - -*`span.action`*:: -+ --- -The specific kind of event within the sub-type represented by the span (e.g. query, connect) - - -type: keyword - --- - - -*`span.start.us`*:: -+ --- -Offset relative to the transaction's timestamp identifying the start of the span, in microseconds. - - -type: long - --- - - -*`span.duration.us`*:: -+ --- -Duration of the span, in microseconds. - - -type: long - --- - -*`span.sync`*:: -+ --- -Indicates whether the span was executed synchronously or asynchronously. - - -type: boolean - --- - - -*`span.db.link`*:: -+ --- -Database link. - - -type: keyword - --- - -*`span.db.rows_affected`*:: -+ --- -Number of rows affected by the database statement. - - -type: long - --- - - -[float] -=== service - -Destination service context - - -*`span.destination.service.type`*:: -+ --- -Type of the destination service (e.g. 'db', 'elasticsearch'). Should typically be the same as span.type. DEPRECATED: this field will be removed in a future release - - -type: keyword - --- - -*`span.destination.service.name`*:: -+ --- -Identifier for the destination service (e.g. 'http://elastic.co', 'elasticsearch', 'rabbitmq') DEPRECATED: this field will be removed in a future release - - -type: keyword - --- - -*`span.destination.service.resource`*:: -+ --- -Identifier for the destination service resource being operated on (e.g. 
'http://elastic.co:80', 'elasticsearch', 'rabbitmq/queue_name') - - -type: keyword - --- - - - -*`span.message.queue.name`*:: -+ --- -Name of the message queue or topic where the message is published or received. - - -type: keyword - --- - - -*`span.message.age.ms`*:: -+ --- -Age of a message in milliseconds. - - -type: long - --- - - -*`span.composite.count`*:: -+ --- -Number of compressed spans the composite span represents. - - -type: long - --- - - -*`span.composite.sum.us`*:: -+ --- -Sum of the durations of the compressed spans, in microseconds. - - -type: long - --- - -*`span.composite.compression_strategy`*:: -+ --- -The compression strategy that was used. - - -type: keyword - --- - -[[exported-fields-apm-span-metrics-xpack]] -== APM Span Metrics fields - -APM span metrics are used for showing rate of requests and latency between instrumented services. - - - -*`metricset.period`*:: -+ --- -Current data collection period for this event in milliseconds. - -type: long - --- - - - -*`span.destination.service.response_time.count`*:: -+ --- -Number of aggregated outgoing requests. - -type: long - --- - -*`span.destination.service.response_time.sum.us`*:: -+ --- -Aggregated duration of outgoing requests, in microseconds. - -type: long - --- - -[[exported-fields-apm-transaction]] -== APM Transaction fields - -Transaction-specific data for APM - - -*`processor.name`*:: -+ --- -Processor name. - -type: keyword - --- - -*`processor.event`*:: -+ --- -Processor event. - -type: keyword - --- - - -*`timestamp.us`*:: -+ --- -Timestamp of the event in microseconds since Unix epoch. - - -type: long - --- - -[float] -=== url - -A complete Url, with scheme, host and path. - - - -*`url.scheme`*:: -+ --- -The protocol of the request, e.g. "https:". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.full`*:: -+ --- -The full, possibly agent-assembled URL of the request, e.g https://example.com:443/search?q=elasticsearch#top. 
- - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.domain`*:: -+ --- -The hostname of the request, e.g. "example.com". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.port`*:: -+ --- -The port of the request, e.g. 443. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.path`*:: -+ --- -The path of the request, e.g. "/search". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.query`*:: -+ --- -The query string of the request, e.g. "q=elasticsearch". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.fragment`*:: -+ --- -A fragment specifying a location in a web page, e.g. "top". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`http.version`*:: -+ --- -The http version of the request leading to this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`http.request.method`*:: -+ --- -The http method of the request leading to this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.request.headers`*:: -+ --- -The canonical headers of the monitored HTTP request. - - -type: object - -Object is not enabled. - --- - -*`http.request.referrer`*:: -+ --- -Referrer for this HTTP request. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`http.response.status_code`*:: -+ --- -The status code of the HTTP response. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.response.finished`*:: -+ --- -Used by the Node agent to indicate when in the response life cycle an error has occurred. - - -type: boolean - --- - -*`http.response.headers`*:: -+ --- -The canonical headers of the monitored HTTP response. - - -type: object - -Object is not enabled. - --- - -*`labels`*:: -+ --- -A flat mapping of user-defined labels with string, boolean or number values. - - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== faas - -Function as a service fields. 
- - - -*`faas.execution`*:: -+ --- -Request ID of the function invocation. - - -type: keyword - --- - -*`faas.coldstart`*:: -+ --- -Boolean indicating whether the function invocation was a coldstart or not. - - -type: boolean - --- - -*`faas.trigger.type`*:: -+ --- -The trigger type. - - -type: keyword - --- - -*`faas.trigger.request_id`*:: -+ --- -The ID of the origin trigger request. - - -type: keyword - --- - -[float] -=== service - -Service fields. - - - -*`service.id`*:: -+ --- -Immutable id of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.name`*:: -+ --- -Immutable name of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.version`*:: -+ --- -Version of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.environment`*:: -+ --- -Service environment. - - -type: keyword - --- - - -*`service.node.name`*:: -+ --- -Unique meaningful name of the service node. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`service.language.name`*:: -+ --- -Name of the programming language used. - - -type: keyword - --- - -*`service.language.version`*:: -+ --- -Version of the programming language used. - - -type: keyword - --- - - -*`service.runtime.name`*:: -+ --- -Name of the runtime used. - - -type: keyword - --- - -*`service.runtime.version`*:: -+ --- -Version of the runtime used. - - -type: keyword - --- - - -*`service.framework.name`*:: -+ --- -Name of the framework used. - - -type: keyword - --- - -*`service.framework.version`*:: -+ --- -Version of the framework used. - - -type: keyword - --- - - -*`service.origin.id`*:: -+ --- -Immutable id of the service emitting this event. - - -type: keyword - --- - -*`service.origin.name`*:: -+ --- -Immutable name of the service emitting this event. 
- - -type: keyword - --- - -*`service.origin.version`*:: -+ --- -The version of the service the data was collected from. - - -type: keyword - --- - - -*`session.id`*:: -+ --- -The ID of the session to which the event belongs. - - -type: keyword - --- - -*`session.sequence`*:: -+ --- -The sequence number of the event within the session to which the event belongs. - - -type: long - --- - - - -*`transaction.duration.us`*:: -+ --- -Total duration of this transaction, in microseconds. - - -type: long - --- - -*`transaction.result`*:: -+ --- -The result of the transaction. HTTP status code for HTTP-related transactions. - - -type: keyword - --- - -*`transaction.marks`*:: -+ --- -A user-defined mapping of groups of marks in milliseconds. - - -type: object - --- - -*`transaction.marks.*.*`*:: -+ --- -A user-defined mapping of groups of marks in milliseconds. - - -type: object - --- - - -*`transaction.experience.cls`*:: -+ --- -The Cumulative Layout Shift metric - -type: scaled_float - --- - -*`transaction.experience.fid`*:: -+ --- -The First Input Delay metric - -type: scaled_float - --- - -*`transaction.experience.tbt`*:: -+ --- -The Total Blocking Time metric - -type: scaled_float - --- - -[float] -=== longtask - -Longtask duration/count metrics - - -*`transaction.experience.longtask.count`*:: -+ --- -The total number of longtasks - -type: long - --- - -*`transaction.experience.longtask.sum`*:: -+ --- -The sum of longtask durations - -type: scaled_float - --- - -*`transaction.experience.longtask.max`*:: -+ --- -The max longtask duration - -type: scaled_float - --- - - -*`transaction.span_count.dropped`*:: -+ --- -The total amount of dropped spans for this transaction. - -type: long - --- - - - -*`transaction.message.queue.name`*:: -+ --- -Name of the message queue or topic where the message is published or received. - - -type: keyword - --- - - -*`transaction.message.age.ms`*:: -+ --- -Age of a message in milliseconds. 
- - -type: long - --- - - -*`trace.id`*:: -+ --- -The ID of the trace to which the event belongs to. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`parent.id`*:: -+ --- -The ID of the parent event. - - -type: keyword - --- - - -*`agent.name`*:: -+ --- -Name of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.version`*:: -+ --- -Version of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.ephemeral_id`*:: -+ --- -The Ephemeral ID identifies a running process. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== container - -Container fields are used for meta information about the specific container that is the source of information. These fields help correlate data based containers from any runtime. - - - -*`container.id`*:: -+ --- -Unique container id. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== kubernetes - -Kubernetes metadata reported by agents - - - -*`kubernetes.namespace`*:: -+ --- -Kubernetes namespace - - -type: keyword - --- - - -*`kubernetes.node.name`*:: -+ --- -Kubernetes node name - - -type: keyword - --- - - -*`kubernetes.pod.name`*:: -+ --- -Kubernetes pod name - - -type: keyword - --- - -*`kubernetes.pod.uid`*:: -+ --- -Kubernetes Pod UID - - -type: keyword - --- - -[float] -=== network - -Optional network fields - - - -[float] -=== connection - -Network connection details - - - -*`network.connection.type`*:: -+ --- -Network connection type, eg. "wifi", "cell" - - -type: keyword - --- - -*`network.connection.subtype`*:: -+ --- -Detailed network connection sub-type, e.g. "LTE", "CDMA" - - -type: keyword - --- - -[float] -=== carrier - -Network operator - - - -*`network.carrier.name`*:: -+ --- -Carrier name, eg. Vodafone, T-Mobile, etc. 
- - -type: keyword - --- - -*`network.carrier.mcc`*:: -+ --- -Mobile country code - - -type: keyword - --- - -*`network.carrier.mnc`*:: -+ --- -Mobile network code - - -type: keyword - --- - -*`network.carrier.icc`*:: -+ --- -ISO country code, eg. US - - -type: keyword - --- - -[float] -=== host - -Optional host fields. - - - -*`host.architecture`*:: -+ --- -The architecture of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.hostname`*:: -+ --- -The hostname of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.name`*:: -+ --- -Name of the host the event was recorded on. It can contain same information as host.hostname or a name specified by the user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.ip`*:: -+ --- -IP of the host that records the event. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`host.os.platform`*:: -+ --- -The platform of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== process - -Information pertaining to the running process where the data was collected - - - -*`process.args`*:: -+ --- -Process arguments. May be filtered to protect sensitive information. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pid`*:: -+ --- -Numeric process ID of the service process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.ppid`*:: -+ --- -Numeric ID of the service's parent process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title`*:: -+ --- -Service process title. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`observer.listening`*:: -+ --- -Address the server is listening on. - - -type: keyword - --- - -*`observer.hostname`*:: -+ --- -Hostname of the APM Server. 
- - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.version`*:: -+ --- -APM Server version. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.type`*:: -+ --- -The type will be set to `apm-server`. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.id`*:: -+ --- -Unique identifier of the APM Server. - - -type: keyword - --- - -*`observer.ephemeral_id`*:: -+ --- -Ephemeral identifier of the APM Server. - - -type: keyword - --- - - -*`user.domain`*:: -+ --- -The domain of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.name`*:: -+ --- -The username of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.id`*:: -+ --- -Identifier of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.email`*:: -+ --- -Email of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`client.domain`*:: -+ --- -Client domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.ip`*:: -+ --- -IP address of the client of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.port`*:: -+ --- -Port of the client. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`source.domain`*:: -+ --- -Source domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.ip`*:: -+ --- -IP address of the source of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.port`*:: -+ --- -Port of the source. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -[float] -=== destination - -Destination fields describe details about the destination of a packet/event. -Destination fields are usually populated in conjunction with source fields. - - -*`destination.address`*:: -+ --- -Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.ip`*:: -+ --- -IP address of the destination. Can be one of multiple IPv4 or IPv6 addresses. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.port`*:: -+ --- -Port of the destination. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== user_agent - -The user_agent fields normally come from a browser request. They often show up in web service logs coming from the parsed user agent string. - - - -*`user_agent.original`*:: -+ --- -Unparsed version of the user_agent. - - -type: keyword - -example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original.text`*:: -+ --- -Software agent acting on behalf of a user, eg. a web browser / OS combination. - - -type: text - --- - -*`user_agent.name`*:: -+ --- -Name of the user agent. - - -type: keyword - -example: Safari - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.version`*:: -+ --- -Version of the user agent. - - -type: keyword - -example: 12.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== device - -Information concerning the device. - - - -*`user_agent.device.name`*:: -+ --- -Name of the device. - - -type: keyword - -example: iPhone - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`user_agent.os.platform`*:: -+ --- -Operating system platform (such as centos, ubuntu, windows). - - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.name`*:: -+ --- -Operating system name, without the version. - - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.full`*:: -+ --- -Operating system name, including the version or code name. - - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.version`*:: -+ --- -Operating system version as a raw string. - - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== cloud - -Cloud metadata reported by agents - - - - -*`cloud.account.id`*:: -+ --- -Cloud account ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.account.name`*:: -+ --- -Cloud account name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.availability_zone`*:: -+ --- -Cloud availability zone name - -type: keyword - -example: us-east1-a - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.instance.id`*:: -+ --- -Cloud instance/machine ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.name`*:: -+ --- -Cloud instance/machine name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.machine.type`*:: -+ --- -Cloud instance/machine type - -type: keyword - -example: t2.medium - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - - -*`cloud.origin.account.id`*:: -+ --- -The cloud account or organization id used to identify different entities in a multi-tenant environment. - - -type: keyword - --- - -*`cloud.origin.provider`*:: -+ --- -Name of the cloud provider. - - -type: keyword - --- - -*`cloud.origin.region`*:: -+ --- -Region in which this host, resource, or service is located. - - -type: keyword - --- - -*`cloud.origin.service.name`*:: -+ --- -The cloud service name is intended to distinguish services running on different platforms within a provider. - - -type: keyword - --- - - -*`cloud.project.id`*:: -+ --- -Cloud project ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.name`*:: -+ --- -Cloud project name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.provider`*:: -+ --- -Cloud provider name - -type: keyword - -example: gcp - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.region`*:: -+ --- -Cloud region name - -type: keyword - -example: us-east1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.service.name`*:: -+ --- -Cloud service name, intended to distinguish services running on different platforms within a provider. - - -type: keyword - --- - - -*`event.outcome`*:: -+ --- -`event.outcome` simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. - - -type: keyword - -example: success - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[[exported-fields-apm-transaction-metrics]] -== APM Transaction Metrics fields - -APM transaction metrics, and transaction metrics-specific properties, such as transaction.root. - - - -*`processor.name`*:: -+ --- -Processor name. - -type: keyword - --- - -*`processor.event`*:: -+ --- -Processor event. - -type: keyword - --- - -*`timeseries.instance`*:: -+ --- -Time series instance ID - -type: keyword - --- - - -*`timestamp.us`*:: -+ --- -Timestamp of the event in microseconds since Unix epoch. 
- - -type: long - --- - -*`labels`*:: -+ --- -A flat mapping of user-defined labels with string, boolean or number values. - - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`metricset.name`*:: -+ --- -Name of the set of metrics. - - -type: keyword - -example: transaction - --- - -[float] -=== service - -Service fields. - - - -*`service.name`*:: -+ --- -Immutable name of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.version`*:: -+ --- -Version of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.environment`*:: -+ --- -Service environment. - - -type: keyword - --- - - -*`service.node.name`*:: -+ --- -Unique meaningful name of the service node. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`service.language.name`*:: -+ --- -Name of the programming language used. - - -type: keyword - --- - -*`service.language.version`*:: -+ --- -Version of the programming language used. - - -type: keyword - --- - - -*`service.runtime.name`*:: -+ --- -Name of the runtime used. - - -type: keyword - --- - -*`service.runtime.version`*:: -+ --- -Version of the runtime used. - - -type: keyword - --- - - -*`service.framework.name`*:: -+ --- -Name of the framework used. - - -type: keyword - --- - -*`service.framework.version`*:: -+ --- -Version of the framework used. - - -type: keyword - --- - - -*`transaction.id`*:: -+ --- -The transaction ID. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`transaction.sampled`*:: -+ --- -Transactions that are 'sampled' will include all available information. Transactions that are not sampled will not have spans or context. - - -type: boolean - --- - -*`transaction.type`*:: -+ --- -Keyword of specific relevance in the service's domain (eg. 
'request', 'backgroundjob', etc) - - -type: keyword - --- - -*`transaction.name`*:: -+ --- -Generic designation of a transaction in the scope of a single service (eg. 'GET /users/:id'). - - -type: keyword - --- - -*`transaction.name.text`*:: -+ --- -type: text - --- - -[float] -=== self_time - -Portion of the transaction's duration where no direct child was running - - - -*`transaction.self_time.count`*:: -+ --- -Number of aggregated transactions. - -type: long - --- - - -*`transaction.self_time.sum.us`*:: -+ --- -Aggregated transaction duration, excluding the time periods where a direct child was running, in microseconds. - - -type: long - --- - - -*`transaction.root`*:: -+ --- -Identifies metrics for root transactions. This can be used for calculating metrics for traces. - - -type: boolean - --- - -*`transaction.result`*:: -+ --- -The result of the transaction. HTTP status code for HTTP-related transactions. - - -type: keyword - --- - - -*`span.type`*:: -+ --- -Keyword of specific relevance in the service's domain (eg: 'db.postgresql.query', 'template.erb', 'cache', etc). - - -type: keyword - --- - -*`span.subtype`*:: -+ --- -A further sub-division of the type (e.g. postgresql, elasticsearch) - - -type: keyword - --- - -[float] -=== self_time - -Portion of the span's duration where no direct child was running - - - -*`span.self_time.count`*:: -+ --- -Number of aggregated spans. - -type: long - --- - - -*`span.self_time.sum.us`*:: -+ --- -Aggregated span duration, excluding the time periods where a direct child was running, in microseconds. - - -type: long - --- - - -[float] -=== service - -Destination service context - - -*`span.destination.service.resource`*:: -+ --- -Identifier for the destination service resource being operated on (e.g. 'http://elastic.co:80', 'elasticsearch', 'rabbitmq/queue_name') - - -type: keyword - --- - - -*`agent.name`*:: -+ --- -Name of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`agent.version`*:: -+ --- -Version of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.ephemeral_id`*:: -+ --- -The Ephemeral ID identifies a running process. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== container - -Container fields are used for meta information about the specific container that is the source of information. These fields help correlate data based containers from any runtime. - - - -*`container.id`*:: -+ --- -Unique container id. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== kubernetes - -Kubernetes metadata reported by agents - - - -*`kubernetes.namespace`*:: -+ --- -Kubernetes namespace - - -type: keyword - --- - - -*`kubernetes.node.name`*:: -+ --- -Kubernetes node name - - -type: keyword - --- - - -*`kubernetes.pod.name`*:: -+ --- -Kubernetes pod name - - -type: keyword - --- - -*`kubernetes.pod.uid`*:: -+ --- -Kubernetes Pod UID - - -type: keyword - --- - -[float] -=== network - -Optional network fields - - - -[float] -=== connection - -Network connection details - - - -*`network.connection.type`*:: -+ --- -Network connection type, eg. "wifi", "cell" - - -type: keyword - --- - -*`network.connection.subtype`*:: -+ --- -Detailed network connection sub-type, e.g. "LTE", "CDMA" - - -type: keyword - --- - -[float] -=== carrier - -Network operator - - - -*`network.carrier.name`*:: -+ --- -Carrier name, eg. Vodafone, T-Mobile, etc. - - -type: keyword - --- - -*`network.carrier.mcc`*:: -+ --- -Mobile country code - - -type: keyword - --- - -*`network.carrier.mnc`*:: -+ --- -Mobile network code - - -type: keyword - --- - -*`network.carrier.icc`*:: -+ --- -ISO country code, eg. US - - -type: keyword - --- - -[float] -=== host - -Optional host fields. - - - -*`host.architecture`*:: -+ --- -The architecture of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`host.hostname`*:: -+ --- -The hostname of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.name`*:: -+ --- -Name of the host the event was recorded on. It can contain same information as host.hostname or a name specified by the user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.ip`*:: -+ --- -IP of the host that records the event. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`host.os.platform`*:: -+ --- -The platform of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== process - -Information pertaining to the running process where the data was collected - - - -*`process.args`*:: -+ --- -Process arguments. May be filtered to protect sensitive information. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pid`*:: -+ --- -Numeric process ID of the service process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.ppid`*:: -+ --- -Numeric ID of the service's parent process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title`*:: -+ --- -Service process title. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`observer.listening`*:: -+ --- -Address the server is listening on. - - -type: keyword - --- - -*`observer.hostname`*:: -+ --- -Hostname of the APM Server. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.version`*:: -+ --- -APM Server version. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.type`*:: -+ --- -The type will be set to `apm-server`. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.id`*:: -+ --- -Unique identifier of the APM Server. - - -type: keyword - --- - -*`observer.ephemeral_id`*:: -+ --- -Ephemeral identifier of the APM Server. 
- - -type: keyword - --- - - -*`user.name`*:: -+ --- -The username of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.id`*:: -+ --- -Identifier of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.email`*:: -+ --- -Email of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`client.domain`*:: -+ --- -Client domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.ip`*:: -+ --- -IP address of the client of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.port`*:: -+ --- -Port of the client. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`source.domain`*:: -+ --- -Source domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.ip`*:: -+ --- -IP address of the source of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.port`*:: -+ --- -Port of the source. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== destination - -Destination fields describe details about the destination of a packet/event. -Destination fields are usually populated in conjunction with source fields. - - -*`destination.address`*:: -+ --- -Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.ip`*:: -+ --- -IP address of the destination. 
Can be one of multiple IPv4 or IPv6 addresses. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.port`*:: -+ --- -Port of the destination. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== user_agent - -The user_agent fields normally come from a browser request. They often show up in web service logs coming from the parsed user agent string. - - - -*`user_agent.original`*:: -+ --- -Unparsed version of the user_agent. - - -type: keyword - -example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original.text`*:: -+ --- -Software agent acting on behalf of a user, eg. a web browser / OS combination. - - -type: text - --- - -*`user_agent.name`*:: -+ --- -Name of the user agent. - - -type: keyword - -example: Safari - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.version`*:: -+ --- -Version of the user agent. - - -type: keyword - -example: 12.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== device - -Information concerning the device. - - - -*`user_agent.device.name`*:: -+ --- -Name of the device. - - -type: keyword - -example: iPhone - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`user_agent.os.platform`*:: -+ --- -Operating system platform (such as centos, ubuntu, windows). - - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.name`*:: -+ --- -Operating system name, without the version. - - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.full`*:: -+ --- -Operating system name, including the version or code name. - - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`user_agent.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.version`*:: -+ --- -Operating system version as a raw string. - - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== cloud - -Cloud metadata reported by agents - - - - -*`cloud.account.id`*:: -+ --- -Cloud account ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.account.name`*:: -+ --- -Cloud account name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.availability_zone`*:: -+ --- -Cloud availability zone name - -type: keyword - -example: us-east1-a - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.instance.id`*:: -+ --- -Cloud instance/machine ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.name`*:: -+ --- -Cloud instance/machine name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.machine.type`*:: -+ --- -Cloud instance/machine type - -type: keyword - -example: t2.medium - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.project.id`*:: -+ --- -Cloud project ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.name`*:: -+ --- -Cloud project name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.provider`*:: -+ --- -Cloud provider name - -type: keyword - -example: gcp - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.region`*:: -+ --- -Cloud region name - -type: keyword - -example: us-east1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.service.name`*:: -+ --- -Cloud service name, intended to distinguish services running on different platforms within a provider. 
- - -type: keyword - --- - - -*`event.outcome`*:: -+ --- -`event.outcome` simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. - - -type: keyword - -example: success - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[[exported-fields-apm-transaction-metrics-xpack]] -== APM Transaction Metrics fields - -APM transaction metrics, and transaction metrics-specific properties, requiring licensed features such as the histogram field type. - - - - - -*`transaction.duration.histogram`*:: -+ --- -Pre-aggregated histogram of transaction durations. - - -type: histogram - --- - -[[exported-fields-beat-common]] -== Beat fields - -Contains common beat fields available in all event types. - - - -*`agent.hostname`*:: -+ --- -Deprecated - use agent.name or agent.id to identify an agent. - - -type: alias - -alias to: agent.name - --- - -*`beat.timezone`*:: -+ --- -type: alias - -alias to: event.timezone - --- - -*`fields`*:: -+ --- -Contains user configurable fields. - - -type: object - --- - -*`beat.name`*:: -+ --- -type: alias - -alias to: host.name - --- - -*`beat.hostname`*:: -+ --- -type: alias - -alias to: agent.name - --- - -*`timeseries.instance`*:: -+ --- -Time series instance id - -type: keyword - --- - -[[exported-fields-cloud]] -== Cloud provider metadata fields - -Metadata from cloud providers added by the add_cloud_metadata processor. - - - -*`cloud.image.id`*:: -+ --- -Image ID for the cloud instance. 
- - -example: ami-abcd1234 - --- - -*`meta.cloud.provider`*:: -+ --- -type: alias - -alias to: cloud.provider - --- - -*`meta.cloud.instance_id`*:: -+ --- -type: alias - -alias to: cloud.instance.id - --- - -*`meta.cloud.instance_name`*:: -+ --- -type: alias - -alias to: cloud.instance.name - --- - -*`meta.cloud.machine_type`*:: -+ --- -type: alias - -alias to: cloud.machine.type - --- - -*`meta.cloud.availability_zone`*:: -+ --- -type: alias - -alias to: cloud.availability_zone - --- - -*`meta.cloud.project_id`*:: -+ --- -type: alias - -alias to: cloud.project.id - --- - -*`meta.cloud.region`*:: -+ --- -type: alias - -alias to: cloud.region - --- - -[[exported-fields-docker-processor]] -== Docker fields - -Docker stats collected from Docker. - - - - -*`docker.container.id`*:: -+ --- -type: alias - -alias to: container.id - --- - -*`docker.container.image`*:: -+ --- -type: alias - -alias to: container.image.name - --- - -*`docker.container.name`*:: -+ --- -type: alias - -alias to: container.name - --- - -*`docker.container.labels`*:: -+ --- -Image labels. - - -type: object - --- - -[[exported-fields-ecs]] -== ECS fields - - -This section defines Elastic Common Schema (ECS) fields—a common set of fields -to be used when storing event data in {es}. - -This is an exhaustive list, and fields listed here are not necessarily used by {beatname_uc}. -The goal of ECS is to enable and encourage users of {es} to normalize their event data, -so that they can better analyze, visualize, and correlate the data represented in their events. - -See the {ecs-ref}[ECS reference] for more information. - -*`@timestamp`*:: -+ --- -Date/time when the event originated. -This is the date/time extracted from the event, typically representing when the event was generated by the source. -If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. -Required field for all events. 
- -type: date - -example: 2016-05-23T08:05:34.853Z - -required: True - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`labels`*:: -+ --- -Custom key/value pairs. -Can be used to add meta information to events. Should not contain nested objects. All values are stored as keyword. -Example: `docker` and `k8s` labels. - -type: object - -example: {"application": "foo-bar", "env": "production"} - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`message`*:: -+ --- -For log events the message field contains the log message, optimized for viewing in a log viewer. -For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event. -If multiple messages exist, they can be combined into one message. - -type: match_only_text - -example: Hello World - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tags`*:: -+ --- -List of keywords used to tag each event. - -type: keyword - -example: ["production", "env2"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== agent - -The agent fields contain the data about the software entity, if any, that collects, detects, or observes events on a host, or takes measurements on a host. -Examples include Beats. Agents may also run on observers. ECS agent.* fields shall be populated with details of the agent running on the host or observer where the event happened or the measurement was taken. - - -*`agent.build.original`*:: -+ --- -Extended build information for the agent. -This field is intended to contain any build information that a data source may provide, no specific formatting is required. - -type: keyword - -example: metricbeat version 7.6.0 (amd64), libbeat 7.6.0 [6a23e8f8f30f5001ba344e4e54d8d9cb82cb107c built 2020-02-05 23:10:10 +0000 UTC] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.ephemeral_id`*:: -+ --- -Ephemeral identifier of this agent (if one exists). -This id normally changes across restarts, but `agent.id` does not. 
- -type: keyword - -example: 8a4f500f - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.id`*:: -+ --- -Unique identifier of this agent (if one exists). -Example: For Beats this would be beat.id. - -type: keyword - -example: 8a4f500d - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.name`*:: -+ --- -Custom name of the agent. -This is a name that can be given to an agent. This can be helpful if for example two Filebeat instances are running on the same host but a human readable separation is needed on which Filebeat instance data is coming from. -If no name is given, the name is often left empty. - -type: keyword - -example: foo - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.type`*:: -+ --- -Type of the agent. -The agent type always stays the same and should be given by the agent used. In case of Filebeat the agent would always be Filebeat also if two Filebeat instances are run on the same machine. - -type: keyword - -example: filebeat - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.version`*:: -+ --- -Version of the agent. - -type: keyword - -example: 6.0.0-rc2 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== as - -An autonomous system (AS) is a collection of connected Internet Protocol (IP) routing prefixes under the control of one or more network operators on behalf of a single administrative entity or domain that presents a common, clearly defined routing policy to the internet. - - -*`as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - --- - -*`as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - --- - -*`as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -[float] -=== client - -A client is defined as the initiator of a network connection for events regarding sessions, connections, or bidirectional flow records. 
-For TCP events, the client is the initiator of the TCP connection that sends the SYN packet(s). For other protocols, the client is generally the initiator or requestor in the network transaction. Some systems use the term "originator" to refer to the client in TCP connections. The client fields describe details about the system acting as the client in the network event. Client fields are usually populated in conjunction with server fields. Client fields are generally not populated for packet-level events. -Client / server representations can add semantic context to an exchange, which is helpful to visualize the data in certain situations. If your context falls in that category, you should still ensure that source and destination are filled appropriately. - - -*`client.address`*:: -+ --- -Some event client addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. -Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -*`client.bytes`*:: -+ --- -Bytes sent from the client to the server. - -type: long - -example: 184 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.domain`*:: -+ --- -Client domain. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - -{yes-icon} {ecs-ref}[ECS] field.
- --- - -*`client.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`client.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`client.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`client.ip`*:: -+ --- -IP address of the client (IPv4 or IPv6). - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.mac`*:: -+ --- -MAC address of the client. 
-The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: 00-00-5E-00-53-23 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.nat.ip`*:: -+ --- -Translated IP of source based NAT sessions (e.g. internal client to internet). -Typically connections traversing load balancers, firewalls, or routers. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.nat.port`*:: -+ --- -Translated port of source based NAT sessions (e.g. internal client to internet). -Typically connections traversing load balancers, firewalls, or routers. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.packets`*:: -+ --- -Packets sent from the client to the server. - -type: long - -example: 12 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.port`*:: -+ --- -Port of the client. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.registered_domain`*:: -+ --- -The highest registered client domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example, the subdomain portion of "www.east.mydomain.co.uk" is "east".
If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`client.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.email`*:: -+ --- -User email address. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`client.user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.group.name`*:: -+ --- -Name of the group. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
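The `client.registered_domain`, `client.subdomain`, and `client.top_level_domain` entries above all note that these values can only be determined precisely with the public suffix list, because label-counting heuristics break on effective TLDs such as "co.uk". A minimal sketch of that distinction, using a tiny hardcoded stand-in for the real list at publicsuffix.org (the helper name is illustrative, not APM Server code):

```python
# Illustrative only: a tiny stand-in for the public suffix list.
# Real implementations should use the full list from publicsuffix.org.
PUBLIC_SUFFIXES = {"com", "org", "co.uk"}

def registered_domain(fqdn):
    """Return (top_level_domain, registered_domain) for a FQDN."""
    labels = fqdn.split(".")
    # The eTLD is the longest matching public suffix; the registered
    # domain keeps exactly one more label in front of it.
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in PUBLIC_SUFFIXES:
            return suffix, ".".join(labels[max(i - 1, 0):])
    return None, None

# "Last two labels" happens to agree for .com but fails for co.uk:
print(registered_domain("foo.example.com"))    # ('com', 'example.com')
print(registered_domain("foo.example.co.uk"))  # ('co.uk', 'example.co.uk')
```

Per the field descriptions above, the subdomain would then be the remaining labels in front of the registered domain, minus the host name.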
- --- - -*`client.user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.name.text`*:: -+ --- -type: match_only_text - --- - -*`client.user.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== cloud - -Fields related to the cloud or infrastructure the events are coming from. - - -*`cloud.account.id`*:: -+ --- -The cloud account or organization id used to identify different entities in a multi-tenant environment. -Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. - -type: keyword - -example: 666777888999 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.account.name`*:: -+ --- -The cloud account name or alias used to identify different entities in a multi-tenant environment. -Examples: AWS account name, Google Cloud ORG display name. - -type: keyword - -example: elastic-dev - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.availability_zone`*:: -+ --- -Availability zone in which this host, resource, or service is located. - -type: keyword - -example: us-east-1c - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.id`*:: -+ --- -Instance ID of the host machine. - -type: keyword - -example: i-1234567890abcdef0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.name`*:: -+ --- -Instance name of the host machine. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.machine.type`*:: -+ --- -Machine type of the host machine. - -type: keyword - -example: t2.medium - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.id`*:: -+ --- -The cloud project identifier. 
-Examples: Google Cloud Project id, Azure Project id. - -type: keyword - -example: my-project - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.name`*:: -+ --- -The cloud project name. -Examples: Google Cloud Project name, Azure Project name. - -type: keyword - -example: my project - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.provider`*:: -+ --- -Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. - -type: keyword - -example: aws - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.region`*:: -+ --- -Region in which this host, resource, or service is located. - -type: keyword - -example: us-east-1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.service.name`*:: -+ --- -The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server. -Examples: app engine, app service, cloud run, fargate, lambda. - -type: keyword - -example: lambda - --- - -[float] -=== code_signature - -These fields contain information about binary code signatures. - - -*`code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - --- - -*`code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. 
Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - --- - -*`code_signature.subject_name`*:: -+ --- -Subject name of the code signer. - -type: keyword - -example: Microsoft Corporation - --- - -*`code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - --- - -*`code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - --- - -[float] -=== container - -Container fields are used for meta information about the specific container that is the source of information. -These fields help correlate data based on containers from any runtime. - - -*`container.id`*:: -+ --- -Unique container id. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`container.image.name`*:: -+ --- -Name of the image the container was built on. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`container.image.tag`*:: -+ --- -Container image tags. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`container.labels`*:: -+ --- -Image labels. - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`container.name`*:: -+ --- -Container name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field.
- --- - -*`container.runtime`*:: -+ --- -Runtime managing this container. - -type: keyword - -example: docker - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== data_stream - -The data_stream fields take part in defining the new data stream naming scheme. -In the new data stream naming scheme the values of the data stream fields combine to form the name of the actual data stream in the following manner: `{data_stream.type}-{data_stream.dataset}-{data_stream.namespace}`. This means the fields can only contain characters that are valid as part of names of data streams. More details about this can be found in this https://www.elastic.co/blog/an-introduction-to-the-elastic-data-stream-naming-scheme[blog post]. -An Elasticsearch data stream consists of one or more backing indices, and a data stream name forms part of the backing indices' names. Due to this convention, data streams must also follow index naming restrictions. For example, data stream names cannot include `\`, `/`, `*`, `?`, `"`, `<`, `>`, `|`, ` ` (space character), `,`, or `#`. Please see the Elasticsearch reference for additional https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html#indices-create-api-path-params[restrictions]. - - -*`data_stream.dataset`*:: -+ --- -The field can contain anything that makes sense to signify the source of the data. -Examples include `nginx.access`, `prometheus`, `endpoint`, etc. For data streams that otherwise fit but do not have a dataset set, the value "generic" is used for the dataset. `event.dataset` should have the same value as `data_stream.dataset`. -Beyond the Elasticsearch data stream naming criteria noted above, the `dataset` value has additional restrictions: - * Must not contain `-` - * No longer than 100 characters - -type: constant_keyword - -example: nginx.access - --- - -*`data_stream.namespace`*:: -+ --- -A user-defined namespace. Namespaces are useful to allow grouping of data.
-Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with `default`. If no value is used, it falls back to `default`. -Beyond the Elasticsearch index naming criteria noted above, the `namespace` value has additional restrictions: - * Must not contain `-` - * No longer than 100 characters - -type: constant_keyword - -example: production - --- - -*`data_stream.type`*:: -+ --- -An overarching type for the data stream. -Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future. - -type: constant_keyword - -example: logs - --- - -[float] -=== destination - -Destination fields capture details about the receiver of a network exchange/packet. These fields are populated from a network event, packet, or other event containing details of a network transaction. -Destination fields are usually populated in conjunction with source fields. The source and destination fields are considered the baseline and should always be filled if an event contains source and destination details from a network transaction. If the event also contains identification of the client and server roles, then the client and server fields should also be populated. - - -*`destination.address`*:: -+ --- -Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. -Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - -{yes-icon} {ecs-ref}[ECS] field.
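The data stream naming scheme described in the `data_stream.*` entries above can be sketched as a small composer/validator: the name is `{type}-{dataset}-{namespace}`, the dataset and namespace components must not contain `-` or exceed 100 characters, and names must avoid the index-naming characters listed. This is an illustrative sketch only, not APM Server or Elasticsearch code:

```python
# Characters that Elasticsearch index (and thus data stream) names may
# not contain, per the restrictions quoted above.
FORBIDDEN = set('\\/*?"<>| ,#')

def data_stream_name(ds_type, dataset, namespace="default"):
    """Compose and loosely validate a data stream name (illustrative)."""
    if ds_type not in ("logs", "metrics"):  # "traces"/"synthetics" expected later
        raise ValueError(f"unexpected data_stream.type: {ds_type!r}")
    for part in (dataset, namespace):
        if "-" in part or len(part) > 100 or FORBIDDEN & set(part):
            raise ValueError(f"invalid data stream component: {part!r}")
    return f"{ds_type}-{dataset}-{namespace}"

print(data_stream_name("logs", "nginx.access", "production"))
# logs-nginx.access-production
```

Note how the scheme works at all only because `-` is forbidden inside components, so the two separators in the composed name are unambiguous.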
- --- - -*`destination.as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -*`destination.bytes`*:: -+ --- -Bytes sent from the destination to the source. - -type: long - -example: 184 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.domain`*:: -+ --- -Destination domain. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`destination.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. 
- -type: keyword - -example: 94040 - --- - -*`destination.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`destination.ip`*:: -+ --- -IP address of the destination (IPv4 or IPv6). - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.mac`*:: -+ --- -MAC address of the destination. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: 00-00-5E-00-53-23 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.nat.ip`*:: -+ --- -Translated IP of destination based NAT sessions (e.g. internet to private DMZ). -Typically used with load balancers, firewalls, or routers. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.nat.port`*:: -+ --- -Port the source session is translated to by the NAT device. -Typically used with load balancers, firewalls, or routers. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.packets`*:: -+ --- -Packets sent from the destination to the source. - -type: long - -example: 12 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.port`*:: -+ --- -Port of the destination. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.registered_domain`*:: -+ --- -The highest registered destination domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`destination.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.email`*:: -+ --- -User email address. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - -{yes-icon} {ecs-ref}[ECS] field.
- --- - -*`destination.user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`destination.user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.group.name`*:: -+ --- -Name of the group. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.name.text`*:: -+ --- -type: match_only_text - --- - -*`destination.user.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== dll - -These fields contain information about code libraries dynamically loaded into processes. - -Many operating systems refer to "shared code libraries" with different names, but this field set refers to all of the following: -* Dynamic-link library (`.dll`) commonly used on Windows -* Shared Object (`.so`) commonly used on Unix-like operating systems -* Dynamic library (`.dylib`) commonly used on macOS - - -*`dll.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. 
-This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`dll.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`dll.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`dll.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`dll.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. 
-Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`dll.name`*:: -+ --- -Name of the library. -This generally maps to the name of the file on disk. - -type: keyword - -example: kernel32.dll - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.path`*:: -+ --- -Full file path of the library. - -type: keyword - -example: C:\Windows\System32\kernel32.dll - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. 
-Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== dns - -Fields describing DNS queries and answers. -DNS events should either represent a single DNS query prior to getting answers (`dns.type:query`) or they should represent a full exchange and contain the query details as well as all of the answers that were provided for this query (`dns.type:answer`). - - -*`dns.answers`*:: -+ --- -An array containing an object for each answer section returned by the server. -The main keys that should be present in these objects are defined by ECS. Records that have more information may contain more keys than what ECS defines. -Not all DNS data sources give all details about DNS answers. At minimum, answer objects must contain the `data` key. If more information is available, map as much of it to ECS as possible, and add any additional fields to the answer objects as custom fields. - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.answers.class`*:: -+ --- -The class of DNS data contained in this resource record. - -type: keyword - -example: IN - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.answers.data`*:: -+ --- -The data describing the resource. -The meaning of this data depends on the type and class of the resource record. - -type: keyword - -example: 10.10.10.10 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.answers.name`*:: -+ --- -The domain name to which this resource record pertains. 
-If a chain of CNAME is being resolved, each answer's `name` should be the one that corresponds with the answer's `data`. It should not simply be the original `question.name` repeated. - -type: keyword - -example: www.example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.answers.ttl`*:: -+ --- -The time interval in seconds that this resource record may be cached before it should be discarded. Zero values mean that the data should not be cached. - -type: long - -example: 180 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.answers.type`*:: -+ --- -The type of data contained in this resource record. - -type: keyword - -example: CNAME - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.header_flags`*:: -+ --- -Array of 2 letter DNS header flags. -Expected values are: AA, TC, RD, RA, AD, CD, DO. - -type: keyword - -example: ["RD", "RA"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.id`*:: -+ --- -The DNS packet identifier assigned by the program that generated the query. The identifier is copied to the response. - -type: keyword - -example: 62111 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.op_code`*:: -+ --- -The DNS operation code that specifies the kind of query in the message. This value is set by the originator of a query and copied into the response. - -type: keyword - -example: QUERY - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.question.class`*:: -+ --- -The class of records being queried. - -type: keyword - -example: IN - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.question.name`*:: -+ --- -The name being queried. -If the name field contains non-printable characters (below 32 or above 126), those characters should be represented as escaped base 10 integers (\DDD). Back slashes and quotes should be escaped. Tabs, carriage returns, and line feeds should be converted to \t, \r, and \n respectively. - -type: keyword - -example: www.example.com - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`dns.question.registered_domain`*:: -+ --- -The highest registered domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.question.subdomain`*:: -+ --- -The subdomain is all of the labels under the registered_domain. -If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: www - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.question.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.question.type`*:: -+ --- -The type of record being queried. - -type: keyword - -example: AAAA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.resolved_ip`*:: -+ --- -Array containing all IPs seen in `answers.data`. -The `answers` array can be difficult to use, because of the variety of data formats it can contain. Extracting all IP addresses seen in there to `dns.resolved_ip` makes it possible to index them as IP addresses, and makes them easier to visualize and query for. - -type: ip - -example: ["10.10.10.10", "10.10.10.11"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.response_code`*:: -+ --- -The DNS response code. 
- -type: keyword - -example: NOERROR - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.type`*:: -+ --- -The type of DNS event captured, query or answer. -If your source of DNS events only gives you DNS queries, you should only create dns events of type `dns.type:query`. -If your source of DNS events gives you answers as well, you should create one event per query (optionally as soon as the query is seen). And a second event containing all query details as well as an array of answers. - -type: keyword - -example: answer - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== ecs - -Meta-information specific to ECS. - - -*`ecs.version`*:: -+ --- -ECS version this event conforms to. `ecs.version` is a required field and must exist in all events. -When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events. - -type: keyword - -example: 1.0.0 - -required: True - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== elf - -These fields contain Linux Executable Linkable Format (ELF) metadata. - - -*`elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`elf.header.data`*:: -+ --- -Data table of the ELF header. 
- -type: keyword - --- - -*`elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`elf.sections.flags`*:: -+ --- -ELF Section List flags. - -type: keyword - --- - -*`elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. 
- -type: nested - --- - -*`elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -[float] -=== error - -These fields can represent errors of any kind. -Use them for errors that happen while fetching events or in cases where the event itself contains an error. - - -*`error.code`*:: -+ --- -Error code describing the error. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`error.id`*:: -+ --- -Unique identifier for the error. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`error.message`*:: -+ --- -Error message. - -type: match_only_text - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`error.stack_trace`*:: -+ --- -The stack trace of this error in plain text. - -type: wildcard - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`error.stack_trace.text`*:: -+ --- -type: match_only_text - --- - -*`error.type`*:: -+ --- -The type of the error, for example the class name of the exception. - -type: keyword - -example: java.lang.NullPointerException - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== event - -The event fields are used for context information about the log or metric event itself. -A log is defined as an event containing details of something that happened. Log events must include the time at which the thing happened. Examples of log events include a process starting on a host, a network packet being sent from a source to a destination, or a network connection between a client and a server being initiated or closed. A metric is defined as an event containing one or more numerical measurements and the time at which the measurement was taken. 
Examples of metric events include memory pressure measured on a host and device temperature. See the `event.kind` definition in this section for additional details about metric and state events. - - -*`event.action`*:: -+ --- -The action captured by the event. -This describes the information in the event. It is more specific than `event.category`. Examples are `group-add`, `process-started`, `file-created`. The value is normally defined by the implementer. - -type: keyword - -example: user-password-change - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.agent_id_status`*:: -+ --- -Agents are normally responsible for populating the `agent.id` field value. If the system receiving events is capable of validating the value based on authentication information for the client, then this field can be used to reflect the outcome of that validation. -For example, if the agent's connection is authenticated with mTLS and the client cert contains the ID of the agent to which the cert was issued, then the `agent.id` value in events can be checked against the certificate. If the values match, `event.agent_id_status: verified` is added to the event; otherwise, one of the other allowed values should be used. -If no validation is performed, the field should be omitted. -The allowed values are: -`verified` - The `agent.id` field value matches the expected value obtained from auth metadata. -`mismatch` - The `agent.id` field value does not match the expected value obtained from auth metadata. -`missing` - There was no `agent.id` field in the event to validate. -`auth_metadata_missing` - There was no auth metadata or it was missing information about the agent ID. - -type: keyword - -example: verified - --- - -*`event.category`*:: -+ --- -This is one of four ECS Categorization Fields, and indicates the second level in the ECS category hierarchy. -`event.category` represents the "big buckets" of ECS categories.
For example, filtering on `event.category:process` yields all events relating to process activity. This field is closely related to `event.type`, which is used as a subcategory. -This field is an array. This allows proper categorization of some events that fall into multiple categories. - -type: keyword - -example: authentication - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.code`*:: -+ --- -Identification code for this event, if one exists. -Some event sources use event codes to identify messages unambiguously, regardless of message language or wording adjustments over time. An example of this is the Windows Event ID. - -type: keyword - -example: 4648 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.created`*:: -+ --- -event.created contains the date/time when the event was first read by an agent, or by your pipeline. -This field is distinct from @timestamp in that @timestamp typically contains the time extracted from the original event. -In most situations, these two timestamps will be slightly different. The difference can be used to calculate the delay between your source generating an event, and the time when your agent first processed it. This can be used to monitor your agent's or pipeline's ability to keep up with your event source. -In case the two timestamps are identical, @timestamp should be used. - -type: date - -example: 2016-05-23T08:05:34.857Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.dataset`*:: -+ --- -Name of the dataset. -If an event source publishes more than one type of log or event (e.g. access log, error log), the dataset is used to specify which one the event comes from. -It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name. - -type: keyword - -example: apache.access - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.duration`*:: -+ --- -Duration of the event in nanoseconds.
-If event.start and event.end are known, this value should be the difference between the end and start time. - -type: long - -format: duration - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.end`*:: -+ --- -event.end contains the date when the event ended or when the activity was last observed. - -type: date - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.hash`*:: -+ --- -Hash (perhaps logstash fingerprint) of raw field to be able to demonstrate log integrity. - -type: keyword - -example: 123456789012345678901234567890ABCD - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.id`*:: -+ --- -Unique ID to describe the event. - -type: keyword - -example: 8a4f500d - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.ingested`*:: -+ --- -Timestamp when an event arrived in the central data store. -This is different from `@timestamp`, which is when the event originally occurred. It's also different from `event.created`, which is meant to capture the first time an agent saw the event. -In normal conditions, assuming no tampering, the timestamps should chronologically look like this: `@timestamp` < `event.created` < `event.ingested`. - -type: date - -example: 2016-05-23T08:05:35.101Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.kind`*:: -+ --- -This is one of four ECS Categorization Fields, and indicates the highest level in the ECS category hierarchy. -`event.kind` gives high-level information about what type of information the event contains, without being specific to the contents of the event. For example, values of this field distinguish alert events from metric events. -The value of this field can be used to inform how these kinds of events should be handled. They may warrant different retention or different access control, and it may also help understand whether the data is coming in at a regular interval or not. - -type: keyword - -example: alert - -{yes-icon} {ecs-ref}[ECS] field.
- --- - -*`event.module`*:: -+ --- -Name of the module this data is coming from. -If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), `event.module` should contain the name of this module. - -type: keyword - -example: apache - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.original`*:: -+ --- -Raw text message of entire event. Used to demonstrate log integrity or where the full log message (before splitting it up in multiple parts) may be required, e.g. for reindex. -This field is not indexed and doc_values are disabled. It cannot be searched, but it can be retrieved from `_source`. If users wish to override this and index this field, please see `Field data types` in the `Elasticsearch Reference`. - -type: keyword - -example: Sep 19 08:26:10 host CEF:0|Security| threatmanager|1.0|100| worm successfully stopped|10|src=10.0.0.1 dst=2.1.2.2spt=1232 - -{yes-icon} {ecs-ref}[ECS] field. - -Field is not indexed. - --- - -*`event.outcome`*:: -+ --- -This is one of four ECS Categorization Fields, and indicates the lowest level in the ECS category hierarchy. -`event.outcome` simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. -Note that when a single transaction is described in multiple events, each event may populate different values of `event.outcome`, according to their perspective. -Also note that in the case of a compound event (a single event that contains multiple logical events), this field should be populated with the value that best captures the overall success or failure from the perspective of the event producer. -Further note that not all events will have an associated outcome. For example, this field is generally not populated for metric events, events with `event.type:info`, or any events for which an outcome does not make logical sense. - -type: keyword - -example: success - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`event.provider`*:: -+ --- -Source of the event. -Event transports such as Syslog or the Windows Event Log typically mention the source of an event. It can be the name of the software that generated the event (e.g. Sysmon, httpd), or of a subsystem of the operating system (kernel, Microsoft-Windows-Security-Auditing). - -type: keyword - -example: kernel - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.reason`*:: -+ --- -Reason why this event happened, according to the source. -This describes the why of a particular action or outcome captured in the event. Where `event.action` captures the action from the event, `event.reason` describes why that action was taken. For example, a web proxy with an `event.action` which denied the request may also populate `event.reason` with the reason why (e.g. `blocked site`). - -type: keyword - -example: Terminated an unexpected process - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.reference`*:: -+ --- -Reference URL linking to additional information about this event. -This URL links to a static definition of this event. Alert events, indicated by `event.kind:alert`, are a common use case for this field. - -type: keyword - -example: https://system.example.com/event/#0001234 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.risk_score`*:: -+ --- -Risk score or priority of the event (e.g. security solutions). Use your system's original value here. - -type: float - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.risk_score_norm`*:: -+ --- -Normalized risk score or priority of the event, on a scale of 0 to 100. -This is mainly useful if you use more than one system that assigns risk scores, and you want to see a normalized value across all systems. - -type: float - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.sequence`*:: -+ --- -Sequence number of the event. 
-The sequence number is a value published by some event sources, to make the exact ordering of events unambiguous, regardless of the timestamp precision. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.severity`*:: -+ --- -The numeric severity of the event according to your event source. -What the different severity values mean can be different between sources and use cases. It's up to the implementer to make sure severities are consistent across events from the same source. -The Syslog severity belongs in `log.syslog.severity.code`. `event.severity` is meant to represent the severity according to the event source (e.g. firewall, IDS). If the event source does not publish its own severity, you may optionally copy the `log.syslog.severity.code` to `event.severity`. - -type: long - -example: 7 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.start`*:: -+ --- -event.start contains the date when the event started or when the activity was first observed. - -type: date - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.timezone`*:: -+ --- -This field should be populated when the event's timestamp does not include timezone information already (e.g. default Syslog timestamps). It's optional otherwise. -Acceptable timezone formats are: a canonical ID (e.g. "Europe/Amsterdam"), abbreviated (e.g. "EST") or an HH:mm differential (e.g. "-05:00"). - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.type`*:: -+ --- -This is one of four ECS Categorization Fields, and indicates the third level in the ECS category hierarchy. -`event.type` represents a categorization "sub-bucket" that, when used along with the `event.category` field values, enables filtering events down to a level appropriate for single visualization. -This field is an array. This will allow proper categorization of some events that fall in multiple event types. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`event.url`*:: -+ --- -URL linking to an external system to continue investigation of this event. -This URL links to another system where in-depth investigation of the specific occurrence of this event can take place. Alert events, indicated by `event.kind:alert`, are a common use case for this field. - -type: keyword - -example: https://mysystem.example.com/alert/5271dedb-f5b0-4218-87f0-4ac4870a38fe - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== file - -A file is defined as a set of information that has been created on, or has existed on a filesystem. -File objects can be associated with host events, network events, and/or file events (e.g., those produced by File Integrity Monitoring [FIM] products or services). File fields provide details about the affected file associated with the event or metric. - - -*`file.accessed`*:: -+ --- -Last time the file was accessed. -Note that not all filesystems keep track of access time. - -type: date - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.attributes`*:: -+ --- -Array of file attributes. -Attributes names will vary by platform. Here's a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write. - -type: keyword - -example: ["readonly", "system"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`file.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. 
The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`file.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.code_signature.subject_name`*:: -+ --- -Subject name of the code signer. - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`file.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`file.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.created`*:: -+ --- -File creation time. -Note that not all filesystems store the creation time. - -type: date - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.ctime`*:: -+ --- -Last time the file attributes or metadata changed. -Note that changes to the file content will update `mtime`.
This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file. - -type: date - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.device`*:: -+ --- -Device that is the source of the file. - -type: keyword - -example: sda - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.directory`*:: -+ --- -Directory where the file is located. It should include the drive letter, when appropriate. - -type: keyword - -example: /home/alice - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.drive_letter`*:: -+ --- -Drive letter where the file is located. This field is only relevant on Windows. -The value should be uppercase, and not include the colon. - -type: keyword - -example: C - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`file.elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`file.elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`file.elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`file.elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`file.elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`file.elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`file.elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`file.elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`file.elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`file.elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. 
- -type: keyword - --- - -*`file.elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`file.elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`file.elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`file.elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`file.elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`file.elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`file.elf.sections.flags`*:: -+ --- -ELF Section List flags. - -type: keyword - --- - -*`file.elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`file.elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`file.elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`file.elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`file.elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`file.elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`file.elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. - -type: nested - --- - -*`file.elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`file.elf.segments.type`*:: -+ --- -ELF object segment type. 
- -type: keyword - --- - -*`file.elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`file.elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -*`file.extension`*:: -+ --- -File extension, excluding the leading dot. -Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). - -type: keyword - -example: png - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.fork_name`*:: -+ --- -A fork is additional data associated with a filesystem object. -On macOS, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist. -On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name. - -type: keyword - -example: Zone.Identifier - --- - -*`file.gid`*:: -+ --- -Primary group ID (GID) of the file. - -type: keyword - -example: 1001 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.group`*:: -+ --- -Primary group name of the file. - -type: keyword - -example: alice - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.hash.sha512`*:: -+ --- -SHA512 hash.
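The `file.fork_name` description above spells out how an NTFS path containing an Alternate Data Stream maps onto the `file.*` fields. A minimal sketch of that decomposition (the helper name is hypothetical, not part of ECS or any shipper):

```python
def decompose_ads_path(full_path: str) -> dict:
    """Split 'C:\\path\\to\\filename.extension:some_fork_name' into the
    ECS file.* fields described above."""
    fields = {"file.path": full_path}  # the full path keeps the fork name
    # Skip the drive-letter colon, then look for an ADS fork marker.
    body, sep, fork = full_path[2:].rpartition(":")
    base = full_path[:2] + body if sep else full_path
    if sep:
        fields["file.fork_name"] = fork
    name = base.rsplit("\\", 1)[-1]
    fields["file.name"] = name
    if "." in name:
        # Only the last extension is captured ("gz", not "tar.gz").
        fields["file.extension"] = name.rsplit(".", 1)[-1]
    return fields
```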
- -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`file.inode`*:: -+ --- -Inode representing the file in the filesystem. - -type: keyword - -example: 256383 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.mime_type`*:: -+ --- -MIME type should identify the format of the file or stream of bytes using https://www.iana.org/assignments/media-types/media-types.xhtml[IANA official types], where possible. When more than one type is applicable, the most specific type should be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.mode`*:: -+ --- -Mode of the file in octal representation. - -type: keyword - -example: 0640 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.mtime`*:: -+ --- -Last time the file content was modified. - -type: date - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.name`*:: -+ --- -Name of the file including the extension, without the directory. - -type: keyword - -example: example.png - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.owner`*:: -+ --- -File owner's username. - -type: keyword - -example: alice - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.path`*:: -+ --- -Full path to the file, including the file name. It should include the drive letter, when appropriate. - -type: keyword - -example: /home/alice/example.png - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.path.text`*:: -+ --- -type: match_only_text - --- - -*`file.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`file.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.size`*:: -+ --- -File size in bytes. -Only relevant when `file.type` is "file". - -type: long - -example: 16384 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.target_path`*:: -+ --- -Target path for symlinks. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.target_path.text`*:: -+ --- -type: match_only_text - --- - -*`file.type`*:: -+ --- -File type (file, dir, or symlink). - -type: keyword - -example: file - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.uid`*:: -+ --- -The user ID (UID) or security identifier (SID) of the file owner. - -type: keyword - -example: 1001 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. 
- -type: keyword - -example: *.elastic.co - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.common_name`*:: -+ --- -List of common names (CN) of the issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.country`*:: -+ --- -List of country (C) codes. - -type: keyword - -example: US - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of the issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.locality`*:: -+ --- -List of locality names (L). - -type: keyword - -example: Mountain View - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.organization`*:: -+ --- -List of organizations (O) of the issuing certificate authority. - -type: keyword - -example: Example Inc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of the issuing certificate authority. - -type: keyword - -example: www.example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P). - -type: keyword - -example: California - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - -{yes-icon} {ecs-ref}[ECS] field.
- --- - -*`file.x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -{yes-icon} {ecs-ref}[ECS] field. - -Field is not indexed. - --- - -*`file.x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.signature_algorithm`*:: -+ --- -Identifier for the certificate signature algorithm. We recommend using the names found in the Go `crypto/x509` library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.subject.common_name`*:: -+ --- -List of common names (CN) of the subject. - -type: keyword - -example: shared.global.example.net - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.subject.country`*:: -+ --- -List of country (C) codes. - -type: keyword - -example: US - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.subject.locality`*:: -+ --- -List of locality names (L). - -type: keyword - -example: San Francisco - -{yes-icon} {ecs-ref}[ECS] field.
- --- - -*`file.x509.subject.organization`*:: -+ --- -List of organizations (O) of the subject. - -type: keyword - -example: Example, Inc. - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of the subject. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P). - -type: keyword - -example: California - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.version_number`*:: -+ --- -Version of the x509 format. - -type: keyword - -example: 3 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== geo - -Geo fields can carry data about a specific location related to an event. -This geolocation information can be derived from techniques such as Geo IP, or be user-supplied. - - -*`geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - --- - -*`geo.continent_code`*:: -+ --- -Two-letter code representing the continent's name. - -type: keyword - -example: NA - --- - -*`geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - --- - -*`geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - --- - -*`geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - --- - -*`geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - --- - -*`geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - --- - -*`geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
- -type: keyword - -example: 94040 - --- - -*`geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - --- - -*`geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - --- - -*`geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -[float] -=== group - -The group fields are meant to represent groups that are relevant to the event. - - -*`group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`group.name`*:: -+ --- -Name of the group. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== hash - -The hash fields represent different bitwise hash algorithms and their values. -Field names for common hashes (e.g. MD5, SHA1) are predefined. Add fields for other hashes by lowercasing the hash algorithm name and using underscore separators as appropriate (snake case, e.g. sha3_512). -Note that this fieldset is used for common hashes that may be computed over a range of generic bytes. Entity-specific hashes such as ja3 or imphash are placed in the fieldsets to which they relate (tls and pe, respectively). - - -*`hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - --- - -*`hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - --- - -*`hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - --- - -*`hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - --- - -*`hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -[float] -=== host - -A host is defined as a general computing instance. -ECS host.* fields should be populated with details about the host on which the event happened, or from which the measurement was taken. 
Host types include hardware, virtual machines, Docker containers, and Kubernetes nodes. - - -*`host.architecture`*:: -+ --- -Operating system architecture. - -type: keyword - -example: x86_64 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.cpu.usage`*:: -+ --- -Percent CPU used, normalized by the number of CPU cores so that the value ranges from 0 to 1. -Scaling factor: 1000. -For example, for a two-core host this value is the average of the two cores, between 0 and 1. - -type: scaled_float - --- - -*`host.disk.read.bytes`*:: -+ --- -The total number of bytes (gauge) read successfully (aggregated from all disks) since the last metric collection. - -type: long - --- - -*`host.disk.write.bytes`*:: -+ --- -The total number of bytes (gauge) written successfully (aggregated from all disks) since the last metric collection. - -type: long - --- - -*`host.domain`*:: -+ --- -Name of the domain of which the host is a member. -For example, on Windows this could be the host's Active Directory domain or NetBIOS domain name. For Linux this could be the domain of the host's LDAP provider. - -type: keyword - -example: CONTOSO - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.continent_code`*:: -+ --- -Two-letter code representing the continent's name. - -type: keyword - -example: NA - --- - -*`host.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.location`*:: -+ --- -Longitude and latitude.
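The `host.cpu.usage` entry above defines the value as a per-core average in the 0-1 range, stored as a `scaled_float` with scaling factor 1000. A sketch of both steps (hypothetical helper names, assuming per-core samples are already available):

```python
def normalized_cpu_usage(per_core: list[float]) -> float:
    """Average per-core usage values (each 0.0-1.0) into the single
    0.0-1.0 value described for host.cpu.usage."""
    if not per_core:
        raise ValueError("need at least one core sample")
    return sum(per_core) / len(per_core)

def scaled_float_encoding(value: float, factor: int = 1000) -> int:
    """A scaled_float is stored internally as round(value * factor)."""
    return round(value * factor)
```

For a two-core host at 25% and 75%, the recorded value is 0.5, encoded as 500.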
- -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`host.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`host.hostname`*:: -+ --- -Hostname of the host. -It normally contains what the `hostname` command returns on the host machine. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.id`*:: -+ --- -Unique host id. -As hostname is not always unique, use values that are meaningful in your environment. -Example: The current usage of `beat.name`. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.ip`*:: -+ --- -Host ip addresses. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.mac`*:: -+ --- -Host MAC addresses. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. 
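The RFC 7042 notation suggested above for `host.mac` (uppercase hex octets separated by hyphens) can be produced from other common notations with a small normalizer; this helper is illustrative, not part of any shipper:

```python
def rfc7042_mac(raw: str) -> str:
    """Normalize a MAC address (colon, hyphen, or dot separated) to the
    RFC 7042 style suggested for host.mac, e.g. 00-00-5E-00-53-23."""
    hex_digits = "".join(c for c in raw if c in "0123456789abcdefABCDEF")
    if len(hex_digits) != 12:
        raise ValueError(f"not a 48-bit MAC address: {raw!r}")
    return "-".join(hex_digits[i:i + 2] for i in range(0, 12, 2)).upper()
```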
- -type: keyword - -example: ["00-00-5E-00-53-23", "00-00-5E-00-53-24"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.name`*:: -+ --- -Name of the host. -It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.network.egress.bytes`*:: -+ --- -The number of bytes (gauge) sent out on all network interfaces by the host since the last metric collection. - -type: long - --- - -*`host.network.egress.packets`*:: -+ --- -The number of packets (gauge) sent out on all network interfaces by the host since the last metric collection. - -type: long - --- - -*`host.network.ingress.bytes`*:: -+ --- -The number of bytes received (gauge) on all network interfaces by the host since the last metric collection. - -type: long - --- - -*`host.network.ingress.packets`*:: -+ --- -The number of packets (gauge) received on all network interfaces by the host since the last metric collection. - -type: long - --- - -*`host.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.os.full`*:: -+ --- -Operating system name, including the version or code name. - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.os.full.text`*:: -+ --- -type: match_only_text - --- - -*`host.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.os.name`*:: -+ --- -Operating system name, without the version. - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.os.name.text`*:: -+ --- -type: match_only_text - --- - -*`host.os.platform`*:: -+ --- -Operating system platform (such as centos, ubuntu, windows).
- -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.os.type`*:: -+ --- -Use the `os.type` field to categorize the operating system into one of the broad commercial families. -One of these following values should be used (lowercase): linux, macos, unix, windows. -If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. - -type: keyword - -example: macos - --- - -*`host.os.version`*:: -+ --- -Operating system version as a raw string. - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.type`*:: -+ --- -Type of host. -For Cloud providers this can be the machine type like `t2.medium`. If vm, this could be the container, for example, or other information meaningful in your environment. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.uptime`*:: -+ --- -Seconds the host has been up. - -type: long - -example: 1325 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.email`*:: -+ --- -User email address. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`host.user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.group.name`*:: -+ --- -Name of the group. 
- -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.name.text`*:: -+ --- -type: match_only_text - --- - -*`host.user.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== http - -Fields related to HTTP activity. Use the `url` field set to store the url of the request. - - -*`http.request.body.bytes`*:: -+ --- -Size in bytes of the request body. - -type: long - -example: 887 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.request.body.content`*:: -+ --- -The full HTTP request body. - -type: wildcard - -example: Hello world - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.request.body.content.text`*:: -+ --- -type: match_only_text - --- - -*`http.request.bytes`*:: -+ --- -Total size in bytes of the request (body and headers). - -type: long - -example: 1437 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.request.id`*:: -+ --- -A unique identifier for each HTTP request to correlate logs between clients and servers in transactions. -The id may be contained in a non-standard HTTP header, such as `X-Request-ID` or `X-Correlation-ID`. - -type: keyword - -example: 123e4567-e89b-12d3-a456-426614174000 - --- - -*`http.request.method`*:: -+ --- -HTTP request method. 
-Prior to ECS 1.6.0 the following guidance was provided: -"The field value must be normalized to lowercase for querying." -As of ECS 1.6.0, the guidance is deprecated because the original case of the method may be useful in anomaly detection. Original case will be mandated in ECS 2.0.0. - -type: keyword - -example: GET, POST, PUT, PoST - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.request.mime_type`*:: -+ --- -MIME type of the body of the request. -This value must only be populated based on the content of the request body, not on the `Content-Type` header. Comparing the MIME type of a request with the request's Content-Type header can be helpful in detecting threats or misconfigured clients. - -type: keyword - -example: image/gif - --- - -*`http.request.referrer`*:: -+ --- -Referrer for this HTTP request. - -type: keyword - -example: https://blog.example.com/ - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.response.body.bytes`*:: -+ --- -Size in bytes of the response body. - -type: long - -example: 887 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.response.body.content`*:: -+ --- -The full HTTP response body. - -type: wildcard - -example: Hello world - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.response.body.content.text`*:: -+ --- -type: match_only_text - --- - -*`http.response.bytes`*:: -+ --- -Total size in bytes of the response (body and headers). - -type: long - -example: 1437 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.response.mime_type`*:: -+ --- -MIME type of the body of the response. -This value must only be populated based on the content of the response body, not on the `Content-Type` header. Comparing the MIME type of a response with the response's Content-Type header can be helpful in detecting misconfigured servers. - -type: keyword - -example: image/gif - --- - -*`http.response.status_code`*:: -+ --- -HTTP response status code.
- -type: long - -example: 404 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.version`*:: -+ --- -HTTP version. - -type: keyword - -example: 1.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== interface - -The interface fields are used to record ingress and egress interface information when reported by an observer (e.g. firewall, router, load balancer) in the context of the observer handling a network connection. In the case of a single observer interface (e.g. network sensor on a span port) only the observer.ingress information should be populated. - - -*`interface.alias`*:: -+ --- -Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming. - -type: keyword - -example: outside - --- - -*`interface.id`*:: -+ --- -Interface ID as reported by an observer (typically SNMP interface ID). - -type: keyword - -example: 10 - --- - -*`interface.name`*:: -+ --- -Interface name as reported by the system. - -type: keyword - -example: eth0 - --- - -[float] -=== log - -Details about the event's logging mechanism or logging transport. -The log.* fields are typically populated with details about the logging mechanism used to create and/or transport the event. For example, syslog details belong under `log.syslog.*`. -The details specific to your event source are typically not logged under `log.*`, but rather in `event.*` or in other ECS fields. - - -*`log.file.path`*:: -+ --- -Full path to the log file this event came from, including the file name. It should include the drive letter, when appropriate. -If the event wasn't read from a log file, do not populate this field. - -type: keyword - -example: /var/log/fun-times.log - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`log.level`*:: -+ --- -Original log level of the log event. -If the source of the event provides a log level or textual severity, this is the one that goes in `log.level`. 
If your source doesn't specify one, you may put your event transport's severity here (e.g. Syslog severity). -Some examples are `warn`, `err`, `i`, `informational`. - -type: keyword - -example: error - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`log.logger`*:: -+ --- -The name of the logger inside an application. This is usually the name of the class which initialized the logger, or can be a custom name. - -type: keyword - -example: org.elasticsearch.bootstrap.Bootstrap - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`log.origin.file.line`*:: -+ --- -The line number of the file containing the source code which originated the log event. - -type: integer - -example: 42 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`log.origin.file.name`*:: -+ --- -The name of the file containing the source code which originated the log event. -Note that this field is not meant to capture the log file. The correct field to capture the log file is `log.file.path`. - -type: keyword - -example: Bootstrap.java - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`log.origin.function`*:: -+ --- -The name of the function or method which originated the log event. - -type: keyword - -example: init - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`log.original`*:: -+ --- -Deprecated for removal in next major version release. This field is superseded by `event.original`. -This is the original log message and contains the full log message before splitting it up in multiple parts. -In contrast to the `message` field which can contain an extracted part of the log message, this field contains the original, full log message. It can have already some modifications applied like encoding or new lines removed to clean up the log message. -This field is not indexed and doc_values are disabled so it can't be queried but the value can be retrieved from `_source`. - -type: keyword - -example: Sep 19 08:26:10 localhost My log - -{yes-icon} {ecs-ref}[ECS] field. - -Field is not indexed. 
- --- - -*`log.syslog`*:: -+ --- -The Syslog metadata of the event, if the event was transmitted via Syslog. Please see RFCs 5424 or 3164. - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`log.syslog.facility.code`*:: -+ --- -The Syslog numeric facility of the log event, if available. -According to RFCs 5424 and 3164, this value should be an integer between 0 and 23. - -type: long - -example: 23 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`log.syslog.facility.name`*:: -+ --- -The Syslog text-based facility of the log event, if available. - -type: keyword - -example: local7 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`log.syslog.priority`*:: -+ --- -Syslog numeric priority of the event, if available. -According to RFCs 5424 and 3164, the priority is 8 * facility + severity. This number is therefore expected to contain a value between 0 and 191. - -type: long - -example: 135 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`log.syslog.severity.code`*:: -+ --- -The Syslog numeric severity of the log event, if available. -If the event source publishing via Syslog provides a different numeric severity value (e.g. firewall, IDS), your source's numeric severity should go to `event.severity`. If the event source does not specify a distinct severity, you can optionally copy the Syslog severity to `event.severity`. - -type: long - -example: 3 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`log.syslog.severity.name`*:: -+ --- -The Syslog text-based severity of the log event, if available. -If the event source publishing via Syslog provides a different severity value (e.g. firewall, IDS), your source's text severity should go to `log.level`. If the event source does not specify a distinct severity, you can optionally copy the Syslog severity to `log.level`. - -type: keyword - -example: Error - -{yes-icon} {ecs-ref}[ECS] field.
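The relationship above, priority = 8 * facility + severity, means a raw PRI value can be decomposed back into the `log.syslog.facility.code` and `log.syslog.severity.code` parts; the documented example values (facility 23/local7, severity 3/Error) combine to a PRI of 187. A minimal sketch:

```python
def split_syslog_priority(pri: int) -> tuple[int, int]:
    """Invert priority = 8 * facility + severity (RFC 5424 / RFC 3164).
    Valid PRI values are 0-191 (facility 0-23, severity 0-7)."""
    if not 0 <= pri <= 191:
        raise ValueError(f"PRI out of range: {pri}")
    return divmod(pri, 8)  # (facility, severity)
```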
- --- - -[float] -=== network - -The network is defined as the communication path over which a host or network event happens. -The network.* fields should be populated with details about the network activity associated with an event. - - -*`network.application`*:: -+ --- -A name given to an application-level protocol. This can be arbitrarily assigned for things like microservices, but can also apply to things like skype, icq, facebook, twitter. This would be used in situations where the vendor or service can be decoded such as from the source/dest IP owners, ports, or wire format. -The field value must be normalized to lowercase for querying. See the documentation section "Implementing ECS". - -type: keyword - -example: aim - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.bytes`*:: -+ --- -Total bytes transferred in both directions. -If `source.bytes` and `destination.bytes` are known, `network.bytes` is their sum. - -type: long - -example: 368 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.community_id`*:: -+ --- -A hash of source and destination IPs and ports, as well as the protocol used in a communication. This is a tool-agnostic standard to identify flows. -Learn more at https://github.com/corelight/community-id-spec. - -type: keyword - -example: 1:hO+sN4H+MG5MY/8hIrXPqc4ZQz0= - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.direction`*:: -+ --- -Direction of the network traffic. -Recommended values are: - * ingress - * egress - * inbound - * outbound - * internal - * external - * unknown - -When mapping events from a host-based monitoring context, populate this field from the host's point of view, using the values "ingress" or "egress". -When mapping events from a network or perimeter-based monitoring context, populate this field from the point of view of the network perimeter, using the values "inbound", "outbound", "internal" or "external".
-Note that "internal" is not crossing perimeter boundaries, and is meant to describe communication between two hosts within the perimeter. Note also that "external" is meant to describe traffic between two hosts that are external to the perimeter. This could for example be useful for ISPs or VPN service providers. - -type: keyword - -example: inbound - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.forwarded_ip`*:: -+ --- -Host IP address when the source IP address is the proxy. - -type: ip - -example: 192.1.1.2 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.iana_number`*:: -+ --- -IANA Protocol Number (https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml). Standardized list of protocols. This aligns well with NetFlow and sFlow related logs which use the IANA Protocol Number. - -type: keyword - -example: 6 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.inner`*:: -+ --- -Network.inner fields are added in addition to network.vlan fields to describe the innermost VLAN when q-in-q VLAN tagging is present. Allowed fields include vlan.id and vlan.name. Inner vlan fields are typically used when sending traffic with multiple 802.1q encapsulations to a network sensor (e.g. Zeek, Wireshark.) - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.inner.vlan.id`*:: -+ --- -VLAN ID as reported by the observer. - -type: keyword - -example: 10 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.inner.vlan.name`*:: -+ --- -Optional VLAN name as reported by the observer. - -type: keyword - -example: outside - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.name`*:: -+ --- -Name given by operators to sections of their network. - -type: keyword - -example: Guest Wifi - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.packets`*:: -+ --- -Total packets transferred in both directions. -If `source.packets` and `destination.packets` are known, `network.packets` is their sum. 
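The `network.community_id` value described earlier in this field list is a seeded SHA-1 over the normalized flow tuple, base64-encoded with a version prefix. A minimal sketch of version 1 for TCP/UDP flows, based on the public spec at github.com/corelight/community-id-spec (the function name is illustrative):

```python
import base64
import hashlib
import ipaddress
import struct

def community_id_v1(saddr: str, daddr: str, sport: int, dport: int,
                    proto: int, seed: int = 0) -> str:
    """Compute a Community ID v1 flow hash for TCP (6) or UDP (17) flows."""
    src = ipaddress.ip_address(saddr).packed
    dst = ipaddress.ip_address(daddr).packed
    # Normalize endpoint order so both directions of a flow hash identically.
    if (src, sport) > (dst, dport):
        src, dst, sport, dport = dst, src, dport, sport
    # v1 layout: seed (2B BE), src addr, dst addr, proto (1B), pad (1B),
    # src port (2B BE), dst port (2B BE).
    data = struct.pack("!H", seed) + src + dst
    data += struct.pack("!BBHH", proto, 0, sport, dport)
    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode("ascii")
```

Because the endpoint order is normalized before hashing, both directions of a flow produce the same identifier, which is what makes the field useful for correlating events from different tools.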
- -type: long - -example: 24 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.protocol`*:: -+ --- -L7 network protocol name, e.g. http, lumberjack, transport protocol. -The field value must be normalized to lowercase for querying. See the documentation section "Implementing ECS". - -type: keyword - -example: http - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.transport`*:: -+ --- -Same as network.iana_number, but instead using the Keyword name of the transport layer (udp, tcp, ipv6-icmp, etc.) -The field value must be normalized to lowercase for querying. See the documentation section "Implementing ECS". - -type: keyword - -example: tcp - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.type`*:: -+ --- -In the OSI Model this would be the Network Layer. ipv4, ipv6, ipsec, pim, etc. -The field value must be normalized to lowercase for querying. See the documentation section "Implementing ECS". - -type: keyword - -example: ipv4 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.vlan.id`*:: -+ --- -VLAN ID as reported by the observer. - -type: keyword - -example: 10 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`network.vlan.name`*:: -+ --- -Optional VLAN name as reported by the observer. - -type: keyword - -example: outside - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== observer - -An observer is defined as a special network, security, or application device used to detect, observe, or create network, security, or application-related events and metrics. -This could be a custom hardware appliance or a server that has been configured to run special network, security, or application software. Examples include firewalls, web proxies, intrusion detection/prevention systems, network monitoring sensors, web application firewalls, data loss prevention systems, and APM servers. The observer.* fields shall be populated with details of the system, if any, that detects, observes and/or creates a network, security, or application event or metric.
Message queues and ETL components used in processing events or metrics are not considered observers in ECS. - - -*`observer.egress`*:: -+ --- -Observer.egress holds information like interface number and name, vlan, and zone information to classify egress traffic. Single armed monitoring such as a network sensor on a span port should only use observer.ingress to categorize traffic. - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.egress.interface.alias`*:: -+ --- -Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming. - -type: keyword - -example: outside - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.egress.interface.id`*:: -+ --- -Interface ID as reported by an observer (typically SNMP interface ID). - -type: keyword - -example: 10 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.egress.interface.name`*:: -+ --- -Interface name as reported by the system. - -type: keyword - -example: eth0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.egress.vlan.id`*:: -+ --- -VLAN ID as reported by the observer. - -type: keyword - -example: 10 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.egress.vlan.name`*:: -+ --- -Optional VLAN name as reported by the observer. - -type: keyword - -example: outside - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.egress.zone`*:: -+ --- -Network zone of outbound traffic as reported by the observer to categorize the destination area of egress traffic, e.g. Internal, External, DMZ, HR, Legal, etc. - -type: keyword - -example: Public_Internet - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. 
- -type: keyword - -example: NA - --- - -*`observer.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`observer.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`observer.hostname`*:: -+ --- -Hostname of the observer. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress`*:: -+ --- -Observer.ingress holds information like interface number and name, vlan, and zone information to classify ingress traffic. 
Single armed monitoring such as a network sensor on a span port should only use observer.ingress to categorize traffic. - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress.interface.alias`*:: -+ --- -Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming. - -type: keyword - -example: outside - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress.interface.id`*:: -+ --- -Interface ID as reported by an observer (typically SNMP interface ID). - -type: keyword - -example: 10 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress.interface.name`*:: -+ --- -Interface name as reported by the system. - -type: keyword - -example: eth0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress.vlan.id`*:: -+ --- -VLAN ID as reported by the observer. - -type: keyword - -example: 10 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress.vlan.name`*:: -+ --- -Optional VLAN name as reported by the observer. - -type: keyword - -example: outside - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress.zone`*:: -+ --- -Network zone of incoming traffic as reported by the observer to categorize the source area of ingress traffic, e.g. Internal, External, DMZ, HR, Legal, etc. - -type: keyword - -example: DMZ - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ip`*:: -+ --- -IP addresses of the observer. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.mac`*:: -+ --- -MAC addresses of the observer. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: ["00-00-5E-00-53-23", "00-00-5E-00-53-24"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.name`*:: -+ --- -Custom name of the observer.
-This is a name that can be given to an observer. This can be helpful for example if multiple firewalls of the same model are used in an organization. -If no custom name is needed, the field can be left empty. - -type: keyword - -example: 1_proxySG - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.os.full`*:: -+ --- -Operating system name, including the version or code name. - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.os.full.text`*:: -+ --- -type: match_only_text - --- - -*`observer.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.os.name`*:: -+ --- -Operating system name, without the version. - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.os.name.text`*:: -+ --- -type: match_only_text - --- - -*`observer.os.platform`*:: -+ --- -Operating system platform (such as centos, ubuntu, windows). - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.os.type`*:: -+ --- -Use the `os.type` field to categorize the operating system into one of the broad commercial families. -One of the following values should be used (lowercase): linux, macos, unix, windows. -If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. - -type: keyword - -example: macos - --- - -*`observer.os.version`*:: -+ --- -Operating system version as a raw string. - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.product`*:: -+ --- -The product name of the observer. - -type: keyword - -example: s200 - -{yes-icon} {ecs-ref}[ECS] field.
- --- - -*`observer.serial_number`*:: -+ --- -Observer serial number. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.type`*:: -+ --- -The type of the observer the data is coming from. -There is no predefined list of observer types. Some examples are `forwarder`, `firewall`, `ids`, `ips`, `proxy`, `poller`, `sensor`, `APM server`. - -type: keyword - -example: firewall - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.vendor`*:: -+ --- -Vendor name of the observer. - -type: keyword - -example: Symantec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.version`*:: -+ --- -Observer version. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== orchestrator - -Fields that describe the resources which container orchestrators manage or act upon. - - -*`orchestrator.api_version`*:: -+ --- -API version being used to carry out the action - -type: keyword - -example: v1beta1 - --- - -*`orchestrator.cluster.name`*:: -+ --- -Name of the cluster. - -type: keyword - --- - -*`orchestrator.cluster.url`*:: -+ --- -URL of the API used to manage the cluster. - -type: keyword - --- - -*`orchestrator.cluster.version`*:: -+ --- -The version of the cluster. - -type: keyword - --- - -*`orchestrator.namespace`*:: -+ --- -Namespace in which the action is taking place. - -type: keyword - -example: kube-system - --- - -*`orchestrator.organization`*:: -+ --- -Organization affected by the event (for multi-tenant orchestrator setups). - -type: keyword - -example: elastic - --- - -*`orchestrator.resource.name`*:: -+ --- -Name of the resource being acted upon. - -type: keyword - -example: test-pod-cdcws - --- - -*`orchestrator.resource.type`*:: -+ --- -Type of resource being acted upon. - -type: keyword - -example: service - --- - -*`orchestrator.type`*:: -+ --- -Orchestrator cluster type (e.g. kubernetes, nomad or cloudfoundry). 
- -type: keyword - -example: kubernetes - --- - -[float] -=== organization - -The organization fields enrich data with information about the company or entity the data is associated with. -These fields help you arrange or filter data stored in an index by one or multiple organizations. - - -*`organization.id`*:: -+ --- -Unique identifier for the organization. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`organization.name`*:: -+ --- -Organization name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`organization.name.text`*:: -+ --- -type: match_only_text - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - -*`os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - -type: keyword - -example: debian - --- - -*`os.full`*:: -+ --- -Operating system name, including the version or code name. - -type: keyword - -example: Mac OS Mojave - --- - -*`os.full.text`*:: -+ --- -type: match_only_text - --- - -*`os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - -type: keyword - -example: 4.4.0-112-generic - --- - -*`os.name`*:: -+ --- -Operating system name, without the version. - -type: keyword - -example: Mac OS X - --- - -*`os.name.text`*:: -+ --- -type: match_only_text - --- - -*`os.platform`*:: -+ --- -Operating system platform (such as centos, ubuntu, windows). - -type: keyword - -example: darwin - --- - -*`os.type`*:: -+ --- -Use the `os.type` field to categorize the operating system into one of the broad commercial families. -One of the following values should be used (lowercase): linux, macos, unix, windows. -If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. - -type: keyword - -example: macos - --- - -*`os.version`*:: -+ --- -Operating system version as a raw string.
- -type: keyword - -example: 10.14.1 - --- - -[float] -=== package - -These fields contain information about an installed software package. It contains general information about a package, such as name, version or size. It also contains installation details, such as time or location. - - -*`package.architecture`*:: -+ --- -Package architecture. - -type: keyword - -example: x86_64 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.build_version`*:: -+ --- -Additional information about the build version of the installed package. -For example use the commit SHA of a non-released package. - -type: keyword - -example: 36f4f7e89dd61b0988b12ee000b98966867710cd - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.checksum`*:: -+ --- -Checksum of the installed package for verification. - -type: keyword - -example: 68b329da9893e34099c7d8ad5cb9c940 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.description`*:: -+ --- -Description of the package. - -type: keyword - -example: Open source programming language to build simple/reliable/efficient software. - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.install_scope`*:: -+ --- -Indicating how the package was installed, e.g. user-local, global. - -type: keyword - -example: global - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.installed`*:: -+ --- -Time when package was installed. - -type: date - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.license`*:: -+ --- -License under which the package was released. -Use a short name, e.g. the license identifier from SPDX License List where possible (https://spdx.org/licenses/). - -type: keyword - -example: Apache License 2.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.name`*:: -+ --- -Package name - -type: keyword - -example: go - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.path`*:: -+ --- -Path where the package is installed. - -type: keyword - -example: /usr/local/Cellar/go/1.12.9/ - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`package.reference`*:: -+ --- -Home page or reference URL of the software in this package, if available. - -type: keyword - -example: https://golang.org - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.size`*:: -+ --- -Package size in bytes. - -type: long - -example: 62231 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.type`*:: -+ --- -Type of package. -This should contain the package file type, rather than the package manager name. Examples: rpm, dpkg, brew, npm, gem, nupkg, jar. - -type: keyword - -example: rpm - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.version`*:: -+ --- -Package version - -type: keyword - -example: 1.12.9 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== pe - -These fields contain Windows Portable Executable (PE) metadata. - - -*`pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - --- - -*`pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - --- - -*`pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - --- - -*`pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - --- - -*`pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - --- - -*`pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. 
- -type: keyword - -example: MSPAINT.EXE - --- - -*`pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - --- - -[float] -=== process - -These fields contain information about a process. -These fields can help you correlate metrics information with a process id/name from a log message. The `process.pid` often stays in the metric itself and is copied to the global field for correlation. - - -*`process.args`*:: -+ --- -Array of process arguments, starting with the absolute path to the executable. -May be filtered to protect sensitive information. - -type: keyword - -example: ["/usr/bin/ssh", "-l", "user", "10.0.0.16"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.args_count`*:: -+ --- -Length of the process.args array. -This field can be useful for querying or performing bucket analysis on how many arguments were provided to start a process. More arguments may be an indication of suspicious activity. - -type: long - -example: 4 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`process.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`process.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. 
Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`process.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`process.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.command_line`*:: -+ --- -Full command line that started the process, including the absolute path to the executable, and all arguments. -Some arguments may be filtered to protect sensitive information. - -type: wildcard - -example: /usr/bin/ssh -l user 10.0.0.16 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.command_line.text`*:: -+ --- -type: match_only_text - --- - -*`process.elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`process.elf.byte_order`*:: -+ --- -Byte sequence of ELF file. 
- -type: keyword - -example: Little Endian - --- - -*`process.elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`process.elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`process.elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`process.elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`process.elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`process.elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`process.elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`process.elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`process.elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`process.elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`process.elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`process.elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`process.elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`process.elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`process.elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`process.elf.sections.flags`*:: -+ --- -ELF Section List flags. 
- -type: keyword - --- - -*`process.elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`process.elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`process.elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`process.elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`process.elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`process.elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`process.elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. - -type: nested - --- - -*`process.elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`process.elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`process.elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`process.elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -*`process.end`*:: -+ --- -The time the process ended. - -type: date - -example: 2016-05-23T08:05:34.853Z - --- - -*`process.entity_id`*:: -+ --- -Unique identifier for the process. -The implementation of this is specified by the data source, but some examples of what could be used here are a process-generated UUID, Sysmon Process GUIDs, or a hash of some uniquely identifying components of a process. -Constructing a globally unique identifier is a common practice to mitigate PID reuse as well as to identify a specific process over time, across multiple monitored hosts. - -type: keyword - -example: c2c455d9f99375d - -{yes-icon} {ecs-ref}[ECS] field. 
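The `process.entity_id` description above notes that a globally unique identifier is commonly built by hashing uniquely identifying components of a process, which mitigates PID reuse. A sketch of one such scheme; the attribute choice and function name here are illustrative assumptions, not a prescribed algorithm:

```python
import hashlib

def process_entity_id(host_id: str, pid: int, start_time_ns: int) -> str:
    """Derive a stable process identifier that survives PID reuse by
    mixing the host identity and the process start time into the hash."""
    raw = f"{host_id}|{pid}|{start_time_ns}".encode("utf-8")
    # A truncated SHA-256 keeps the ID compact while remaining unique
    # in practice across hosts and across PID-reuse cycles.
    return hashlib.sha256(raw).hexdigest()[:16]
```

Two processes that happen to share a PID on the same host still get distinct identifiers because their start times differ, which is the property the field description calls out.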
- --- - -*`process.executable`*:: -+ --- -Absolute path to the process executable. - -type: keyword - -example: /usr/bin/ssh - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.executable.text`*:: -+ --- -type: match_only_text - --- - -*`process.exit_code`*:: -+ --- -The exit code of the process, if this is a termination event. -The field should be absent if there is no exit code for the event (e.g. process start). - -type: long - -example: 137 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`process.name`*:: -+ --- -Process name. -Sometimes called program name or similar. - -type: keyword - -example: ssh - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.name.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.args`*:: -+ --- -Array of process arguments, starting with the absolute path to the executable. -May be filtered to protect sensitive information. - -type: keyword - -example: ["/usr/bin/ssh", "-l", "user", "10.0.0.16"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.args_count`*:: -+ --- -Length of the process.args array. -This field can be useful for querying or performing bucket analysis on how many arguments were provided to start a process. More arguments may be an indication of suspicious activity. - -type: long - -example: 4 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. 
-This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`process.parent.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`process.parent.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`process.parent.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`process.parent.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`process.parent.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.command_line`*:: -+ --- -Full command line that started the process, including the absolute path to the executable, and all arguments. -Some arguments may be filtered to protect sensitive information. - -type: wildcard - -example: /usr/bin/ssh -l user 10.0.0.16 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.command_line.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`process.parent.elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`process.parent.elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`process.parent.elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`process.parent.elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`process.parent.elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`process.parent.elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`process.parent.elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`process.parent.elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`process.parent.elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. 
- -type: keyword - --- - -*`process.parent.elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`process.parent.elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`process.parent.elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`process.parent.elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`process.parent.elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`process.parent.elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`process.parent.elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`process.parent.elf.sections.flags`*:: -+ --- -ELF Section List flags. - -type: keyword - --- - -*`process.parent.elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`process.parent.elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`process.parent.elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`process.parent.elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`process.parent.elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`process.parent.elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`process.parent.elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. 
- -type: nested - --- - -*`process.parent.elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`process.parent.elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`process.parent.elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`process.parent.elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -*`process.parent.end`*:: -+ --- -The time the process ended. - -type: date - -example: 2016-05-23T08:05:34.853Z - --- - -*`process.parent.entity_id`*:: -+ --- -Unique identifier for the process. -The implementation of this is specified by the data source, but some examples of what could be used here are a process-generated UUID, Sysmon Process GUIDs, or a hash of some uniquely identifying components of a process. -Constructing a globally unique identifier is a common practice to mitigate PID reuse as well as to identify a specific process over time, across multiple monitored hosts. - -type: keyword - -example: c2c455d9f99375d - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.executable`*:: -+ --- -Absolute path to the process executable. - -type: keyword - -example: /usr/bin/ssh - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.executable.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.exit_code`*:: -+ --- -The exit code of the process, if this is a termination event. -The field should be absent if there is no exit code for the event (e.g. process start). - -type: long - -example: 137 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`process.parent.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`process.parent.name`*:: -+ --- -Process name. -Sometimes called program name or similar. - -type: keyword - -example: ssh - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.name.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. 
- -type: keyword - -example: Microsoft® Windows® Operating System - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pgid`*:: -+ --- -Identifier of the group of processes the process belongs to. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pid`*:: -+ --- -Process id. - -type: long - -example: 4242 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.ppid`*:: -+ --- -Parent process' pid. - -type: long - -example: 4241 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.start`*:: -+ --- -The time the process started. - -type: date - -example: 2016-05-23T08:05:34.853Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.thread.id`*:: -+ --- -Thread ID. - -type: long - -example: 4242 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.thread.name`*:: -+ --- -Thread name. - -type: keyword - -example: thread-0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.title`*:: -+ --- -Process title. -The proctitle, sometimes the same as process name. Can also be different: for example a browser setting its title to the web page currently opened. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.title.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.uptime`*:: -+ --- -Seconds the process has been up. - -type: long - -example: 1325 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.working_directory`*:: -+ --- -The working directory of the process. - -type: keyword - -example: /home/alice - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.working_directory.text`*:: -+ --- -type: match_only_text - --- - -*`process.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time.
- -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pgid`*:: -+ --- -Identifier of the group of processes the process belongs to. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pid`*:: -+ --- -Process id. - -type: long - -example: 4242 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.ppid`*:: -+ --- -Parent process' pid. - -type: long - -example: 4241 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.start`*:: -+ --- -The time the process started. - -type: date - -example: 2016-05-23T08:05:34.853Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.thread.id`*:: -+ --- -Thread ID. 
- -type: long - -example: 4242 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.thread.name`*:: -+ --- -Thread name. - -type: keyword - -example: thread-0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title`*:: -+ --- -Process title. -The proctitle, sometimes the same as process name. Can also be different: for example a browser setting its title to the web page currently opened. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title.text`*:: -+ --- -type: match_only_text - --- - -*`process.uptime`*:: -+ --- -Seconds the process has been up. - -type: long - -example: 1325 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.working_directory`*:: -+ --- -The working directory of the process. - -type: keyword - -example: /home/alice - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.working_directory.text`*:: -+ --- -type: match_only_text - --- - -[float] -=== registry - -Fields related to Windows Registry operations. - - -*`registry.data.bytes`*:: -+ --- -Original bytes written with base64 encoding. -For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values. - -type: keyword - -example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA= - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`registry.data.strings`*:: -+ --- -Content when writing string types. -Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g `"1"`). - -type: wildcard - -example: ["C:\rta\red_ttp\bin\myapp.exe"] - -{yes-icon} {ecs-ref}[ECS] field.
- --- - -*`registry.data.type`*:: -+ --- -Standard registry type for encoding contents - -type: keyword - -example: REG_SZ - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`registry.hive`*:: -+ --- -Abbreviated name for the hive. - -type: keyword - -example: HKLM - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`registry.key`*:: -+ --- -Hive-relative path of keys. - -type: keyword - -example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`registry.path`*:: -+ --- -Full path, including hive, key and value - -type: keyword - -example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`registry.value`*:: -+ --- -Name of the value written. - -type: keyword - -example: Debugger - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== related - -This field set is meant to facilitate pivoting around a piece of data. -Some pieces of information can be seen in many places in an ECS event. To facilitate searching for them, store an array of all seen values to their corresponding field in `related.`. -A concrete example is IP addresses, which can be under host, observer, source, destination, client, server, and network.forwarded_ip. If you append all IPs to `related.ip`, you can then search for a given IP trivially, no matter where it appeared, by querying `related.ip:192.0.2.15`. - - -*`related.hash`*:: -+ --- -All the hashes seen on your event. Populating this field, then using it to search for hashes can help in situations where you're unsure what the hash algorithm is (and therefore which key name to search). - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`related.hosts`*:: -+ --- -All hostnames or other host identifiers seen on your event. Example identifiers include FQDNs, domain names, workstation names, or aliases. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`related.ip`*:: -+ --- -All of the IPs seen on your event. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`related.user`*:: -+ --- -All the user names or other user identifiers seen on the event. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== rule - -Rule fields are used to capture the specifics of any observer or agent rules that generate alerts or other notable events. -Examples of data sources that would populate the rule fields include: network admission control platforms, network or host IDS/IPS, network firewalls, web application firewalls, url filters, endpoint detection and response (EDR) systems, etc. - - -*`rule.author`*:: -+ --- -Name, organization, or pseudonym of the author or authors who created the rule used to generate this event. - -type: keyword - -example: ["Star-Lord"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.category`*:: -+ --- -A categorization value keyword used by the entity using the rule for detection of this event. - -type: keyword - -example: Attempted Information Leak - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.description`*:: -+ --- -The description of the rule generating the event. - -type: keyword - -example: Block requests to public DNS over HTTPS / TLS protocols - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.id`*:: -+ --- -A rule ID that is unique within the scope of an agent, observer, or other entity using the rule for detection of this event. - -type: keyword - -example: 101 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.license`*:: -+ --- -Name of the license under which the rule used to generate this event is made available. - -type: keyword - -example: Apache 2.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.name`*:: -+ --- -The name of the rule or signature generating the event. - -type: keyword - -example: BLOCK_DNS_over_TLS - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`rule.reference`*:: -+ --- -Reference URL to additional information about the rule used to generate this event. -The URL can point to the vendor's documentation about the rule. If that's not available, it can also be a link to a more general page describing this type of alert. - -type: keyword - -example: https://en.wikipedia.org/wiki/DNS_over_TLS - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.ruleset`*:: -+ --- -Name of the ruleset, policy, group, or parent category in which the rule used to generate this event is a member. - -type: keyword - -example: Standard_Protocol_Filters - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.uuid`*:: -+ --- -A rule ID that is unique within the scope of a set or group of agents, observers, or other entities using the rule for detection of this event. - -type: keyword - -example: 1100110011 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.version`*:: -+ --- -The version / revision of the rule being used for analysis. - -type: keyword - -example: 1.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== server - -A Server is defined as the responder in a network connection for events regarding sessions, connections, or bidirectional flow records. -For TCP events, the server is the receiver of the initial SYN packet(s) of the TCP connection. For other protocols, the server is generally the responder in the network transaction. Some systems actually use the term "responder" to refer to the server in TCP connections. The server fields describe details about the system acting as the server in the network event. Server fields are usually populated in conjunction with client fields. Server fields are generally not populated for packet-level events. -Client / server representations can add semantic context to an exchange, which is helpful to visualize the data in certain situations. If your context falls in that category, you should still ensure that source and destination are filled appropriately.
- - -*`server.address`*:: -+ --- -Some event server addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. -Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -*`server.bytes`*:: -+ --- -Bytes sent from the server to the client. - -type: long - -example: 184 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.domain`*:: -+ --- -Server domain. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`server.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`server.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`server.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`server.ip`*:: -+ --- -IP address of the server (IPv4 or IPv6). - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.mac`*:: -+ --- -MAC address of the server. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: 00-00-5E-00-53-23 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.nat.ip`*:: -+ --- -Translated ip of destination based NAT sessions (e.g. internet to private DMZ) -Typically used with load balancers, firewalls, or routers. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.nat.port`*:: -+ --- -Translated port of destination based NAT sessions (e.g. internet to private DMZ) -Typically used with load balancers, firewalls, or routers. 
- -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.packets`*:: -+ --- -Packets sent from the server to the client. - -type: long - -example: 12 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.port`*:: -+ --- -Port of the server. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.registered_domain`*:: -+ --- -The highest registered server domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`server.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - -{yes-icon} {ecs-ref}[ECS] field.
- --- - -*`server.user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.email`*:: -+ --- -User email address. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`server.user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.group.name`*:: -+ --- -Name of the group. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.name.text`*:: -+ --- -type: match_only_text - --- - -*`server.user.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== service - -The service fields describe the service for or from which the data was collected. 
-These fields help you find and correlate logs for a specific service and version. - - -*`service.address`*:: -+ --- -Address where data about this service was collected from. -This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). - -type: keyword - -example: 172.26.0.2:5432 - --- - -*`service.environment`*:: -+ --- -Identifies the environment where the service is running. -If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment. - -type: keyword - -example: production - --- - -*`service.ephemeral_id`*:: -+ --- -Ephemeral identifier of this service (if one exists). -This id normally changes across restarts, but `service.id` does not. - -type: keyword - -example: 8a4f500f - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.id`*:: -+ --- -Unique identifier of the running service. If the service is comprised of many nodes, the `service.id` should be the same for all nodes. -This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event. -Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead. - -type: keyword - -example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.name`*:: -+ --- -Name of the service data is collected from. -The name of the service is normally user given. This allows for distributed services that run on multiple hosts to correlate the related instances based on the name. -In the case of Elasticsearch the `service.name` could contain the cluster name. For Beats the `service.name` is by default a copy of the `service.type` field if no name is specified. 
- -type: keyword - -example: elasticsearch-metrics - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.node.name`*:: -+ --- -Name of a service node. -This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service. -In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn't have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. multiple instances of the service running on the same host) - the node name can be manually set. - -type: keyword - -example: instance-0000000016 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.state`*:: -+ --- -Current state of the service. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.type`*:: -+ --- -The type of the service data is collected from. -The type can be used to group and correlate logs and metrics from one service type. -Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`. - -type: keyword - -example: elasticsearch - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.version`*:: -+ --- -Version of the service the data was collected from. -This allows to look at a data set only for a specific version of a service. - -type: keyword - -example: 3.2.4 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== source - -Source fields capture details about the sender of a network exchange/packet. These fields are populated from a network event, packet, or other event containing details of a network transaction. -Source fields are usually populated in conjunction with destination fields. 
The source and destination fields are considered the baseline and should always be filled if an event contains source and destination details from a network transaction. If the event also contains identification of the client and server roles, then the client and server fields should also be populated. - - -*`source.address`*:: -+ --- -Some event source addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. -Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -*`source.bytes`*:: -+ --- -Bytes sent from the source to the destination. - -type: long - -example: 184 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.domain`*:: -+ --- -Source domain. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`source.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.country_name`*:: -+ --- -Country name. 
- -type: keyword - -example: Canada - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`source.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`source.ip`*:: -+ --- -IP address of the source (IPv4 or IPv6). - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.mac`*:: -+ --- -MAC address of the source. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: 00-00-5E-00-53-23 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.nat.ip`*:: -+ --- -Translated ip of source based NAT sessions (e.g. internal client to internet) -Typically connections traversing load balancers, firewalls, or routers. 
- -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.nat.port`*:: -+ --- -Translated port of source-based NAT sessions (e.g. internal client to internet). -Typically used with load balancers, firewalls, or routers. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.packets`*:: -+ --- -Packets sent from the source to the destination. - -type: long - -example: 12 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.port`*:: -+ --- -Port of the source. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.registered_domain`*:: -+ --- -The highest registered source domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`source.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org).
Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.email`*:: -+ --- -User email address. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`source.user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.group.name`*:: -+ --- -Name of the group. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.name.text`*:: -+ --- -type: match_only_text - --- - -*`source.user.roles`*:: -+ --- -Array of user roles at the time of the event. 
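The warning above about approximating `registered_domain` and `top_level_domain` by label-counting can be made concrete. A minimal sketch (an illustration, not tooling shipped with APM Server) showing why the naive "last two labels" heuristic the docs caution against fails for effective TLDs such as "co.uk":

```python
# Naive heuristic the field docs warn against: take the last two labels.
# Hypothetical helper for illustration only.
def naive_registered_domain(fqdn: str) -> str:
    return ".".join(fqdn.split(".")[-2:])

# Works for simple TLDs:
print(naive_registered_domain("foo.example.com"))     # example.com (correct)
# Fails for effective TLDs such as "co.uk":
print(naive_registered_domain("www.mydomain.co.uk"))  # co.uk (should be mydomain.co.uk)
```

In practice a public suffix list (e.g. the publicsuffix.org data) is needed to populate `registered_domain`, `subdomain`, and `top_level_domain` correctly.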
- -type: keyword - -example: ["kibana_admin", "reporting_user"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== threat - -Fields to classify events and alerts according to a threat taxonomy such as the MITRE ATT&CK® framework. -These fields are for users to classify alerts from all of their sources (e.g. IDS, NGFW, etc.) within a common taxonomy. The threat.tactic.* fields are meant to capture the high-level category of the threat (e.g. "impact"). The threat.technique.* fields are meant to capture which kind of approach is used by this detected threat, to accomplish the goal (e.g. "endpoint denial of service"). - - -*`threat.enrichments`*:: -+ --- -A list of associated indicator objects enriching the event, and the context of that association/enrichment. - -type: nested - --- - -*`threat.enrichments.indicator`*:: -+ --- -Object containing associated indicators enriching the event. - -type: object - --- - -*`threat.enrichments.indicator.as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - --- - -*`threat.enrichments.indicator.as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - --- - -*`threat.enrichments.indicator.as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -*`threat.enrichments.indicator.confidence`*:: -+ --- -Identifies the confidence rating assigned by the provider using STIX confidence scales. Expected values: - * Not Specified, None, Low, Medium, High - * 0-10 - * Admiralty Scale (1-6) - * DNI Scale (5-95) - * WEP Scale (Impossible - Certain) - -type: keyword - -example: High - --- - -*`threat.enrichments.indicator.description`*:: -+ --- -Describes the type of action conducted by the threat. - -type: keyword - -example: IP x.x.x.x was observed delivering the Angler EK.
- --- - -*`threat.enrichments.indicator.email.address`*:: -+ --- -Identifies a threat indicator as an email address (irrespective of direction). - -type: keyword - -example: phish@example.com - --- - -*`threat.enrichments.indicator.file.accessed`*:: -+ --- -Last time the file was accessed. -Note that not all filesystems keep track of access time. - -type: date - --- - -*`threat.enrichments.indicator.file.attributes`*:: -+ --- -Array of file attributes. -Attribute names will vary by platform. Here's a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write. - -type: keyword - -example: ["readonly", "system"] - --- - -*`threat.enrichments.indicator.file.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`threat.enrichments.indicator.file.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - --- - -*`threat.enrichments.indicator.file.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`threat.enrichments.indicator.file.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked.
- -type: keyword - -example: ERROR_UNTRUSTED_ROOT - --- - -*`threat.enrichments.indicator.file.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - --- - -*`threat.enrichments.indicator.file.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`threat.enrichments.indicator.file.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`threat.enrichments.indicator.file.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - --- - -*`threat.enrichments.indicator.file.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - --- - -*`threat.enrichments.indicator.file.created`*:: -+ --- -File creation time. -Note that not all filesystems store the creation time. - -type: date - --- - -*`threat.enrichments.indicator.file.ctime`*:: -+ --- -Last time the file attributes or metadata changed. -Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file. - -type: date - --- - -*`threat.enrichments.indicator.file.device`*:: -+ --- -Device that is the source of the file. - -type: keyword - -example: sda - --- - -*`threat.enrichments.indicator.file.directory`*:: -+ --- -Directory where the file is located. It should include the drive letter, when appropriate. 
- -type: keyword - -example: /home/alice - --- - -*`threat.enrichments.indicator.file.drive_letter`*:: -+ --- -Drive letter where the file is located. This field is only relevant on Windows. -The value should be uppercase, and not include the colon. - -type: keyword - -example: C - --- - -*`threat.enrichments.indicator.file.elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`threat.enrichments.indicator.file.elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`threat.enrichments.indicator.file.elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`threat.enrichments.indicator.file.elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`threat.enrichments.indicator.file.elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`threat.enrichments.indicator.file.elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`threat.enrichments.indicator.file.elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.header.type`*:: -+ --- -Header type of the ELF file. 
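The `elf.header.*` and `elf.byte_order` fields above come straight out of the ELF identification bytes at the start of the binary. A minimal parsing sketch (the string values chosen here are illustrative assumptions; ECS does not mandate exact spellings beyond the examples shown in this reference):

```python
# Parse the ELF e_ident bytes that back fields such as elf.header.class,
# elf.byte_order, elf.header.object_version, and elf.header.os_abi.
ELF_MAGIC = b"\x7fELF"

def parse_elf_ident(header: bytes) -> dict:
    if header[:4] != ELF_MAGIC:
        raise ValueError("not an ELF file")
    ei_class, ei_data, ei_version, ei_osabi = header[4], header[5], header[6], header[7]
    return {
        "class": {1: "ELF32", 2: "ELF64"}.get(ei_class, "unknown"),
        "byte_order": {1: "Little Endian", 2: "Big Endian"}.get(ei_data, "unknown"),
        "object_version": hex(ei_version),  # "0x1" for original ELF files
        "os_abi": {0: "System V", 3: "Linux"}.get(ei_osabi, "unknown"),
    }

# A typical x86-64 Linux binary begins: 7f 45 4c 46 02 01 01 00 ...
sample = b"\x7fELF\x02\x01\x01\x00" + b"\x00" * 8
print(parse_elf_ident(sample))
# {'class': 'ELF64', 'byte_order': 'Little Endian', 'object_version': '0x1', 'os_abi': 'System V'}
```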
- -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`threat.enrichments.indicator.file.elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`threat.enrichments.indicator.file.elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`threat.enrichments.indicator.file.elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`threat.enrichments.indicator.file.elf.sections.flags`*:: -+ --- -ELF Section List flags. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`threat.enrichments.indicator.file.elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`threat.enrichments.indicator.file.elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`threat.enrichments.indicator.file.elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. 
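The `elf.sections.entropy` field above holds the Shannon entropy of a section's raw bytes. A minimal sketch of that calculation (an illustration of how producers might compute the value, not tooling shipped with APM Server; note the field is typed `long`, so real producers may round or scale the result):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte, ranging from 0.0 to 8.0."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in Counter(data).values())

print(shannon_entropy(b"\x00" * 64))                 # 0.0: a constant section has no entropy
print(round(shannon_entropy(bytes(range(256))), 1))  # 8.0: uniformly distributed bytes are maximal
```

High-entropy sections are a common heuristic for packed or encrypted payloads, which is why the statistic is captured per section.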
- -type: nested - --- - -*`threat.enrichments.indicator.file.elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -*`threat.enrichments.indicator.file.extension`*:: -+ --- -File extension, excluding the leading dot. -Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). - -type: keyword - -example: png - --- - -*`threat.enrichments.indicator.file.fork_name`*:: -+ --- -A fork is additional data associated with a filesystem object. -On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist. -On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name. - -type: keyword - -example: Zone.Identifer - --- - -*`threat.enrichments.indicator.file.gid`*:: -+ --- -Primary group ID (GID) of the file. - -type: keyword - -example: 1001 - --- - -*`threat.enrichments.indicator.file.group`*:: -+ --- -Primary group name of the file. 
- -type: keyword - -example: alice - --- - -*`threat.enrichments.indicator.file.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - --- - -*`threat.enrichments.indicator.file.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - --- - -*`threat.enrichments.indicator.file.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - --- - -*`threat.enrichments.indicator.file.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - --- - -*`threat.enrichments.indicator.file.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`threat.enrichments.indicator.file.inode`*:: -+ --- -Inode representing the file in the filesystem. - -type: keyword - -example: 256383 - --- - -*`threat.enrichments.indicator.file.mime_type`*:: -+ --- -MIME type should identify the format of the file or stream of bytes using https://www.iana.org/assignments/media-types/media-types.xhtml[IANA official types], where possible. When more than one type is applicable, the most specific type should be used. - -type: keyword - --- - -*`threat.enrichments.indicator.file.mode`*:: -+ --- -Mode of the file in octal representation. - -type: keyword - -example: 0640 - --- - -*`threat.enrichments.indicator.file.mtime`*:: -+ --- -Last time the file content was modified. - -type: date - --- - -*`threat.enrichments.indicator.file.name`*:: -+ --- -Name of the file including the extension, without the directory. - -type: keyword - -example: example.png - --- - -*`threat.enrichments.indicator.file.owner`*:: -+ --- -File owner's username. - -type: keyword - -example: alice - --- - -*`threat.enrichments.indicator.file.path`*:: -+ --- -Full path to the file, including the file name. It should include the drive letter, when appropriate. - -type: keyword - -example: /home/alice/example.png - --- - -*`threat.enrichments.indicator.file.path.text`*:: -+ --- -type: match_only_text - --- - -*`threat.enrichments.indicator.file.pe.architecture`*:: -+ --- -CPU architecture target for the file. 
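The `file.hash.*` fields above (`md5`, `sha1`, `sha256`, `sha512`) are plain hex digests of the file content. A minimal sketch with Python's `hashlib` showing how such values are produced (the helper name is hypothetical):

```python
import hashlib

def file_hashes(content: bytes) -> dict:
    """Digests of the content, in the shape of the file.hash.* fields."""
    return {
        "md5": hashlib.md5(content).hexdigest(),
        "sha1": hashlib.sha1(content).hexdigest(),
        "sha256": hashlib.sha256(content).hexdigest(),
        "sha512": hashlib.sha512(content).hexdigest(),
    }

hashes = file_hashes(b"example")
print(hashes["sha256"])  # 64 hex characters
```

`ssdeep` is the exception: it is a fuzzy hash from a third-party library, not a stdlib digest, so it is omitted from this sketch.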
- -type: keyword - -example: x64 - --- - -*`threat.enrichments.indicator.file.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - --- - -*`threat.enrichments.indicator.file.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - --- - -*`threat.enrichments.indicator.file.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - --- - -*`threat.enrichments.indicator.file.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - --- - -*`threat.enrichments.indicator.file.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - --- - -*`threat.enrichments.indicator.file.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - --- - -*`threat.enrichments.indicator.file.size`*:: -+ --- -File size in bytes. -Only relevant when `file.type` is "file". - -type: long - -example: 16384 - --- - -*`threat.enrichments.indicator.file.target_path`*:: -+ --- -Target path for symlinks. - -type: keyword - --- - -*`threat.enrichments.indicator.file.target_path.text`*:: -+ --- -type: match_only_text - --- - -*`threat.enrichments.indicator.file.type`*:: -+ --- -File type (file, dir, or symlink). 
- -type: keyword - -example: file - --- - -*`threat.enrichments.indicator.file.uid`*:: -+ --- -The user ID (UID) or security identifier (SID) of the file owner. - -type: keyword - -example: 1001 - --- - -*`threat.enrichments.indicator.first_seen`*:: -+ --- -The date and time when intelligence source first reported sighting this indicator. - -type: date - -example: 2020-11-05T17:25:47.000Z - --- - -*`threat.enrichments.indicator.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - --- - -*`threat.enrichments.indicator.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`threat.enrichments.indicator.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - --- - -*`threat.enrichments.indicator.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - --- - -*`threat.enrichments.indicator.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - --- - -*`threat.enrichments.indicator.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - --- - -*`threat.enrichments.indicator.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - --- - -*`threat.enrichments.indicator.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`threat.enrichments.indicator.geo.region_iso_code`*:: -+ --- -Region ISO code. 
- -type: keyword - -example: CA-QC - --- - -*`threat.enrichments.indicator.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - --- - -*`threat.enrichments.indicator.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`threat.enrichments.indicator.ip`*:: -+ --- -Identifies a threat indicator as an IP address (irrespective of direction). - -type: ip - -example: 1.2.3.4 - --- - -*`threat.enrichments.indicator.last_seen`*:: -+ --- -The date and time when intelligence source last reported sighting this indicator. - -type: date - -example: 2020-11-05T17:25:47.000Z - --- - -*`threat.enrichments.indicator.marking.tlp`*:: -+ --- -Traffic Light Protocol sharing markings. Recommended values are: - * WHITE - * GREEN - * AMBER - * RED - -type: keyword - -example: White - --- - -*`threat.enrichments.indicator.modified_at`*:: -+ --- -The date and time when intelligence source last modified information for this indicator. - -type: date - -example: 2020-11-05T17:25:47.000Z - --- - -*`threat.enrichments.indicator.port`*:: -+ --- -Identifies a threat indicator as a port number (irrespective of direction). - -type: long - -example: 443 - --- - -*`threat.enrichments.indicator.provider`*:: -+ --- -The name of the indicator's provider. - -type: keyword - -example: lrz_urlhaus - --- - -*`threat.enrichments.indicator.reference`*:: -+ --- -Reference URL linking to additional information about this indicator. - -type: keyword - -example: https://system.example.com/indicator/0001234 - --- - -*`threat.enrichments.indicator.registry.data.bytes`*:: -+ --- -Original bytes written with base64 encoding. -For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values. 
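The `registry.data.bytes` description above says the field carries the original value bytes, base64-encoded. For string-typed registry values those bytes are UTF-16LE, as the `ZQBuAC0AVQBTAAAAZQBuAAAAAAA=` example value for this field suggests. A minimal consumer-side decoding sketch (an assumption about how a reader might interpret the field, not APM Server behavior):

```python
import base64

raw = base64.b64decode("ZQBuAC0AVQBTAAAAZQBuAAAAAAA=")
# REG_MULTI_SZ payloads are NUL-separated UTF-16LE strings with a double-NUL terminator.
strings = [s for s in raw.decode("utf-16-le").split("\x00") if s]
print(strings)  # ['en-US', 'en']
```

This matches the relationship between `registry.data.bytes` (raw, recoverable form) and `registry.data.strings` (decoded, human-readable form) described in these field definitions.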
- -type: keyword - -example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA= - --- - -*`threat.enrichments.indicator.registry.data.strings`*:: -+ --- -Content when writing string types. -Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g `"1"`). - -type: wildcard - -example: ["C:\rta\red_ttp\bin\myapp.exe"] - --- - -*`threat.enrichments.indicator.registry.data.type`*:: -+ --- -Standard registry type for encoding contents - -type: keyword - -example: REG_SZ - --- - -*`threat.enrichments.indicator.registry.hive`*:: -+ --- -Abbreviated name for the hive. - -type: keyword - -example: HKLM - --- - -*`threat.enrichments.indicator.registry.key`*:: -+ --- -Hive-relative path of keys. - -type: keyword - -example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe - --- - -*`threat.enrichments.indicator.registry.path`*:: -+ --- -Full path, including hive, key and value - -type: keyword - -example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger - --- - -*`threat.enrichments.indicator.registry.value`*:: -+ --- -Name of the value written. - -type: keyword - -example: Debugger - --- - -*`threat.enrichments.indicator.scanner_stats`*:: -+ --- -Count of AV/EDR vendors that successfully detected malicious file or URL. - -type: long - -example: 4 - --- - -*`threat.enrichments.indicator.sightings`*:: -+ --- -Number of times this indicator was observed conducting threat activity. - -type: long - -example: 20 - --- - -*`threat.enrichments.indicator.type`*:: -+ --- -Type of indicator as represented by Cyber Observable in STIX 2.0. 
Recommended values: - * autonomous-system - * artifact - * directory - * domain-name - * email-addr - * file - * ipv4-addr - * ipv6-addr - * mac-addr - * mutex - * port - * process - * software - * url - * user-account - * windows-registry-key - * x509-certificate - -type: keyword - -example: ipv4-addr - --- - -*`threat.enrichments.indicator.url.domain`*:: -+ --- -Domain of the url, such as "www.elastic.co". -In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field. -If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field. - -type: keyword - -example: www.elastic.co - --- - -*`threat.enrichments.indicator.url.extension`*:: -+ --- -The field contains the file extension from the original request url, excluding the leading dot. -The file extension is only set if it exists, as not every url has a file extension. -The leading period must not be included. For example, the value must be "png", not ".png". -Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). - -type: keyword - -example: png - --- - -*`threat.enrichments.indicator.url.fragment`*:: -+ --- -Portion of the url after the `#`, such as "top". -The `#` is not part of the fragment. - -type: keyword - --- - -*`threat.enrichments.indicator.url.full`*:: -+ --- -If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source. - -type: wildcard - -example: https://www.elastic.co:443/search?q=elasticsearch#top - --- - -*`threat.enrichments.indicator.url.full.text`*:: -+ --- -type: match_only_text - --- - -*`threat.enrichments.indicator.url.original`*:: -+ --- -Unmodified original url as seen in the event source. 
-Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path. -This field is meant to represent the URL as it was observed, complete or not. - -type: wildcard - -example: https://www.elastic.co:443/search?q=elasticsearch#top or /search?q=elasticsearch - --- - -*`threat.enrichments.indicator.url.original.text`*:: -+ --- -type: match_only_text - --- - -*`threat.enrichments.indicator.url.password`*:: -+ --- -Password of the request. - -type: keyword - --- - -*`threat.enrichments.indicator.url.path`*:: -+ --- -Path of the request, such as "/search". - -type: wildcard - --- - -*`threat.enrichments.indicator.url.port`*:: -+ --- -Port of the request, such as 443. - -type: long - -example: 443 - -format: string - --- - -*`threat.enrichments.indicator.url.query`*:: -+ --- -The query field describes the query string of the request, such as "q=elasticsearch". -The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases. - -type: keyword - --- - -*`threat.enrichments.indicator.url.registered_domain`*:: -+ --- -The highest registered url domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - --- - -*`threat.enrichments.indicator.url.scheme`*:: -+ --- -Scheme of the request, such as "https". -Note: The `:` is not part of the scheme. 
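The `url.*` component fields above map closely onto a standard URL parse. A minimal sketch with Python's `urllib.parse`, matching the conventions these definitions state: the `:` is not part of `scheme`, the `?` is not part of `query`, and the `#` is not part of `fragment` (the `url_fields` dict is just an illustrative shape):

```python
from urllib.parse import urlsplit

parts = urlsplit("https://www.elastic.co:443/search?q=elasticsearch#top")
url_fields = {
    "url.scheme": parts.scheme,      # "https" (no trailing ":")
    "url.domain": parts.hostname,    # "www.elastic.co"
    "url.port": parts.port,          # 443
    "url.path": parts.path,          # "/search"
    "url.query": parts.query,        # "q=elasticsearch" (no leading "?")
    "url.fragment": parts.fragment,  # "top" (no leading "#")
}
print(url_fields)
```

`url.registered_domain`, `url.subdomain`, and `url.top_level_domain` are the exception: as noted in their descriptions, they require a public suffix list rather than a plain parse.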
- -type: keyword - -example: https - --- - -*`threat.enrichments.indicator.url.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`threat.enrichments.indicator.url.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - --- - -*`threat.enrichments.indicator.url.username`*:: -+ --- -Username of the request. - -type: keyword - --- - -*`threat.enrichments.indicator.x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - --- - -*`threat.enrichments.indicator.x509.issuer.common_name`*:: -+ --- -List of common names (CN) of the issuing certificate authority.
- -type: keyword - -example: Example SHA2 High Assurance Server CA - --- - -*`threat.enrichments.indicator.x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - --- - -*`threat.enrichments.indicator.x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - --- - -*`threat.enrichments.indicator.x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - --- - -*`threat.enrichments.indicator.x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - --- - -*`threat.enrichments.indicator.x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority. - -type: keyword - -example: www.example.com - --- - -*`threat.enrichments.indicator.x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`threat.enrichments.indicator.x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - --- - -*`threat.enrichments.indicator.x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - --- - -*`threat.enrichments.indicator.x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - --- - -*`threat.enrichments.indicator.x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - --- - -*`threat.enrichments.indicator.x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. 
-
-type: long
-
-example: 65537
-
-Field is not indexed.
-
---
-
-*`threat.enrichments.indicator.x509.public_key_size`*::
-+
---
-The size of the public key space in bits.
-
-type: long
-
-example: 2048
-
---
-
-*`threat.enrichments.indicator.x509.serial_number`*::
-+
---
-Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters.
-
-type: keyword
-
-example: 55FBB9C7DEBF09809D12CCAA
-
---
-
-*`threat.enrichments.indicator.x509.signature_algorithm`*::
-+
---
-Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353.
-
-type: keyword
-
-example: SHA256-RSA
-
---
-
-*`threat.enrichments.indicator.x509.subject.common_name`*::
-+
---
-List of common names (CN) of subject.
-
-type: keyword
-
-example: shared.global.example.net
-
---
-
-*`threat.enrichments.indicator.x509.subject.country`*::
-+
---
-List of country (C) code
-
-type: keyword
-
-example: US
-
---
-
-*`threat.enrichments.indicator.x509.subject.distinguished_name`*::
-+
---
-Distinguished name (DN) of the certificate subject entity.
-
-type: keyword
-
-example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
-
---
-
-*`threat.enrichments.indicator.x509.subject.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: San Francisco
-
---
-
-*`threat.enrichments.indicator.x509.subject.organization`*::
-+
---
-List of organizations (O) of subject.
-
-type: keyword
-
-example: Example, Inc.
-
---
-
-*`threat.enrichments.indicator.x509.subject.organizational_unit`*::
-+
---
-List of organizational units (OU) of subject.
-
-type: keyword
-
---
-
-*`threat.enrichments.indicator.x509.subject.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`threat.enrichments.indicator.x509.version_number`*::
-+
---
-Version of x509 format.
-
-type: keyword
-
-example: 3
-
---
-
-*`threat.enrichments.matched.atomic`*::
-+
---
-Identifies the atomic indicator value that matched a local environment endpoint or network event.
-
-type: keyword
-
-example: bad-domain.com
-
---
-
-*`threat.enrichments.matched.field`*::
-+
---
-Identifies the field of the atomic indicator that matched a local environment endpoint or network event.
-
-type: keyword
-
-example: file.hash.sha256
-
---
-
-*`threat.enrichments.matched.id`*::
-+
---
-Identifies the _id of the indicator document enriching the event.
-
-type: keyword
-
-example: ff93aee5-86a1-4a61-b0e6-0cdc313d01b5
-
---
-
-*`threat.enrichments.matched.index`*::
-+
---
-Identifies the _index of the indicator document enriching the event.
-
-type: keyword
-
-example: filebeat-8.0.0-2021.05.23-000011
-
---
-
-*`threat.enrichments.matched.type`*::
-+
---
-Identifies the type of match that caused the event to be enriched with the given indicator
-
-type: keyword
-
-example: indicator_match_rule
-
---
-
-*`threat.framework`*::
-+
---
-Name of the threat framework used to further categorize and classify the tactic and technique of the reported threat. Framework classification can be provided by detecting systems, evaluated at ingest time, or retrospectively tagged to events.
-
-type: keyword
-
-example: MITRE ATT&CK
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`threat.group.alias`*::
-+
---
-The alias(es) of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group alias(es).
-
-type: keyword
-
-example: [ "Magecart Group 6" ]
-
---
-
-*`threat.group.id`*::
-+
---
-The id of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group id.
-
-type: keyword
-
-example: G0037
-
---
-
-*`threat.group.name`*::
-+
---
-The name of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group name.
-
-type: keyword
-
-example: FIN6
-
---
-
-*`threat.group.reference`*::
-+
---
-The reference URL of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group reference URL.
-
-type: keyword
-
-example: https://attack.mitre.org/groups/G0037/
-
---
-
-*`threat.indicator.as.number`*::
-+
---
-Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet.
-
-type: long
-
-example: 15169
-
---
-
-*`threat.indicator.as.organization.name`*::
-+
---
-Organization name.
-
-type: keyword
-
-example: Google LLC
-
---
-
-*`threat.indicator.as.organization.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.indicator.confidence`*::
-+
---
-Identifies the confidence rating assigned by the provider using STIX confidence scales.
-Recommended values:
- * Not Specified, None, Low, Medium, High
- * 0-10
- * Admiralty Scale (1-6)
- * DNI Scale (5-95)
- * WEP Scale (Impossible - Certain)
-
-type: keyword
-
-example: High
-
---
-
-*`threat.indicator.description`*::
-+
---
-Describes the type of action conducted by the threat.
-
-type: keyword
-
-example: IP x.x.x.x was observed delivering the Angler EK.
-
---
-
-*`threat.indicator.email.address`*::
-+
---
-Identifies a threat indicator as an email address (irrespective of direction).
-
-type: keyword
-
-example: phish@example.com
-
---
-
-*`threat.indicator.file.accessed`*::
-+
---
-Last time the file was accessed.
-Note that not all filesystems keep track of access time.
-
-type: date
-
---
-
-*`threat.indicator.file.attributes`*::
-+
---
-Array of file attributes.
-Attributes names will vary by platform. Here's a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write.
-
-type: keyword
-
-example: ["readonly", "system"]
-
---
-
-*`threat.indicator.file.code_signature.digest_algorithm`*::
-+
---
-The hashing algorithm used to sign the process.
-This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm.
-
-type: keyword
-
-example: sha256
-
---
-
-*`threat.indicator.file.code_signature.exists`*::
-+
---
-Boolean to capture if a signature is present.
-
-type: boolean
-
-example: true
-
---
-
-*`threat.indicator.file.code_signature.signing_id`*::
-+
---
-The identifier used to sign the process.
-This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: com.apple.xpc.proxy
-
---
-
-*`threat.indicator.file.code_signature.status`*::
-+
---
-Additional information about the certificate status.
-This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked.
-
-type: keyword
-
-example: ERROR_UNTRUSTED_ROOT
-
---
-
-*`threat.indicator.file.code_signature.subject_name`*::
-+
---
-Subject name of the code signer
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`threat.indicator.file.code_signature.team_id`*::
-+
---
-The team identifier used to sign the process.
-This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only.
-
-type: keyword
-
-example: EQHXZ8M8AV
-
---
-
-*`threat.indicator.file.code_signature.timestamp`*::
-+
---
-Date and time when the code signature was generated and signed.
-
-type: date
-
-example: 2021-01-01T12:10:30Z
-
---
-
-*`threat.indicator.file.code_signature.trusted`*::
-+
---
-Stores the trust status of the certificate chain.
-Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status.
-
-type: boolean
-
-example: true
-
---
-
-*`threat.indicator.file.code_signature.valid`*::
-+
---
-Boolean to capture if the digital signature is verified against the binary content.
-Leave unpopulated if a certificate was unchecked.
-
-type: boolean
-
-example: true
-
---
-
-*`threat.indicator.file.created`*::
-+
---
-File creation time.
-Note that not all filesystems store the creation time.
-
-type: date
-
---
-
-*`threat.indicator.file.ctime`*::
-+
---
-Last time the file attributes or metadata changed.
-Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file.
-
-type: date
-
---
-
-*`threat.indicator.file.device`*::
-+
---
-Device that is the source of the file.
-
-type: keyword
-
-example: sda
-
---
-
-*`threat.indicator.file.directory`*::
-+
---
-Directory where the file is located. It should include the drive letter, when appropriate.
-
-type: keyword
-
-example: /home/alice
-
---
-
-*`threat.indicator.file.drive_letter`*::
-+
---
-Drive letter where the file is located. This field is only relevant on Windows.
-The value should be uppercase, and not include the colon.
-
-type: keyword
-
-example: C
-
---
-
-*`threat.indicator.file.elf.architecture`*::
-+
---
-Machine architecture of the ELF file.
-
-type: keyword
-
-example: x86-64
-
---
-
-*`threat.indicator.file.elf.byte_order`*::
-+
---
-Byte sequence of ELF file.
-
-type: keyword
-
-example: Little Endian
-
---
-
-*`threat.indicator.file.elf.cpu_type`*::
-+
---
-CPU type of the ELF file.
-
-type: keyword
-
-example: Intel
-
---
-
-*`threat.indicator.file.elf.creation_date`*::
-+
---
-Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators.
-
-type: date
-
---
-
-*`threat.indicator.file.elf.exports`*::
-+
---
-List of exported element names and types.
-
-type: flattened
-
---
-
-*`threat.indicator.file.elf.header.abi_version`*::
-+
---
-Version of the ELF Application Binary Interface (ABI).
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.header.class`*::
-+
---
-Header class of the ELF file.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.header.data`*::
-+
---
-Data table of the ELF header.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.header.entrypoint`*::
-+
---
-Header entrypoint of the ELF file.
-
-type: long
-
-format: string
-
---
-
-*`threat.indicator.file.elf.header.object_version`*::
-+
---
-"0x1" for original ELF files.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.header.os_abi`*::
-+
---
-Application Binary Interface (ABI) of the Linux OS.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.header.type`*::
-+
---
-Header type of the ELF file.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.header.version`*::
-+
---
-Version of the ELF header.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.imports`*::
-+
---
-List of imported element names and types.
-
-type: flattened
-
---
-
-*`threat.indicator.file.elf.sections`*::
-+
---
-An array containing an object for each section of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`.
-
-type: nested
-
---
-
-*`threat.indicator.file.elf.sections.chi2`*::
-+
---
-Chi-square probability distribution of the section.
-
-type: long
-
-format: number
-
---
-
-*`threat.indicator.file.elf.sections.entropy`*::
-+
---
-Shannon entropy calculation from the section.
-
-type: long
-
-format: number
-
---
-
-*`threat.indicator.file.elf.sections.flags`*::
-+
---
-ELF Section List flags.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.sections.name`*::
-+
---
-ELF Section List name.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.sections.physical_offset`*::
-+
---
-ELF Section List offset.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.sections.physical_size`*::
-+
---
-ELF Section List physical size.
-
-type: long
-
-format: bytes
-
---
-
-*`threat.indicator.file.elf.sections.type`*::
-+
---
-ELF Section List type.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.sections.virtual_address`*::
-+
---
-ELF Section List virtual address.
-
-type: long
-
-format: string
-
---
-
-*`threat.indicator.file.elf.sections.virtual_size`*::
-+
---
-ELF Section List virtual size.
-
-type: long
-
-format: string
-
---
-
-*`threat.indicator.file.elf.segments`*::
-+
---
-An array containing an object for each segment of the ELF file.
-The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`.
-
-type: nested
-
---
-
-*`threat.indicator.file.elf.segments.sections`*::
-+
---
-ELF object segment sections.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.segments.type`*::
-+
---
-ELF object segment type.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.shared_libraries`*::
-+
---
-List of shared libraries used by this ELF object.
-
-type: keyword
-
---
-
-*`threat.indicator.file.elf.telfhash`*::
-+
---
-telfhash symbol hash for ELF file.
-
-type: keyword
-
---
-
-*`threat.indicator.file.extension`*::
-+
---
-File extension, excluding the leading dot.
-Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
-
-type: keyword
-
-example: png
-
---
-
-*`threat.indicator.file.fork_name`*::
-+
---
-A fork is additional data associated with a filesystem object.
-On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist.
-On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name.
-
-type: keyword
-
-example: Zone.Identifier
-
---
-
-*`threat.indicator.file.gid`*::
-+
---
-Primary group ID (GID) of the file.
-
-type: keyword
-
-example: 1001
-
---
-
-*`threat.indicator.file.group`*::
-+
---
-Primary group name of the file.
-
-type: keyword
-
-example: alice
-
---
-
-*`threat.indicator.file.hash.md5`*::
-+
---
-MD5 hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.hash.sha1`*::
-+
---
-SHA1 hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.hash.sha256`*::
-+
---
-SHA256 hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.hash.sha512`*::
-+
---
-SHA512 hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.hash.ssdeep`*::
-+
---
-SSDEEP hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.inode`*::
-+
---
-Inode representing the file in the filesystem.
-
-type: keyword
-
-example: 256383
-
---
-
-*`threat.indicator.file.mime_type`*::
-+
---
-MIME type should identify the format of the file or stream of bytes using https://www.iana.org/assignments/media-types/media-types.xhtml[IANA official types], where possible. When more than one type is applicable, the most specific type should be used.
-
-type: keyword
-
---
-
-*`threat.indicator.file.mode`*::
-+
---
-Mode of the file in octal representation.
-
-type: keyword
-
-example: 0640
-
---
-
-*`threat.indicator.file.mtime`*::
-+
---
-Last time the file content was modified.
-
-type: date
-
---
-
-*`threat.indicator.file.name`*::
-+
---
-Name of the file including the extension, without the directory.
-
-type: keyword
-
-example: example.png
-
---
-
-*`threat.indicator.file.owner`*::
-+
---
-File owner's username.
-
-type: keyword
-
-example: alice
-
---
-
-*`threat.indicator.file.path`*::
-+
---
-Full path to the file, including the file name. It should include the drive letter, when appropriate.
-
-type: keyword
-
-example: /home/alice/example.png
-
---
-
-*`threat.indicator.file.path.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.indicator.file.pe.architecture`*::
-+
---
-CPU architecture target for the file.
-
-type: keyword
-
-example: x64
-
---
-
-*`threat.indicator.file.pe.company`*::
-+
---
-Internal company name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`threat.indicator.file.pe.description`*::
-+
---
-Internal description of the file, provided at compile-time.
-
-type: keyword
-
-example: Paint
-
---
-
-*`threat.indicator.file.pe.file_version`*::
-+
---
-Internal version of the file, provided at compile-time.
-
-type: keyword
-
-example: 6.3.9600.17415
-
---
-
-*`threat.indicator.file.pe.imphash`*::
-+
---
-A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values.
-Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html.
-
-type: keyword
-
-example: 0c6803c4e922103c4dca5963aad36ddf
-
---
-
-*`threat.indicator.file.pe.original_file_name`*::
-+
---
-Internal name of the file, provided at compile-time.
-
-type: keyword
-
-example: MSPAINT.EXE
-
---
-
-*`threat.indicator.file.pe.product`*::
-+
---
-Internal product name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft® Windows® Operating System
-
---
-
-*`threat.indicator.file.size`*::
-+
---
-File size in bytes.
-Only relevant when `file.type` is "file".
-
-type: long
-
-example: 16384
-
---
-
-*`threat.indicator.file.target_path`*::
-+
---
-Target path for symlinks.
-
-type: keyword
-
---
-
-*`threat.indicator.file.target_path.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.indicator.file.type`*::
-+
---
-File type (file, dir, or symlink).
-
-type: keyword
-
-example: file
-
---
-
-*`threat.indicator.file.uid`*::
-+
---
-The user ID (UID) or security identifier (SID) of the file owner.
-
-type: keyword
-
-example: 1001
-
---
-
-*`threat.indicator.first_seen`*::
-+
---
-The date and time when intelligence source first reported sighting this indicator.
-
-type: date
-
-example: 2020-11-05T17:25:47.000Z
-
---
-
-*`threat.indicator.geo.city_name`*::
-+
---
-City name.
-
-type: keyword
-
-example: Montreal
-
---
-
-*`threat.indicator.geo.continent_code`*::
-+
---
-Two-letter code representing continent's name.
-
-type: keyword
-
-example: NA
-
---
-
-*`threat.indicator.geo.continent_name`*::
-+
---
-Name of the continent.
-
-type: keyword
-
-example: North America
-
---
-
-*`threat.indicator.geo.country_iso_code`*::
-+
---
-Country ISO code.
-
-type: keyword
-
-example: CA
-
---
-
-*`threat.indicator.geo.country_name`*::
-+
---
-Country name.
-
-type: keyword
-
-example: Canada
-
---
-
-*`threat.indicator.geo.location`*::
-+
---
-Longitude and latitude.
-
-type: geo_point
-
-example: { "lon": -73.614830, "lat": 45.505918 }
-
---
-
-*`threat.indicator.geo.name`*::
-+
---
-User-defined description of a location, at the level of granularity they care about.
-Could be the name of their data centers, the floor number, if this describes a local physical entity, city names.
-Not typically used in automated geolocation.
-
-type: keyword
-
-example: boston-dc
-
---
-
-*`threat.indicator.geo.postal_code`*::
-+
---
-Postal code associated with the location.
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country.
-
-type: keyword
-
-example: 94040
-
---
-
-*`threat.indicator.geo.region_iso_code`*::
-+
---
-Region ISO code.
-
-type: keyword
-
-example: CA-QC
-
---
-
-*`threat.indicator.geo.region_name`*::
-+
---
-Region name.
-
-type: keyword
-
-example: Quebec
-
---
-
-*`threat.indicator.geo.timezone`*::
-+
---
-The time zone of the location, such as IANA time zone name.
-
-type: keyword
-
-example: America/Argentina/Buenos_Aires
-
---
-
-*`threat.indicator.ip`*::
-+
---
-Identifies a threat indicator as an IP address (irrespective of direction).
-
-type: ip
-
-example: 1.2.3.4
-
---
-
-*`threat.indicator.last_seen`*::
-+
---
-The date and time when intelligence source last reported sighting this indicator.
-
-type: date
-
-example: 2020-11-05T17:25:47.000Z
-
---
-
-*`threat.indicator.marking.tlp`*::
-+
---
-Traffic Light Protocol sharing markings.
-Recommended values are:
- * WHITE
- * GREEN
- * AMBER
- * RED
-
-type: keyword
-
-example: WHITE
-
---
-
-*`threat.indicator.modified_at`*::
-+
---
-The date and time when intelligence source last modified information for this indicator.
-
-type: date
-
-example: 2020-11-05T17:25:47.000Z
-
---
-
-*`threat.indicator.port`*::
-+
---
-Identifies a threat indicator as a port number (irrespective of direction).
-
-type: long
-
-example: 443
-
---
-
-*`threat.indicator.provider`*::
-+
---
-The name of the indicator's provider.
-
-type: keyword
-
-example: lrz_urlhaus
-
---
-
-*`threat.indicator.reference`*::
-+
---
-Reference URL linking to additional information about this indicator.
-
-type: keyword
-
-example: https://system.example.com/indicator/0001234
-
---
-
-*`threat.indicator.registry.data.bytes`*::
-+
---
-Original bytes written with base64 encoding.
-For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values.
-
-type: keyword
-
-example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA=
-
---
-
-*`threat.indicator.registry.data.strings`*::
-+
---
-Content when writing string types.
-Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g `"1"`).
-
-type: wildcard
-
-example: ["C:\rta\red_ttp\bin\myapp.exe"]
-
---
-
-*`threat.indicator.registry.data.type`*::
-+
---
-Standard registry type for encoding contents
-
-type: keyword
-
-example: REG_SZ
-
---
-
-*`threat.indicator.registry.hive`*::
-+
---
-Abbreviated name for the hive.
-
-type: keyword
-
-example: HKLM
-
---
-
-*`threat.indicator.registry.key`*::
-+
---
-Hive-relative path of keys.
-
-type: keyword
-
-example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe
-
---
-
-*`threat.indicator.registry.path`*::
-+
---
-Full path, including hive, key and value
-
-type: keyword
-
-example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger
-
---
-
-*`threat.indicator.registry.value`*::
-+
---
-Name of the value written.
-
-type: keyword
-
-example: Debugger
-
---
-
-*`threat.indicator.scanner_stats`*::
-+
---
-Count of AV/EDR vendors that successfully detected malicious file or URL.
-
-type: long
-
-example: 4
-
---
-
-*`threat.indicator.sightings`*::
-+
---
-Number of times this indicator was observed conducting threat activity.
-
-type: long
-
-example: 20
-
---
-
-*`threat.indicator.type`*::
-+
---
-Type of indicator as represented by Cyber Observable in STIX 2.0.
-Recommended values:
- * autonomous-system
- * artifact
- * directory
- * domain-name
- * email-addr
- * file
- * ipv4-addr
- * ipv6-addr
- * mac-addr
- * mutex
- * port
- * process
- * software
- * url
- * user-account
- * windows-registry-key
- * x509-certificate
-
-type: keyword
-
-example: ipv4-addr
-
---
-
-*`threat.indicator.url.domain`*::
-+
---
-Domain of the url, such as "www.elastic.co".
-In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field.
-If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field.
-
-type: keyword
-
-example: www.elastic.co
-
---
-
-*`threat.indicator.url.extension`*::
-+
---
-The field contains the file extension from the original request url, excluding the leading dot.
-The file extension is only set if it exists, as not every url has a file extension.
-The leading period must not be included. For example, the value must be "png", not ".png".
-Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
-
-type: keyword
-
-example: png
-
---
-
-*`threat.indicator.url.fragment`*::
-+
---
-Portion of the url after the `#`, such as "top".
-The `#` is not part of the fragment.
-
-type: keyword
-
---
-
-*`threat.indicator.url.full`*::
-+
---
-If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source.
-
-type: wildcard
-
-example: https://www.elastic.co:443/search?q=elasticsearch#top
-
---
-
-*`threat.indicator.url.full.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.indicator.url.original`*::
-+
---
-Unmodified original url as seen in the event source.
-Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path.
-This field is meant to represent the URL as it was observed, complete or not.
-
-type: wildcard
-
-example: https://www.elastic.co:443/search?q=elasticsearch#top or /search?q=elasticsearch
-
---
-
-*`threat.indicator.url.original.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.indicator.url.password`*::
-+
---
-Password of the request.
-
-type: keyword
-
---
-
-*`threat.indicator.url.path`*::
-+
---
-Path of the request, such as "/search".
-
-type: wildcard
-
---
-
-*`threat.indicator.url.port`*::
-+
---
-Port of the request, such as 443.
-
-type: long
-
-example: 443
-
-format: string
-
---
-
-*`threat.indicator.url.query`*::
-+
---
-The query field describes the query string of the request, such as "q=elasticsearch".
-The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases.
-
-type: keyword
-
---
-
-*`threat.indicator.url.registered_domain`*::
-+
---
-The highest registered url domain, stripped of the subdomain.
-For example, the registered domain for "foo.example.com" is "example.com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk".
-
-type: keyword
-
-example: example.com
-
---
-
-*`threat.indicator.url.scheme`*::
-+
---
-Scheme of the request, such as "https".
-Note: The `:` is not part of the scheme.
-
-type: keyword
-
-example: https
-
---
-
-*`threat.indicator.url.subdomain`*::
-+
---
-The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain.
-For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
-
-type: keyword
-
-example: east
-
---
-
-*`threat.indicator.url.top_level_domain`*::
-+
---
-The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
-
-type: keyword
-
-example: co.uk
-
---
-
-*`threat.indicator.url.username`*::
-+
---
-Username of the request.
-
-type: keyword
-
---
-
-*`threat.indicator.x509.alternative_names`*::
-+
---
-List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
-
-type: keyword
-
-example: *.elastic.co
-
---
-
-*`threat.indicator.x509.issuer.common_name`*::
-+
---
-List of common name (CN) of issuing certificate authority.
-
-type: keyword
-
-example: Example SHA2 High Assurance Server CA
-
---
-
-*`threat.indicator.x509.issuer.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`threat.indicator.x509.issuer.distinguished_name`*::
-+
---
-Distinguished name (DN) of issuing certificate authority.
-
-type: keyword
-
-example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
-
---
-
-*`threat.indicator.x509.issuer.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: Mountain View
-
---
-
-*`threat.indicator.x509.issuer.organization`*::
-+
---
-List of organizations (O) of issuing certificate authority.
-
-type: keyword
-
-example: Example Inc
-
---
-
-*`threat.indicator.x509.issuer.organizational_unit`*::
-+
---
-List of organizational units (OU) of issuing certificate authority.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`threat.indicator.x509.issuer.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`threat.indicator.x509.not_after`*::
-+
---
-Time at which the certificate is no longer considered valid.
-
-type: date
-
-example: 2020-07-16 03:15:39+00:00
-
---
-
-*`threat.indicator.x509.not_before`*::
-+
---
-Time at which the certificate is first considered valid.
-
-type: date
-
-example: 2019-08-16 01:40:25+00:00
-
---
-
-*`threat.indicator.x509.public_key_algorithm`*::
-+
---
-Algorithm used to generate the public key.
-
-type: keyword
-
-example: RSA
-
---
-
-*`threat.indicator.x509.public_key_curve`*::
-+
---
-The curve used by the elliptic curve public key algorithm. This is algorithm specific.
-
-type: keyword
-
-example: nistp521
-
---
-
-*`threat.indicator.x509.public_key_exponent`*::
-+
---
-Exponent used to derive the public key. This is algorithm specific.
-
-type: long
-
-example: 65537
-
-Field is not indexed.
-
---
-
-*`threat.indicator.x509.public_key_size`*::
-+
---
-The size of the public key space in bits.
-
-type: long
-
-example: 2048
-
---
-
-*`threat.indicator.x509.serial_number`*::
-+
---
-Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters.
-
-type: keyword
-
-example: 55FBB9C7DEBF09809D12CCAA
-
---
-
-*`threat.indicator.x509.signature_algorithm`*::
-+
---
-Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353.
-
-type: keyword
-
-example: SHA256-RSA
-
---
-
-*`threat.indicator.x509.subject.common_name`*::
-+
---
-List of common names (CN) of subject.
-
-type: keyword
-
-example: shared.global.example.net
-
---
-
-*`threat.indicator.x509.subject.country`*::
-+
---
-List of country (C) code
-
-type: keyword
-
-example: US
-
---
-
-*`threat.indicator.x509.subject.distinguished_name`*::
-+
---
-Distinguished name (DN) of the certificate subject entity.
-
-type: keyword
-
-example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
-
---
-
-*`threat.indicator.x509.subject.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: San Francisco
-
---
-
-*`threat.indicator.x509.subject.organization`*::
-+
---
-List of organizations (O) of subject.
-
-type: keyword
-
-example: Example, Inc.
-
---
-
-*`threat.indicator.x509.subject.organizational_unit`*::
-+
---
-List of organizational units (OU) of subject.
-
-type: keyword
-
---
-
-*`threat.indicator.x509.subject.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`threat.indicator.x509.version_number`*::
-+
---
-Version of x509 format.
-
-type: keyword
-
-example: 3
-
---
-
-*`threat.software.alias`*::
-+
---
-The alias(es) of the software for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® associated software description.
-
-type: keyword
-
-example: [ "X-Agent" ]
-
---
-
-*`threat.software.id`*::
-+
---
-The id of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®.
-While not required, you can use a MITRE ATT&CK® software id. - -type: keyword - -example: S0552 - --- - -*`threat.software.name`*:: -+ --- -The name of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. -While not required, you can use a MITRE ATT&CK® software name. - -type: keyword - -example: AdFind - --- - -*`threat.software.platforms`*:: -+ --- -The platforms of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. -Recommended Values: - * AWS - * Azure - * Azure AD - * GCP - * Linux - * macOS - * Network - * Office 365 - * SaaS - * Windows - -While not required, you can use a MITRE ATT&CK® software platforms. - -type: keyword - -example: [ "Windows" ] - --- - -*`threat.software.reference`*:: -+ --- -The reference URL of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. -While not required, you can use a MITRE ATT&CK® software reference URL. - -type: keyword - -example: https://attack.mitre.org/software/S0552/ - --- - -*`threat.software.type`*:: -+ --- -The type of software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®. -Recommended values - * Malware - * Tool - - While not required, you can use a MITRE ATT&CK® software type. - -type: keyword - -example: Tool - --- - -*`threat.tactic.id`*:: -+ --- -The id of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example. (ex. https://attack.mitre.org/tactics/TA0002/ ) - -type: keyword - -example: TA0002 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`threat.tactic.name`*:: -+ --- -Name of the type of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example. (ex. https://attack.mitre.org/tactics/TA0002/) - -type: keyword - -example: Execution - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`threat.tactic.reference`*:: -+ --- -The reference url of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example. (ex. 
https://attack.mitre.org/tactics/TA0002/ ) - -type: keyword - -example: https://attack.mitre.org/tactics/TA0002/ - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`threat.technique.id`*:: -+ --- -The id of technique used by this threat. You can use a MITRE ATT&CK® technique, for example. (ex. https://attack.mitre.org/techniques/T1059/) - -type: keyword - -example: T1059 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`threat.technique.name`*:: -+ --- -The name of technique used by this threat. You can use a MITRE ATT&CK® technique, for example. (ex. https://attack.mitre.org/techniques/T1059/) - -type: keyword - -example: Command and Scripting Interpreter - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`threat.technique.name.text`*:: -+ --- -type: match_only_text - --- - -*`threat.technique.reference`*:: -+ --- -The reference url of technique used by this threat. You can use a MITRE ATT&CK® technique, for example. (ex. https://attack.mitre.org/techniques/T1059/) - -type: keyword - -example: https://attack.mitre.org/techniques/T1059/ - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`threat.technique.subtechnique.id`*:: -+ --- -The full id of subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example. (ex. https://attack.mitre.org/techniques/T1059/001/) - -type: keyword - -example: T1059.001 - --- - -*`threat.technique.subtechnique.name`*:: -+ --- -The name of subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example. (ex. https://attack.mitre.org/techniques/T1059/001/) - -type: keyword - -example: PowerShell - --- - -*`threat.technique.subtechnique.name.text`*:: -+ --- -type: match_only_text - --- - -*`threat.technique.subtechnique.reference`*:: -+ --- -The reference url of subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example. (ex. 
https://attack.mitre.org/techniques/T1059/001/) - -type: keyword - -example: https://attack.mitre.org/techniques/T1059/001/ - --- - -[float] -=== tls - -Fields related to a TLS connection. These fields focus on the TLS protocol itself and intentionally avoids in-depth analysis of the related x.509 certificate files. - - -*`tls.cipher`*:: -+ --- -String indicating the cipher used during the current connection. - -type: keyword - -example: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.certificate`*:: -+ --- -PEM-encoded stand-alone certificate offered by the client. This is usually mutually-exclusive of `client.certificate_chain` since this value also exists in that list. - -type: keyword - -example: MII... - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.certificate_chain`*:: -+ --- -Array of PEM-encoded certificates that make up the certificate chain offered by the client. This is usually mutually-exclusive of `client.certificate` since that value should be the first certificate in the chain. - -type: keyword - -example: ["MII...", "MII..."] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.hash.md5`*:: -+ --- -Certificate fingerprint using the MD5 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 0F76C7F2C55BFD7D8E8B8F4BFBF0C9EC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.hash.sha1`*:: -+ --- -Certificate fingerprint using the SHA1 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 9E393D93138888D288266C2D915214D1D1CCEB2A - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.hash.sha256`*:: -+ --- -Certificate fingerprint using the SHA256 digest of DER-encoded version of certificate offered by the client. 
For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 0687F666A054EF17A08E2F2162EAB4CBC0D265E1D7875BE74BF3C712CA92DAF0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.issuer`*:: -+ --- -Distinguished name of subject of the issuer of the x.509 certificate presented by the client. - -type: keyword - -example: CN=Example Root CA, OU=Infrastructure Team, DC=example, DC=com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.ja3`*:: -+ --- -A hash that identifies clients based on how they perform an SSL/TLS handshake. - -type: keyword - -example: d4e5b18d6b55c71272893221c96ba240 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.not_after`*:: -+ --- -Date/Time indicating when client certificate is no longer considered valid. - -type: date - -example: 2021-01-01T00:00:00.000Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.not_before`*:: -+ --- -Date/Time indicating when client certificate is first considered valid. - -type: date - -example: 1970-01-01T00:00:00.000Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.server_name`*:: -+ --- -Also called an SNI, this tells the server which hostname to which the client is attempting to connect to. When this value is available, it should get copied to `destination.domain`. - -type: keyword - -example: www.elastic.co - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.subject`*:: -+ --- -Distinguished name of subject of the x.509 certificate presented by the client. - -type: keyword - -example: CN=myclient, OU=Documentation Team, DC=example, DC=com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.supported_ciphers`*:: -+ --- -Array of ciphers offered by the client during the client hello. - -type: keyword - -example: ["TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "..."] - -{yes-icon} {ecs-ref}[ECS] field. 
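The `tls.client.hash.*` fields above are digests of the DER-encoded certificate, rendered as uppercase hex. A minimal sketch with the standard library (the input bytes here are a placeholder, not a real certificate):

```python
import hashlib

def cert_fingerprints(der_bytes: bytes) -> dict:
    """Compute tls.*.hash.md5/sha1/sha256-style fingerprints: digests of
    the DER-encoded certificate, uppercase hex as described above."""
    return {
        "md5": hashlib.md5(der_bytes).hexdigest().upper(),
        "sha1": hashlib.sha1(der_bytes).hexdigest().upper(),
        "sha256": hashlib.sha256(der_bytes).hexdigest().upper(),
    }

# Illustrative stand-in for real DER bytes
fp = cert_fingerprints(b"\x30\x82\x01\x0a")
print(fp["sha256"])
```

In practice the DER bytes would come from the observed handshake (or from decoding the PEM in `tls.client.certificate`).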
- --- - -*`tls.client.x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.common_name`*:: -+ --- -List of common name (CN) of issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority. - -type: keyword - -example: www.example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`tls.client.x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -{yes-icon} {ecs-ref}[ECS] field. - -Field is not indexed. - --- - -*`tls.client.x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.signature_algorithm`*:: -+ --- -Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.subject.common_name`*:: -+ --- -List of common names (CN) of subject. - -type: keyword - -example: shared.global.example.net - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.subject.country`*:: -+ --- -List of country (C) code - -type: keyword - -example: US - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`tls.client.x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.subject.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: San Francisco - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.subject.organization`*:: -+ --- -List of organizations (O) of subject. - -type: keyword - -example: Example, Inc. - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of subject. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.version_number`*:: -+ --- -Version of x509 format. - -type: keyword - -example: 3 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.curve`*:: -+ --- -String indicating the curve used for the given cipher, when applicable. - -type: keyword - -example: secp256r1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.established`*:: -+ --- -Boolean flag indicating if the TLS negotiation was successful and transitioned to an encrypted tunnel. - -type: boolean - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.next_protocol`*:: -+ --- -String indicating the protocol being tunneled. Per the values in the IANA registry (https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids), this string should be lower case. - -type: keyword - -example: http/1.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.resumed`*:: -+ --- -Boolean flag indicating if this TLS connection was resumed from an existing TLS negotiation. 
- -type: boolean - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.certificate`*:: -+ --- -PEM-encoded stand-alone certificate offered by the server. This is usually mutually-exclusive of `server.certificate_chain` since this value also exists in that list. - -type: keyword - -example: MII... - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.certificate_chain`*:: -+ --- -Array of PEM-encoded certificates that make up the certificate chain offered by the server. This is usually mutually-exclusive of `server.certificate` since that value should be the first certificate in the chain. - -type: keyword - -example: ["MII...", "MII..."] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.hash.md5`*:: -+ --- -Certificate fingerprint using the MD5 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 0F76C7F2C55BFD7D8E8B8F4BFBF0C9EC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.hash.sha1`*:: -+ --- -Certificate fingerprint using the SHA1 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 9E393D93138888D288266C2D915214D1D1CCEB2A - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.hash.sha256`*:: -+ --- -Certificate fingerprint using the SHA256 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 0687F666A054EF17A08E2F2162EAB4CBC0D265E1D7875BE74BF3C712CA92DAF0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.issuer`*:: -+ --- -Subject of the issuer of the x.509 certificate presented by the server. - -type: keyword - -example: CN=Example Root CA, OU=Infrastructure Team, DC=example, DC=com - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`tls.server.ja3s`*:: -+ --- -A hash that identifies servers based on how they perform an SSL/TLS handshake. - -type: keyword - -example: 394441ab65754e2207b1e1b457b3641d - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.not_after`*:: -+ --- -Timestamp indicating when server certificate is no longer considered valid. - -type: date - -example: 2021-01-01T00:00:00.000Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.not_before`*:: -+ --- -Timestamp indicating when server certificate is first considered valid. - -type: date - -example: 1970-01-01T00:00:00.000Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.subject`*:: -+ --- -Subject of the x.509 certificate presented by the server. - -type: keyword - -example: CN=www.example.com, OU=Infrastructure Team, DC=example, DC=com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.issuer.common_name`*:: -+ --- -List of common name (CN) of issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`tls.server.x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority. - -type: keyword - -example: www.example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -{yes-icon} {ecs-ref}[ECS] field. - -Field is not indexed. - --- - -*`tls.server.x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. 
- -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.signature_algorithm`*:: -+ --- -Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.common_name`*:: -+ --- -List of common names (CN) of subject. - -type: keyword - -example: shared.global.example.net - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.country`*:: -+ --- -List of country (C) code - -type: keyword - -example: US - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: San Francisco - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.organization`*:: -+ --- -List of organizations (O) of subject. - -type: keyword - -example: Example, Inc. - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of subject. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.version_number`*:: -+ --- -Version of x509 format. - -type: keyword - -example: 3 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.version`*:: -+ --- -Numeric part of the version parsed from the original string. 
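Splitting a negotiated-protocol string into the numeric `tls.version` and the lowercase `tls.version_protocol` can be sketched as below. The `"TLSv1.2"` input shape is an assumption; real capture sources vary:

```python
import re

def split_tls_version(original: str) -> dict:
    """Split a protocol string such as "TLSv1.2" into the lowercase
    protocol name (tls.version_protocol) and the numeric part
    (tls.version) described above."""
    m = re.match(r"([A-Za-z]+?)v?([0-9.]+)$", original)
    if not m:
        return {}
    return {"version_protocol": m.group(1).lower(), "version": m.group(2)}

print(split_tls_version("TLSv1.2"))
# {'version_protocol': 'tls', 'version': '1.2'}
```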
- -type: keyword - -example: 1.2 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.version_protocol`*:: -+ --- -Normalized lowercase protocol name parsed from original string. - -type: keyword - -example: tls - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`span.id`*:: -+ --- -Unique identifier of the span within the scope of its trace. -A span represents an operation within a transaction, such as a request to another service, or a database query. - -type: keyword - -example: 3ff9a8981b7ccd5a - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`trace.id`*:: -+ --- -Unique identifier of the trace. -A trace groups multiple events like transactions that belong together. For example, a user request handled by multiple inter-connected services. - -type: keyword - -example: 4bf92f3577b34da6a3ce929d0e0e4736 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`transaction.id`*:: -+ --- -Unique identifier of the transaction within the scope of its trace. -A transaction is the highest level of work measured within a service, such as a request to a server. - -type: keyword - -example: 00f067aa0ba902b7 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== url - -URL fields provide support for complete or partial URLs, and supports the breaking down into scheme, domain, path, and so on. - - -*`url.domain`*:: -+ --- -Domain of the url, such as "www.elastic.co". -In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field. -If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field. - -type: keyword - -example: www.elastic.co - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.extension`*:: -+ --- -The field contains the file extension from the original request url, excluding the leading dot. -The file extension is only set if it exists, as not every url has a file extension. 
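The `trace.id` and span/transaction id examples above are the W3C Trace Context specification's sample values; in an HTTP service they typically arrive in a `traceparent` header (`version-traceid-parentid-flags`). A minimal parse, assuming a well-formed header:

```python
def parse_traceparent(header: str) -> dict:
    """Extract the trace id and the caller's span/transaction id from a
    W3C traceparent header. No validation is done here; a real parser
    should check field lengths and reject the all-zero ids."""
    version, trace_id, parent_id, flags = header.split("-")
    return {"trace.id": trace_id, "parent.id": parent_id}

print(parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"))
```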
-The leading period must not be included. For example, the value must be "png", not ".png". -Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). - -type: keyword - -example: png - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.fragment`*:: -+ --- -Portion of the url after the `#`, such as "top". -The `#` is not part of the fragment. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.full`*:: -+ --- -If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source. - -type: wildcard - -example: https://www.elastic.co:443/search?q=elasticsearch#top - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.full.text`*:: -+ --- -type: match_only_text - --- - -*`url.original`*:: -+ --- -Unmodified original url as seen in the event source. -Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path. -This field is meant to represent the URL as it was observed, complete or not. - -type: wildcard - -example: https://www.elastic.co:443/search?q=elasticsearch#top or /search?q=elasticsearch - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.original.text`*:: -+ --- -type: match_only_text - --- - -*`url.password`*:: -+ --- -Password of the request. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.path`*:: -+ --- -Path of the request, such as "/search". - -type: wildcard - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.port`*:: -+ --- -Port of the request, such as 443. - -type: long - -example: 443 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.query`*:: -+ --- -The query field describes the query string of the request, such as "q=elasticsearch". -The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. 
If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.registered_domain`*:: -+ --- -The highest registered url domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.scheme`*:: -+ --- -Scheme of the request, such as "https". -Note: The `:` is not part of the scheme. - -type: keyword - -example: https - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`url.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - -{yes-icon} {ecs-ref}[ECS] field. 
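The decomposition rules above (scheme without `:`, query without `?`, fragment without `#`, extension without the leading dot) map directly onto the standard library's URL splitter. This sketch covers only stdlib-derivable fields; `url.registered_domain` and `url.top_level_domain` need a public-suffix list (e.g. the `tldextract` package) and are deliberately omitted:

```python
from urllib.parse import urlsplit

def url_to_ecs(original: str) -> dict:
    """Decompose a full URL into url.* components per the rules above."""
    parts = urlsplit(original)
    fields = {
        "url.original": original,
        "url.scheme": parts.scheme,      # no trailing ':'
        "url.domain": parts.hostname,
        "url.path": parts.path,
        "url.query": parts.query,        # no leading '?'
        "url.fragment": parts.fragment,  # no leading '#'
    }
    if parts.port is not None:
        fields["url.port"] = parts.port
    # extension: last suffix of the final path segment ("gz", not "tar.gz")
    last_segment = parts.path.rsplit("/", 1)[-1]
    if "." in last_segment:
        fields["url.extension"] = last_segment.rsplit(".", 1)[-1]
    return fields

print(url_to_ecs("https://www.elastic.co:443/search?q=elasticsearch#top"))
```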
- --- - -*`url.username`*:: -+ --- -Username of the request. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== user - -The user fields describe information about the user that is relevant to the event. -Fields can have one entry or multiple entries. If a user has more than one id, provide an array that includes all of them. - - -*`user.changes.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.changes.email`*:: -+ --- -User email address. - -type: keyword - --- - -*`user.changes.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - --- - -*`user.changes.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`user.changes.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.changes.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - --- - -*`user.changes.group.name`*:: -+ --- -Name of the group. - -type: keyword - --- - -*`user.changes.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - --- - -*`user.changes.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - --- - -*`user.changes.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - --- - -*`user.changes.name.text`*:: -+ --- -type: match_only_text - --- - -*`user.changes.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - --- - -*`user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. 
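The `user.changes.hash` field above exists so a user can be correlated without exposing a confidential `user.id` or `user.name`. One common approach is a keyed digest; the salted-SHA-256 scheme below is an assumption for illustration, not a format ECS prescribes:

```python
import hashlib

def user_hash(user_id: str, salt: str = "deployment-specific-salt") -> str:
    """Stable, anonymized correlation value for user.*.hash: the same
    user always maps to the same digest, and the raw id is not stored.
    The salt must be kept secret, or the hash can be brute-forced."""
    return hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()

print(user_hash("S-1-5-21-202424912787-2692429404-2351956786-1000"))
```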
- -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.effective.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.effective.email`*:: -+ --- -User email address. - -type: keyword - --- - -*`user.effective.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - --- - -*`user.effective.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`user.effective.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.effective.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - --- - -*`user.effective.group.name`*:: -+ --- -Name of the group. - -type: keyword - --- - -*`user.effective.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - --- - -*`user.effective.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - --- - -*`user.effective.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - --- - -*`user.effective.name.text`*:: -+ --- -type: match_only_text - --- - -*`user.effective.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - --- - -*`user.email`*:: -+ --- -User email address. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`user.group.domain`*:: -+ --- -Name of the directory the group is a member of. 
-For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.group.name`*:: -+ --- -Name of the group. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.name.text`*:: -+ --- -type: match_only_text - --- - -*`user.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.target.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.target.email`*:: -+ --- -User email address. - -type: keyword - --- - -*`user.target.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - --- - -*`user.target.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`user.target.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.target.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - --- - -*`user.target.group.name`*:: -+ --- -Name of the group. 
- -type: keyword - --- - -*`user.target.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - --- - -*`user.target.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - --- - -*`user.target.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - --- - -*`user.target.name.text`*:: -+ --- -type: match_only_text - --- - -*`user.target.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - --- - -[float] -=== user_agent - -The user_agent fields normally come from a browser request. -They often show up in web service logs coming from the parsed user agent string. - - -*`user_agent.device.name`*:: -+ --- -Name of the device. - -type: keyword - -example: iPhone - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.name`*:: -+ --- -Name of the user agent. - -type: keyword - -example: Safari - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original`*:: -+ --- -Unparsed user_agent string. - -type: keyword - -example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original.text`*:: -+ --- -type: match_only_text - --- - -*`user_agent.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.full`*:: -+ --- -Operating system name, including the version or code name. - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`user_agent.os.full.text`*:: -+ --- -type: match_only_text - --- - -*`user_agent.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.name`*:: -+ --- -Operating system name, without the version. - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.name.text`*:: -+ --- -type: match_only_text - --- - -*`user_agent.os.platform`*:: -+ --- -Operating system platform (such centos, ubuntu, windows). - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.type`*:: -+ --- -Use the `os.type` field to categorize the operating system into one of the broad commercial families. -One of these following values should be used (lowercase): linux, macos, unix, windows. -If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. - -type: keyword - -example: macos - --- - -*`user_agent.os.version`*:: -+ --- -Operating system version as a raw string. - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.version`*:: -+ --- -Version of the user agent. - -type: keyword - -example: 12.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== vlan - -The VLAN fields are used to identify 802.1q tag(s) of a packet, as well as ingress and egress VLAN associations of an observer in relation to a specific packet or connection. -Network.vlan fields are used to record a single VLAN tag, or the outer tag in the case of q-in-q encapsulations, for a packet or connection as observed, typically provided by a network sensor (e.g. Zeek, Wireshark) passively reporting on traffic. -Network.inner VLAN fields are used to report inner q-in-q 802.1q tags (multiple 802.1q encapsulations) as observed, typically provided by a network sensor (e.g. 
Zeek, Wireshark) passively reporting on traffic. Network.inner VLAN fields should only be used in addition to network.vlan fields to indicate q-in-q tagging. -Observer.ingress and observer.egress VLAN values are used to record observer specific information when observer events contain discrete ingress and egress VLAN information, typically provided by firewalls, routers, or load balancers. - - -*`vlan.id`*:: -+ --- -VLAN ID as reported by the observer. - -type: keyword - -example: 10 - --- - -*`vlan.name`*:: -+ --- -Optional VLAN name as reported by the observer. - -type: keyword - -example: outside - --- - -[float] -=== vulnerability - -The vulnerability fields describe information about a vulnerability that is relevant to an event. - - -*`vulnerability.category`*:: -+ --- -The type of system or architecture that the vulnerability affects. These may be platform-specific (for example, Debian or SUSE) or general (for example, Database or Firewall). For example (https://qualysguard.qualys.com/qwebhelp/fo_portal/knowledgebase/vulnerability_categories.htm[Qualys vulnerability categories]) -This field must be an array. - -type: keyword - -example: ["Firewall"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.classification`*:: -+ --- -The classification of the vulnerability scoring system. For example (https://www.first.org/cvss/) - -type: keyword - -example: CVSS - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.description`*:: -+ --- -The description of the vulnerability that provides additional context of the vulnerability. For example (https://cve.mitre.org/about/faqs.html#cve_entry_descriptions_created[Common Vulnerabilities and Exposure CVE description]) - -type: keyword - -example: In macOS before 2.12.6, there is a vulnerability in the RPC... - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`vulnerability.description.text`*:: -+ --- -type: match_only_text - --- - -*`vulnerability.enumeration`*:: -+ --- -The type of identifier used for this vulnerability. For example (https://cve.mitre.org/about/) - -type: keyword - -example: CVE - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.id`*:: -+ --- -The identification (ID) is the number portion of a vulnerability entry. It includes a unique identification number for the vulnerability. For example (https://cve.mitre.org/about/faqs.html#what_is_cve_id)[Common Vulnerabilities and Exposure CVE ID] - -type: keyword - -example: CVE-2019-00001 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.reference`*:: -+ --- -A resource that provides additional information, context, and mitigations for the identified vulnerability. - -type: keyword - -example: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6111 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.report_id`*:: -+ --- -The report or scan identification number. - -type: keyword - -example: 20191018.0001 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.scanner.vendor`*:: -+ --- -The name of the vulnerability scanner vendor. - -type: keyword - -example: Tenable - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.score.base`*:: -+ --- -Scores can range from 0.0 to 10.0, with 10.0 being the most severe. -Base scores cover an assessment for exploitability metrics (attack vector, complexity, privileges, and user interaction), impact metrics (confidentiality, integrity, and availability), and scope. For example (https://www.first.org/cvss/specification-document) - -type: float - -example: 5.5 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.score.environmental`*:: -+ --- -Scores can range from 0.0 to 10.0, with 10.0 being the most severe. -Environmental scores cover an assessment for any modified Base metrics, confidentiality, integrity, and availability requirements. 
For example (https://www.first.org/cvss/specification-document) - -type: float - -example: 5.5 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.score.temporal`*:: -+ --- -Scores can range from 0.0 to 10.0, with 10.0 being the most severe. -Temporal scores cover an assessment for code maturity, remediation level, and confidence. For example (https://www.first.org/cvss/specification-document) - -type: float - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.score.version`*:: -+ --- -The National Vulnerability Database (NVD) provides qualitative severity rankings of "Low", "Medium", and "High" for CVSS v2.0 base score ranges in addition to the severity ratings for CVSS v3.0 as they are defined in the CVSS v3.0 specification. -CVSS is owned and managed by FIRST.Org, Inc. (FIRST), a US-based non-profit organization, whose mission is to help computer security incident response teams across the world. For example (https://nvd.nist.gov/vuln-metrics/cvss) - -type: keyword - -example: 2.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.severity`*:: -+ --- -The severity of the vulnerability can help with metrics and internal prioritization regarding remediation. For example (https://nvd.nist.gov/vuln-metrics/cvss) - -type: keyword - -example: Critical - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== x509 - -This implements the common core fields for x509 certificates. This information is likely logged with TLS sessions, digital signatures found in executable binaries, S/MIME information in email bodies, or analysis of files on disk. -When the certificate relates to a file, use the fields at `file.x509`. When hashes of the DER-encoded certificate are available, the `hash` data set should be populated as well (e.g. `file.hash.sha256`). -Events that contain certificate information about network connections, should use the x509 fields under the relevant TLS fields: `tls.server.x509` and/or `tls.client.x509`. 
- - -*`x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - --- - -*`x509.issuer.common_name`*:: -+ --- -List of common name (CN) of issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - --- - -*`x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - --- - -*`x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - --- - -*`x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - --- - -*`x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - --- - -*`x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority. - -type: keyword - -example: www.example.com - --- - -*`x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - --- - -*`x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - --- - -*`x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - --- - -*`x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. 
- -type: keyword - -example: nistp521 - --- - -*`x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -Field is not indexed. - --- - -*`x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - --- - -*`x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - --- - -*`x509.signature_algorithm`*:: -+ --- -Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - --- - -*`x509.subject.common_name`*:: -+ --- -List of common names (CN) of subject. - -type: keyword - -example: shared.global.example.net - --- - -*`x509.subject.country`*:: -+ --- -List of country (C) code - -type: keyword - -example: US - --- - -*`x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - --- - -*`x509.subject.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: San Francisco - --- - -*`x509.subject.organization`*:: -+ --- -List of organizations (O) of subject. - -type: keyword - -example: Example, Inc. - --- - -*`x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of subject. - -type: keyword - --- - -*`x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`x509.version_number`*:: -+ --- -Version of x509 format. 
- -type: keyword - -example: 3 - --- - -[[exported-fields-host-processor]] -== Host fields - -Info collected for the host machine. - - - - -*`host.containerized`*:: -+ --- -If the host is a container. - - -type: boolean - --- - -*`host.os.build`*:: -+ --- -OS build information. - - -type: keyword - -example: 18D109 - --- - -*`host.os.codename`*:: -+ --- -OS codename, if any. - - -type: keyword - -example: stretch - --- - -[[exported-fields-kubernetes-processor]] -== Kubernetes fields - -Kubernetes metadata added by the kubernetes processor - - - - -*`kubernetes.pod.name`*:: -+ --- -Kubernetes pod name - - -type: keyword - --- - -*`kubernetes.pod.uid`*:: -+ --- -Kubernetes Pod UID - - -type: keyword - --- - -*`kubernetes.pod.ip`*:: -+ --- -Kubernetes Pod IP - - -type: ip - --- - - -*`kubernetes.namespace.name`*:: -+ --- -Kubernetes namespace name - - -type: keyword - --- - -*`kubernetes.namespace.uuid`*:: -+ --- -Kubernetes namespace uuid - - -type: keyword - --- - -*`kubernetes.namespace.labels.*`*:: -+ --- -Kubernetes namespace labels map - - -type: object - --- - -*`kubernetes.namespace.annotations.*`*:: -+ --- -Kubernetes namespace annotations map - - -type: object - --- - -*`kubernetes.node.name`*:: -+ --- -Kubernetes node name - - -type: keyword - --- - -*`kubernetes.node.hostname`*:: -+ --- -Kubernetes hostname as reported by the node’s kernel - - -type: keyword - --- - -*`kubernetes.labels.*`*:: -+ --- -Kubernetes labels map - - -type: object - --- - -*`kubernetes.annotations.*`*:: -+ --- -Kubernetes annotations map - - -type: object - --- - -*`kubernetes.selectors.*`*:: -+ --- -Kubernetes selectors map - - -type: object - --- - -*`kubernetes.replicaset.name`*:: -+ --- -Kubernetes replicaset name - - -type: keyword - --- - -*`kubernetes.deployment.name`*:: -+ --- -Kubernetes deployment name - - -type: keyword - --- - -*`kubernetes.statefulset.name`*:: -+ --- -Kubernetes statefulset name - - -type: keyword - --- - -*`kubernetes.container.name`*:: -+ --- 
-Kubernetes container name (different than the name from the runtime) - - -type: keyword - --- - -[[exported-fields-process]] -== Process fields - -Process metadata fields - - - - -*`process.exe`*:: -+ --- -type: alias - -alias to: process.executable - --- - -[float] -=== owner - -Process owner information. - - -*`process.owner.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - --- - -*`process.owner.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: albert - --- - -*`process.owner.name.text`*:: -+ --- -type: text - --- - -[[exported-fields-system]] -== System Metrics fields - -System status metrics, like CPU and memory usage, that are collected from the operating system. - - - -[float] -=== system - -`system` contains local system metrics. - - - -[float] -=== cpu - -`cpu` contains local CPU stats. - - - -*`system.cpu.total.norm.pct`*:: -+ --- -The percentage of CPU time spent by the process since the last event. This value is normalized by the number of CPU cores and it ranges from 0 to 100%. - - -type: scaled_float - -format: percent - --- - -[float] -=== memory - -`memory` contains local memory stats. - - - -*`system.memory.total`*:: -+ --- -Total memory. - - -type: long - -format: bytes - --- - -[float] -=== actual - -Actual memory used and free. - - - -*`system.memory.actual.free`*:: -+ --- -Actual free memory in bytes. It is calculated based on the OS. On Linux it consists of the free memory plus caches and buffers. On OSX it is a sum of free memory and the inactive memory. On Windows, it is equal to `system.memory.free`. - - -type: long - -format: bytes - --- - -[float] -=== process - -`process` contains process metadata, CPU metrics, and memory metrics. - - - -[float] -=== cpu - -`cpu` contains local CPU stats. - - - -*`system.process.cpu.total.norm.pct`*:: -+ --- -The percentage of CPU time spent by the process since the last event. This value is normalized by the number of CPU cores and it ranges from 0 to 100%. 
- - -type: scaled_float - -format: percent - --- - -[float] -=== memory - -Memory-specific statistics per process. - - -*`system.process.memory.size`*:: -+ --- -The total virtual memory the process has. - - -type: long - -format: bytes - --- - -*`system.process.memory.rss.bytes`*:: -+ --- -The Resident Set Size. The amount of memory the process occupied in main memory (RAM). - - -type: long - -format: bytes - --- - -[float] -=== cgroup - -Metrics and limits for the cgroup, collected by APM agents on Linux. - - -[float] -=== cpu - -CPU-specific cgroup metrics and limits. - - -*`system.process.cgroup.cpu.id`*:: -+ --- -ID for the current cgroup CPU. - -type: keyword - --- - -[float] -=== cfs - -Completely Fair Scheduler (CFS) cgroup metrics. - - -*`system.process.cgroup.cpu.cfs.period.us`*:: -+ --- -CFS period in microseconds. - -type: long - --- - -*`system.process.cgroup.cpu.cfs.quota.us`*:: -+ --- -CFS quota in microseconds. - -type: long - --- - -*`system.process.cgroup.cpu.stats.periods`*:: -+ --- -Number of periods seen by the CPU. - -type: long - --- - -*`system.process.cgroup.cpu.stats.throttled.periods`*:: -+ --- -Number of throttled periods seen by the CPU. - -type: long - --- - -*`system.process.cgroup.cpu.stats.throttled.ns`*:: -+ --- -Nanoseconds spent throttled seen by the CPU. - -type: long - --- - -[float] -=== cpuacct - -CPU Accounting-specific cgroup metrics and limits. - - -*`system.process.cgroup.cpuacct.id`*:: -+ --- -ID for the current cgroup CPU. - -type: keyword - --- - -*`system.process.cgroup.cpuacct.total.ns`*:: -+ --- -Total CPU time for the current cgroup CPU in nanoseconds. - -type: long - --- - -[float] -=== memory - -Memory-specific cgroup metrics and limits. - - -*`system.process.cgroup.memory.mem.limit.bytes`*:: -+ --- -Memory limit for the current cgroup slice. - -type: long - -format: bytes - --- - -*`system.process.cgroup.memory.mem.usage.bytes`*:: -+ --- -Memory usage by the current cgroup slice. 
- -type: long - -format: bytes - --- diff --git a/docs/legacy/getting-started-apm-server.asciidoc b/docs/legacy/getting-started-apm-server.asciidoc index c4c84a55e08..06a7f3d1471 100644 --- a/docs/legacy/getting-started-apm-server.asciidoc +++ b/docs/legacy/getting-started-apm-server.asciidoc @@ -1,36 +1,155 @@ [[getting-started-apm-server]] -== Getting started with APM Server +== Self manage APM Server ++++ -Get started +Self manage APM Server ++++ -IMPORTANT: {deprecation-notice-installation} +TIP: The easiest way to get started with Elastic APM is by using our +{ess-product}[hosted {es} Service] on {ecloud}. +The {es} Service is available on AWS, GCP, and Azure. +See <> to get started in minutes. +// TODO: MOVE THIS IMPORTANT: Starting in version 8.0.0, {fleet} uses the APM integration to set up and manage APM index templates, {ilm-init} policies, and ingest pipelines. APM Server will only send data to {es} _after_ the APM integration has been installed. -The easiest way to get started with Elastic APM is by using our -{ess-product}[hosted {es} Service] on {ecloud}. -The {es} Service is available on AWS, GCP, and Azure, -and automatically configures APM Server to work with {es} and {kib}. +The APM Server receives performance data from your APM agents, +validates and processes it, and then transforms the data into {es} documents. +If you're on this page, then you've chosen to self-manage the Elastic Stack, +and you now must decide how to run and configure the APM Server. +There are two options, and the components required are different for each: + +* **<>** +* **<>** +// * **<>** + +[float] +[[setup-apm-server-binary]] +=== APM Server binary + +Install, configure, and run the APM Server binary wherever you need it. 
+ +image::./images/bin-ov.png[APM Server binary overview] + +**Pros**: + +- Simplest self-managed option +- No additional component knowledge required +- YAML configuration simplifies automation + +**Supported outputs**: + +- {es} +- {ess} +- {ls} +- Kafka +- Redis +- File +- Console + +**Required components**: + +- APM agents +- APM Server +- {stack} + +**Configuration method**: YAML + +[float] +[[setup-fleet-managed-apm]] +=== Fleet-managed APM Server + +Fleet is a web-based UI in {kib} that is used to centrally manage {agent}s. +In this deployment model, use {agent} to spin up APM Server instances that can be centrally managed in a custom-curated user interface. + +NOTE: Fleet-managed APM Server does not have full feature parity with the APM Server binary method of running Elastic APM. + +image::./images/fm-ov.png[APM Server fleet overview] + +// (outputs, stable APIs) +// not the best option for a simple test setup or if only interested in centrally running APM Server + +**Pros**: + +- Conveniently manage one, some, or many different integrations from one central {fleet} UI. + +**Supported outputs**: + +- {es} +- {ess} + +**Required components**: + +- APM agents +- APM Server +- {agent} +- Fleet Server +- {stack} + +**Configuration method**: {kib} UI + +// [float] +// [[setup-apm-server-ea]] +// === Standalone Elastic Agent-managed APM Server +// // I really don't know how to sell this option +// Instead of installing and configuring the APM Server binary, let {agent} orchestrate it for you. +// Install {agent} and manually configure the agent locally on the system where it's installed. +// You are responsible for managing and upgrading {agent}. This approach is recommended for advanced users only.
+ +// **Pros**: + +// - Easily add integrations for other data sources +// useful if EA already in place for other integrations, and customers want to customize setup rather than using Fleet for configuration +// // TODO: +// // maybe get some more hints on this one from the EA team to align with highlighting the same pros & cons. + +// **Available on Elastic Cloud**: ❌ + +// This supports all of the same outputs as binary +// see https://github.com/elastic/apm-server/issues/10467 +// **Supported outputs**: + +// **Configuration method**: YAML + +// image::./images/ea-ov.png[APM Server ea overview] + +// @simitt's notes for how to include EA-managed in the decision tree: +// **** +// If we generally describe Standalone Elastic Agent managed APM Server then we should also add it to this diagram: +// Do you want to use other integrations? +// -> yes: Would you like to use the comfort of Fleet UI based management? -> yes: Fleet managed APM Server; -> no: Standalone Elastic Agent managed APM Server +// -> no: What is your prefered way of configuration? -> yaml: APM Server binary; -> Kibana UI: Fleet managed APM Server +// **** + +// Components required: + +// [options="header"] +// |==== +// | Installation method | APM Server | Elastic Agent | Fleet Server +// | APM Server binary | ✔️ | | +// // | Standalone Elastic Agent-managed APM Server | ✔️ | ✔️ | +// | Fleet-managed APM Server | ✔️ | ✔️ | ✔️ +// |==== [float] -=== Hosted {es} Service +=== Help me decide + +Use the decision tree below to help determine which method of configuring and running the APM Server is best for your use case. -Skip managing your own {es}, {kib}, and APM Server by using our -{ess-product}[hosted {es} Service] on -{ecloud}. 
+[subs=attributes+] +include::../diagrams/apm-decision-tree.asciidoc[APM Server decision tree] -image::images/apm-architecture-cloud.png[Install Elastic APM on cloud] -{ess-trial}[Try out the {es} Service for free], -then see {cloud}/ec-manage-apm-settings.html[Add APM user settings] for information on how to configure Elastic APM. +=== APM Server binary + +This guide will explain how to set up and configure the APM Server binary. [float] -=== Install and manage the stack yourself +==== Prerequisites -If you'd rather install the stack yourself, first see the https://www.elastic.co/support/matrix[Elastic Support Matrix] for information about supported operating systems and product compatibility. +// tag::prereq[] +First, see the https://www.elastic.co/support/matrix[Elastic Support Matrix] for information about supported operating systems and product compatibility. You'll need: @@ -38,27 +157,18 @@ You'll need: * *{kib}* for visualizing with the APM UI. We recommend you use the same version of {es}, {kib}, and APM Server. - -image::images/apm-architecture-diy.png[Install Elastic APM yourself] - See {stack-ref}/installing-elastic-stack.html[Installing the {stack}] for more information about installing these products. -After installing the {stack}, read the following topics to learn how to install, -configure, and start APM Server: +// end::prereq[] -* <> -* <> -* <> -* <> +image::images/apm-architecture-diy.png[Install Elastic APM yourself] // ******************************************************* // STEP 1 // ******************************************************* [[installing]] -=== Step 1: Install - -IMPORTANT: {deprecation-notice-installation} +==== Step 1: Install NOTE: *Before you begin*: If you haven't installed the {stack}, do that now. See {stack-ref}/installing-elastic-stack.html[Learn how to install the @@ -193,18 +303,16 @@ See <> for deploying Docker containers. 
// ******************************************************* [[apm-server-configuration]] -=== Step 2: Set up and configure - -IMPORTANT: {deprecation-notice-installation} +==== Step 2: Set up and configure [float] -==== {ecloud} +===== {ecloud} If you're running APM in Elastic cloud, see {cloud}/ec-manage-apm-settings.html[Add APM user settings] for information on how to configure Elastic APM. [float] -==== Self installation +===== Self installation // This content is reused in the upgrading guide // tag::why-apm-integration[] @@ -213,7 +321,7 @@ Starting in version 8.0.0, {fleet} uses the APM integration to set up and manage // end::why-apm-integration[] [float] -===== Install the APM integration +====== Install the APM integration // This content is reused in the upgrading guide // tag::install-apm-integration[] @@ -230,7 +338,7 @@ See {fleet-guide}/air-gapped.html[Air-gapped environments] for more information. // end::install-apm-integration[] [float] -===== Configure APM +====== Configure APM Configure APM by editing the `apm-server.yml` configuration file. The location of this file varies by platform--see the <> for help locating it. @@ -244,27 +352,23 @@ apm-server: output.elasticsearch: hosts: ["localhost:9200"] <2> username: "elastic" <3> - password: "changeme" <4> + password: "changeme" ---- <1> The `host:port` APM Server listens on. <2> The {es} `host:port` to connect to. <3> This example uses basic authentication. The user provided here needs the privileges required to publish events to {es}. To create a dedicated user for this role, see <>. -<4> We've hard-coded the password here, -but you should store sensitive values in the <>. All available configuration options are outlined in -{apm-server-ref-v}/configuring-howto-apm-server.html[configuring APM Server]. +{apm-guide-ref}/configuring-howto-apm-server.html[configuring APM Server]. 
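The configuration example above hard-codes the {es} password in `apm-server.yml`. As a sketch only, a sensitive value can instead be stored in the APM Server secret keystore (created with `apm-server keystore create` and populated with `apm-server keystore add <key>`) and referenced from the config file; the key name `ES_PWD` below is an arbitrary example, not a predefined setting:

```yaml
# Hypothetical apm-server.yml fragment. ES_PWD is an example key name that
# must first be added to the secret keystore, e.g.:
#   apm-server keystore create
#   apm-server keystore add ES_PWD
apm-server:
  host: "localhost:8200"

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "${ES_PWD}"  # resolved from the keystore at startup
```

Keystore references use the same `${VAR}` syntax as environment-variable expansion, so the rest of the configuration is unchanged.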
// ******************************************************* // STEP 3 // ******************************************************* [[apm-server-starting]] -=== Step 3: Start - -IMPORTANT: {deprecation-notice-installation} +==== Step 3: Start In a production environment, you would put APM Server on its own machines, similar to how you run {es}. @@ -293,7 +397,7 @@ You can change the defaults in `apm-server.yml` or by supplying a different addr [float] [[running-deb-rpm]] -==== Debian Package / RPM +===== Debian Package / RPM For Debian package and RPM installations, we recommend the `apm-server` process runs as a non-root user. Therefore, these installation methods create an `apm-server` user which you can use to start the process. @@ -308,16 +412,14 @@ sudo -u apm-server apm-server [] ---------------------------------- By default, APM Server loads its configuration file from `/etc/apm-server/apm-server.yml`. -See the <<_deb_and_rpm,deb & rpm default paths>> for a full directory layout. +See the <> for a full directory layout. // ******************************************************* // STEP 4 // ******************************************************* [[next-steps]] -=== Step 4: Next steps - -IMPORTANT: {deprecation-notice-installation} +==== Step 4: Next steps // Use a tagged region to pull APM Agent information from the APM Overview If you haven't already, you can now install APM Agents in your services! @@ -335,10 +437,68 @@ If you haven't already, you can now install APM Agents in your services! Once you have at least one {apm-agent} sending data to APM Server, you can start visualizing your data in the {kibana-ref}/xpack-apm.html[{apm-app}]. -If you're migrating from Jaeger, see <>. +If you're migrating from Jaeger, see <>. 
// Shared APM & YUM include::{libbeat-dir}/repositories.asciidoc[] // Shared docker include::{libbeat-dir}/shared-docker.asciidoc[] + + +=== Fleet-managed APM Server + +This guide will explain how to set up and configure a Fleet-managed APM Server. + +[float] +==== Prerequisites + +You need {es} for storing and searching your data, and {kib} for visualizing and managing it. +When setting these components up, you need: + +include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/prereq.asciidoc[tag=self-managed] + +==== Step 1: Set up Fleet + +Use {fleet} in {kib} to get APM data into the {stack}. +The first time you use {fleet}, you'll need to set it up and add a +{fleet-server}: + +include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/add-fleet-server/content.asciidoc[tag=self-managed] + +For more information, refer to {fleet-guide}/fleet-server.html[{fleet-server}]. + +==== Step 2: Add and configure the APM integration + +include::{obs-repo-dir}/observability/tab-widgets/add-apm-integration/content.asciidoc[tag=self-managed] + +==== Step 3: Install APM agents + +APM agents are written in the same language as your service. +To monitor a new service, you must install the agent and configure it with a service name, +APM Server host, and secret token. + +* **Service name**: The APM integration maps an instrumented service's name (defined in each {apm-agent}'s configuration) +to the {es} index where its data is stored. +Service names are case-insensitive and must be unique. +For example, you cannot have a service named `Foo` and another named `foo`. +Special characters will be removed from service names and replaced with underscores (`_`). + +* **APM Server URL**: The host and port that APM Server listens for events on. +This should match the host and port defined when setting up the APM integration. + +* **Secret token**: Authentication method for {apm-agent} and APM Server communication.
+This should match the secret token defined when setting up the APM integration. + +TIP: You can edit your APM integration settings if you need to change the APM Server URL +or secret token to match your APM agents. + +include::./tab-widgets/install-agents-widget.asciidoc[] + +==== Step 4: View your data + +Back in {kib}, under {observability}, select APM. +You should see application performance monitoring data flowing into the {stack}! + +[role="screenshot"] +image::./guide/images/kibana-apm-sample-data.png[{apm-app} with data] diff --git a/docs/legacy/guide/apm-breaking-changes.asciidoc b/docs/legacy/guide/apm-breaking-changes.asciidoc deleted file mode 100644 index 269eb5e722c..00000000000 --- a/docs/legacy/guide/apm-breaking-changes.asciidoc +++ /dev/null @@ -1,285 +0,0 @@ -:issue: https://github.com/elastic/apm-server/issues/ -:pull: https://github.com/elastic/apm-server/pull/ - -[[apm-breaking-changes]] -== Breaking changes - -This section discusses the changes that you need to be aware of when migrating your application from one version of APM to another. - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -Also see {observability-guide}/whats-new.html[What's new in {observability} {minor-version}]. 
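The service-name rules described in Step 3 of the Fleet-managed guide above (case-insensitive uniqueness, special characters replaced with underscores) can be sketched roughly as follows. The allowed character set used here is an assumption for illustration; the docs only state that special characters are replaced with `_`:

```python
import re

def normalize_service_name(name: str) -> str:
    # Replace characters outside an assumed allowed set with underscores.
    # The exact set the APM integration permits is an assumption here.
    return re.sub(r"[^a-zA-Z0-9 _-]", "_", name)

def same_service(a: str, b: str) -> bool:
    # Service names are case-insensitive: `Foo` and `foo` would collide.
    return normalize_service_name(a).lower() == normalize_service_name(b).lower()

print(normalize_service_name("my/service.v2"))  # my_service_v2
print(same_service("Foo", "foo"))               # True
```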
- -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -// tag::notable-v8-breaking-changes[] -// end::notable-v8-breaking-changes[] -// tag::716-bc[] -// end::716-bc[] - -// tag::715-bc[] -[[breaking-7.15.0]] -=== 7.15.0 APM Breaking changes - -The following breaking changes were introduced in 7.15: - -- `network.connection_type` is now `network.connection.type` {pull}5671[5671] -- `transaction.page` and `error.page` no longer recorded {pull}5872[5872] -- experimental:["This breaking change applies to the experimental tail-based sampling feature."] `apm-server.sampling.tail` now requires `apm-server.data_streams.enabled` {pull}5952[5952] -- beta:["This breaking change applies to the beta APM integration."] The `traces-sampled-*` data stream is now `traces-apm.sampled-*` {pull}5952[5952] - -// end::715-bc[] - -[[breaking-7.14.0]] -=== 7.14.0 APM Breaking changes - -// tag::714-bc[] -No breaking changes. -// end::714-bc[] - -[[breaking-7.13.0]] -=== 7.13.0 APM Breaking changes - -// tag::713-bc[] -No breaking changes. -// end::713-bc[] - -[[breaking-7.12.0]] -=== 7.12.0 APM Breaking changes - -// tag::712-bc[] -There are three breaking changes to be aware of; -these changes only impact users ingesting data with -{apm-server-ref-v}/jaeger.html[Jaeger clients]. - -* Leading `0s` are no longer removed from Jaeger client trace/span ids. -+ --- -This change ensures distributed tracing continues to work across platforms by creating -consistent, full trace/span IDs from Jaeger clients, Elastic APM agents, -and OpenTelemetry SDKs. --- - -* Jaeger spans will now have a type of "app" where they previously were "custom". -+ --- -If the Jaeger span type is not inferred, it will now be "app". -This aligns with the OpenTelemetry Collector exporter -and improves the functionality of the _time spent by span type_ charts in the {apm-app}. --- - -* Jaeger spans may now have a more accurate outcome of "unknown". 
-+ --- -Previously, a "success" outcome was assumed when a span didn't fail. -The new default assigns "unknown", and only sets an outcome of "success" or "failure" when -the outcome is explicitly known. -This change aligns with Elastic APM agents and the OpenTelemetry Collector exporter. --- -// end::712-bc[] - -[[breaking-7.11.0]] -=== 7.11.0 APM Breaking changes - -// tag::notable-breaking-changes[] -No breaking changes. -// end::notable-breaking-changes[] - -[[breaking-7.10.0]] -=== 7.10.0 APM Breaking changes - -// tag::notable-breaking-changes[] -No breaking changes. -// end::notable-breaking-changes[] - -[[breaking-7.9.0]] -=== 7.9.0 APM Breaking changes - -// tag::notable-v79-breaking-changes[] -No breaking changes. -// end::notable-v79-breaking-changes[] - -[[breaking-7.8.0]] -=== 7.8.0 APM Breaking changes - -// tag::notable-v78-breaking-changes[] -No breaking changes. -// end::notable-v78-breaking-changes[] - -[[breaking-7.7.0]] -=== 7.7.0 APM Breaking changes - -// tag::notable-v77-breaking-changes[] -There are no breaking changes in APM Server. -However, a previously hardcoded feature is now configurable. -Failing to follow these {apm-guide-7x}/upgrading-to-77.html[upgrade steps] will result in increased span metadata ingestion when upgrading to version 7.7. -// end::notable-v77-breaking-changes[] - -[[breaking-7.6.0]] -=== 7.6.0 APM Breaking changes - -// tag::notable-v76-breaking-changes[] -No breaking changes. -// end::notable-v76-breaking-changes[] - -[[breaking-7.5.0]] -=== 7.5.0 APM Breaking changes - -// tag::notable-v75-breaking-changes[] - -APM Server:: -+ -* Introduced dedicated `apm-server.ilm.setup.*` flags. -This means you can now customize {ilm-init} behavior from within the APM Server configuration. -As a side effect, `setup.template.*` settings will be ignored for {ilm-init} related templates per event type. -See {apm-server-ref}/ilm.html[set up {ilm-init}] for more information. 
-+ -* By default, {ilm-init} policies will no longer be versioned. -All event types will switch to the new default policy: rollover after 30 days or when reaching a size of 50 GB. -See {apm-server-ref}/ilm.html[default policy] for more information. - -APM:: -+ -* To make use of all the new features introduced in 7.5, -you must ensure you are using version 7.5+ of APM Server and version 7.5+ of {kib}. - -// end::notable-v75-breaking-changes[] - -[[breaking-7.4.0]] -=== 7.4.0 APM Breaking changes - -// tag::notable-v74-breaking-changes[] -No breaking changes. -// end::notable-v74-breaking-changes[] - -[[breaking-7.3.0]] -=== 7.3.0 APM Breaking changes - -No breaking changes. - -[[breaking-7.2.0]] -=== 7.2.0 APM Breaking changes - -No breaking changes. - -[[breaking-7.1.0]] -=== 7.1.0 APM Breaking changes - -No breaking changes. - -[[breaking-7.0.0]] -=== 7.0.0 APM Breaking changes - -APM Server:: -+ -[[breaking-remove-v1]] -**Removed deprecated Intake v1 API endpoints.** Before upgrading APM Server, -ensure all APM agents are upgraded to a version that supports APM Server ≥ 6.5. -View the {apm-overview-ref-v}/agent-server-compatibility.html[agent/server compatibility matrix] -to determine if your agent versions are compatible. -+ -[[breaking-ecs]] -**Moved fields in {es} to be compliant with the Elastic Common Schema (ECS).** -APM has aligned with the field names defined in the -https://github.com/elastic/ecs[Elastic Common Schema (ECS)]. -Utilizing this common schema will allow for easier data correlation within {es}. -+ -See the ECS field changes table for full details on which fields have changed. - -APM UI:: -+ -[[breaking-new-endpoints]] -**Moved to new data endpoints.** -When you upgrade to 7.x, -data in indices created prior to 7.0 will not automatically appear in the APM UI. -We offer a {kib} Migration Assistant (in the {kib} Management section) to help you migrate your data. -The migration assistant will re-index your older data in the new ECS format.
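The ECS migration described above is, at its core, a field-rename pass over existing documents. A minimal sketch with a purely hypothetical two-entry mapping (the real mapping is the full ECS field changes table that follows):

```python
# Hypothetical pre-7.0 -> ECS renames, for illustration only; the real
# mapping is the full "Elastic Common Schema field changes" table.
FIELD_MAP = {
    "context.service.name": "service.name",
    "context.user.id": "user.id",
}

def migrate(doc: dict) -> dict:
    """Rename old (flattened) field names to their ECS equivalents."""
    return {FIELD_MAP.get(field, field): value for field, value in doc.items()}

print(migrate({"context.service.name": "opbeans", "untouched.field": 1}))
```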
- -[float] -[[ecs-compliance]] -==== Elastic Common Schema field changes - -include::../field-name-changes.asciidoc[] - -[[breaking-6.8.0]] -=== 6.8.0 APM Breaking changes - -No breaking changes. - -[[breaking-6.7.0]] -=== 6.7.0 APM Breaking changes - -No breaking changes. - -[[breaking-6.6.0]] -=== 6.6.0 APM Breaking changes - -No breaking changes. - -[[breaking-6.5.0]] -=== 6.5.0 APM Breaking changes - -No breaking changes. - -[[breaking-6.4.0]] -=== 6.4.0 APM Breaking changes - -We previously split APM data into separate indices (transaction, span, error, etc.). -In 6.4, the APM {kib} UI started to leverage those separate indices for queries. - -If you only update {kib} but run an older version of APM Server, you will not be able to see any APM data by default. -To fix this, use the {kibana-ref}/apm-settings-kb.html[{kib} APM settings] to specify the location of the APM index: -["source","sh"] ------------------------------------------------------------- -apm_oss.errorIndices: apm-* -apm_oss.spanIndices: apm-* -apm_oss.transactionIndices: apm-* -apm_oss.onboardingIndices: apm-* ------------------------------------------------------------- - -If you are upgrading APM Server from an older version, you might need to refresh your APM index pattern for certain APM UI features to work.
-Also make sure to add the new config options to `apm-server.yml` if you keep your existing configuration file: -["source","sh"] ------------------------------------------------------------- -output.elasticsearch: - indices: - - index: "apm-%{[observer.version]}-sourcemap" - when.contains: - processor.event: "sourcemap" - - index: "apm-%{[observer.version]}-error-%{+yyyy.MM.dd}" - when.contains: - processor.event: "error" - - index: "apm-%{[observer.version]}-transaction-%{+yyyy.MM.dd}" - when.contains: - processor.event: "transaction" - - index: "apm-%{[observer.version]}-span-%{+yyyy.MM.dd}" - when.contains: - processor.event: "span" - - index: "apm-%{[observer.version]}-metric-%{+yyyy.MM.dd}" - when.contains: - processor.event: "metric" - - index: "apm-%{[observer.version]}-onboarding-%{+yyyy.MM.dd}" - when.contains: - processor.event: "onboarding" ------------------------------------------------------------- diff --git a/docs/legacy/guide/apm-data-model.asciidoc b/docs/legacy/guide/apm-data-model.asciidoc deleted file mode 100644 index 5ce016a238a..00000000000 --- a/docs/legacy/guide/apm-data-model.asciidoc +++ /dev/null @@ -1,295 +0,0 @@ -[[apm-data-model]] -== Data Model - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -Elastic APM agents capture different types of information from within their instrumented applications. -These are known as events, and can be `spans`, `transactions`, `errors`, or `metrics`. - -* <> -* <> -* <> -* <> - -Events can contain additional <> which further enriches your data. - -[[transaction-spans]] -=== Spans - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -*Spans* contain information about the execution of a specific code path. -They measure from the start to the end of an activity, -and they can have a parent/child relationship with other spans.
- -Agents automatically instrument a variety of libraries to capture these spans from within your application, -but you can also use the Agent API for custom instrumentation of specific code paths. - -Among other things, spans can contain: - -* A `transaction.id` attribute that refers to its parent <>. -* A `parent.id` attribute that refers to its parent span or transaction. -* Its start time and duration. -* A `name`. -* A `type`, `subtype`, and `action`. -* An optional `stack trace`. Stack traces consist of stack frames, -which represent a function call on the call stack. -They include attributes like function name, file name and path, line number, etc. - -TIP: Most agents limit keyword fields, like `span.id`, to 1024 characters, -and non-keyword fields, like `span.start.us`, to 10,000 characters. - -Spans are stored in {apm-server-ref-v}/span-indices.html[span indices]. -This storage is separate from {apm-server-ref-v}/transaction-indices.html[transaction indices] by default. - -[float] -[[dropped-spans]] -==== Dropped spans - -For performance reasons, APM agents can choose to sample or omit spans purposefully. -This can be useful in preventing edge cases, like long-running transactions with over 100 spans, -that would otherwise overload both the Agent and the APM Server. -When this occurs, the {apm-app} will display the number of spans dropped. 
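The dropped-spans accounting described above can be sketched as a per-transaction counter. The class, method names, and limit below are illustrative only, not any agent's actual implementation:

```python
class Transaction:
    """Minimal sketch of agent-side span limiting; not a real agent API."""

    def __init__(self, max_spans: int = 500):  # 500 is an illustrative default
        self.max_spans = max_spans
        self.spans = []
        self.dropped = 0  # reported so the APM app can show a dropped count

    def start_span(self, name: str):
        if len(self.spans) >= self.max_spans:
            self.dropped += 1  # counted, but the span itself is not recorded
            return None
        self.spans.append(name)
        return name

tx = Transaction(max_spans=2)
for name in ["db.query", "http.request", "cache.get"]:
    tx.start_span(name)
print(tx.dropped)  # 1
```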
- -To configure the number of spans recorded per transaction, see the relevant Agent documentation: - -* Go: {apm-go-ref-v}/configuration.html#config-transaction-max-spans[`ELASTIC_APM_TRANSACTION_MAX_SPANS`] -* iOS: _Not yet supported_ -* Java: {apm-java-ref-v}/config-core.html#config-transaction-max-spans[`transaction_max_spans`] -* .NET: {apm-dotnet-ref-v}/config-core.html#config-transaction-max-spans[`TransactionMaxSpans`] -* Node.js: {apm-node-ref-v}/configuration.html#transaction-max-spans[`transactionMaxSpans`] -* PHP: {apm-php-ref-v}/configuration-reference.html#config-transaction-max-spans[`transaction_max_spans`] -* Python: {apm-py-ref-v}/configuration.html#config-transaction-max-spans[`transaction_max_spans`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-transaction-max-spans[`transaction_max_spans`] - -[float] -[[missing-spans]] -==== Missing spans - -Agents stream spans to the APM Server separately from their transactions. -Because of this, unforeseen errors may cause spans to go missing. -Agents know how many spans a transaction should have; -if the number of expected spans does not equal the number of spans received by the APM Server, -the {apm-app} will calculate the difference and display a message. - -[[transactions]] -=== Transactions - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -*Transactions* are a special kind of <> that have additional attributes associated with them. -They describe an event captured by an Elastic {apm-agent} instrumenting a service. -You can think of transactions as the highest level of work you’re measuring within a service. -As an example, a transaction might be a: - -* Request to your server -* Batch job -* Background job -* Custom transaction type - -Agents decide whether to sample transactions or not, -and provide settings to control sampling behavior. -If sampled, the <> of a transaction are sent and stored as separate documents. 
-Within one transaction there can be 0, 1, or many spans captured. - -A transaction contains: - -* The timestamp of the event -* A unique id, type, and name -* Data about the environment in which the event is recorded: -** Service - environment, framework, language, etc. -** Host - architecture, hostname, IP, etc. -** Process - args, PID, PPID, etc. -** URL - full, domain, port, query, etc. -** <> - (if supplied) email, ID, username, etc. -* Other relevant information depending on the agent. Example: The JavaScript RUM agent captures transaction marks, -which are points in time relative to the start of the transaction with some label. - -In addition, agents provide options for users to capture custom <>. -Metadata can be indexed - <>, or not-indexed - <>. - -Transactions are grouped by their `type` and `name` in the APM UI's -{kibana-ref}/transactions.html[Transaction overview]. -If you're using a supported framework, APM agents will automatically handle the naming for you. -If you're not, or if you wish to override the default, -all agents have API methods to manually set the `type` and `name`. - -* `type` should be a keyword of specific relevance in the service's domain, -e.g. `request`, `backgroundjob`, etc. -* `name` should be a generic designation of a transaction in the scope of a single service, -e.g. `GET /users/:id`, `UsersController#show`, etc. - -TIP: Most agents limit keyword fields (e.g. `labels`) to 1024 characters, -non-keyword fields (e.g. `span.db.statement`) to 10,000 characters. - -Transactions are stored in {apm-server-ref-v}/transaction-indices.html[transaction indices]. - -[[errors]] -=== Errors - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -An error event contains at least -information about the original `exception` that occurred -or about a `log` created when the exception occurred. -For simplicity, errors are represented by a unique ID. 
- -An Error contains: - -* Both the captured `exception` and the captured `log` of an error can contain a `stack trace`, -which is helpful for debugging. -* The `culprit` of an error indicates where it originated. -* An error might relate to the <> during which it happened, -via the `transaction.id`. -* Data about the environment in which the event is recorded: -** Service - environment, framework, language, etc. -** Host - architecture, hostname, IP, etc. -** Process - args, PID, PPID, etc. -** URL - full, domain, port, query, etc. -** <> - (if supplied) email, ID, username, etc. - -In addition, agents provide options for users to capture custom <>. -Metadata can be indexed - <>, or not-indexed - <>. - -TIP: Most agents limit keyword fields (e.g. `error.id`) to 1024 characters, -non-keyword fields (e.g. `error.exception.message`) to 10,000 characters. - -Errors are stored in {apm-server-ref-v}/error-indices.html[error indices]. - -[[metrics]] -=== Metrics - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -APM agents automatically pick up basic host-level metrics, -including system and process-level CPU and memory metrics. -Agent specific metrics are also available, -like {apm-java-ref-v}/metrics.html[JVM metrics] in the Java Agent, -and {apm-go-ref-v}/metrics.html[Go runtime] metrics in the Go Agent. - -Infrastructure and application metrics are important sources of information when debugging production systems, -which is why we've made it easy to filter metrics for specific hosts or containers in the {kib} {kibana-ref}/metrics.html[metrics overview]. - -Metrics have the `processor.event` property set to `metric`. - -TIP: Most agents limit keyword fields (e.g. `processor.event`) to 1024 characters, -non-keyword fields (e.g. `system.memory.total`) to 10,000 characters. - -Metrics are stored in {apm-server-ref-v}/metricset-indices.html[metric indices]. 
- -For a full list of tracked metrics, see the relevant agent documentation: - -* {apm-go-ref-v}/metrics.html[Go] -* {apm-java-ref-v}/metrics.html[Java] -* {apm-node-ref-v}/metrics.html[Node.js] -* {apm-py-ref-v}/metrics.html[Python] -* {apm-ruby-ref-v}/metrics.html[Ruby] - -// This heading is linked to from the APM UI section in Kibana -[[metadata]] -=== Metadata - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -Metadata can enrich your events and make application performance monitoring even more useful. -Let's explore the different types of metadata that Elastic APM offers. - -[float] -[[labels-fields]] -==== Labels - -Labels add *indexed* information to transactions, spans, and errors. -Indexed means the data is searchable and aggregatable in {es}. -Add additional key-value pairs to define multiple labels. - -* Indexed: Yes -* {es} type: {ref}/object.html[object] -* {es} field: `labels` -* Applies to: <> | <> | <> - -Label values can be a string, boolean, or number, although some agents only support string values at this time. -Because labels for a given key, regardless of agent used, are stored in the same place in {es}, -all label values of a given key must have the same data type. -Multiple data types per key will throw an exception, for example: `{foo: bar}` and `{foo: 42}` is not allowed. - -IMPORTANT: Avoid defining too many user-specified labels. -Defining too many unique fields in an index is a condition that can lead to a -{ref}/mapping.html#mapping-limit-settings[mapping explosion]. 
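Because all values for a given label key share a single {es} mapping, a pre-ingestion check like the following sketch can surface type conflicts early (purely illustrative; this helper is not part of any agent):

```python
def check_label_types(events):
    """Raise if two events use different value types for the same label key."""
    seen = {}  # label key -> first value type observed
    for event in events:
        for key, value in event.get("labels", {}).items():
            first = seen.setdefault(key, type(value))
            if type(value) is not first:
                raise TypeError(
                    f"label {key!r}: {type(value).__name__} conflicts with "
                    f"earlier {first.__name__}"
                )

# Consistent types per key: fine.
check_label_types([{"labels": {"foo": "bar"}}, {"labels": {"count": 42}}])
# check_label_types([{"labels": {"foo": "bar"}}, {"labels": {"foo": 42}}])
# would raise TypeError, mirroring the mapping conflict described above.
```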
- -[float] -===== Agent API reference - -* Go: {apm-go-ref-v}/api.html#context-set-label[`SetLabel`] -* Java: {apm-java-ref-v}/public-api.html#api-transaction-add-tag[`setLabel`] -* .NET: {apm-dotnet-ref-v}/public-api.html#api-transaction-tags[`Labels`] -* Node.js: {apm-node-ref-v}/agent-api.html#apm-set-label[`setLabel`] | {apm-node-ref-v}/agent-api.html#apm-add-labels[`addLabels`] -* PHP: {apm-php-ref}/public-api.html#api-transaction-interface-set-label[`Transaction` `setLabel`] | {apm-php-ref}/public-api.html#api-span-interface-set-label[`Span` `setLabel`] -* Python: {apm-py-ref-v}/api.html#api-label[`elasticapm.label()`] -* Ruby: {apm-ruby-ref-v}/api.html#api-agent-set-label[`set_label`] -* Rum: {apm-rum-ref-v}/agent-api.html#apm-add-labels[`addLabels`] - -[float] -[[custom-fields]] -==== Custom context - -Custom context adds *non-indexed* -custom contextual information to transactions and errors. -Non-indexed means the data is not searchable or aggregatable in {es}, -and you cannot build dashboards on top of the data. -This also means you don't have to worry about {ref}/mapping.html#mapping-limit-settings[mapping explosions], -as these fields are not added to the mapping. - -Non-indexed information is useful for providing contextual information to help you -quickly debug performance issues or errors. - -* Indexed: No -* {es} type: {ref}/object.html[object] -* {es} fields: `transaction.custom` | `error.custom` -* Applies to: <> | <> - -IMPORTANT: Setting a circular object, a large object, or a non-JSON-serializable object can lead to errors.
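The IMPORTANT note above warns that circular or non-serializable objects cause errors; a small guard like this sketch can catch them before the context is attached (illustrative only, not an agent API):

```python
import json

def safe_custom_context(context: dict) -> dict:
    """Return the context only if it is JSON-serializable."""
    try:
        json.dumps(context)
    except (TypeError, ValueError) as exc:  # circular refs raise ValueError
        raise ValueError(f"custom context is not JSON-serializable: {exc}")
    return context

safe_custom_context({"cart_items": 3, "coupon": "WELCOME"})  # passes
```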
- -[float] -===== Agent API reference - -* Go: {apm-go-ref-v}/api.html#context-set-custom[`SetCustom`] -* iOS: _coming soon_ -* Java: {apm-java-ref-v}/public-api.html#api-transaction-add-custom-context[`addCustomContext`] -* .NET: _coming soon_ -* Node.js: {apm-node-ref-v}/agent-api.html#apm-set-custom-context[`setCustomContext`] -* PHP: _coming soon_ -* Python: {apm-py-ref-v}/api.html#api-set-custom-context[`set_custom_context`] -* Ruby: {apm-ruby-ref-v}/api.html#api-agent-set-custom-context[`set_custom_context`] -* Rum: {apm-rum-ref-v}/agent-api.html#apm-set-custom-context[`setCustomContext`] - -[float] -[[user-fields]] -==== User context - -User context adds *indexed* user information to transactions and errors. -Indexed means the data is searchable and aggregatable in {es}. - -* Indexed: Yes -* {es} type: {ref}/keyword.html[keyword] -* {es} fields: `user.email` | `user.name` | `user.id` -* Applies to: <> | <> - -[float] -===== Agent API reference - -* Go: {apm-go-ref-v}/api.html#context-set-username[`SetUsername`] | {apm-go-ref-v}/api.html#context-set-user-id[`SetUserID`] | -{apm-go-ref-v}/api.html#context-set-user-email[`SetUserEmail`] -* iOS: _coming soon_ -* Java: {apm-java-ref-v}/public-api.html#api-transaction-set-user[`setUser`] -* .NET: _coming soon_ -* Node.js: {apm-node-ref-v}/agent-api.html#apm-set-user-context[`setUserContext`] -* PHP: _coming soon_ -* Python: {apm-py-ref-v}/api.html#api-set-user-context[`set_user_context`] -* Ruby: {apm-ruby-ref-v}/api.html#api-agent-set-user[`set_user`] -* Rum: {apm-rum-ref-v}/agent-api.html#apm-set-user-context[`setUserContext`] diff --git a/docs/legacy/guide/apm-doc-directory.asciidoc b/docs/legacy/guide/apm-doc-directory.asciidoc deleted file mode 100644 index a5390e97a36..00000000000 --- a/docs/legacy/guide/apm-doc-directory.asciidoc +++ /dev/null @@ -1,73 +0,0 @@ -[[components]] -== Components and documentation - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>.
- -Elastic APM consists of four components: *APM agents*, *APM Server*, *{es}*, and *{kib}*. - -image::./images/apm-architecture-cloud.png[Architecture of Elastic APM] - -[float] -=== APM Agents - -APM agents are open source libraries written in the same language as your service. -You may only need one, or you might use all of them. -You install them into your service as you would install any other library. -They instrument your code and collect performance data and errors at runtime. -This data is buffered for a short period and sent on to APM Server. - -Each agent has its own documentation: - -* {apm-go-ref-v}/introduction.html[Go agent] -* {apm-ios-ref-v}/intro.html[iOS agent] -* {apm-java-ref-v}/intro.html[Java agent] -* {apm-dotnet-ref-v}/intro.html[.NET agent] -* {apm-node-ref-v}/intro.html[Node.js agent] -* {apm-php-ref-v}/intro.html[PHP agent] -* {apm-py-ref-v}/getting-started.html[Python agent] -* {apm-ruby-ref-v}/introduction.html[Ruby agent] -* {apm-rum-ref-v}/intro.html[JavaScript Real User Monitoring (RUM) agent] - -[float] -=== APM Server - -APM Server is a free and open application that receives performance data from your APM agents. -It's a {apm-server-ref-v}/overview.html#why-separate-component[separate component by design], -which helps keep the agents light, prevents certain security risks, and improves compatibility across the {stack}. - -After the APM Server has validated and processed events from the APM agents, -the server transforms the data into {es} documents and stores them in corresponding -{apm-server-ref-v}/exploring-es-data.html[{es} indices]. -In a matter of seconds, you can start viewing your application performance data in the {kib} {apm-app}. - -The {apm-server-ref-v}/index.html[APM Server reference] provides everything you need when it comes to working with the server. 
-Here you can learn more about {apm-server-ref-v}/getting-started-apm-server.html[installation], -{apm-server-ref-v}/configuring-howto-apm-server.html[configuration], -{apm-server-ref-v}/securing-apm-server.html[security], -{apm-server-ref-v}/monitoring.html[monitoring], and more. - -[float] -=== {es} - -{ref}/index.html[{es}] is a highly scalable free and open full-text search and analytics engine. -It allows you to store, search, and analyze large volumes of data quickly and in near real time. -{es} is used to store APM performance metrics and make use of its aggregations. - -[float] -=== {kib} {apm-app} - -{kibana-ref}/index.html[{kib}] is a free and open analytics and visualization platform designed to work with {es}. -You use {kib} to search, view, and interact with data stored in {es}. - -Since application performance monitoring is all about visualizing data and detecting bottlenecks, -it's crucial you understand how to use the {kibana-ref}/xpack-apm.html[{apm-app}] in {kib}. -The following sections will help you get started: - -* {apm-app-ref}/apm-ui.html[Set up] -* {apm-app-ref}/apm-getting-started.html[Get started] -* {apm-app-ref}/apm-how-to.html[How-to guides] - -APM also has built-in integrations with {ml-cap}. To learn more about this feature, -or the {anomaly-detect} feature that's built on top of it, -refer to {kibana-ref}/machine-learning-integration.html[{ml-cap} integration]. diff --git a/docs/legacy/guide/cross-cluster-search.asciidoc b/docs/legacy/guide/cross-cluster-search.asciidoc deleted file mode 100644 index 0bc955be510..00000000000 --- a/docs/legacy/guide/cross-cluster-search.asciidoc +++ /dev/null @@ -1,49 +0,0 @@ -[[apm-cross-cluster-search]] -=== Cross-cluster search - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -Elastic APM utilizes {es}'s cross-cluster search functionality. 
-Cross-cluster search lets you run a single search request against one or more -{ref}/modules-remote-clusters.html[remote clusters] -- -making it easy to search APM data across multiple sources. -This means you can also have deployments per data type, making sizing and scaling more predictable, -and allowing for better performance while managing multiple observability use cases. - -[float] -[[set-up-ccs]] -==== Set up cross-cluster search - -*Step 1. Set up remote clusters.* - -If you're using the Hosted {es} Service, see {cloud}/ec-enable-ccs.html[Enable cross-cluster search]. - -// lint ignore elasticsearch -You can add remote clusters directly in {kib}, under *Management* > *Elasticsearch* > *Remote clusters*. -All you need is a name for the remote cluster and the seed node(s). -Remember the names of your remote clusters; you'll need them in step two. -See {ref}/ccr-getting-started.html[managing remote clusters] for detailed information on the setup process. - -Alternatively, you can {ref}/modules-remote-clusters.html#configuring-remote-clusters[configure remote clusters] -in {es}'s `elasticsearch.yml` file. - -*Step 2. Edit the default {apm-app} index pattern.* - -{apm-app} {data-sources} determine which clusters and indices to display data from. -{data-sources-cap} follow this convention: `:`. - -To display data from all remote clusters and the local cluster, -duplicate and prepend the defaults with `*:`. -For example, the default {data-source} for Error indices is `logs-apm*,apm*`. -To add all remote clusters, change this to `*:logs-apm*,*:apm*,logs-apm*,apm*`. - -You can also specify certain clusters to display data from, for example, -`cluster-one:logs-apm*,cluster-one:apm*,logs-apm*,apm*`. - -There are two ways to edit the default {data-source}: - -* In the {apm-app} -- Navigate to *APM* > *Settings* > *Indices*, and change all `xpack.apm.indices.*` values to -include remote clusters.
-* In `kibana.yml` -- Update the {kibana-ref}/apm-settings-kb.html[`xpack.apm.indices.*`] configuration values to -include remote clusters. diff --git a/docs/legacy/guide/data-security.asciidoc b/docs/legacy/guide/data-security.asciidoc deleted file mode 100644 index 784bcf9ba8f..00000000000 --- a/docs/legacy/guide/data-security.asciidoc +++ /dev/null @@ -1,461 +0,0 @@ -[[data-security]] -=== Data security - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -When setting up Elastic APM, it's essential to review all captured data carefully to ensure -it does not contain sensitive information. -When it does, we offer several different ways to filter, manipulate, or obfuscate this data. - -**Built-in data filters** - -Elastic APM provides built-in support for filtering the following types of data: - -[options="header"] -|==== -|Data type |Common sensitive data -|<> |Passwords, credit card numbers, authorization, etc. -|<> |Passwords, credit card numbers, etc. -|<> |Client IP address and user agent. -|<> |URLs visited, click events, user browser errors, resources used, etc. -|<> |Sensitive user or business information -|==== - -**Custom filters** - -There are two ways to filter other types of APM data: - -|==== -|<> | Applied at ingestion time. -All agents and fields are supported. Data leaves the instrumented service. -There are no performance overhead implications on the instrumented service. - -|<> | Not supported by all agents. -Data is sanitized before leaving the instrumented service. -Potential overhead implications on the instrumented service. -|==== - -[discrete] -[[built-in-filtering]] -=== Built-in data filtering - -Elastic APM provides built-in support for filtering or obfuscating the following types of data. - -[discrete] -[[filter-http-header]] -==== HTTP headers - -By default, APM agents capture HTTP request and response headers (including cookies).
-Most Elastic APM agents provide the ability to sanitize HTTP header fields, -including cookies and `application/x-www-form-urlencoded` data (POST form fields). -Query string and captured request bodies, like `application/json` data, are not sanitized. - -The default list of sanitized fields attempts to target common field names for data relating to -passwords, credit card numbers, authorization, etc., but can be customized to fit your data. -This sensitive data never leaves the instrumented service. - -This setting supports {kibana-ref}/agent-configuration.html[Central configuration], -which means the list of sanitized fields can be updated without needing to redeploy your services: - -* Go: {apm-go-ref-v}/configuration.html#config-sanitize-field-names[`ELASTIC_APM_SANITIZE_FIELD_NAMES`] -* Java: {apm-java-ref-v}/config-core.html#config-sanitize-field-names[`sanitize_field_names`] -* .NET: {apm-dotnet-ref-v}/config-core.html#config-sanitize-field-names[`sanitizeFieldNames`] -* Node.js: {apm-node-ref-v}/configuration.html#sanitize-field-names[`sanitizeFieldNames`] -// * PHP: {apm-php-ref-v}[``] -* Python: {apm-py-ref-v}/configuration.html#config-sanitize-field-names[`sanitize_field_names`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-sanitize-field-names[`sanitize_field_names`] - -Alternatively, you can completely disable the capturing of HTTP headers. 
-This setting also supports {kibana-ref}/agent-configuration.html[Central configuration]: - -* Go: {apm-go-ref-v}/configuration.html#config-capture-headers[`ELASTIC_APM_CAPTURE_HEADERS`] -* Java: {apm-java-ref-v}/config-core.html#config-sanitize-field-names[`capture_headers`] -* .NET: {apm-dotnet-ref-v}/config-http.html#config-capture-headers[`CaptureHeaders`] -* Node.js: {apm-node-ref-v}/configuration.html#capture-headers[`captureHeaders`] -// * PHP: {apm-php-ref-v}[``] -* Python: {apm-py-ref-v}/configuration.html#config-capture-headers[`capture_headers`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-capture-headers[`capture_headers`] - -[discrete] -[[filter-http-body]] -==== HTTP bodies - -By default, the body of HTTP requests is not recorded. -Request bodies often contain sensitive data like passwords or credit card numbers, -so use care when enabling this feature. - -This setting supports {kibana-ref}/agent-configuration.html[Central configuration], -which means the setting can be updated without needing to redeploy your services: - -* Go: {apm-go-ref-v}/configuration.html#config-capture-body[`ELASTIC_APM_CAPTURE_BODY`] -* Java: {apm-java-ref-v}/config-core.html#config-sanitize-field-names[`capture_body`] -* .NET: {apm-dotnet-ref-v}/config-http.html#config-capture-body[`CaptureBody`] -* Node.js: {apm-node-ref-v}/configuration.html#capture-body[`captureBody`] -// * PHP: {apm-php-ref-v}[``] -* Python: {apm-py-ref-v}/configuration.html#config-capture-body[`capture_body`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-capture-body[`capture_body`] - -[discrete] -[[filter-personal-data]] -==== Personal data - -By default, the APM Server captures some personal data associated with trace events: - -* `client.ip`: The client's IP address. Typically derived from the HTTP headers of incoming requests.
-`client.ip` is also used in conjunction with the {ref}/geoip-processor.html[`geoip` processor] to assign
-geographical information to trace events. To learn more about how `client.ip` is derived,
-see <>.
-* `user_agent`: User agent data, including the client operating system, device name, vendor, and version.
-
-The capturing of this data can be turned off by setting
-<<capture_personal_data,`capture_personal_data`>> to `false`.
-
-[discrete]
-[[filter-real-user-data]]
-==== Real user monitoring data
-
-Protecting user data is important.
-For that reason, individual RUM instrumentations can be disabled in the RUM agent with the
-{apm-rum-ref-v}/configuration.html#disable-instrumentations[`disableInstrumentations`] configuration variable.
-Disabled instrumentations produce no spans or transactions.
-
-[options="header"]
-|====
-|Disable |Configuration value
-|HTTP requests |`fetch` and `xmlhttprequest`
-|Page load metrics including static resources |`page-load`
-|JavaScript errors on the browser |`error`
-|User click events including URLs visited, mouse clicks, and navigation events |`eventtarget`
-|Single page application route changes |`history`
-|====
-
-[discrete]
-[[filter-database-statements]]
-==== Database statements
-
-For SQL databases, APM agents do not capture the parameters of prepared statements.
-Elastic APM currently makes no effort to strip the parameters of regular (non-prepared) statements.
-Because unparameterized queries also leave your code vulnerable to SQL injection attacks,
-be sure to use prepared statements.
-
-For non-SQL data stores, such as {es} or MongoDB,
-Elastic APM captures the full statement for queries.
-For inserts or updates, the full document is not stored.
-To filter or obfuscate data in non-SQL database statements,
-or to remove the statement entirely,
-you can set up an ingest node pipeline.
-
-[discrete]
-[[filter-agent-specific]]
-==== Agent-specific options
-
-Certain agents offer additional filtering and obfuscating options:
-
-**Agent configuration options**
-
-* (Node.js) Remove errors raised by the server-side process:
-Disable with {apm-node-ref-v}/configuration.html#capture-exceptions[captureExceptions].
-
-* (Java) Remove process arguments from transactions:
-Disabled by default with {apm-java-ref-v}/config-reporter.html#config-include-process-args[`include_process_args`].
-
-[discrete]
-[[custom-filters]]
-=== Custom filters
-
-There are two ways to filter or obfuscate other types of APM data:
-
-* <>
-* <>
-
-[discrete]
-[[filter-ingest-pipeline]]
-==== Create an ingest node pipeline filter
-
-Ingest node pipelines specify a series of processors that transform data in a specific way.
-Transformation happens prior to indexing, and adds no performance overhead to the monitored application.
-Pipelines are a flexible and easy way to filter or obfuscate Elastic APM data.
-
-**Example**
-
-Say you decide to <>,
-but quickly notice that sensitive information is being collected in the
-`http.request.body.original` field:
-
-[source,json]
-----
-{
-  "email": "test@abc.com",
-  "password": "hunter2"
-}
-----
-
-To obfuscate the passwords stored in the request body,
-use a series of {ref}/processors.html[ingest processors].
-To start, create a pipeline with a simple description and an empty array of processors:
-
-[source,json]
-----
-{
-  "pipeline": {
-    "description": "redact http.request.body.original.password",
-    "processors": [] <1>
-  }
-}
-----
-<1> The processors defined below will go in this array
-
-Add the first processor to the processors array.
-Because the agent captures the request body as a string, use the
-{ref}/json-processor.html[JSON processor] to convert the original field value into a structured JSON object.
-Save this JSON object in a new field:
-
-[source,json]
-----
-{
-  "json": {
-    "field": "http.request.body.original",
-    "target_field": "http.request.body.original_json",
-    "ignore_failure": true
-  }
-}
-----
-
-If `body.original_json` is not `null`, redact the `password` with the {ref}/set-processor.html[set processor],
-by setting the value of `body.original_json.password` to `"redacted"`:
-
-[source,json]
-----
-{
-  "set": {
-    "field": "http.request.body.original_json.password",
-    "value": "redacted",
-    "if": "ctx?.http?.request?.body?.original_json != null"
-  }
-}
-----
-
-Use the {ref}/convert-processor.html[convert processor] to convert the JSON value of `body.original_json` to a string and set it as the `body.original` value:
-
-[source,json]
-----
-{
-  "convert": {
-    "field": "http.request.body.original_json",
-    "target_field": "http.request.body.original",
-    "type": "string",
-    "if": "ctx?.http?.request?.body?.original_json != null",
-    "ignore_failure": true
-  }
-}
-----
-
-Finally, use the {ref}/remove-processor.html[remove processor] to remove the `body.original_json` field:
-
-[source,json]
-----
-{
-  "remove": {
-    "field": "http.request.body.original_json",
-    "if": "ctx?.http?.request?.body?.original_json != null",
-    "ignore_failure": true
-  }
-}
-----
-
-Now that the pipeline has been defined,
-use the {ref}/put-pipeline-api.html[create or update pipeline API] to register the new pipeline in {es}.
-Name the pipeline `apm_redacted_body_password`: - -[source,console] ----- -PUT _ingest/pipeline/apm_redacted_body_password -{ - "description": "redact http.request.body.original.password", - "processors": [ - { - "json": { - "field": "http.request.body.original", - "target_field": "http.request.body.original_json", - "ignore_failure": true - } - }, - { - "set": { - "field": "http.request.body.original_json.password", - "value": "redacted", - "if": "ctx?.http?.request?.body?.original_json != null" - } - }, - { - "convert": { - "field": "http.request.body.original_json", - "target_field": "http.request.body.original", - "type": "string", - "if": "ctx?.http?.request?.body?.original_json != null", - "ignore_failure": true - } - }, - { - "remove": { - "field": "http.request.body.original_json", - "if": "ctx?.http?.request?.body?.original_json != null", - "ignore_failure": true - } - } - ] -} ----- - -To make sure the `apm_redacted_body_password` pipeline works correctly, -test it with the {ref}/simulate-pipeline-api.html[simulate pipeline API]. -This API allows you to run multiple documents through a pipeline to ensure it is working correctly. 
-
-The request below simulates running three different documents through the pipeline:
-
-[source,console]
-----
-POST _ingest/pipeline/apm_redacted_body_password/_simulate
-{
-  "docs": [
-    {
-      "_source": { <1>
-        "http": {
-          "request": {
-            "body": {
-              "original": """{"email": "test@abc.com", "password": "hunter2"}"""
-            }
-          }
-        }
-      }
-    },
-    {
-      "_source": { <2>
-        "some-other-field": true
-      }
-    },
-    {
-      "_source": { <3>
-        "http": {
-          "request": {
-            "body": {
-              "original": """["invalid json" """
-            }
-          }
-        }
-      }
-    }
-  ]
-}
-----
-<1> This document features the same sensitive data from the original example above
-<2> This document only contains an unrelated field
-<3> This document contains invalid JSON
-
-The API response should be similar to this:
-
-[source,json]
-----
-{
-  "docs" : [
-    {
-      "doc" : {
-        "_source" : {
-          "http" : {
-            "request" : {
-              "body" : {
-                "original" : {
-                  "password" : "redacted",
-                  "email" : "test@abc.com"
-                }
-              }
-            }
-          }
-        }
-      }
-    },
-    {
-      "doc" : {
-        "_source" : {
-          "some-other-field" : true
-        }
-      }
-    },
-    {
-      "doc" : {
-        "_source" : {
-          "http" : {
-            "request" : {
-              "body" : {
-                "original" : """["invalid json" """
-              }
-            }
-          }
-        }
-      }
-    }
-  ]
-}
----- 
-
-As you can see, only the first simulated document has a redacted password field.
-As expected, all other documents are unaffected.
-
-The final step in this process is to add the newly created `apm_redacted_body_password` pipeline
-to the default `apm` pipeline. This ensures that all APM data ingested into {es} runs through the pipeline.
-
-Get the current list of `apm` pipelines:
-
-[source,console]
-----
-GET _ingest/pipeline/apm
----- 
-
-Append the newly created pipeline to the end of the processors array and register the `apm` pipeline.
-Your request will look similar to this:
-
-[source,console]
-----
-PUT _ingest/pipeline/apm
-{
-  "description" : "Default enrichment for APM events",
-  "processors" : [
-    {
-      "pipeline" : {
-        "name" : "apm_user_agent"
-      }
-    },
-    {
-      "pipeline" : {
-        "name" : "apm_user_geo"
-      }
-    },
-    {
-      "pipeline": {
-        "name": "apm_redacted_body_password"
-      }
-    }
-  ]
-}
----- 
-
-That's it! Sit back and relax: passwords have been redacted from your APM HTTP body data.
-
-TIP: See {apm-server-ref-v}/configuring-ingest-node.html[parse data using ingest node pipelines]
-to learn more about the default `apm` pipeline.
-
-[discrete]
-[[filter-in-agent]]
-==== {apm-agent} filters
-
-Some APM agents offer a way to manipulate or drop APM events _before_ they are sent to the APM Server.
-Please see the relevant agent's documentation for more information and examples:
-
-// * Go: {apm-go-ref-v}/[]
-// * Java: {apm-java-ref-v}/[]
-* .NET: {apm-dotnet-ref-v}/public-api.html#filter-api[Filter API].
-* Node.js: {apm-node-ref-v}/agent-api.html#apm-add-filter[`addFilter()`].
-// * PHP: {apm-php-ref-v}[]
-* Python: {apm-py-ref-v}/sanitizing-data.html[custom processors].
-// * Ruby: {apm-ruby-ref-v}/[]
diff --git a/docs/legacy/guide/distributed-tracing.asciidoc b/docs/legacy/guide/distributed-tracing.asciidoc
deleted file mode 100644
index d3364839fde..00000000000
--- a/docs/legacy/guide/distributed-tracing.asciidoc
+++ /dev/null
@@ -1,125 +0,0 @@
-[[distributed-tracing]]
-=== Distributed tracing
-
-IMPORTANT: {deprecation-notice-data}
-If you've already upgraded, see <>.
-
-A `trace` is a group of <> and <> with a common root.
-Each `trace` tracks the entirety of a single request.
-When a `trace` travels through multiple services, as is common in a microservice architecture,
-it is known as a distributed trace.
-
-[float]
-=== Why is distributed tracing important?
-
-Distributed tracing enables you to analyze performance throughout your microservice architecture
-by tracing the entirety of a request -- from the initial web request on your front-end service
-all the way to database queries made on your back-end services.
-
-Tracking requests as they propagate through your services provides an end-to-end picture of
-where your application is spending time, where errors are occurring, and where bottlenecks are forming.
-Distributed tracing eliminates individual services' data silos and reveals what's happening outside of
-service borders.
-
-For supported technologies, distributed tracing works out-of-the-box, with no additional configuration required.
-
-[float]
-=== How distributed tracing works
-
-Distributed tracing works by injecting a custom `traceparent` HTTP header into outgoing requests.
-This header includes information like `trace-id`, which is used to identify the current trace,
-and `parent-id`, which is used to identify the parent of the current span on incoming requests
-or the current span on an outgoing request.
-
-When a service is working on a request, it checks for the existence of this HTTP header.
-If it's missing, the service starts a new trace.
-If it exists, the service ensures the current action is added as a child of the existing trace,
-and continues to propagate the trace.
-
-[float]
-==== Trace propagation examples
-
-In this example, Elastic's Ruby agent communicates with Elastic's Java agent.
-Both support the `traceparent` header, and trace data is successfully propagated.
-
-// lint ignore traceparent
-image::./images/dt-trace-ex1.png[How traceparent propagation works]
-
-In this example, Elastic's Ruby agent communicates with OpenTelemetry's Java agent.
-Both support the `traceparent` header, and trace data is successfully propagated.
- -// lint ignore traceparent -image::./images/dt-trace-ex2.png[How traceparent propagation works] - -In this example, the trace meets a piece of middleware that doesn't propagate the `traceparent` header. -The distributed trace ends and any further communication will result in a new trace. - -// lint ignore traceparent -image::./images/dt-trace-ex3.png[How traceparent propagation works] - - -[float] -[[w3c-tracecontext]] -==== W3C Trace Context specification - -All Elastic agents now support the official W3C Trace Context specification and `traceparent` header. -See the table below for the minimum required agent version: - -[options="header"] -|==== -|Agent name |Agent Version -|**Go Agent**| ≥`1.6` -|**Java Agent**| ≥`1.14` -|**.NET Agent**| ≥`1.3` -|**Node.js Agent**| ≥`3.4` -|**Python Agent**| ≥`5.4` -|**Ruby Agent**| ≥`3.5` -|**RUM Agent**| ≥`5.0` -|==== - -NOTE: Older Elastic agents use a unique `elastic-apm-traceparent` header. -For backward-compatibility purposes, new versions of Elastic agents still support this header. - -[float] -=== Visualize distributed tracing - -The {apm-app}'s timeline visualization provides a visual deep-dive into each of your application's traces: - -[role="screenshot"] -image::./images/apm-distributed-tracing.png[Distributed tracing in the APM UI] - -[float] -=== Manual distributed tracing - -Elastic agents automatically propagate distributed tracing context for supported technologies. -If your service communicates over a different, unsupported protocol, -you can manually propagate distributed tracing context from a sending service to a receiving service -with each agent's API. - -[float] -==== Add the `traceparent` header to outgoing requests - -Sending services must add the `traceparent` header to outgoing requests. 
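-For reference, a `traceparent` value follows the W3C Trace Context wire format:
-four dash-separated, lowercase hex fields. The sketch below uses the illustrative
-IDs from the W3C specification, not real trace data:
-
-[source,txt]
-----
-traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
-             ^  ^                                ^                ^
-             |  trace-id (16 bytes)              parent-id        trace-flags
-             version                             (8 bytes)        (01 = sampled)
----- 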
- --- -include::../../shared/distributed-trace-send/distributed-trace-send-widget.asciidoc[] --- - -[float] -==== Parse the `traceparent` header on incoming requests - -Receiving services must parse the incoming `traceparent` header, -and start a new transaction or span as a child of the received context. - --- -include::../../shared/distributed-trace-receive/distributed-trace-receive-widget.asciidoc[] --- - -[float] -=== Distributed tracing with RUM - -Some additional setup may be required to correlate requests correctly with the Real User Monitoring (RUM) agent. - -See the {apm-rum-ref}/distributed-tracing-guide.html[RUM distributed tracing guide] -for information on enabling cross-origin requests, setting up server configuration, -and working with dynamically-generated HTML. diff --git a/docs/legacy/guide/docker-compose.yml b/docs/legacy/guide/docker-compose.yml deleted file mode 100644 index 0f129982405..00000000000 --- a/docs/legacy/guide/docker-compose.yml +++ /dev/null @@ -1,75 +0,0 @@ -version: '2.2' -services: - apm-server: - image: docker.elastic.co/apm/apm-server:{VERSION} - depends_on: - elasticsearch: - condition: service_healthy - kibana: - condition: service_healthy - cap_add: ["CHOWN", "DAC_OVERRIDE", "SETGID", "SETUID"] - cap_drop: ["ALL"] - ports: - - 8200:8200 - networks: - - elastic - command: > - apm-server -e - -E apm-server.rum.enabled=true - -E setup.kibana.host=kibana:5601 - -E setup.template.settings.index.number_of_replicas=0 - -E apm-server.kibana.enabled=true - -E apm-server.kibana.host=kibana:5601 - -E output.elasticsearch.hosts=["elasticsearch:9200"] - healthcheck: - interval: 10s - retries: 12 - test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://127.0.0.1:8200/ - - elasticsearch: - image: docker.elastic.co/elasticsearch/elasticsearch:{VERSION} - environment: - - bootstrap.memory_lock=true - - cluster.name=docker-cluster - - cluster.routing.allocation.disk.threshold_enabled=false - - 
discovery.type=single-node - - ES_JAVA_OPTS=-XX:UseAVX=2 -Xms1g -Xmx1g - ulimits: - memlock: - hard: -1 - soft: -1 - volumes: - - esdata:/usr/share/elasticsearch/data - ports: - - 9200:9200 - networks: - - elastic - healthcheck: - interval: 20s - retries: 10 - test: curl -s http://localhost:9200/_cluster/health | grep -vq '"status":"red"' - - kibana: - image: docker.elastic.co/kibana/kibana:{VERSION} - depends_on: - elasticsearch: - condition: service_healthy - environment: - ELASTICSEARCH_URL: http://elasticsearch:9200 - ELASTICSEARCH_HOSTS: http://elasticsearch:9200 - ports: - - 5601:5601 - networks: - - elastic - healthcheck: - interval: 10s - retries: 20 - test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:5601/api/status - -volumes: - esdata: - driver: local - -networks: - elastic: - driver: bridge diff --git a/docs/legacy/guide/features.asciidoc b/docs/legacy/guide/features.asciidoc deleted file mode 100644 index 8a2dcf39174..00000000000 --- a/docs/legacy/guide/features.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ -[[apm-features]] -== Elastic APM features - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. 
- -++++ -Features -++++ - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -include::./data-security.asciidoc[] - -include::./distributed-tracing.asciidoc[] - -include::./rum.asciidoc[] - -include::./trace-sampling.asciidoc[] - -include::./opentracing.asciidoc[] - -include::./opentelemetry-elastic.asciidoc[] - -include::./obs-integrations.asciidoc[] - -include::./cross-cluster-search.asciidoc[] \ No newline at end of file diff --git a/docs/legacy/guide/images/7.7-apm-agent-configuration.png b/docs/legacy/guide/images/7.7-apm-agent-configuration.png deleted file mode 100644 index ded0553219a..00000000000 Binary files a/docs/legacy/guide/images/7.7-apm-agent-configuration.png and /dev/null differ diff --git a/docs/legacy/guide/images/7.7-apm-alert.png b/docs/legacy/guide/images/7.7-apm-alert.png deleted file mode 100644 index 4cee7214637..00000000000 Binary files a/docs/legacy/guide/images/7.7-apm-alert.png and /dev/null differ diff --git a/docs/legacy/guide/images/7.7-service-maps-java.png b/docs/legacy/guide/images/7.7-service-maps-java.png deleted file mode 100644 index e1a42f4c76e..00000000000 Binary files a/docs/legacy/guide/images/7.7-service-maps-java.png and /dev/null differ diff --git a/docs/legacy/guide/images/7.8-service-map-anomaly.png b/docs/legacy/guide/images/7.8-service-map-anomaly.png deleted file mode 100644 index b661e8f09d1..00000000000 Binary files a/docs/legacy/guide/images/7.8-service-map-anomaly.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-architecture-cloud.png b/docs/legacy/guide/images/apm-architecture-cloud.png deleted file mode 100644 index 6bc7001fb9f..00000000000 Binary files a/docs/legacy/guide/images/apm-architecture-cloud.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-architecture-diy.png b/docs/legacy/guide/images/apm-architecture-diy.png deleted file mode 100644 index d4e96466081..00000000000 Binary files a/docs/legacy/guide/images/apm-architecture-diy.png and /dev/null differ diff --git 
a/docs/legacy/guide/images/apm-distributed-tracing.png b/docs/legacy/guide/images/apm-distributed-tracing.png deleted file mode 100644 index 7d51e273f9d..00000000000 Binary files a/docs/legacy/guide/images/apm-distributed-tracing.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-highlight-breakdown-charts.png b/docs/legacy/guide/images/apm-highlight-breakdown-charts.png deleted file mode 100644 index cbe6eb4bbe3..00000000000 Binary files a/docs/legacy/guide/images/apm-highlight-breakdown-charts.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-highlight-rum-maps.png b/docs/legacy/guide/images/apm-highlight-rum-maps.png deleted file mode 100644 index f3992fac4a8..00000000000 Binary files a/docs/legacy/guide/images/apm-highlight-rum-maps.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-highlight-sample-rate.png b/docs/legacy/guide/images/apm-highlight-sample-rate.png deleted file mode 100644 index 11c63d0dcfb..00000000000 Binary files a/docs/legacy/guide/images/apm-highlight-sample-rate.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-settings-kib.png b/docs/legacy/guide/images/apm-settings-kib.png deleted file mode 100644 index 876f135da93..00000000000 Binary files a/docs/legacy/guide/images/apm-settings-kib.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-transactions-overview.png b/docs/legacy/guide/images/apm-transactions-overview.png deleted file mode 100644 index c3c10fcb35e..00000000000 Binary files a/docs/legacy/guide/images/apm-transactions-overview.png and /dev/null differ diff --git a/docs/legacy/guide/images/breakdown-release-notes.png b/docs/legacy/guide/images/breakdown-release-notes.png deleted file mode 100644 index afca76a7632..00000000000 Binary files a/docs/legacy/guide/images/breakdown-release-notes.png and /dev/null differ diff --git a/docs/legacy/guide/images/chained-exceptions.png b/docs/legacy/guide/images/chained-exceptions.png deleted file mode 100644 
index e187defe5a0..00000000000 Binary files a/docs/legacy/guide/images/chained-exceptions.png and /dev/null differ diff --git a/docs/legacy/guide/images/dt-sampling-example.png b/docs/legacy/guide/images/dt-sampling-example.png deleted file mode 100644 index 015b7c67e7f..00000000000 Binary files a/docs/legacy/guide/images/dt-sampling-example.png and /dev/null differ diff --git a/docs/legacy/guide/images/dt-trace-ex1.png b/docs/legacy/guide/images/dt-trace-ex1.png deleted file mode 100644 index ca97955ee8b..00000000000 Binary files a/docs/legacy/guide/images/dt-trace-ex1.png and /dev/null differ diff --git a/docs/legacy/guide/images/dt-trace-ex2.png b/docs/legacy/guide/images/dt-trace-ex2.png deleted file mode 100644 index 3df0827f586..00000000000 Binary files a/docs/legacy/guide/images/dt-trace-ex2.png and /dev/null differ diff --git a/docs/legacy/guide/images/dt-trace-ex3.png b/docs/legacy/guide/images/dt-trace-ex3.png deleted file mode 100644 index 1bb666b030a..00000000000 Binary files a/docs/legacy/guide/images/dt-trace-ex3.png and /dev/null differ diff --git a/docs/legacy/guide/images/geo-location.jpg b/docs/legacy/guide/images/geo-location.jpg deleted file mode 100644 index 5b80e1e7a8f..00000000000 Binary files a/docs/legacy/guide/images/geo-location.jpg and /dev/null differ diff --git a/docs/legacy/guide/images/java-kafka.png b/docs/legacy/guide/images/java-kafka.png deleted file mode 100644 index b568e3592e9..00000000000 Binary files a/docs/legacy/guide/images/java-kafka.png and /dev/null differ diff --git a/docs/legacy/guide/images/java-metadata.png b/docs/legacy/guide/images/java-metadata.png deleted file mode 100644 index f7d28526f43..00000000000 Binary files a/docs/legacy/guide/images/java-metadata.png and /dev/null differ diff --git a/docs/legacy/guide/images/jvm-release-notes.png b/docs/legacy/guide/images/jvm-release-notes.png deleted file mode 100644 index ffeab27e102..00000000000 Binary files a/docs/legacy/guide/images/jvm-release-notes.png and 
/dev/null differ diff --git a/docs/legacy/guide/images/kibana-apm-sample-data.png b/docs/legacy/guide/images/kibana-apm-sample-data.png new file mode 100644 index 00000000000..7aeb5f1ac37 Binary files /dev/null and b/docs/legacy/guide/images/kibana-apm-sample-data.png differ diff --git a/docs/legacy/guide/images/kibana-geo-data.png b/docs/legacy/guide/images/kibana-geo-data.png deleted file mode 100644 index a80faefed97..00000000000 Binary files a/docs/legacy/guide/images/kibana-geo-data.png and /dev/null differ diff --git a/docs/legacy/guide/images/remote-config-release-notes.png b/docs/legacy/guide/images/remote-config-release-notes.png deleted file mode 100644 index 19e52a203be..00000000000 Binary files a/docs/legacy/guide/images/remote-config-release-notes.png and /dev/null differ diff --git a/docs/legacy/guide/images/siem-apm-integration.png b/docs/legacy/guide/images/siem-apm-integration.png deleted file mode 100644 index ef217bcbad2..00000000000 Binary files a/docs/legacy/guide/images/siem-apm-integration.png and /dev/null differ diff --git a/docs/legacy/guide/images/structured-filters.jpg b/docs/legacy/guide/images/structured-filters.jpg deleted file mode 100644 index c454707025d..00000000000 Binary files a/docs/legacy/guide/images/structured-filters.jpg and /dev/null differ diff --git a/docs/legacy/guide/index.asciidoc b/docs/legacy/guide/index.asciidoc deleted file mode 100644 index ce78e0cfd51..00000000000 --- a/docs/legacy/guide/index.asciidoc +++ /dev/null @@ -1,38 +0,0 @@ -include::../../version.asciidoc[] -include::{asciidoc-dir}/../../shared/attributes.asciidoc[] - -:apm-ref-all: https://www.elastic.co/guide/en/apm/get-started/ - -ifndef::apm-integration-docs[] -[[gettting-started]] -= APM Overview -endif::[] - -ifdef::apm-integration-docs[] -// Overwrite links to the APM Overview and APM Server Ref. Point to APM Guide instead. 
-:apm-overview-ref-v: {apm-guide-ref}
-:apm-guide-ref: {apm-guide-ref}
-:apm-server-ref-v: {apm-guide-ref}
-:apm-server-ref: {apm-guide-ref}
-
-[[legacy-apm-overview]]
-= Legacy APM Overview
-
-include::./overview.asciidoc[]
-endif::[]
-
-include::./apm-doc-directory.asciidoc[]
-
-include::./install-and-run.asciidoc[]
-
-include::./quick-start-overview.asciidoc[]
-
-include::./apm-data-model.asciidoc[]
-
-include::./features.asciidoc[]
-
-include::./troubleshooting.asciidoc[]
-
-include::./apm-breaking-changes.asciidoc[]
-
-include::./redirects.asciidoc[]
diff --git a/docs/legacy/guide/install-and-run.asciidoc b/docs/legacy/guide/install-and-run.asciidoc
deleted file mode 100644
index f045766d889..00000000000
--- a/docs/legacy/guide/install-and-run.asciidoc
+++ /dev/null
@@ -1,102 +0,0 @@
-[[install-and-run]]
-== Quick start guide
-
-IMPORTANT: {deprecation-notice-installation}
-
-This guide describes how to get started quickly with Elastic APM. You’ll learn how to:
-
-* Spin up {es}, {kib}, and APM Server on {ess}
-* Install APM agents
-* Set basic configuration options
-* Visualize your APM data in {kib}
-
-[float]
-[[before-installation]]
-=== Step 1: Spin up the {stack}
-
-include::../tab-widgets/spin-up-stack-widget.asciidoc[]
-
-[float]
-[[agents]]
-=== Step 2: Install APM agents
-
-// This tagged region is reused in the Observability docs.
-// tag::apm-agent[]
-APM agents are written in the same language as your service.
-To monitor a new service, you must install the agent and configure it with a service name, APM Server URL, and secret token or API key.
-
-[[choose-service-name]]
-* *Service name*: Service names are used to differentiate data from each of your services.
-Elastic APM includes the service name field on every document that it saves in {es}.
-If you change the service name after using Elastic APM,
-you will see the old service name and the new service name as two separate services.
-Make sure you choose a good service name before you get started.
-+
-The service name can only contain alphanumeric characters,
-spaces, underscores, and dashes (must match `^[a-zA-Z0-9 _-]+$`).
-
-* *APM Server URL*: The host and port that APM Server listens for events on.
-
-* *Secret token or API key*: Authentication method for Agent/Server communication.
-See {apm-server-ref-v}/secure-communication-agents.html[secure communication with APM Agents] to learn more.
-
-Select your service's language for installation instructions:
-// end::apm-agent[]
-
---
-include::../tab-widgets/install-agents-widget.asciidoc[]
---
-
-TIP: Check the {apm-overview-ref-v}/agent-server-compatibility.html[Agent/Server compatibility matrix] to ensure you're using agents that are compatible with your version of {es}.
-
-
-[float]
-[[configure-apm]]
-=== Step 3: Advanced configuration (optional)
-
-// This tagged region is reused in the Observability docs.
-// tag::configure-agents[]
-There are many different ways to tweak and tune the Elastic APM ecosystem to your needs.
-
-*Configure APM agents*
-
-APM agents have a number of configuration options that allow you to fine-tune things like
-environment names, sampling rates, instrumentations, metrics, and more.
-Broadly speaking, there are two ways to configure APM agents:
-// end::configure-agents[]
-
-include::../tab-widgets/configure-agent-widget.asciidoc[]
-
-*Configure APM Server*
-
-include::../tab-widgets/configure-server-widget.asciidoc[]
-
-[float]
-[[visualize-kibana]]
-=== Step 4: Visualize in {kib}
-
-The {apm-app} in {kib} allows you to monitor your software services and applications in real time:
-visualize detailed performance information on your services, identify and analyze errors,
-and monitor host-level and agent-specific metrics like JVM and Go runtime metrics.
-
-To open the {apm-app}:
-
-. Launch {kib}:
-+
---
-include::../../shared/open-kibana/open-kibana-widget.asciidoc[]
---
-
-. In the side navigation, under *{observability}*, select *APM*.
-
-[float]
-[[what-next]]
-=== What's next?
-
-Now that you have APM data streaming into {es},
-head over to the {kibana-ref}/xpack-apm.html[{apm-app} reference] to learn more about what you can
-do with {kib}'s {apm-app}.
-
-// Need to add more here
-// Get a deeper understanding by learning about [[concepts]]
-// Learn how to do things with [[how-to guides]]
\ No newline at end of file
diff --git a/docs/legacy/guide/obs-integrations.asciidoc b/docs/legacy/guide/obs-integrations.asciidoc
deleted file mode 100644
index c4d569a3fd8..00000000000
--- a/docs/legacy/guide/obs-integrations.asciidoc
+++ /dev/null
@@ -1,196 +0,0 @@
-[[observability-integrations]]
-=== {observability} integrations
-
-IMPORTANT: {deprecation-notice-data}
-If you've already upgraded, see <>.
-
-Elastic APM supports integrations with other observability solutions.
-
-// remove float tag once other integrations are added
-[float]
-[[apm-logging-integration]]
-==== Logging integration
-
-Many applications use logging frameworks to help record, format, and append an application's logs.
-Elastic APM now offers a way to make your application logs even more useful,
-by integrating with the most popular logging frameworks in their respective languages.
-This means you can easily inject trace information into your logs,
-allowing you to explore logs in the {observability-guide}/monitor-logs.html[{logs-app}],
-then jump straight into the corresponding APM traces -- all while preserving the trace context.
-
-To get started:
-
-. <>
-. <>
-. <>
-
-[float]
-[[legacy-enable-log-correlation]]
-===== Enable log correlation
-
-// temporary attribute for ECS 1.1
-// Remove after 7.4 release
-:ecs-ref: https://www.elastic.co/guide/en/ecs/1.1
-
-Some agents require you to first enable log correlation in the agent.
-This is done with a configuration variable, and is different for each agent.
-See the relevant https://www.elastic.co/guide/en/apm/agent/index.html[Agent documentation] for further information.
-
-// Not enough of the Agent docs are ready yet.
-// Commenting these out and will replace when ready.
-// * *Java*: {apm-java-ref-v}/config-logging.html#config-enable-log-correlation[`enable_log_correlation`]
-// * *.NET*: {apm-dotnet-ref-v}/[]
-// * *Node.js*: {apm-node-ref-v}/[]
-// * *Python*: {apm-py-ref-v}/[]
-// * *Ruby*: {apm-ruby-ref-v}/[]
-// * *Rum*: {apm-rum-ref-v}/[]
-
-[float]
-[[legacy-add-apm-identifiers-to-logs]]
-===== Add APM identifiers to your logs
-
-Once log correlation is enabled,
-you must ensure your logs contain APM identifiers.
-
-In some supported frameworks, this is already done for you.
-In other scenarios, like for unstructured logs,
-you'll need to add APM identifiers to your logs in an easy-to-parse manner.
-
-Log correlation relies on these fields:
-
-- Service level: {ecs-ref}/ecs-service.html[`service.name`], {ecs-ref}/ecs-service.html[`service.version`], and {ecs-ref}/ecs-service.html[`service.environment`]
-- Trace level: {ecs-ref}/ecs-tracing.html[`trace.id`] and {ecs-ref}/ecs-tracing.html[`transaction.id`]
-
-The process for adding these fields will differ based on the agent you're using, the logging framework,
-and the type and structure of your logs.
-
-See the relevant https://www.elastic.co/guide/en/apm/agent/index.html[Agent documentation] to learn more.
-
-// Not enough of the Agent docs have been backported yet.
-// Commenting these out and will replace when ready.
-// * *Go*: {apm-go-ref-v}/supported-tech.html#supported-tech-logging[Logging frameworks]
-// * *Java*: {apm-java-ref-v}/[] NOT merged yet https://github.com/elastic/apm-agent-java/pull/854
-// * *.NET*: {apm-dotnet-ref-v}/[]
-// * *Node.js*: {apm-node-ref-v}/[]
-// * *Python*: {apm-py-ref-v}/[]
-// * *Ruby*: {apm-ruby-ref-v}/[] Not backported yet https://www.elastic.co/guide/en/apm/agent/ruby/master/log-correlation.html
-// * *Rum*: {apm-rum-ref-v}/[]
-
-[float]
-[[legacy-ingest-logs-in-es]]
-===== Ingest your logs into {es}
-
-Once your logs contain the appropriate identifiers (fields), you need to ingest them into {es}.
-Luckily, we've got a tool for that -- {filebeat} is Elastic's log shipper.
-The {filebeat-ref}/filebeat-installation-configuration.html[{filebeat} quick start]
-guide will walk you through the setup process.
-
-Because logging frameworks and formats vary greatly between different programming languages,
-there is no one-size-fits-all approach for ingesting your logs into {es}.
-The following tips should hopefully get you going in the right direction:
-
-**Download {filebeat}**
-
-There are many ways to download and get started with {filebeat}.
-Read the {filebeat-ref}/filebeat-installation-configuration.html[{filebeat} quick start] guide to determine which is best for you.
-
-**Configure {filebeat}**
-
-Modify the {filebeat-ref}/configuring-howto-filebeat.html[`filebeat.yml`] configuration file to your needs.
-Here are some recommendations:
-
-* Set `filebeat.inputs` to point to the source of your logs
-* Point {filebeat} to the same {stack} that is receiving your APM data
-** If you're using {ecloud}, set `cloud.id` and `cloud.auth`.
-** If you're using a manual setup, use `output.elasticsearch.hosts`.
-
-[source,yml]
-----
-filebeat.inputs:
-- type: log <1>
-  paths: <2>
-    - /var/log/*.log
-cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWMNjN2Q3YTllOTYyNTc0Mw==" <3>
-cloud.auth: "elastic:YOUR_PASSWORD" <4>
-----
-<1> Configures the `log` input
-<2> Path(s) that must be crawled to fetch the log lines
-<3> Used to resolve the {es} and {kib} URLs for {ecloud}
-<4> Authentication credentials for {ecloud}
-
-**JSON logs**
-
-For JSON logs you can use the {filebeat-ref}/filebeat-input-log.html[`log` input] to read lines from log files.
-Here's what a sample configuration might look like:
-
-[source,yml]
-----
-filebeat.inputs:
-- type: log
-  paths:
-    - /var/log/*.log
-  json.keys_under_root: true <1>
-  json.add_error_key: true <2>
-  json.message_key: message <3>
-----
-<1> `true` copies JSON keys to the top level in the output document
-<2> Tells {filebeat} to add an `error.message` and `error.type: json` key in case of JSON unmarshalling errors
-<3> Specifies the JSON key on which to apply line filtering and multiline settings
-
-**Parsing unstructured logs**
-
-Consider the following log that is decorated with the `transaction.id` and `trace.id` fields:
-
-[source,log]
-----
-2019-09-18 21:29:49,525 - django.server - ERROR - "GET / HTTP/1.1" 500 27 | elasticapm transaction.id=fcfbbe447b9b6b5a trace.id=f965f4cc5b59bdc62ae349004eece70c span.id=None
-----
-
-All that's needed now is an {filebeat-ref}/configuring-ingest-node.html[ingest node processor] to preprocess your logs and
-extract these structured fields before they are indexed in {es}.
-To do this, you'd need to create a pipeline that uses {es}'s {ref}/grok-processor.html[Grok Processor].
-Here's an example:
-
-[source, json]
-----
-PUT _ingest/pipeline/log-correlation
-{
-  "description": "Parses the log correlation IDs out of the raw plain-text log",
-  "processors": [
-    {
-      "grok": {
-        "field": "message", <1>
-        "patterns": ["%{GREEDYDATA:message} | elasticapm transaction.id=%{DATA:transaction.id} trace.id=%{DATA:trace.id} span.id=%{DATA:span.id}"] <2>
-      }
-    }
-  ]
-}
-----
-<1> The field to use for grok expression parsing
-<2> An ordered list of grok expressions to match and extract named captures with:
-`%{DATA:transaction.id}` captures the value of `transaction.id`,
-`%{DATA:trace.id}` captures the value of `trace.id`, and
-`%{DATA:span.id}` captures the value of `span.id`.
-
-NOTE: Depending on how you've added APM data to your logs,
-you may need to tweak this grok pattern for it to work with your setup.
-In addition, it's possible to extract more structure out of your logs.
-Make sure to follow the {ecs-ref}/ecs-field-reference.html[Elastic Common Schema]
-when defining which fields you are storing in {es}.
-
-Then, configure {filebeat} to use the processor in `filebeat.yml`:
-
-[source,yml]
-----
-output.elasticsearch:
-  pipeline: "log-correlation"
-----
-
-If your logs contain messages that span multiple lines of text (common in Java stack traces),
-you'll also need to configure {filebeat-ref}/multiline-examples.html[multiline settings].
-
-The following example shows how to configure {filebeat} to handle a multiline message where the first line of the message begins with a bracket ([).
- -[source,yml] ----- -multiline.pattern: '^\[' -multiline.negate: true -multiline.match: after ----- diff --git a/docs/legacy/guide/opentelemetry-elastic.asciidoc b/docs/legacy/guide/opentelemetry-elastic.asciidoc deleted file mode 100644 index 02f5606d2f2..00000000000 --- a/docs/legacy/guide/opentelemetry-elastic.asciidoc +++ /dev/null @@ -1,11 +0,0 @@ -[[open-telemetry-elastic]] -=== OpenTelemetry integration - -{ot-what}[OpenTelemetry] is a set of APIs, SDKs, tooling, and integrations that enable the capture and management of -telemetry data from your services and applications. For more information about the -OpenTelemetry project, see the {ot-spec}[spec]. - -Elastic integrates with OpenTelemetry, allowing you to reuse your existing instrumentation -to easily send observability data to the {stack}. - -See <> to learn more. diff --git a/docs/legacy/guide/opentracing.asciidoc b/docs/legacy/guide/opentracing.asciidoc deleted file mode 100644 index de90fa8917c..00000000000 --- a/docs/legacy/guide/opentracing.asciidoc +++ /dev/null @@ -1,24 +0,0 @@ -[[opentracing]] -=== OpenTracing bridge - -IMPORTANT: {deprecation-notice-data} - -Most Elastic APM agents have https://opentracing.io/[OpenTracing] compatible bridges. - -The OpenTracing bridge allows you to create Elastic APM <> and <> using the OpenTracing API. -This means you can reuse your existing OpenTracing instrumentation to quickly and easily begin using Elastic APM. - -[float] -==== Agent specific details - -Not all features of the OpenTracing API are supported, and there are some Elastic APM-specific tags you should be aware of. 
Please see the relevant Agent documentation for more detailed information: - -* {apm-go-ref-v}/opentracing.html[Go agent] -* {apm-java-ref-v}/opentracing-bridge.html[Java agent] -* {apm-node-ref-v}/opentracing.html[Node.js agent] -// * {apm-py-ref-v}/opentelemetry-bridge.html[Python agent] -* https://www.elastic.co/guide/en/apm/agent/python/6.x/opentelemetry-bridge.html[Python agent] -* {apm-ruby-ref-v}/opentracing.html[Ruby agent] -* {apm-rum-ref-v}/opentracing.html[JavaScript Real User Monitoring (RUM) agent] - -Additionally, the iOS agent can utilize the https://github.com/open-telemetry/opentelemetry-swift/tree/main/Sources/Importers/OpenTracingShim[`opentelemetry-swift/OpenTracingShim`]. diff --git a/docs/legacy/guide/overview.asciidoc b/docs/legacy/guide/overview.asciidoc deleted file mode 100644 index 7dbb95308d6..00000000000 --- a/docs/legacy/guide/overview.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -**** -There are two ways to install, run, and manage Elastic APM: - -* With the Elastic APM integration -* With the standalone (legacy) APM Server binary - -This documentation focuses on option two: the **standalone (legacy) APM Server binary**. -{deprecation-notice-installation} -**** - -Elastic APM is an application performance monitoring system built on the {stack}. -It allows you to monitor software services and applications in real-time, by -collecting detailed performance information on response time for incoming requests, -database queries, calls to caches, external HTTP requests, and more. -This makes it easy to pinpoint and fix performance problems quickly. - -Elastic APM also automatically collects unhandled errors and exceptions. -Errors are grouped based primarily on the stack trace, -so you can identify new errors as they appear and keep an eye on how many times specific errors happen. - -Metrics are another vital source of information when debugging production systems. 
-Elastic APM agents automatically pick up basic host-level metrics and agent-specific metrics,
-like JVM metrics in the Java Agent, and Go runtime metrics in the Go Agent.
-
-[float]
-== Give Elastic APM a try
-
-Learn more about the <> that make up Elastic APM,
-or jump right into the <>.
-
-NOTE: These docs will indiscriminately use the word "service" for both services and applications.
\ No newline at end of file
diff --git a/docs/legacy/guide/quick-start-overview.asciidoc b/docs/legacy/guide/quick-start-overview.asciidoc
deleted file mode 100644
index 7f8e29a2fec..00000000000
--- a/docs/legacy/guide/quick-start-overview.asciidoc
+++ /dev/null
@@ -1,59 +0,0 @@
-
-[[quick-start-overview]]
-=== Quick start development environment
-
-IMPORTANT: {deprecation-notice-installation}
-
-// This tagged region is reused in the Observability docs.
-// tag::dev-environment[]
-ifeval::["{release-state}"=="unreleased"]
-
-Version {version} of APM Server has not yet been released.
-
-endif::[]
-
-ifeval::["{release-state}"!="unreleased"]
-
-If you're just looking for a quick way to try out Elastic APM, you can easily get started with Docker.
-Just follow the steps below.
-
-**Create a docker-compose.yml file**
-
-The https://www.docker.elastic.co/[Elastic Docker registry] contains Docker images for all of the products
-in the {stack}.
-You can use Docker Compose to easily get the default distributions of {es}, {kib},
-and APM Server up and running in Docker.
-
-Create a `docker-compose.yml` file and copy and paste in the following:
-
-["source","yaml",subs="attributes"]
---------------------------------------------
-include::./docker-compose.yml[]
---------------------------------------------
-
-**Compose**
-
-Run `docker-compose up`.
-Compose will download the official Docker images and start {es}, {kib}, and APM Server.
-
-**Install Agents**
-
-When Compose finishes, navigate to http://localhost:5601/app/kibana#/home/tutorial/apm.
-Complete steps 4-6 to configure your application to collect and report APM data. - -**Visualize** - -Use the {apm-app} at http://localhost:5601/app/apm to visualize your application performance data! - -When you're done, `ctrl+c` will stop all of the containers. - -**Advanced Docker usage** - -If you're interested in learning more about all of the APM features available, -or running the Elastic stack on Docker in a production environment, see the following documentation: - -* {apm-server-ref-v}/running-on-docker.html[Running APM Server on Docker] -* {ref}/docker.html#docker-compose-file[Running {es} and {kib} on Docker] - -endif::[] -// end::dev-environment[] diff --git a/docs/legacy/guide/redirects.asciidoc b/docs/legacy/guide/redirects.asciidoc deleted file mode 100644 index 4c2a140b801..00000000000 --- a/docs/legacy/guide/redirects.asciidoc +++ /dev/null @@ -1,60 +0,0 @@ -ifndef::apm-integration-docs[] -["appendix",role="exclude",id="redirects"] -= Deleted pages -endif::[] - -ifdef::apm-integration-docs[] -["appendix",role="exclude",id="legacy-apm-redirects"] -= Deleted pages -endif::[] - -The following pages do not exist. They may have moved, been deleted, or have not been created yet. - -[role="exclude",id="go-compatibility"] -=== Go Agent Compatibility - -This page has moved. Please see <>. - -[role="exclude",id="java-compatibility"] -=== Java Agent Compatibility - -This page has moved. Please see <>. - -[role="exclude",id="dotnet-compatibility"] -=== .NET Agent Compatibility - -This page has moved. Please see <>. - -[role="exclude",id="nodejs-compatibility"] -=== Node.js Agent Compatibility - -This page has moved. Please see <>. - -[role="exclude",id="python-compatibility"] -=== Python Agent Compatibility - -This page has moved. Please see <>. - -[role="exclude",id="ruby-compatibility"] -=== Ruby Agent Compatibility - -This page has moved. Please see <>. - -[role="exclude",id="rum-compatibility"] -=== RUM Agent Compatibility - -This page has moved. 
Please see <>. - -[role="exclude",id="apm-release-notes"] -=== APM release highlights - -This page has moved. -Please see {observability-guide}/whats-new.html[What's new in {observability} {minor-version}]. - -Please see <>. - -[role="exclude",id="whats-new"] -=== What's new in APM {minor-version} - -This page has moved. -Please see {observability-guide}/whats-new.html[What's new in {observability} {minor-version}]. diff --git a/docs/legacy/guide/rum.asciidoc b/docs/legacy/guide/rum.asciidoc deleted file mode 100644 index 0cbfd7ec13c..00000000000 --- a/docs/legacy/guide/rum.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[rum]] -=== Real User Monitoring (RUM) - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -Real User Monitoring captures user interaction with clients such as web browsers. -The {apm-rum-ref-v}[JavaScript Agent] is Elastic’s RUM Agent. -To use it you need to {apm-server-ref-v}/configuration-rum.html[enable RUM support] in the APM Server. - -Unlike Elastic APM backend agents which monitor requests and responses, -the RUM JavaScript agent monitors the real user experience and interaction within your client-side application. -The RUM JavaScript agent is also framework-agnostic, which means it can be used with any front-end JavaScript application. - -You will be able to measure metrics such as "Time to First Byte", `domInteractive`, -and `domComplete` which helps you discover performance issues within your client-side application as well as issues that relate to the latency of your server-side application. \ No newline at end of file diff --git a/docs/legacy/guide/trace-sampling.asciidoc b/docs/legacy/guide/trace-sampling.asciidoc deleted file mode 100644 index 5a1395b918f..00000000000 --- a/docs/legacy/guide/trace-sampling.asciidoc +++ /dev/null @@ -1,114 +0,0 @@ -[[trace-sampling]] -=== Transaction sampling - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. 
- -Elastic APM supports head-based, probability sampling. -_Head-based_ means the sampling decision for each trace is made when that trace is initiated. -_Probability sampling_ means that each trace has a defined and equal probability of being sampled. - -For example, a sampling value of `.2` indicates a transaction sample rate of `20%`. -This means that only `20%` of traces will send and retain all of their associated information. -The remaining traces will drop contextual information to reduce the transfer and storage size of the trace. - -TIP: The APM integration supports both head-based and tail-based sampling. -Learn more <>. - -[float] -==== Why sample? - -Distributed tracing can generate a substantial amount of data, -and storage can be a concern for users running `100%` sampling -- especially as they scale. - -The goal of probability sampling is to provide you with a representative set of data that allows -you to make statistical inferences about the entire group of data. -In other words, in most cases, you can still find anomalous patterns in your applications, detect outages, track errors, -and lower MTTR, even when sampling at less than `100%`. - -[float] -==== What data is sampled? - -A sampled trace retains all data associated with it. - -Non-sampled traces drop <> data. -Spans contain more granular information about what is happening within a transaction, -like external requests or database calls. -Spans also contain contextual information and labels. - -Regardless of the sampling decision, all traces retain transaction and error data. -This means the following data will always accurately reflect *all* of your application's requests, regardless of the configured sampling rate: - -* Transaction duration and transactions per minute -* Transaction breakdown metrics -* Errors, error occurrence, and error rate - -// To turn off the sending of all data, including transaction and error data, set `active` to `false`. 
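To make the head-based, probability model above concrete, here is a minimal sketch in Python. This is illustrative only -- it is not APM agent code, and the `start_trace` helper and its field names are our own invention for demonstration:

```python
import random

def start_trace(sample_rate: float, rng=random.random) -> dict:
    """Make the sampling decision once, when the trace is initiated (head-based).

    With probability `sample_rate`, the trace is sampled and span data is
    retained. Unsampled traces still record transaction and error data,
    but drop their span (contextual) data.
    """
    sampled = rng() < sample_rate
    return {
        "sampled": sampled,          # decision propagated to downstream services
        "keeps_spans": sampled,      # span data kept only for sampled traces
        "keeps_transactions": True,  # transaction data is always retained
        "keeps_errors": True,        # error data is always retained
    }

# With a sample rate of .2, roughly 20% of traces keep their span data:
traces = [start_trace(0.2) for _ in range(10_000)]
sampled_fraction = sum(t["sampled"] for t in traces) / len(traces)
```

Note that regardless of the random decision, the transaction and error fields are always `True`, mirroring the guarantee described above.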
-
-[float]
-==== Sample rates
-
-What's the best sampling rate? Unfortunately, there isn't one.
-Sampling is dependent on your data, the throughput of your application, data retention policies, and other factors.
-Any sampling rate from `.1%` to `100%` could be considered normal.
-You may even decide to have a unique sample rate per service -- for example, if a certain service
-experiences considerably more or less traffic than another.
-
-// Regardless, cost-conscious customers are likely to be fine with a lower sample rate.
-
-[float]
-==== Sampling with distributed tracing
-
-The initiating service makes the sampling decision in a distributed trace,
-and all downstream services respect that decision.
-
-In each example below, `Service A` initiates four transactions.
-In the first example, `Service A` samples at `.5` (`50%`). In the second, `Service A` samples at `1` (`100%`).
-Each subsequent service respects the initial sampling decision, regardless of its configured sample rate.
-The result is a sampling percentage that matches the initiating service:
-
-image::./images/dt-sampling-example.png[How sampling impacts distributed tracing]
-
-[float]
-==== {apm-app} implications
-
-Because the transaction sample rate is respected by downstream services,
-the {apm-app} always knows which transactions have and haven't been sampled.
-This prevents the app from showing broken traces.
-In addition, because transaction and error data is never sampled,
-you can always expect metrics and errors to be accurately reflected in the {apm-app}.
-
-*Service maps*
-
-Service maps rely on distributed traces to draw connections between services.
-Service maps require a minimum version of APM agents to work.
-See {kibana-ref}/service-maps.html[Service maps] for more information.
-
-// Follow-up: Add link from https://www.elastic.co/guide/en/kibana/current/service-maps.html#service-maps-how
-// to this page.
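The propagation rule described above -- the initiating service decides, and downstream services inherit that decision -- can be sketched as follows. This is an illustrative example only: real agents carry the decision in trace context headers, and these function names are our own assumptions:

```python
import random

def initiate_trace(sample_rate: float) -> bool:
    """Service A, the initiating service, makes the head-based sampling decision."""
    return random.random() < sample_rate

def continue_trace(upstream_sampled: bool, local_sample_rate: float) -> bool:
    """Downstream services respect the upstream decision; their own configured
    sample rate applies only to traces they initiate themselves."""
    return upstream_sampled

# Service A samples at .5; B and C have different local rates but inherit
# A's decision, so the end-to-end sampling percentage matches Service A's:
decision_a = initiate_trace(0.5)
decision_b = continue_trace(decision_a, local_sample_rate=1.0)
decision_c = continue_trace(decision_b, local_sample_rate=0.1)
assert decision_a == decision_b == decision_c
```

Because `continue_trace` ignores the local rate for continued traces, the {apm-app} can rely on the whole trace being either fully sampled or fully unsampled.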
- -[float] -==== Adjust the sample rate - -There are three ways to adjust the transaction sample rate of your APM agents: - -Dynamic:: -The transaction sample rate can be changed dynamically (no redeployment necessary) on a per-service and per-environment -basis with {kibana-ref}/agent-configuration.html[APM Agent Configuration] in {kib}. - -{kib} API:: -APM Agent configuration exposes an API that can be used to programmatically change -your agents' sampling rate. -An example is provided in the {kibana-ref}/agent-config-api.html[Agent configuration API reference]. - -Configuration:: -Each agent provides a configuration value used to set the transaction sample rate. -See the relevant agent's documentation for more details: - -* Go: {apm-go-ref-v}/configuration.html#config-transaction-sample-rate[`ELASTIC_APM_TRANSACTION_SAMPLE_RATE`] -* Java: {apm-java-ref-v}/config-core.html#config-transaction-sample-rate[`transaction_sample_rate`] -* .NET: {apm-dotnet-ref-v}/config-core.html#config-transaction-sample-rate[`TransactionSampleRate`] -* Node.js: {apm-node-ref-v}/configuration.html#transaction-sample-rate[`transactionSampleRate`] -* PHP: {apm-php-ref-v}/configuration-reference.html#config-transaction-sample-rate[`transaction_sample_rate`] -* Python: {apm-py-ref-v}/configuration.html#config-transaction-sample-rate[`transaction_sample_rate`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-transaction-sample-rate[`transaction_sample_rate`] \ No newline at end of file diff --git a/docs/legacy/guide/troubleshooting.asciidoc b/docs/legacy/guide/troubleshooting.asciidoc deleted file mode 100644 index d9a0dee05b7..00000000000 --- a/docs/legacy/guide/troubleshooting.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[[troubleshooting-guide]] -== Troubleshooting - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -If you run into trouble, there are three places you can look for help. 
- -[float] -=== Troubleshooting documentation - -The APM Server, {apm-app}, and each {apm-agent} has a troubleshooting guide: - -* {apm-server-ref-v}/troubleshooting.html[APM Server troubleshooting] -* {kibana-ref}/troubleshooting.html[{apm-app} troubleshooting] -* {apm-dotnet-ref-v}/troubleshooting.html[.NET agent troubleshooting] -* {apm-go-ref-v}/troubleshooting.html[Go agent troubleshooting] -* {apm-ios-ref-v}/troubleshooting.html[iOS agent troubleshooting] -* {apm-java-ref-v}/trouble-shooting.html[Java agent troubleshooting] -* {apm-node-ref-v}/troubleshooting.html[Node.js agent troubleshooting] -* {apm-php-ref-v}/troubleshooting.html[PHP agent troubleshooting] -* {apm-py-ref-v}/troubleshooting.html[Python agent troubleshooting] -* {apm-ruby-ref-v}/debugging.html[Ruby agent troubleshooting] -* {apm-rum-ref-v}/troubleshooting.html[RUM troubleshooting] - -[float] -=== Elastic Support - -We offer a support experience unlike any other. -Our team of professionals 'speak human and code' and love making your day. -https://www.elastic.co/subscriptions[Learn more about subscriptions]. - -[float] -=== Discussion forum - -For additional questions and feature requests, -visit our https://discuss.elastic.co/c/apm[discussion forum]. diff --git a/docs/legacy/high-availability.asciidoc b/docs/legacy/high-availability.asciidoc index 8ca589a68fe..07f14db747f 100644 --- a/docs/legacy/high-availability.asciidoc +++ b/docs/legacy/high-availability.asciidoc @@ -1,8 +1,6 @@ [[high-availability]] === High Availability -IMPORTANT: {deprecation-notice-installation} - To achieve high availability you can place multiple instances of APM Server behind a regular HTTP load balancer, for example HAProxy or Nginx. @@ -10,7 +8,7 @@ for example HAProxy or Nginx. The endpoint `/` always returns an `HTTP 200`. You can configure your load balancer to send HTTP requests to this endpoint to determine if an APM Server is running. -See <> for more information on that endpoint. 
+See <> for more information on that endpoint.
 
 In case of temporary issues, like unavailable {es} or a sudden high workload,
 APM Server does not have an internal queue to buffer requests,
@@ -18,3 +16,5 @@ but instead leverages an HTTP request timeout to act as back-pressure.
 If {es} goes down, the APM Server will eventually deny incoming requests.
 Both the APM Server and {apm-agent}(s) will issue logs accordingly.
+
+TIP: Fleet-managed APM Server users might also be interested in {fleet-guide}/fleet-agent-proxy-support.html[Fleet/Agent proxy support].
\ No newline at end of file
diff --git a/docs/legacy/howto.asciidoc b/docs/legacy/howto.asciidoc
deleted file mode 100644
index 68651422676..00000000000
--- a/docs/legacy/howto.asciidoc
+++ /dev/null
@@ -1,29 +0,0 @@
-[[howto-guides]]
-= How-to guides
-
-IMPORTANT: {deprecation-notice-data}
-If you've already upgraded, please see <> instead.
-
-Learn how to perform common {beatname_uc} configuration and management tasks.
-
-* <>
-* <>
-* <>
-* <<{beatname_lc}-template>>
-* <>
-* <>
-* <>
-
-include::./sourcemaps.asciidoc[]
-
-include::./ilm.asciidoc[]
-
-include::./jaeger-support.asciidoc[]
-
-include::{libbeat-dir}/howto/load-index-templates.asciidoc[]
-
-include::./storage-management.asciidoc[]
-
-include::./configuring-ingest.asciidoc[]
-
-include::./data-ingestion.asciidoc[]
diff --git a/docs/legacy/ilm.asciidoc b/docs/legacy/ilm.asciidoc
deleted file mode 100644
index 81cdddb5eb3..00000000000
--- a/docs/legacy/ilm.asciidoc
+++ /dev/null
@@ -1,10 +0,0 @@
-[[ilm]]
-== Custom {ilm}
-
-// Appends `-legacy` to each section's ID so that they are different from the APM integration IDs
-:append-legacy: -legacy
-
-IMPORTANT: {deprecation-notice-data}
-If you've already upgraded, please see <> instead.
- -include::../ilm-how-to.asciidoc[tag=ilm-integration] diff --git a/docs/legacy/index.asciidoc b/docs/legacy/index.asciidoc deleted file mode 100644 index 62a2715de16..00000000000 --- a/docs/legacy/index.asciidoc +++ /dev/null @@ -1,91 +0,0 @@ -// Remove these two include statements when the APM Server Reference is removed from the build -include::../version.asciidoc[] -include::{asciidoc-dir}/../../shared/attributes.asciidoc[] - -:libbeat-dir: {docdir}/legacy/copied-from-beats/docs -:libbeat-outputs-dir: {docdir}/legacy/copied-from-beats/outputs -:version: {apm_server_version} -:beatname_lc: apm-server -:beatname_uc: APM Server -:beatname_pkg: {beatname_lc} -:beat_kib_app: APM app -:beat_monitoring_user: apm_system -:beat_monitoring_user_version: 6.5.0 -:beat_monitoring_version: 6.5 -:beat_default_index_prefix: apm -:access_role: {beat_default_index_prefix}_user -:beat_version_key: observer.version -:dockerimage: docker.elastic.co/apm/{beatname_lc}:{version} -:dockergithub: https://github.com/elastic/apm-server-docker/tree/{doc-branch} -:dockerconfig: https://raw.githubusercontent.com/elastic/apm-server/{doc-branch}/apm-server.docker.yml -:discuss_forum: apm -:github_repo_name: apm-server -:sample_date_0: 2019.10.20 -:sample_date_1: 2019.10.21 -:sample_date_2: 2019.10.22 -:repo: apm-server -:no_kibana: -:no_ilm: -:no-pipeline: -:no-processors: -:no-indices-rules: -:no_dashboards: -:apm-server: -:deb_os: -:rpm_os: -:mac_os: -:docker_platform: -:win_os: -:linux_os: -:apm-package-dir: {docdir}/legacy/apm-package - -:github_repo_link: https://github.com/elastic/apm-server/blob/v{version} -ifeval::["{version}" == "8.0.0"] -:github_repo_link: https://github.com/elastic/apm-server/blob/main -endif::[] - -:downloads: https://artifacts.elastic.co/downloads/apm-server - -ifndef::apm-integration-docs[] -[[apm-server]] -= APM Server Reference -endif::[] - -ifdef::apm-integration-docs[] -// Overwrite links to the APM Overview and APM Server Ref. Point to APM Guide instead. 
-:apm-overview-ref-v: {apm-guide-ref} -:apm-guide-ref: {apm-guide-ref} -:apm-server-ref-v: {apm-guide-ref} -:apm-server-ref: {apm-guide-ref} - -[[overview]] -= Legacy APM Server Reference - -include::./overview.asciidoc[] -endif::[] - -include::./getting-started-apm-server.asciidoc[] - -include::./setting-up-and-running.asciidoc[] - -include::./howto.asciidoc[leveloffset=+1] - -:beat-specific-output-config: {docdir}/legacy/configuring-output-after.asciidoc -include::./configuring.asciidoc[leveloffset=+1] - -:beat-specific-security: {docdir}/legacy/security.asciidoc -include::{libbeat-dir}/shared-securing-beat.asciidoc[leveloffset=+1] - -include::{libbeat-dir}/monitoring/monitoring-beats.asciidoc[leveloffset=+1] - -include::./intake-api.asciidoc[leveloffset=+1] - -include::./exploring-es-data.asciidoc[leveloffset=+1] - -include::./fields.asciidoc[leveloffset=+1] - -include::./troubleshooting.asciidoc[leveloffset=+1] - -include::./breaking-changes.asciidoc[leveloffset=+1] - -include::./redirects.asciidoc[] diff --git a/docs/legacy/intake-api.asciidoc b/docs/legacy/intake-api.asciidoc deleted file mode 100644 index 6ff004f3988..00000000000 --- a/docs/legacy/intake-api.asciidoc +++ /dev/null @@ -1,17 +0,0 @@ -[[intake-api]] -= API - -IMPORTANT: {deprecation-notice-api} -If you've already upgraded, see <>. - -The APM Server exposes endpoints for: - -* <> -* <> -* <> -* <> - -include::./events-api.asciidoc[] -include::./sourcemap-api.asciidoc[] -include::./agent-configuration.asciidoc[] -include::./server-info.asciidoc[] diff --git a/docs/legacy/jaeger-reference.asciidoc b/docs/legacy/jaeger-reference.asciidoc deleted file mode 100644 index 794c16bf54f..00000000000 --- a/docs/legacy/jaeger-reference.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -[[jaeger-reference]] -== Configure Jaeger - -++++ -Jaeger -++++ - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, please see <> instead. 
-
-// this content is reused in the how-to guides
-// tag::jaeger-intro[]
-Elastic APM integrates with https://www.jaegertracing.io/[Jaeger], an open-source, distributed tracing system.
-This integration allows users with an existing Jaeger setup to switch from the default Jaeger backend,
-to the {stack} -- transform data with APM Server, store data in {es}, and visualize traces in the {kib} {apm-app}.
-Best of all, no instrumentation changes are needed in your application code.
-// end::jaeger-intro[]
-
-Ready to get started? See the <> guide.
-
-[float]
-[[jaeger-supported]]
-=== Supported architecture
-
-Jaeger architecture supports different data formats and transport protocols
-that define how data can be sent to a collector. Elastic APM, as a Jaeger collector,
-supports communication with *Jaeger agents* via gRPC.
-
-* APM Server serves Jaeger gRPC over the same <> as the Elastic {apm-agent} protocol.
-
-* The APM Server gRPC endpoint supports TLS. If `apm-server.ssl` is configured,
-SSL settings will automatically be applied to APM Server's Jaeger gRPC endpoint.
-
-* The gRPC endpoint supports probabilistic sampling.
-Sampling decisions can be configured <> with APM Agent central configuration, or <> in each Jaeger client.
-
-See the https://www.jaegertracing.io/docs/1.22/architecture[Jaeger docs]
-for more information on Jaeger architecture.
-
-[float]
-[[jaeger-caveats]]
-=== Caveats
-
-There are some limitations and differences between Elastic APM and Jaeger that you should be aware of.
-
-*Jaeger integration limitations:*
-
-* Because Jaeger has its own trace context header, and does not currently support W3C trace context headers,
-it is not possible to mix and match the use of Elastic's APM agents and Jaeger's clients.
-* Elastic APM only supports probabilistic sampling.
-
-*Differences between APM Agents and Jaeger Clients:*
-
-* Jaeger clients only send trace data.
-APM agents support a larger number of features, like
-multiple types of metrics and application breakdown charts.
-When using Jaeger, features like these will not be available in the {apm-app}.
-* Elastic APM's {apm-overview-ref-v}/apm-data-model.html[data model] is different from Jaeger's.
-For Jaeger trace data to work with Elastic's data model, we rely on spans being tagged with the appropriate
-https://github.com/opentracing/specification/blob/master/semantic_conventions.md[`span.kind`].
-** Server Jaeger spans are mapped to Elastic APM {apm-overview-ref-v}/transactions.html[transactions].
-** Client Jaeger spans are mapped to Elastic APM {apm-overview-ref-v}/transaction-spans.html[spans] -- unless the span is the root, in which case it is mapped to an Elastic APM {apm-overview-ref-v}/transactions.html[transaction].
diff --git a/docs/legacy/jaeger-support.asciidoc b/docs/legacy/jaeger-support.asciidoc
deleted file mode 100644
index 137f74942f4..00000000000
--- a/docs/legacy/jaeger-support.asciidoc
+++ /dev/null
@@ -1,70 +0,0 @@
-[[jaeger]]
-== Jaeger integration
-
-++++
-Integrate with Jaeger
-++++
-
-IMPORTANT: {deprecation-notice-data}
-If you've already upgraded, please see <> instead.
-
-include::./jaeger-reference.asciidoc[tag=jaeger-intro]
-
-[float]
-[[jaeger-get-started]]
-==== Get started
-
-Connect your preexisting Jaeger setup to Elastic APM in three steps:
-
-* <>
-* <>
-* <>
-
-IMPORTANT: There are <> to this integration.
-
-[float]
-[[jaeger-configure-agent-client]]
-==== Configure Jaeger agents
-
-APM Server serves Jaeger gRPC over the same <> as the Elastic {apm-agent} protocol.
-
-include::./tab-widgets/jaeger-widget.asciidoc[]
-
-[float]
-[[jaeger-configure-sampling]]
-==== Configure sampling
-
-APM Server supports probabilistic sampling, which can be used to reduce the amount of data that your agents collect and send.
-Probabilistic sampling makes a random sampling decision based on the configured sampling value.
-For example, a value of `.2` means that 20% of traces will be sampled. - -There are two different ways to configure the sampling rate of your Jaeger agents: - -* <> -* <> - -[float] -[[jaeger-configure-sampling-central]] -===== APM Agent central configuration (default) - -Central sampling, with APM Agent central configuration, -allows Jaeger clients to poll APM Server for the sampling rate. -This means sample rates can be configured on the fly, on a per-service and per-environment basis. - -include::./tab-widgets/jaeger-sampling-widget.asciidoc[] - -[float] -[[jaeger-configure-sampling-local]] -===== Local sampling in each Jaeger client - -If you don't have access to the {apm-app}, -you'll need to change the Jaeger client's `sampler.type` and `sampler.param`. -This enables you to set the sampling configuration locally in each Jaeger client. -See the official https://www.jaegertracing.io/docs/1.22/sampling/[Jaeger sampling documentation] -for more information. - -[float] -[[jaeger-configure-start]] -==== Start sending span data - -That's it! Data sent from Jaeger clients to the APM Server can now be viewed in the {apm-app}. diff --git a/docs/legacy/metricset-indices.asciidoc b/docs/legacy/metricset-indices.asciidoc index c6e15023917..b5fdeadd9c7 100644 --- a/docs/legacy/metricset-indices.asciidoc +++ b/docs/legacy/metricset-indices.asciidoc @@ -1,9 +1,5 @@ [[metricset-indices]] -== Metrics documents - -++++ -Metrics documents -++++ +== Application metrics APM Server stores application metrics sent by agents as documents in {es}. Metric documents contain a timestamp, one or more metric fields, @@ -129,6 +125,7 @@ The `@timestamp` field of these documents holds the start of the aggregation int [[example-metric-document]] === Example metric document +// tag::example[] Below is an example of a metric document as stored in {es}, containing JVM metrics produced by the {apm-java-agent}. The document contains two related metrics: `jvm.gc.time` and `jvm.gc.count`. 
These are accompanied by various fields describing the environment in which the metrics were captured: service name, host name, Kubernetes pod UID, container ID, process ID, and more. @@ -138,3 +135,4 @@ These fields make it possible to search and aggregate across various dimensions, ---- include::../data/elasticsearch/metricset.json[] ---- +// end::example[] \ No newline at end of file diff --git a/docs/legacy/overview.asciidoc b/docs/legacy/overview.asciidoc index 2ac4cc023af..133d88a66d9 100644 --- a/docs/legacy/overview.asciidoc +++ b/docs/legacy/overview.asciidoc @@ -5,7 +5,6 @@ There are two ways to install, run, and manage Elastic APM: * With the standalone (legacy) APM Server binary This documentation focuses on option two: the **standalone (legacy) APM Server binary**. -{deprecation-notice-installation} **** The APM Server receives data from APM agents and transforms them into {es} documents. diff --git a/docs/legacy/redirects.asciidoc b/docs/legacy/redirects.asciidoc index d87d0ab9761..5df5ed7d713 100644 --- a/docs/legacy/redirects.asciidoc +++ b/docs/legacy/redirects.asciidoc @@ -8,7 +8,7 @@ The following pages have moved or been deleted. [role="exclude",id="event-types"] === Event types -This page has moved. Please see {apm-overview-ref-v}/apm-data-model.html[APM data model]. +This page has moved. Please see {apm-guide-ref}/data-model.html[APM data model]. // [role="exclude",id="errors"] // === Errors @@ -30,134 +30,134 @@ This page has moved. Please see {apm-overview-ref-v}/apm-data-model.html[APM dat [role="exclude",id="error-endpoint"] === Error endpoint -The error endpoint has been deprecated. Instead, see <>. +The error endpoint has been deprecated. Instead, see <>. [role="exclude",id="error-schema-definition"] === Error schema definition -The error schema has moved. Please see <>. +The error schema has moved. Please see <>. [role="exclude",id="error-api-examples"] === Error API examples -The error API examples have moved. Please see <>. 
+The error API examples have moved. Please see <>. [role="exclude",id="error-payload-schema"] === Error payload schema -This schema has changed. Please see <>. +This schema has changed. Please see <>. [role="exclude",id="error-service-schema"] === Error service schema -This schema has changed. Please see <>. +This schema has changed. Please see <>. [role="exclude",id="error-system-schema"] === Error system schema -This schema has changed. Please see <>. +This schema has changed. Please see <>. [role="exclude",id="error-context-schema"] === Error context schema -This schema has changed. Please see <>. +This schema has changed. Please see <>. [role="exclude",id="error-stacktraceframe-schema"] === Error stack trace frame schema -This schema has changed. Please see <>. +This schema has changed. Please see <>. [role="exclude",id="payload-with-error"] === Payload with error -This is no longer helpful. Please see <>. +This is no longer helpful. Please see <>. [role="exclude",id="payload-with-minimal-exception"] === Payload with minimal exception -This is no longer helpful. Please see <>. +This is no longer helpful. Please see <>. [role="exclude",id="payload-with-minimal-log"] === Payload with minimal log -This is no longer helpful. Please see <>. +This is no longer helpful. Please see <>. // Transaction API [role="exclude",id="transaction-endpoint"] === Transaction endpoint -The transaction endpoint has been deprecated. Instead, see <>. +The transaction endpoint has been deprecated. Instead, see <>. [role="exclude",id="transaction-schema-definition"] === Transaction schema definition -The transaction schema has moved. Please see <>. +The transaction schema has moved. Please see <>. [role="exclude",id="transaction-api-examples"] === Transaction API examples -The transaction API examples have moved. Please see <>. +The transaction API examples have moved. Please see <>. [role="exclude",id="transaction-span-schema"] === Transaction span schema -This schema has changed. 
Please see <>. +This schema has changed. Please see <>. [role="exclude",id="transaction-payload-schema"] === Transaction payload schema -This schema has changed. Please see <>. +This schema has changed. Please see <>. [role="exclude",id="transaction-service-schema"] === Transaction service schema -This schema has changed. Please see <>. +This schema has changed. Please see <>. [role="exclude",id="transaction-system-schema"] === Transaction system schema -This schema has changed. Please see <>. +This schema has changed. Please see <>. [role="exclude",id="transaction-context-schema"] === Transaction context schema -This schema has changed. Please see <>. +This schema has changed. Please see <>. [role="exclude",id="transaction-stacktraceframe-schema"] === Transaction stack trace frame schema -This schema has changed. Please see <>. +This schema has changed. Please see <>. [role="exclude",id="transaction-request-schema"] === Transaction request schema -This schema has changed. Please see <>. +This schema has changed. Please see <>. [role="exclude",id="transaction-user-schema"] === Transaction user schema -This schema has changed. Please see <>. +This schema has changed. Please see <>. [role="exclude",id="payload-with-transactions"] === Payload with transactions -This is no longer helpful. Please see <>. +This is no longer helpful. Please see <>. [role="exclude",id="payload-with-minimal-transaction"] === Payload with minimal transaction -This is no longer helpful. Please see <>. +This is no longer helpful. Please see <>. [role="exclude",id="payload-with-minimal-span"] === Payload with minimal span -This is no longer helpful. Please see <>. +This is no longer helpful. Please see <>. [role="exclude",id="example-intakev2-events"] === Example Request Body -This page has moved. Please see <>. +This page has moved. Please see <>. // V1 intake API @@ -251,18 +251,6 @@ https://github.com/elastic/apm-contrib/tree/main/kibana[apm-contrib] repository. // This section has moved. 
Please see <>. -ifndef::apm-integration-docs[] -[role="exclude",id="api-key"] -=== API keys - -This section has moved. See <>. - -[role="exclude",id="secret-token"] -=== Secret token - -This section has moved. See <>. -endif::[] - [role="exclude",id="aws-lambda-arch"] === APM Architecture for AWS Lambda @@ -277,3 +265,64 @@ This section has moved. See {apm-lambda-ref}/aws-lambda-config-options.html[Conf === Using AWS Secrets Manager to manage APM authentication keys This section has moved. See {apm-lambda-ref}/aws-lambda-secrets-manager.html[Using AWS Secrets Manager to manage APM authentication keys]. + +[role="exclude",id="go-compatibility"] +=== Go Agent Compatibility + +This page has moved. Please see <>. + +[role="exclude",id="java-compatibility"] +=== Java Agent Compatibility + +This page has moved. Please see <>. + +[role="exclude",id="dotnet-compatibility"] +=== .NET Agent Compatibility + +This page has moved. Please see <>. + +[role="exclude",id="nodejs-compatibility"] +=== Node.js Agent Compatibility + +This page has moved. Please see <>. + +[role="exclude",id="python-compatibility"] +=== Python Agent Compatibility + +This page has moved. Please see <>. + +[role="exclude",id="ruby-compatibility"] +=== Ruby Agent Compatibility + +This page has moved. Please see <>. + +[role="exclude",id="rum-compatibility"] +=== RUM Agent Compatibility + +This page has moved. Please see <>. + +[role="exclude",id="apm-release-notes"] +=== APM release highlights + +This page has moved. +Please see {observability-guide}/whats-new.html[What's new in {observability} {minor-version}]. + +Please see <>. + +[role="exclude",id="whats-new"] +=== What's new in APM {minor-version} + +This page has moved. +Please see {observability-guide}/whats-new.html[What's new in {observability} {minor-version}]. + +[role="exclude",id="troubleshooting"] +=== Troubleshooting + +This page has moved. +Please see <>. + +[role="exclude",id="input-apm"] +=== Configuring + +This page has moved.
Please see <>. diff --git a/docs/legacy/secure-communication-agents.asciidoc b/docs/legacy/secure-communication-agents.asciidoc deleted file mode 100644 index f3d45e9632e..00000000000 --- a/docs/legacy/secure-communication-agents.asciidoc +++ /dev/null @@ -1,657 +0,0 @@ -[[secure-communication-agents]] -== Secure communication with APM agents - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, see <>. - -Communication between APM agents and APM Server can be both encrypted and authenticated. -Encryption is achievable through <>. - -Authentication can be achieved in two main ways: - -* <> -* <> - -Both options can be enabled at the same time, -allowing Elastic APM agents to choose whichever mechanism they support. -In addition, since both mechanisms involve sending a secret as plain text, -they should be used in combination with SSL/TLS encryption. - -As soon as authenticated communication is enabled, requests without a valid token or API key will be denied by APM Server. -An exception to this rule can be configured with <>, -which is useful for APM agents running on the client side, like the Real User Monitoring (RUM) agent. - -There is a less straightforward and more restrictive way to authenticate clients through -<>, which is currently a mainstream option only -for the RUM agent (through the browser) and the Jaeger agent. - -[[ssl-setup]] -=== SSL/TLS communication - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, see <>. - -// Use the shared ssl short description -include::./ssl-input.asciidoc[] - -[[api-key-legacy]] -=== API keys - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, see <>. - -NOTE: API keys are sent as plain-text, -so they only provide security when used in combination with <>. -They are not applicable for agents running on clients, like the RUM agent, -as there is no way to prevent them from being publicly exposed. - -Configure API keys to authorize requests to the APM Server.
-To enable API key authorization, set `apm-server.auth.api_key.enabled` to `true`. - -There are multiple, unique privileges you can assign to each API key. -API keys can have one or more of these privileges: - -* *Agent configuration* (`config_agent:read`): Required for agents to read -{kibana-ref}/agent-configuration.html[Agent configuration remotely]. -* *Ingest* (`event:write`): Required for ingesting Agent events. -* *Source map* (`sourcemap:write`): Required for <>. - -To secure the communication between APM Agents and the APM Server with API keys, -make sure <> is enabled, then complete these steps: - -. <> -. <> -. <> -. <> - -[[configure-api-key]] -[float] -=== Enable and configure API keys - -API keys are disabled by default. Enable and configure this feature in the `apm-server.auth.api_key` -section of the +{beatname_lc}.yml+ configuration file. - -At a minimum, you must enable API keys, -and should set a limit on the number of unique API keys that APM Server allows per minute. -Here's an example `apm-server.auth.api_key` config using 50 unique API keys: - -[source,yaml] ----- -apm-server.auth.api_key.enabled: true <1> -apm-server.auth.api_key.limit: 50 <2> ----- -<1> Enables API keys -<2> Restricts the number of unique API keys that {es} allows each minute. -This value should be the number of unique API keys configured in your monitored services. - -All other configuration options are described in <>. - -[[create-apikey-user]] -[float] -=== Create an API key user in {kib} - -API keys can only have the same or lower access rights than the user that creates them. -Instead of using a superuser account to create API keys, you can create a role with the minimum required -privileges. - -The user creating an {apm-agent} API key must have at least the `manage_own_api_key` cluster privilege -and the APM application-level privileges that it wishes to grant. 
-The example below uses the {kib} {kibana-ref}/role-management-api.html[role management API] -to create a role named `apm_agent_key_role`. - -[source,js] ---- -POST /_security/role/apm_agent_key_role -{ - "cluster": ["manage_own_api_key"], - "applications": [{ - "application": "apm", - "privileges": ["event:write", "config_agent:read", "sourcemap:write"], - "resources": ["*"] - }] -} ---- - -Assign the newly created `apm_agent_key_role` role to any user that wishes to create {apm-agent} API keys. - -[[create-api-key]] -[float] -=== Create an API key - -Using a superuser account, or a user with the role created in the previous step, -open {kib}, navigate to **{stack-manage-app}** > **API keys** and click **Create API key**. - -Enter a name for your API key and select **Restrict privileges**. -In the role descriptors box, copy and paste the following JSON. -This example creates an API key with privileges for ingesting APM events, -reading agent central configuration, and uploading a source map: - -[source,json] ---- -{ - "apm": { - "applications": [ - { - "application": "apm", - "privileges": ["sourcemap:write", "event:write", "config_agent:read"], <1> - "resources": ["*"] - } - ] - } -} ---- -<1> This example adds all three API privileges to the new API key. -Privileges are described <>. Remove any privileges that you do not need. - -To set an expiration date for the API key, select **Expire after time** -and input the lifetime of the API key in days. - -Click **Create API key** and then copy the Base64 encoded API key. -You will need this for the next step, and you will not be able to view it again. - -[role="screenshot"] -image::images/api-key-copy.png[API key copy base64] - -[[set-api-key]] -[float] -=== Set the API key in your APM agents - -You can now apply your newly created API keys in the configuration of each of your APM agents.
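For agents that read settings from the environment, the key can simply be exported before the monitored service starts. A minimal sketch (the variable name below is the Go agent's; other agents use the equivalent settings linked in the list that follows, and the key value shown is an example only):

```shell
# Hypothetical: supply the Base64-encoded API key to an agent via the
# environment. ELASTIC_APM_API_KEY is the Go agent's setting name;
# substitute your own key value.
export ELASTIC_APM_API_KEY="cVQ0dHoyOEIxZzU5ekMzdUFYZlc6ckg1NXpLZDVRVDZ3dnMzVWJia3hPQQ=="
```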
-See the relevant agent documentation for additional information: - -// Not relevant for RUM and iOS -* *Go agent*: {apm-go-ref}/configuration.html#config-api-key[`ELASTIC_APM_API_KEY`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-api-key[`ApiKey`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-api-key[`api_key`] -* *Node.js agent*: {apm-node-ref}/configuration.html#api-key[`apiKey`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-api-key[`api_key`] -* *Python agent*: {apm-py-ref}/configuration.html#config-api-key[`api_key`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-api-key[`api_key`] - -[[configure-api-key-alternative]] -[float] -=== Alternate API key creation methods - -API keys can also be created and validated outside of {kib}: - -* <> -* <> - -[[create-api-key-workflow-apm-server]] -[float] -==== APM Server API key workflow - -deprecated::[8.6.0, Users should create API Keys through {kib} or the {es} REST API] - -APM Server provides a command line interface for creating, retrieving, invalidating, and verifying API keys. -Keys created using this method can only be used for communication with APM Server. - -[[create-api-key-subcommands]] -[float] -===== `apikey` subcommands - -include::{libbeat-dir}/command-reference.asciidoc[tag=apikey-subcommands] - -[[create-api-key-privileges]] -[float] -===== Privileges - -If privileges are not specified at creation time, the created key will have all privileges. - -* `--agent-config` grants the `config_agent:read` privilege -* `--ingest` grants the `event:write` privilege -* `--sourcemap` grants the `sourcemap:write` privilege - -[[create-api-key-workflow]] -[float] -===== Create an API key - -Create an API key with the `create` subcommand. - -The following example creates an API key with a `name` of `java-001`, -and gives the "agent configuration" and "ingest" privileges. 
- -["source","sh",subs="attributes"] ------ -{beatname_lc} apikey create --ingest --agent-config --name java-001 ------ - -The response will look similar to this: - -[source,console-result] --------------------------------------------------- -Name ........... java-001 -Expiration ..... never -Id ............. qT4tz28B1g59zC3uAXfW -API Key ........ rH55zKd5QT6wvs3UbbkxOA (won't be shown again) -Credentials .... cVQ0dHoyOEIxZzU5ekMzdUFYZlc6ckg1NXpLZDVRVDZ3dnMzVWJia3hPQQ== (won't be shown again) --------------------------------------------------- - -You should always verify the privileges of an API key after creating it. -Verification can be done using the `verify` subcommand. - -The following example verifies that the `java-001` API key has the "agent configuration" and "ingest" privileges. - -["source","sh",subs="attributes"] ------ -{beatname_lc} apikey verify --agent-config --ingest --credentials cVQ0dHoyOEIxZzU5ekMzdUFYZlc6ckg1NXpLZDVRVDZ3dnMzVWJia3hPQQ== ------ - -If the API key has the requested privileges, the response will look similar to this: - -[source,console-result] --------------------------------------------------- -Authorized for privilege "event:write"...: Yes -Authorized for privilege "config_agent:read"...: Yes --------------------------------------------------- - -To invalidate an API key, use the `invalidate` subcommand. -Due to {es} caching, there may be a delay between when this subcommand is executed and when it takes effect. - -The following example invalidates the `java-001` API key. - -["source","sh",subs="attributes"] ------ -{beatname_lc} apikey invalidate --name java-001 ------ - -The response will look similar to this: - -[source,console-result] --------------------------------------------------- -Invalidated keys ... qT4tz28B1g59zC3uAXfW -Error count ........ 0 --------------------------------------------------- - -A full list of `apikey` subcommands and flags is available in the <>. 
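As a quick sanity check, the `Credentials` string is simply the Base64 encoding of `Id:API Key`. Decoding the example credentials above recovers the pair returned by `apikey create`:

```shell
# Decode the example Credentials string back into its "Id:API Key" form.
echo -n 'cVQ0dHoyOEIxZzU5ekMzdUFYZlc6ckg1NXpLZDVRVDZ3dnMzVWJia3hPQQ==' | base64 --decode
# -> qT4tz28B1g59zC3uAXfW:rH55zKd5QT6wvs3UbbkxOA
```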
- -[[create-api-key-workflow-es]] -[float] -==== {es} API key workflow - -It is also possible to create API keys using the {es} -{ref}/security-api-create-api-key.html[create API key API]. - -This example creates an API key named `java-002`: - -[source,kibana] ----- -POST /_security/api_key -{ - "name": "java-002", <1> - "expiration": "1d", <2> - "role_descriptors": { - "apm": { - "applications": [ - { - "application": "apm", - "privileges": ["sourcemap:write", "event:write", "config_agent:read"], <3> - "resources": ["*"] - } - ] - } - } -} ----- -<1> The name of the API key -<2> The expiration time of the API key -<3> Any assigned privileges - -The response will look similar to this: - -[source,console-result] ----- -{ - "id" : "GnrUT3QB7yZbSNxKET6d", - "name" : "java-002", - "expiration" : 1599153532262, - "api_key" : "RhHKisTmQ1aPCHC_TPwOvw" -} ----- - -The `credential` string, which is what agents use to communicate with APM Server, -is a base64 encoded representation of the API key's `id:api_key`. 
-It can be created like this: - -[source,console-result] --------------------------------------------------- -echo -n GnrUT3QB7yZbSNxKET6d:RhHKisTmQ1aPCHC_TPwOvw | base64 --------------------------------------------------- - -You can verify your API key has been base64-encoded correctly with the -{ref}/security-api-authenticate.html[Authenticate API]: - -["source","sh",subs="attributes"] ------ -curl -H "Authorization: ApiKey R0gzRWIzUUI3eVpiU054S3pYSy06bXQyQWl4TlZUeEcyUjd4cUZDS0NlUQ==" localhost:9200/_security/_authenticate ------ - -If the API key has been encoded correctly, you'll see a response similar to the following: - -[source,console-result] ----- -{ - "username":"1325298603", - "roles":[], - "full_name":null, - "email":null, - "metadata":{ - "saml_nameid_format":"urn:oasis:names:tc:SAML:2.0:nameid-format:transient", - "saml(http://saml.elastic-cloud.com/attributes/principal)":[ - "1325298603" - ], - "saml_roles":[ - "superuser" - ], - "saml_principal":[ - "1325298603" - ], - "saml_nameid":"_7b0ab93bbdbc21d825edf7dca9879bd8d44c0be2", - "saml(http://saml.elastic-cloud.com/attributes/roles)":[ - "superuser" - ] - }, - "enabled":true, - "authentication_realm":{ - "name":"_es_api_key", - "type":"_es_api_key" - }, - "lookup_realm":{ - "name":"_es_api_key", - "type":"_es_api_key" - } -} ----- - -You can then use the APM Server CLI to verify that the API key has the requested privileges: - -["source","sh",subs="attributes"] ------ -{beatname_lc} apikey verify --credentials R25yVVQzUUI3eVpiU054S0VUNmQ6UmhIS2lzVG1RMWFQQ0hDX1RQd092dw== ------ - -If the API key has the requested privileges, the response will look similar to this: - -[source,console-result] ----- -Authorized for privilege "config_agent:read"...: Yes -Authorized for privilege "event:write"...: Yes -Authorized for privilege "sourcemap:write"...: Yes ----- - -[float] -[[api-key-settings]] -=== API key configuration options - -[float] -[[api-key-auth-settings]] -==== `auth.api_key.*` configuration options 
- -You can specify the following options in the `apm-server.auth.api_key.*` section of the -+{beatname_lc}.yml+ configuration file. -They apply to API key communication between the APM Server and APM Agents. - -NOTE: These settings are different from the API key settings used for {es} output and monitoring. - -[float] -===== `enabled` - -Enable API key authorization by setting `enabled` to `true`. -By default, `enabled` is set to `false`, and API key support is disabled. - -TIP: Not using Elastic APM agents? -When enabled, third-party APM agents must include a valid API key in the following format: -`Authorization: ApiKey `. The key must be the base64 encoded representation of the API key's `id:api_key`. - -[float] -===== `limit` - -Each unique API key triggers one request to {es}. -This setting restricts the number of unique API keys that are allowed per minute. -The minimum value for this setting should be the number of API keys configured in your monitored services. -The default `limit` is `100`. - -[float] -==== `auth.api_key.elasticsearch.*` configuration options - -All of the `auth.api_key.elasticsearch.*` configurations are optional. -If none are set, configuration settings from the `apm-server.output` section will be reused. - -[float] -===== `elasticsearch.hosts` - -API keys are fetched from {es}. -This configuration needs to point to a secured {es} cluster that is able to serve API key requests. - - -[float] -===== `elasticsearch.protocol` - -The name of the protocol {es} is reachable on. -The options are: `http` or `https`. The default is `http`. -If nothing is configured, configuration settings from the `output` section will be reused. - -[float] -===== `elasticsearch.path` - -An optional HTTP path prefix that is prepended to the HTTP API calls. -If nothing is configured, configuration settings from the `output` section will be reused. - -[float] -===== `elasticsearch.proxy_url` - -The URL of the proxy to use when connecting to the {es} servers.
-The value may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed. -If nothing is configured, configuration settings from the `output` section will be reused. - -[float] -===== `elasticsearch.timeout` - -The HTTP request timeout in seconds for the {es} request. -If nothing is configured, configuration settings from the `output` section will be reused. - -[float] -==== `auth.api_key.elasticsearch.ssl.*` configuration options - -SSL is off by default. Set `elasticsearch.protocol` to `https` if you want to enable `https`. - -[float] -===== `elasticsearch.ssl.enabled` - -Enable custom SSL settings. -Set to `false` to ignore custom SSL settings for secure communication. - -[float] -===== `elasticsearch.ssl.verification_mode` - -Configure SSL verification mode. -If `none` is configured, all server hosts and certificates will be accepted. -In this mode, SSL based connections are susceptible to man-in-the-middle attacks. -**Use only for testing**. Default is `full`. - -[float] -===== `elasticsearch.ssl.supported_protocols` - -List of supported/valid TLS versions. -By default, all TLS versions from 1.0 to 1.2 are enabled. - -[float] -===== `elasticsearch.ssl.certificate_authorities` - -List of root certificates for HTTPS server verifications. - -[float] -===== `elasticsearch.ssl.certificate` - -The path to the certificate for SSL client authentication. - -[float] -===== `elasticsearch.ssl.key` - -The client certificate key used for client authentication. -This option is required if certificate is specified. - -[float] -===== `elasticsearch.ssl.key_passphrase` - -An optional passphrase used to decrypt an encrypted key stored in the configured key file. -It is recommended to use the provided keystore instead of entering the passphrase in plain text. - -[float] -===== `elasticsearch.ssl.cipher_suites` - -The list of cipher suites to use. The first entry has the highest priority.
-If this option is omitted, the Go crypto library’s default suites are used (recommended). - -[float] -===== `elasticsearch.ssl.curve_types` - -The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange). - -[float] -===== `elasticsearch.ssl.renegotiation` - -Configure what types of renegotiation are supported. -Valid options are `never`, `once`, and `freely`. Default is `never`. - -* `never` - Disables renegotiation. -* `once` - Allows a remote server to request renegotiation once per connection. -* `freely` - Allows a remote server to repeatedly request renegotiation. - -[float] -[[api-key-settings-legacy]] -==== `api_key.*` configuration options - -deprecated::[7.14.0, Replaced by `auth.api_key.*`. See <>] - -In versions prior to 7.14.0, API Key authorization was known as `apm-server.api_key`. In 7.14.0 this was renamed `apm-server.auth.api_key`. -The old configuration will continue to work until 8.0.0, and the new configuration will take precedence. - -[[secret-token-legacy]] -=== Secret token - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, see <>. - -You can configure a secret token to authorize requests to the APM Server. -This ensures that only your agents are able to send data to your APM servers. -Both the agents and the APM servers have to be configured with the same secret token. - -NOTE: Secret tokens are sent as plain-text, -so they only provide security when used in combination with <>. - -To secure the communication between APM agents and the APM Server with a secret token: - -. Make sure <> is enabled -. <> -. <> - -NOTE: Secret tokens are not applicable for the RUM Agent, -as there is no way to prevent them from being publicly exposed. - -[[set-secret-token]] -[float] -=== Set a secret token - -**APM Server configuration** - -// lint ignore fleet -NOTE: {ess} and {ece} deployments provision a secret token when the deployment is created. 
-The secret token can be found and reset in the {ecloud} console under **Deployments** -- **APM & Fleet**. - -Here's how you set the secret token in APM Server: - -[source,yaml] ----- -apm-server.auth.secret_token: ----- - -We recommend saving the token in the APM Server <>. - -IMPORTANT: Secret tokens are not applicable for the RUM agent, -as there is no way to prevent them from being publicly exposed. - -**Agent specific configuration** - -Each Agent has a configuration for setting the value of the secret token: - -* *Go agent*: {apm-go-ref}/configuration.html#config-secret-token[`ELASTIC_APM_SECRET_TOKEN`] -* *iOS agent*: {apm-ios-ref-v}/configuration.html#secretToken[`secretToken`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-secret-token[`secret_token`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-secret-token[`ELASTIC_APM_SECRET_TOKEN`] -* *Node.js agent*: {apm-node-ref}/configuration.html#secret-token[`Secret Token`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-secret-token[`secret_token`] -* *Python agent*: {apm-py-ref}/configuration.html#config-secret-token[`secret_token`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-secret-token[`secret_token`] - -[[https-in-agents]] -[float] -=== HTTPS communication in APM agents - -To enable secure communication in your agents, you need to update the configured server URL to use `HTTPS` instead of `HTTP`. 
- -* *Go agent*: {apm-go-ref}/configuration.html#config-server-url[`ELASTIC_APM_SERVER_URL`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-server-urls[`server_urls`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-server-url[`ServerUrl`] -* *Node.js agent*: {apm-node-ref}/configuration.html#server-url[`serverUrl`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-server-url[`server_url`] -* *Python agent*: {apm-py-ref}/[`server_url`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-server-url[`server_url`] - -Some agents also allow you to specify a custom certificate authority for connecting to APM Server. - -* *Go agent*: certificate pinning through {apm-go-ref}/configuration.html#config-server-cert[`ELASTIC_APM_SERVER_CERT`] -* *Python agent*: certificate pinning through {apm-py-ref}/configuration.html#config-server-cert[`server_cert`] -* *Ruby agent*: certificate pinning through {apm-ruby-ref}/configuration.html#config-ssl-ca-cert[`server_ca_cert`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-server-cert[`ServerCert`] -* *Node.js agent*: custom CA setting through {apm-node-ref}/configuration.html#server-ca-cert-file[`serverCaCertFile`] -* *Java agent*: adding the certificate to the JVM `trustStore`. -See {apm-java-ref}/ssl-configuration.html#ssl-server-authentication[APM Server authentication] for more details. - -Agents that don't allow you to specify a custom certificate will allow you to -disable verification of the SSL certificate. -This ensures encryption, but does not verify that you are sending data to the correct APM Server.
- -* *Go agent*: {apm-go-ref}/configuration.html#config-verify-server-cert[`ELASTIC_APM_VERIFY_SERVER_CERT`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-verify-server-cert[`VerifyServerCert`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-verify-server-cert[`verify_server_cert`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-verify-server-cert[`verify_server_cert`] -* *Python agent*: {apm-py-ref}/configuration.html#config-verify-server-cert[`verify_server_cert`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-verify-server-cert[`verify_server_cert`] -* *Node.js agent*: {apm-node-ref}/configuration.html#validate-server-cert[`verifyServerCert`] - -[[secure-communication-unauthenticated]] -=== Anonymous authentication - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, see <>. - -Elastic APM agents can send unauthenticated (anonymous) events to the APM Server. -An event is considered to be anonymous if no authentication token can be extracted from the incoming request. -The APM Server's default response to these requests depends on its configuration: - -[options="header"] -|==== -|Configuration |Default -|An <> or <> is configured | Anonymous requests are rejected and an authentication error is returned. -|No API key or secret token is configured | Anonymous requests are accepted by the APM Server. -|==== - -In some cases, however, it makes sense to allow both authenticated and anonymous requests. -For example, it isn't possible to authenticate requests from front-end services as -the secret token or API key can't be protected. This is the case with the Real User Monitoring (RUM) -agent running in a browser, or the iOS/Swift agent running in a user application. -However, you still likely want to authenticate requests from back-end services.
-To solve this problem, you can enable anonymous authentication in the APM Server to allow the -ingestion of unauthenticated client-side APM data while still requiring authentication for server-side services. - -When an <> or <> is configured, -anonymous authentication must be enabled to collect RUM data. -To enable anonymous access, set either <> or -<> to `true`. - -Because anyone can send anonymous events to the APM Server, -additional configuration variables are available to rate limit the number of anonymous events the APM Server processes; -throughput is equal to the `rate_limit.ip_limit` times the `rate_limit.event_limit`. - -See <> for a complete list of options and a sample configuration file. diff --git a/docs/legacy/security.asciidoc b/docs/legacy/security.asciidoc deleted file mode 100644 index 57b6a8767a3..00000000000 --- a/docs/legacy/security.asciidoc +++ /dev/null @@ -1,9 +0,0 @@ -A reference of all available <> is also available. - -[float] -[[security-overview]] -== Security Overview - -APM Server exposes an HTTP endpoint, and as with anything that opens ports on your servers, -you should be careful about who can connect to it. -Firewall rules are recommended to ensure only authorized systems can connect. diff --git a/docs/legacy/server-info.asciidoc b/docs/legacy/server-info.asciidoc deleted file mode 100644 index f734a05e22e..00000000000 --- a/docs/legacy/server-info.asciidoc +++ /dev/null @@ -1,45 +0,0 @@ -[[server-info]] -== Server Information API - -++++ -Server information -++++ - -IMPORTANT: {deprecation-notice-api} -If you've already upgraded, see <>. - -The APM Server exposes an API endpoint to query general server information. -This lightweight endpoint is useful as a server up/down health check.
- -[[server-info-endpoint]] -[float] -=== Server Information endpoint -Send an `HTTP GET` request to the server information endpoint: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/ ------------------------------------------------------------- - -This endpoint always returns an HTTP 200. - -If an <> or <> is set, only requests including <> will receive server details. - -[[server-info-examples]] -[float] -==== Example - -Example APM Server information request: - -["source","sh",subs="attributes"] ---------------------------------------------------------------------------- -curl -X GET http://127.0.0.1:8200/ \ - -H "Authorization: Bearer secret_token" - -{ - "build_date": "2021-12-18T19:59:06Z", - "build_sha": "24fe620eeff5a19e2133c940c7e5ce1ceddb1445", - "publish_ready": true, - "version": "{version}" -} ---------------------------------------------------------------------------- diff --git a/docs/legacy/setting-up-and-running.asciidoc b/docs/legacy/setting-up-and-running.asciidoc index aaedceea193..14940df4694 100644 --- a/docs/legacy/setting-up-and-running.asciidoc +++ b/docs/legacy/setting-up-and-running.asciidoc @@ -1,30 +1,27 @@ [[setting-up-and-running]] -== Set up APM Server +== APM Server advanced setup ++++ -Set up +Advanced setup ++++ -IMPORTANT: {deprecation-notice-installation} - Before reading this section, see the <> for basic installation and running instructions.
This section includes additional information on how to set up and run APM Server, including: * <> -* <> * <> * <> * <> include::{libbeat-dir}/shared-directory-layout.asciidoc[] -include::{libbeat-dir}/keystore.asciidoc[] - include::{libbeat-dir}/command-reference.asciidoc[] +include::./data-ingestion.asciidoc[] + include::./high-availability.asciidoc[] include::{libbeat-dir}/shared-systemd.asciidoc[] diff --git a/docs/legacy/sourcemap-api.asciidoc b/docs/legacy/sourcemap-api.asciidoc deleted file mode 100644 index 2c54bf48d55..00000000000 --- a/docs/legacy/sourcemap-api.asciidoc +++ /dev/null @@ -1,88 +0,0 @@ -[[sourcemap-api]] -== Source map upload API - -++++ -Source map upload -++++ - -IMPORTANT: {deprecation-notice-api} -If you've already upgraded, see <>. - -IMPORTANT: You must <> in the APM Server for this endpoint to work. - -The APM Server exposes an API endpoint to upload source maps for real user monitoring (RUM). -See the <> guide to get started. - -If you're using the <>, -you must use the {kib} {kibana-ref}/rum-sourcemap-api.html[source map upload API] instead. - -[[sourcemap-endpoint]] -[float] -=== Upload endpoint -Send a `HTTP POST` request with the `Content-Type` header set to `multipart/form-data` to the source map endpoint: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/assets/v1/sourcemaps ------------------------------------------------------------- - -[[sourcemap-request-fields]] -[float] -==== Request Fields -The request must include some fields needed to identify `source map` correctly later on: - -* `service_name` -* `service_version` -* `sourcemap` - must follow the https://docs.google.com/document/d/1U1RGAehQwRypUTovF1KRlpiOFze0b-_2gc6fAH0KY0k[Source map revision 3 proposal] -spec and be attached as a `file upload`. -* `bundle_filepath` - the absolute path of the final bundle as it is used in the web application - -You can configure an <> or <> to restrict source map uploads. 
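As a pre-flight check before issuing the upload, the four required fields can be validated with a short Python sketch (the helper is illustrative, not part of APM Server; field names are taken from the list above):

```python
# The four fields the source map upload endpoint requires.
REQUIRED_FIELDS = ("service_name", "service_version", "sourcemap", "bundle_filepath")

def missing_fields(form):
    """Return the required upload fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not form.get(f)]

form = {
    "service_name": "test-service",
    "service_version": "1.0",
    "bundle_filepath": "http://localhost/static/js/bundle.js",
}
print(missing_fields(form))  # ['sourcemap'] -- the map file itself still needs attaching
```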
- -[float] -[[sourcemap-apply]] -==== How source maps are applied - -APM Server attempts to find the correct source map for each `stack trace frame` in an event. -To do this, it tries the following: - -* Compare the event's `service.name` with the source map's `service_name` -* Compare the event's `service.version` with the source map's `service_version` -* Compare the stack trace frame's `abs_path` with the source map's `bundle_filepath` - -While comparing the stack trace frame's `abs_path` with the source map's `bundle_filepath`, the search logic will prioritize `abs_path` full matching: -[source,console] ---------------------------------------------------------------------------- -{ - "sourcemap.bundle_filepath": "http://localhost/static/js/bundle.js" -} ---------------------------------------------------------------------------- - -But if there is no full match, it also accepts source maps that match only the URLs path (without the host). -[source,console] ---------------------------------------------------------------------------- -{ - "sourcemap.bundle_filepath": "/static/js/bundle.js" -} ---------------------------------------------------------------------------- - -If a source map is found, the `stack trace frame` attributes `filename`, `function`, `line number`, and `column number` are overwritten, -and `abs path` is https://golang.org/pkg/path/#Clean[cleaned] to be the shortest path name equivalent to the given path name. -If multiple source maps are found, -the one with the latest upload timestamp is used. 
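The matching rules above can be sketched in Python. This is a simplified illustration, not APM Server's actual implementation, and it assumes the `service_name`/`service_version` comparison has already narrowed the candidates:

```python
from urllib.parse import urlparse

def find_sourcemap(frame_abs_path, candidates):
    """Pick the best source map for a stack trace frame.

    Each candidate has a `bundle_filepath` and an `uploaded` timestamp.
    A full `abs_path` match wins over a path-only match; ties are broken
    by the latest upload timestamp.
    """
    matches = [c for c in candidates if c["bundle_filepath"] == frame_abs_path]
    if not matches:
        # Fall back to the URL path without scheme and host.
        path_only = urlparse(frame_abs_path).path
        matches = [c for c in candidates if c["bundle_filepath"] == path_only]
    return max(matches, key=lambda c: c["uploaded"]) if matches else None

maps = [
    {"bundle_filepath": "/static/js/bundle.js", "uploaded": 1},
    {"bundle_filepath": "http://localhost/static/js/bundle.js", "uploaded": 2},
]
print(find_sourcemap("http://localhost/static/js/bundle.js", maps)["uploaded"])  # 2
```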
- -[[sourcemap-api-examples]] -[float] -==== Example - -Example source map request including an optional <> "mysecret": - -["source","sh",subs="attributes"] ---------------------------------------------------------------------------- -curl -X POST http://127.0.0.1:8200/assets/v1/sourcemaps \ - -H "Authorization: Bearer mysecret" \ - -F service_name="test-service" \ - -F service_version="1.0" \ - -F bundle_filepath="http://localhost/static/js/bundle.js" \ - -F sourcemap=@bundle.js.map ---------------------------------------------------------------------------- diff --git a/docs/legacy/sourcemap-indices.asciidoc b/docs/legacy/sourcemap-indices.asciidoc deleted file mode 100644 index ed1dfc0a683..00000000000 --- a/docs/legacy/sourcemap-indices.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[sourcemap-indices]] -== Example source map document - -++++ -Source map document -++++ - -This example shows what a source map document can look like when indexed in {es}: - -[source,json] ----- -include::../data/intake-api/generated/sourcemap/bundle.js.map[] ----- diff --git a/docs/legacy/sourcemaps.asciidoc b/docs/legacy/sourcemaps.asciidoc deleted file mode 100644 index 4e3ae6b96d2..00000000000 --- a/docs/legacy/sourcemaps.asciidoc +++ /dev/null @@ -1,183 +0,0 @@ -[[sourcemaps]] -== How to apply source maps to error stack traces when using minified bundles - -++++ -Create and upload source maps (RUM) -++++ - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. - -Minifying JavaScript bundles in production is a common practice; -it can greatly improve the load time and network latency of your applications. -The problem with minifying code is that it can be hard to debug. - -For best results, uploading source maps should become a part of your deployment procedure, -and not something you only do when you see unhelpful errors. 
-That's because uploading source maps after errors happen won't make old errors magically readable — -errors must occur again for source mapping to occur. - -Here's an example of an exception stack trace in the {apm-app} when using minified code. -As you can see, it's not very helpful. - -[role="screenshot"] -image::images/source-map-before.png[{apm-app} without source mapping] - -With a source map, minified files are mapped back to the original source code, -allowing you to maintain the speed advantage of minified code, -without losing the ability to quickly and easily debug your application. -Here's the same example as before, but with a source map uploaded and applied: - -[role="screenshot"] -image::images/source-map-after.png[{apm-app} with source mapping] - -Follow the steps below to enable source mapping your error stack traces in the {apm-app}: - -* <> -* <> -* <> -* <> - -[float] -[[sourcemap-rum-initialize]] -=== Initialize the RUM Agent - -Set the service name and version of your application when initializing the RUM Agent. -To make uploading subsequent source maps easier, the `serviceVersion` you choose might be the -`version` from your `package.json`. For example: - -[source,js] ----- -import { init as initApm } from '@elastic/apm-rum' -const serviceVersion = require("./package.json").version - -const apm = initApm({ - serviceName: 'myService', - serviceVersion: serviceVersion -}) ----- - -Or, `serviceVersion` could be a git commit reference. For example: - -[source,js] ----- -const git = require('git-rev-sync') -const serviceVersion = git.short() ----- - -It can also be any other unique string that indicates a specific version of your application. -The APM integration uses the service name and version to match the correct source map file to each stack trace. 
- -[float] -[role="child_attributes"] -[[sourcemap-rum-generate]] -=== Generate a source map - -To be compatible with Elastic APM, source maps must follow the -https://sourcemaps.info/spec.html[source map revision 3 proposal spec]. - -Source maps can be generated and configured in many different ways. -For example, parcel automatically generates source maps by default. -If you're using webpack, some configuration may be needed to generate a source map: - -[source,js] ----- -const webpack = require('webpack') -const serviceVersion = require("./package.json").version <1> -const TerserPlugin = require('terser-webpack-plugin'); -module.exports = { - entry: 'app.js', - output: { - filename: 'app.min.js', - path: './dist' - }, - devtool: 'source-map', - plugins: [ - new webpack.DefinePlugin({'serviceVersion': JSON.stringify(serviceVersion)}), - new TerserPlugin({ - sourceMap: true - }) - ] -} ----- -<1> If you're using a different method of defining `serviceVersion`, you can set it here. - -[float] -[[sourcemap-rum-configure]] -=== Configure the {kib} endpoint in APM Server - -include::./tab-widgets/kibana-endpoint-widget.asciidoc[] - -[float] -[[sourcemap-rum-upload]] -=== Upload the source map to {kib} - -{kib} exposes a {kibana-ref}/rum-sourcemap-api.html[source map endpoint] for uploading source maps. -Source maps can be uploaded as a string, or as a file upload. - -Let's look at two different ways to upload a source map: curl and a custom application. -Each example includes the four fields necessary for APM Server to later map minified code to its source: - -* `service_name` - Should match the `serviceName` from step one -* `service_version` - Should match the `serviceVersion` from step one -* `bundle_filepath` - The absolute path of the final bundle as used in the web application -* `sourcemap` - The location of the source map. -If you have multiple source maps, you'll need to upload each individually. 
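Besides curl and a custom Node.js app, the same four fields can be assembled from any language. Here is a hedged Python sketch; the endpoint URL and field values are illustrative, and actually sending the request needs the third-party `requests` package:

```python
import io

KIBANA_URL = "http://localhost:5601/api/apm/sourcemaps"  # hypothetical Kibana host

def build_form(service_name, service_version, bundle_filepath, sourcemap_file):
    """Assemble the four multipart fields the endpoint expects."""
    return {
        "service_name": (None, service_name),
        "service_version": (None, service_version),
        "bundle_filepath": (None, bundle_filepath),
        "sourcemap": ("bundle.js.map", sourcemap_file),
    }

def upload(api_key, form):
    """POST the form; requires the third-party `requests` package."""
    import requests
    headers = {"kbn-xsrf": "true", "Authorization": "ApiKey " + api_key}
    return requests.post(KIBANA_URL, headers=headers, files=form)

form = build_form("foo", "1.0.0", "http://localhost/app.min.js",
                  io.BytesIO(b'{"version": 3, "sources": []}'))
print(sorted(form))  # ['bundle_filepath', 'service_name', 'service_version', 'sourcemap']
# resp = upload("YOUR_API_KEY", form)  # uncomment against a real Kibana and API key
```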
-
-[float]
-[[sourcemap-curl]]
-==== Upload via curl
-
-Here’s an example curl request that uploads the source map file created in the previous step.
-This request uses an API key for authentication.
-
-[source,console]
-----
-SERVICEVERSION=`node -e "console.log(require('./package.json').version);"` && \ <1>
-curl -X POST "http://localhost:5601/api/apm/sourcemaps" \
--H 'Content-Type: multipart/form-data' \
--H 'kbn-xsrf: true' \
--H "Authorization: ApiKey ${YOUR_API_KEY}" \ <2>
--F service_name="foo" \
--F service_version="$SERVICEVERSION" \
--F bundle_filepath="/test/e2e/general-usecase/bundle.js.map" \
--F sourcemap=@./dist/app.min.js.map
-----
-<1> This example uses the version from `package.json`
-<2> The API key used here needs to have appropriate {kibana-ref}/rum-sourcemap-api.html[privileges]
-
-[float]
-[[sourcemap-custom-app]]
-==== Upload via a custom app
-
-To ensure uploading source maps becomes a part of your deployment process,
-consider automating the process with a custom application.
-Here's an example Node.js application that uploads the source map file created in the previous step:
-
-[source,js]
-----
-console.log('Uploading sourcemaps!')
-var fs = require('fs')
-var request = require('request')
-var filepath = './dist/app.min.js.map'
-var headers = {
-  'kbn-xsrf': 'true',
-  'Authorization': 'ApiKey ${YOUR_API_KEY}' // Replace with a real API key
-}
-var formData = {
-  service_name: 'service-name',
-  service_version: require('./package.json').version, // Or use 'git-rev-sync' for the git commit hash
-  bundle_filepath: 'http://localhost/app.min.js',
-  sourcemap: fs.createReadStream(filepath)
-}
-// `request` sets the multipart Content-Type header automatically for formData.
-request.post({url: 'http://localhost:5601/api/apm/sourcemaps', headers: headers, formData: formData}, function (err, resp, body) {
-  if (err) {
-    console.log('Error while uploading sourcemaps!', err)
-  } else {
-    console.log('Sourcemaps uploaded!')
-  }
-})
-----
-
-That's it! New exception stack traces should now be correctly mapped to your source code.
-Don't forget to enable RUM support in the APM integration if you haven't already. diff --git a/docs/legacy/span-api.asciidoc b/docs/legacy/span-api.asciidoc deleted file mode 100644 index 3b42cf25fed..00000000000 --- a/docs/legacy/span-api.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[span-api]] -=== Spans - -Spans are events captured by an agent occurring in a monitored service. - -[[span-schema]] -[float] -==== Span Schema - -APM Server uses JSON Schema to validate requests. The specification for spans is defined on -{github_repo_link}/docs/spec/v2/span.json[GitHub] and included below: - -[source,json] ----- -include::../spec/v2/span.json[] ----- diff --git a/docs/legacy/span-indices.asciidoc b/docs/legacy/span-indices.asciidoc deleted file mode 100644 index 4a82cebf1fe..00000000000 --- a/docs/legacy/span-indices.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[span-indices]] -== Example span documents - -++++ -Span documents -++++ - -This example shows what span documents can look like when indexed in {es}: - -[source,json] ----- -include::../data/elasticsearch/generated/spans.json[] ----- diff --git a/docs/legacy/ssl-input-settings.asciidoc b/docs/legacy/ssl-input-settings.asciidoc index 11fed09e222..9f4cdfb5f1a 100644 --- a/docs/legacy/ssl-input-settings.asciidoc +++ b/docs/legacy/ssl-input-settings.asciidoc @@ -1,9 +1,6 @@ [[agent-server-ssl]] === SSL input settings -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, please see <> instead. - You can specify the following options in the `apm-server.ssl` section of the +{beatname_lc}.yml+ config file. They apply to SSL/TLS communication between the APM Server and APM Agents. @@ -29,7 +26,6 @@ Required if `apm-server.ssl.enabled` is `true`. ==== `key_passphrase` The passphrase used to decrypt an encrypted key stored in the configured `key` file. -We recommend saving the `key_passphrase` in the APM Server <>. 
[float]
==== `supported_protocols`
diff --git a/docs/legacy/storage-management.asciidoc b/docs/legacy/storage-management.asciidoc
deleted file mode 100644
index 844b9d51fae..00000000000
--- a/docs/legacy/storage-management.asciidoc
+++ /dev/null
@@ -1,299 +0,0 @@
-[[storage-management]]
-== Storage Management
-
-++++
-Manage Storage
-++++
-
-IMPORTANT: {deprecation-notice-data}
-If you've already upgraded, please see <> instead.
-
-* <>
-* <>
-* <>
-* <>
-* <>
-
-[[sizing-guide]]
-=== Storage and sizing guide
-
-IMPORTANT: {deprecation-notice-data}
-If you've already upgraded, please see <> instead.
-
-APM processing and storage costs are largely dominated by transactions, spans, and stack frames.
-
-* {apm-overview-ref-v}/transactions.html[*Transactions*] describe an event captured by an Elastic {apm-agent} instrumenting a service.
-They are the highest level of work being measured within a service.
-* {apm-overview-ref-v}/transaction-spans.html[*Spans*] belong to transactions. They measure from the start to end of an activity,
-and contain information about a specific code path that has been executed.
-* *Stack frames* belong to spans. Stack frames represent a function call on the call stack,
-and include attributes like function name, file name and path, line number, etc.
-Stack frames can heavily influence the size of a span.
-
-[float]
-[[typical-transactions]]
-==== Typical transactions
-
-Due to the high variability of APM data, it's difficult to classify a transaction as typical.
-Regardless, this guide will attempt to classify Transactions as _Small_, _Medium_, or _Large_,
-and make recommendations based on those classifications.
-
-The size of a transaction depends on the language, agent settings, and what services the agent instruments.
-For instance, an agent auto-instrumenting a service with a popular tech stack
-(web framework, database, caching library, etc.) is more likely to generate bigger transactions.
-
-In addition, all agents support manual instrumentation.
-How little or how much you use these APIs will also impact what a typical transaction looks like.
-
-If your sampling rate is very small, transactions will be the dominant storage cost.
-
-Here's a speculative reference:
-
-[options="header"]
-|=======================================================================
-|Transaction size |Number of Spans |Number of stack frames
-|_Small_ |5-10 |5-10
-|_Medium_ |15-20 |15-20
-|_Large_ |30-40 |30-40
-|=======================================================================
-
-There will always be transaction outliers with hundreds of spans or stack frames, but those are very rare.
-Small transactions are the most common.
-
-[float]
-[[typical-storage]]
-==== Typical storage
-
-Consider the following typical storage reference.
-These numbers do not account for {es} compression.
-
-* 1 unsampled transaction is **~1 KB**
-* 1 span with 10 stack frames is **~4 KB**
-* 1 span with 50 stack frames is **~20 KB**
-* 1 transaction with 10 spans, each with 10 stack frames is **~50 KB**
-* 1 transaction with 25 spans, each with 25 stack frames is **250-300 KB**
-* 100 transactions with 10 spans, each with 10 stack frames, sampled at 90% is **600 KB**
-
-APM data compresses quite well, so the storage cost in {es} will be considerably less:
-
-* Indexing 100 unsampled transactions per second for 1 hour results in 360,000 documents. These documents use around **50 MB** of disk space.
-* Indexing 10 transactions per second for 1 hour, each transaction with 10 spans, each span with 10 stack frames, results in 396,000 documents. These documents use around **200 MB** of disk space.
-* Indexing 25 transactions per second for 1 hour, each transaction with 25 spans, each span with 25 stack frames, results in 2,340,000 documents. These documents use around **1.2 GB** of disk space.
-
-NOTE: These examples were indexing the same data over and over with minimal variation.
Because of that, the compression ratios observed of 80-90% are somewhat optimistic. - -[[processing-performance]] -=== Processing and performance - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. - -APM Server performance depends on a number of factors: memory and CPU available, -network latency, transaction sizes, workload patterns, -agent and server settings, versions, and protocol. - -Let's look at a simple example that makes the following assumptions: - -* The load is generated in the same region as where APM Server and {es} are deployed. -* We're using the default settings in cloud. -* A small number of agents are reporting. - -This leaves us with relevant variables like payload and instance sizes. -See the table below for approximations. -As a reminder, events are -{apm-overview-ref-v}/transactions.html[transactions] and -{apm-overview-ref-v}/transaction-spans.html[spans]. - -[options="header"] -|======================================================================= -|Transaction/Instance |512 MB Instance |2 GB Instance |8 GB Instance -|Small transactions - -_5 spans with 5 stack frames each_ |600 events/second |1200 events/second |4800 events/second -|Medium transactions - -_15 spans with 15 stack frames each_ |300 events/second |600 events/second |2400 events/second -|Large transactions - -_30 spans with 30 stack frames each_ |150 events/second |300 events/second |1400 events/second -|======================================================================= - -In other words, a 512 MB instance can process \~3 MB per second, -while an 8 GB instance can process ~20 MB per second. - -APM Server is CPU bound, so it scales better from 2 GB to 8 GB than it does from 512 MB to 2 GB. -This is because larger instance types in {ecloud} come with much more computing power. - -Don't forget that the APM Server is stateless. -Several instances running do not need to know about each other. 
-This means that with a properly sized {es} instance, APM Server scales out linearly. - -NOTE: RUM deserves special consideration. The RUM agent runs in browsers, and there can be many thousands reporting to an APM Server with very variable network latency. - -[[reduce-storage]] -=== Reduce storage - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. - -The amount of storage for APM data depends on several factors: -the number of services you are instrumenting, how much traffic the services see, agent and server settings, -and the length of time you store your data. - -[float] -[[reduce-sample-rate]] -==== Reduce the sample rate - -The transaction sample rate directly influences the number of documents (more precisely, spans) to be indexed. -It is the easiest way to reduce storage. - -The transaction sample rate is a configuration setting of each agent. -Reducing it does not affect the collection of metrics such as _Transactions per second_. - -[float] -[[reduce-stacktrace]] -==== Reduce collected stack trace information - -Elastic APM agents collect `stacktrace` information under certain circumstances. -This can be very helpful in identifying issues in your code, -but it also comes with an overhead at collection time and increases the storage usage. - -Stack trace collection settings are managed in each agent. - -[float] -[[delete-data]] -==== Delete data - -You might want to only keep data for a defined time period. -This might mean deleting old documents periodically, -deleting data collected for specific services or customers, -or deleting specific indices. - -Depending on your use case, -you can delete data periodically with <>, -{curator-ref-current}[Curator], the {ref}/docs-delete-by-query.html[Delete By Query API], -or in the {kibana-ref}/managing-indices.html[{kib} Index Management UI]. 
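When deleting periodically, the typical selection logic -- match an index-name prefix, parse the date suffix, keep anything newer than a cutoff -- can be sketched in Python. This is a simplified stand-in for Curator's pattern and age filters, and the index names are illustrative:

```python
from datetime import datetime, timedelta

def indices_to_delete(indices, prefix, older_than_days, today):
    """Select daily indices whose yyyy.MM.dd suffix is older than the cutoff."""
    cutoff = today - timedelta(days=older_than_days)
    selected = []
    for name in indices:
        if not name.startswith(prefix):
            continue
        try:
            day = datetime.strptime(name[len(prefix):], "%Y.%m.%d")
        except ValueError:
            continue  # e.g. the sourcemap index carries no date suffix
        if day < cutoff:
            selected.append(name)
    return selected

names = [
    "apm-7.15.0-span-2021.09.01",
    "apm-7.15.0-span-2021.09.03",
    "apm-7.15.0-sourcemap",
]
print(indices_to_delete(names, "apm-7.15.0-span-", 1, datetime(2021, 9, 3)))
# ['apm-7.15.0-span-2021.09.01']
```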
- -[float] -[[delete-data-ilm]] -===== Delete data with {ilm-init} - -Index Lifecycle management ({ilm-init}) enables you to automate how you want to manage your indices over time. -You can base actions on factors such as shard size and performance requirements. -See <> to learn more. - -[float] -[[delete-data-periodically]] -===== Delete data periodically - -To delete data periodically you can use {curator-ref-current}[Curator] and set up a cron job to run it. - -By default, APM indices have the pattern `apm-%{[observer.version]}-{type}-%{+yyyy.MM.dd}`. -With the curator command line interface you can, for instance, see all your existing indices: - -["source","sh",subs="attributes"] ------------------------------------------------------------- -curator_cli --host localhost show_indices --filter_list '[{"filtertype":"pattern","kind":"prefix","value":"apm-"}]' - -apm-{version}-error-{sample_date_0} -apm-{version}-error-{sample_date_1} -apm-{version}-error-{sample_date_2} -apm-{version}-sourcemap -apm-{version}-span-{sample_date_0} -apm-{version}-span-{sample_date_1} -apm-{version}-span-{sample_date_2} -apm-{version}-transaction-{sample_date_0} -apm-{version}-transaction-{sample_date_1} -apm-{version}-transaction-{sample_date_2} ------------------------------------------------------------- - -And then delete any span indices older than 1 day: - -["source","sh",subs="attributes"] ------------------------------------------------------------- -curator_cli --host localhost delete_indices --filter_list '[{"filtertype":"pattern","kind":"prefix","value":"apm-{version}-span-"}, {"filtertype":"age","source":"name","timestring":"%Y.%m.%d","unit":"days","unit_count":1,"direction":"older"}]' - -INFO Deleting selected indices: [apm-{version}-span-{sample_date_0}, apm-{version}-span-{sample_date_1}] -INFO ---deleting index apm-{version}-span-{sample_date_0} -INFO ---deleting index apm-{version}-span-{sample_date_1} -INFO "delete_indices" action completed. 
------------------------------------------------------------
-
-[float]
-[[delete-data-by-query]]
-===== Delete data matching a query
-
-You can delete all APM documents matching a specific query.
-For example, to delete all documents with a given `service.name`, use the following request:
-
-["source","console"]
-------------------------------------------------------------
-POST /apm-*/_delete_by_query
-{
-  "query": {
-    "term": {
-      "service.name": {
-        "value": "old-service-name"
-      }
-    }
-  }
-}
-------------------------------------------------------------
-
-See {ref}/docs-delete-by-query.html[delete by query] for further information on this topic.
-
-[float]
-[[delete-data-kibana]]
-===== Delete data via {kib} Index Management UI
-
-Select the indices you want to delete, and click **Manage indices** to see the available actions.
-Then click **delete indices**.
-
-[[manage-indices-kibana]]
-=== Manage Indices via {kib}
-
-IMPORTANT: {deprecation-notice-data}
-If you've already upgraded, please see <> instead.
-
-The {kib} UI for {kibana-ref}/managing-indices.html[managing indices] allows you to view indices,
-index settings, mappings, document counts, used storage per index, and much more.
-You can also perform management operations, like deleting indices directly via the {kib} UI.
-Finally, the UI supports applying bulk operations on several indices at once.
-
-[[update-existing-data]]
-=== Update existing data
-
-IMPORTANT: {deprecation-notice-data}
-If you've already upgraded, please see <> instead.
-
-You might want to update documents that are already indexed.
-For example, if your service name was set incorrectly.
-
-To do this, you can use the {ref}/docs-update-by-query.html[Update By Query API].
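When scripting such updates, the request body can be assembled programmatically. A Python sketch (illustrative only; it uses script `params` rather than an inline literal, a safer variation on the console examples in this guide):

```python
def rename_service_body(old_name, new_name):
    """Build an Update By Query body that renames a service."""
    return {
        "query": {"term": {"service.name": {"value": old_name}}},
        "script": {
            # Pass the new name via params instead of interpolating it
            # into the script source.
            "source": "ctx._source.service.name = params.new_name",
            "lang": "painless",
            "params": {"new_name": new_name},
        },
    }

body = rename_service_body("old-service-name", "new-service-name")
print(body["query"]["term"]["service.name"]["value"])  # old-service-name
```

The resulting dict can be sent with any {es} client, for example `elasticsearch-py`.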
- -[float] -[[update-data-rename-a-service]] -==== Rename a service - -To rename a service, send the following request: - -["source","sh"] ------------------------------------------------------------- -POST /apm-*/_update_by_query -{ - "query": { - "term": { - "service.name": { - "value": "old-service-name" - } - } - }, - "script": { - "source": "ctx._source.service.name = 'new-service-name'", - "lang": "painless" - } -} ------------------------------------------------------------- -// CONSOLE - -TIP: Remember to also change the service name in the {apm-agents-ref}/index.html[{apm-agent} configuration]. diff --git a/docs/legacy/tab-widgets/jaeger.asciidoc b/docs/legacy/tab-widgets/jaeger.asciidoc index ad2cabe34cb..a2e2fd82395 100644 --- a/docs/legacy/tab-widgets/jaeger.asciidoc +++ b/docs/legacy/tab-widgets/jaeger.asciidoc @@ -42,7 +42,7 @@ When <> is enabled in APM Server, Jaeger agents must also ena . (Optional) Enable token-based authorization + -A <> or <> can be used to ensure only authorized +A <> or <> can be used to ensure only authorized Jaeger agents can send data to the APM Server. 
When enabled, use an agent level tag to authorize Jaeger agent communication with the APM Server: + diff --git a/docs/legacy/tab-widgets/spin-up-stack.asciidoc b/docs/legacy/tab-widgets/spin-up-stack.asciidoc index 2c65a88d027..19dc1b9aa5a 100644 --- a/docs/legacy/tab-widgets/spin-up-stack.asciidoc +++ b/docs/legacy/tab-widgets/spin-up-stack.asciidoc @@ -30,7 +30,7 @@ Next, install, set up, and run APM Server: Use the config file if you need to change the default configuration that APM Server uses to connect to {es}, or if you need to specify credentials: -* {apm-server-ref-v}/configuring-howto-apm-server.html[Configuring APM Server] +* {apm-guide-ref}/configuring-howto-apm-server.html[Configuring APM Server] ** {apm-server-ref-v}/configuration-process.html[General configuration options] ** {apm-server-ref-v}/configuring-output.html[Configure the {es} output] diff --git a/docs/legacy/transaction-api.asciidoc b/docs/legacy/transaction-api.asciidoc deleted file mode 100644 index 95691741677..00000000000 --- a/docs/legacy/transaction-api.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[transaction-api]] -=== Transactions - -Transactions are events corresponding to an incoming request or similar task occurring in a monitored service. - -[[transaction-schema]] -[float] -==== Transaction Schema - -APM Server uses JSON Schema to validate requests. 
The specification for transactions is defined on -{github_repo_link}/docs/spec/v2/transaction.json[GitHub] and included below: - -[source,json] ----- -include::../spec/v2/transaction.json[] ----- diff --git a/docs/legacy/transaction-indices.asciidoc b/docs/legacy/transaction-indices.asciidoc deleted file mode 100644 index e5db23db4e7..00000000000 --- a/docs/legacy/transaction-indices.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[transaction-indices]] -== Example transaction documents - -++++ -Transaction documents -++++ - -This example shows what transaction documents can look like when indexed in {es}: - -[source,json] ----- -include::../data/elasticsearch/generated/transactions.json[] ----- diff --git a/docs/legacy/troubleshooting.asciidoc b/docs/legacy/troubleshooting.asciidoc deleted file mode 100644 index 0529b803b84..00000000000 --- a/docs/legacy/troubleshooting.asciidoc +++ /dev/null @@ -1,45 +0,0 @@ -[[troubleshooting]] -= Troubleshoot - -IMPORTANT: {deprecation-notice-data} - -If you have issues installing or running APM Server, -read the following tips: - -* <> -* <> -* <> - -Other sections in the documentation may also be helpful: - -* <> -* <> -* <> -* <> -* {apm-overview-ref-v}/agent-server-compatibility.html[Agent/Server compatibility matrix] - -If your issue is potentially related to other components of the APM ecosystem, -don't forget to check the relevant troubleshooting guides: - -* {kibana-ref}/troubleshooting.html[{apm-app} troubleshooting] -* {apm-dotnet-ref-v}/troubleshooting.html[.NET agent troubleshooting] -* {apm-go-ref-v}/troubleshooting.html[Go agent troubleshooting] -* {apm-ios-ref-v}/troubleshooting.html[iOS agent troubleshooting] -* {apm-java-ref-v}/trouble-shooting.html[Java agent troubleshooting] -* {apm-node-ref-v}/troubleshooting.html[Node.js agent troubleshooting] -* {apm-php-ref-v}/troubleshooting.html[PHP agent troubleshooting] -* {apm-py-ref-v}/troubleshooting.html[Python agent troubleshooting] -* 
{apm-ruby-ref-v}/debugging.html[Ruby agent troubleshooting] -* {apm-rum-ref-v}/troubleshooting.html[RUM troubleshooting] - -include::common-problems.asciidoc[] - -[[enable-apm-server-debugging]] -== Debug - -include::{libbeat-dir}/debugging.asciidoc[] - -[[getting-help]] -== Get help - -include::{libbeat-dir}/getting-help.asciidoc[] \ No newline at end of file diff --git a/docs/manage-storage.asciidoc b/docs/manage-storage.asciidoc index e348e9b1684..706076c9f3c 100644 --- a/docs/manage-storage.asciidoc +++ b/docs/manage-storage.asciidoc @@ -92,6 +92,7 @@ Here are some ways you can reduce either the amount of APM data you're ingesting or the amount of data you're retaining. [float] +[[reduce-sample-rate]] ==== Reduce the sample rate Distributed tracing can generate a substantial amount of data. @@ -111,6 +112,7 @@ retaining important information but reducing processing and storage overhead. See <> to learn more. [float] +[[reduce-stacktrace]] ==== Reduce collected stack trace information Elastic APM agents collect `stacktrace` information under certain circumstances. @@ -205,4 +207,4 @@ POST /.ds-*-apm*/_update_by_query?expand_wildcards=all TIP: Remember to also change the service name in the {apm-agents-ref}/index.html[{apm-agent} configuration]. -include::./apm-tune-elasticsearch.asciidoc[] +include::./legacy/exploring-es-data.asciidoc[leveloffset=+2] diff --git a/docs/monitor-apm-server.asciidoc b/docs/monitor-apm-server.asciidoc new file mode 100644 index 00000000000..90629b5e02d --- /dev/null +++ b/docs/monitor-apm-server.asciidoc @@ -0,0 +1,31 @@ +[[monitor-apm]] +== Monitor APM Server + +++++ +Monitor +++++ + +Use the {stack} {monitor-features} to gain insight into the real-time health and performance of APM Server. +Stack monitoring exposes key metrics, like intake response count, intake error rate, output event rate, +output failed event rate, and more. 
+ +Select your deployment method to get started: + +* <> +* <> +* <> + +[float] +[[monitor-apm-cloud]] +=== {ecloud} + +{ecloud} manages the installation and configuration of a monitoring agent for you -- so +all you have to do is flip a switch and watch the data pour in. + +* **{ess}** user? See {ece-ref}/ece-enable-logging-and-monitoring.html[ESS: Enable logging and monitoring]. +* **{ece}** user? See {cloud}/ec-enable-logging-and-monitoring.html[ECE: Enable logging and monitoring]. + + +include::./monitor.asciidoc[] + +include::{libbeat-dir}/monitoring/monitoring-beats.asciidoc[leveloffset=+2] \ No newline at end of file diff --git a/docs/monitor.asciidoc b/docs/monitor.asciidoc index 64463e12416..fa74864e8b9 100644 --- a/docs/monitor.asciidoc +++ b/docs/monitor.asciidoc @@ -1,23 +1,9 @@ -[[monitor-apm]] -=== Monitor APM Server - -Use the {stack} {monitor-features} to gain insight into the real-time health and performance of APM Server. -Stack monitoring exposes key metrics, like intake response count, intake error rate, output event rate, -output failed event rate, and more. - -[float] -[[monitor-apm-cloud]] -=== Monitor APM running on {ecloud} - -{ecloud} manages the installation and configuration of a monitoring agent for you -- so -all you have to do is flip a switch and watch the data pour in. - -* **{ess}** user? See {ece-ref}/ece-enable-logging-and-monitoring.html[ESS: Enable logging and monitoring]. -* **{ece}** user? See {cloud}/ec-enable-logging-and-monitoring.html[ECE: Enable logging and monitoring]. - -[float] [[monitor-apm-self-install]] -=== Monitor a self-installation of APM +=== Monitor a Fleet-managed APM Server + +++++ +Fleet-managed +++++ NOTE: This guide assumes you are already ingesting APM data into the {stack}. @@ -54,22 +40,15 @@ agent.monitoring: <3> The port to expose logs/metrics on -- -. Stop {agent} +. Enroll {agent} + -If {agent} is already running, you must stop it. 
-Use the command that work with your system: +After editing `elastic-agent.yml`, you must re-enroll {agent} for the changes to take effect. + -- -include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/stop-widget.asciidoc[] +include::{ingest-docs-root}/docs/en/ingest-management/commands.asciidoc[tag=enroll] -- -. Start {agent} -+ -Use the command that work with your system: -+ --- -include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/start-widget.asciidoc[] --- +See the {fleet-guide}/elastic-agent-cmd-options.html[{agent} command reference] for more information on the enroll command. [float] [[install-config-metricbeat]] diff --git a/docs/notices.asciidoc b/docs/notices.asciidoc deleted file mode 100644 index 9fc0ea9ec0a..00000000000 --- a/docs/notices.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -// For installation, get started, and setup docs -:deprecation-notice-installation: This method of installing APM Server will be deprecated and removed in a future release. Please consider getting started with the <> instead. - -// Generic "running" message -// Usually followed by a link to the corresponding APM integration docs -:deprecation-notice-data: This documentation refers to the standalone (legacy) method of running APM Server. This method of running APM Server will be deprecated and removed in a future release. Please consider <>. - -// For monitoring docs -:deprecation-notice-monitor: This documentation refers to monitoring the standalone (legacy) APM Server. This method of running APM Server will be deprecated and removed in a future release. Please consider <>. - -// For configuration docs -:deprecation-notice-config: This documentation refers to configuring the standalone (legacy) APM Server. This method of running APM Server will be deprecated and removed in a future release. Please consider <>. - -// For API docs -:deprecation-notice-api: This documentation refers to the API of the standalone (legacy) APM Server. 
This method of running APM Server will be deprecated and removed in a future release. Please consider <>. diff --git a/docs/overview.asciidoc b/docs/overview.asciidoc deleted file mode 100644 index 472f600d65a..00000000000 --- a/docs/overview.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -[[apm-overview]] -== Free and open application performance monitoring - -++++ -What is APM? -++++ - -Elastic APM is an application performance monitoring system built on the {stack}. -It allows you to monitor software services and applications in real-time, by -collecting detailed performance information on response time for incoming requests, -database queries, calls to caches, external HTTP requests, and more. -This makes it easy to pinpoint and fix performance problems quickly. - -Elastic APM also automatically collects unhandled errors and exceptions. -Errors are grouped based primarily on the stack trace, -so you can identify new errors as they appear and keep an eye on how many times specific errors happen. - -Metrics are another vital source of information when debugging production systems. -Elastic APM agents automatically pick up basic host-level metrics and agent-specific metrics, -like JVM metrics in the Java Agent, and Go runtime metrics in the Go Agent. - -[float] -=== Give Elastic APM a try - -Learn more about the <> that make up Elastic APM, -or jump right into the <>. - -NOTE: These docs will indiscriminately use the word "service" for both services and applications. diff --git a/docs/sampling.asciidoc b/docs/sampling.asciidoc index 3c1f119c4a1..7db04caecb4 100644 --- a/docs/sampling.asciidoc +++ b/docs/sampling.asciidoc @@ -151,7 +151,7 @@ See the relevant agent's documentation for more details: [[configure-tail-based-sampling]] ==== Configure tail-based sampling -Enable tail-based sampling in the <>. +Enable tail-based sampling with <>. When enabled, trace events are mapped to sampling policies. 
Each sampling policy must specify a sample rate, and can optionally specify other conditions. All of the policy conditions must be true for a trace event to match it. @@ -188,27 +188,12 @@ or traces with any other name ===== Configuration reference -:input-type: tbs **Top-level tail-based sampling settings:** -// This looks like the root service name/env, trace name/env, and trace outcome - -[cols="2*>. + +When defined, secret tokens are used to authorize requests to the APM Server. +Both the {apm-agent} and APM Server must be configured with the same secret token for the request to be accepted. + +To secure the communication between APM agents and the APM Server with a secret token: + +. Make sure <> is enabled +. <> +. <> + +NOTE: Secret tokens are not applicable for the RUM Agent, +as there is no way to prevent them from being publicly exposed. + +[float] +[[create-secret-token]] +=== Create a secret token + +// lint ignore fleet +NOTE: {ess} and {ece} deployments provision a secret token when the deployment is created. +The secret token can be found and reset in the {ecloud} console under **Deployments** -- **APM & Fleet**. 
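On self-managed deployments, the secret token is simply a shared string that you choose yourself; any sufficiently random value works. A minimal sketch of generating one — the helper name is illustrative, since APM Server only checks that agents and server agree on the value:

```python
import secrets

def generate_secret_token(n_bytes: int = 32) -> str:
    """Return a URL-safe random string suitable for use as a shared secret."""
    return secrets.token_urlsafe(n_bytes)

print(generate_secret_token())
```

Store the generated value somewhere safe; you will need to configure the same value in every agent.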
+ +include::./tab-widgets/secret-token-widget.asciidoc[] + +[[configure-secret-token]] +[float] +=== Configure the secret token in your APM agents + +Each Elastic {apm-agent} has a configuration option to set the value of the secret token: + +* *Go agent*: {apm-go-ref}/configuration.html#config-secret-token[`ELASTIC_APM_SECRET_TOKEN`] +* *iOS agent*: {apm-ios-ref-v}/configuration.html#secretToken[`secretToken`] +* *Java agent*: {apm-java-ref}/config-reporter.html#config-secret-token[`secret_token`] +* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-secret-token[`ELASTIC_APM_SECRET_TOKEN`] +* *Node.js agent*: {apm-node-ref}/configuration.html#secret-token[`Secret Token`] +* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-secret-token[`secret_token`] +* *Python agent*: {apm-py-ref}/configuration.html#config-secret-token[`secret_token`] +* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-secret-token[`secret_token`] + +In addition to setting the secret token, ensure the configured server URL uses `HTTPS` instead of `HTTP`: + +* *Go agent*: {apm-go-ref}/configuration.html#config-server-url[`ELASTIC_APM_SERVER_URL`] +* *Java agent*: {apm-java-ref}/config-reporter.html#config-server-urls[`server_urls`] +* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-server-url[`ServerUrl`] +* *Node.js agent*: {apm-node-ref}/configuration.html#server-url[`serverUrl`] +* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-server-url[`server_url`] +* *Python agent*: {apm-py-ref}/[`server_url`] +* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-server-url[`server_url`] \ No newline at end of file diff --git a/docs/secure-agent-communication.asciidoc b/docs/secure-agent-communication.asciidoc index 156d78f9d64..e2fca3bee18 100644 --- a/docs/secure-agent-communication.asciidoc +++ b/docs/secure-agent-communication.asciidoc @@ -1,6 +1,10 @@ [[secure-agent-communication]] == Secure communication with APM agents +++++ +With APM 
agents +++++ + Communication between APM agents and {agent} can be both encrypted and authenticated. It is strongly recommended to use both TLS encryption and authentication as secrets are sent as plain text. @@ -15,307 +19,10 @@ If both API keys and a secret token are enabled, APM agents can choose whichever In some use-cases, like when an {apm-agent} is running on the client side, authentication is not possible. See <> for more information. -[[agent-tls]] -=== {apm-agent} TLS communication - -TLS is disabled by default. -When TLS is enabled for APM Server inbound communication, agents will verify the identity -of the APM Server by authenticating its certificate. - -Enable TLS in the <>; a certificate and corresponding private key are required. -The certificate and private key can either be issued by a trusted certificate authority (CA) -or be <>. - -[float] -[[agent-self-sign]] -=== Use a self-signed certificate - -[float] -[[agent-self-sign-1]] -==== Step 1: Create a self-signed certificate - -The {es} distribution offers the `certutil` tool for the creation of self-signed certificates: - -1. Create a CA: `./bin/elasticsearch-certutil ca --pem`. You'll be prompted to enter the desired -location of the output zip archive containing the certificate and the private key. -2. Extract the contents of the CA archive. -3. Create the self-signed certificate: `./bin/elasticsearch-certutil cert --ca-cert -/ca.crt --ca-key /ca.key --pem --name localhost` -4. Extract the certificate and key from the resulted zip archive. - -[float] -[[agent-self-sign-2]] -==== Step 2: Configure the APM integration - -Configure the APM integration to point to the extracted certificate and key. - -[float] -[[agent-self-sign-3]] -==== Step 3: Configure APM agents - -When the APM server uses a certificate that is not chained to a publicly-trusted certificate -(e.g. 
self-signed), additional configuration is required in the {apm-agent}: - -* *Go agent*: certificate pinning through {apm-go-ref}/configuration.html#config-server-cert[`ELASTIC_APM_SERVER_CERT`] -* *Python agent*: certificate pinning through {apm-py-ref}/configuration.html#config-server-cert[`server_cert`] -* *Ruby agent*: certificate pinning through {apm-ruby-ref}/configuration.html#config-ssl-ca-cert[`server_ca_cert`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-server-cert[`ServerCert`] -* *Node.js agent*: custom CA setting through {apm-node-ref}/configuration.html#server-ca-cert-file[`serverCaCertFile`] -* *Java agent*: adding the certificate to the JVM `trustStore`. -See {apm-java-ref}/ssl-configuration.html#ssl-server-authentication[APM Server authentication] for more details. - -We do not recommend disabling {apm-agent} verification of the server's certificate, but it is possible: - -* *Go agent*: {apm-go-ref}/configuration.html#config-verify-server-cert[`ELASTIC_APM_VERIFY_SERVER_CERT`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-verify-server-cert[`VerifyServerCert`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-verify-server-cert[`verify_server_cert`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-verify-server-cert[`verify_server_cert`] -* *Python agent*: {apm-py-ref}/configuration.html#config-verify-server-cert[`verify_server_cert`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-verify-server-cert[`verify_server_cert`] -* *Node.js agent*: {apm-node-ref}/configuration.html#validate-server-cert[`verifyServerCert`] - -[float] -[[agent-client-cert]] -=== Client certificate authentication - -APM Server does not require agents to provide a certificate for authentication, -and there is no dedicated support for SSL/TLS client certificate authentication in Elastic’s backend agents. 
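From the agent's point of view, the self-signed-certificate flow above amounts to: trust the custom CA, then verify the server's certificate as usual. A sketch of that idea using Python's standard library — the CA file path is a placeholder, and this illustrates the concept rather than reproducing any Elastic agent's code:

```python
import ssl
from typing import Optional

def make_agent_tls_context(ca_path: Optional[str] = None) -> ssl.SSLContext:
    """TLS context that verifies the APM Server's certificate.

    If ca_path is given (self-signed or custom CA), that CA becomes the
    trust root; otherwise the system's publicly-trusted CAs are used.
    """
    ctx = ssl.create_default_context(cafile=ca_path)
    ctx.check_hostname = True            # verify the server's identity
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject unverifiable certificates
    return ctx
```

Disabling `check_hostname` or setting `verify_mode = ssl.CERT_NONE` corresponds to the discouraged `verify_server_cert=false` agent settings listed above.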
- -[[api-key]] -=== API keys - -IMPORTANT: API keys are sent as plain-text, -so they only provide security when used in combination with <>. - -Enable API key authorization in the <>. -When enabled, API keys are used to authorize requests to the APM Server. - -You can assign one or more unique privileges to each API key: - -* *Agent configuration* (`config_agent:read`): Required for agents to read -{kibana-ref}/agent-configuration.html[Agent configuration remotely]. -* *Ingest* (`event:write`): Required for ingesting agent events. - -To secure the communication between APM Agents and the APM Server with API keys, -make sure <> is enabled, then complete these steps: - -. <> -. <> -. <> -. <> - -[[enable-api-key]] -[float] -=== Enable API keys - -Enable API key authorization in the <>. -You should also set a limit on the number of unique API keys that APM Server allows per minute; -this value should be the number of unique API keys configured in your monitored services. - -[[create-api-key-user]] -[float] -=== Create an API key user in {kib} - -API keys can only have the same or lower access rights than the user that creates them. -Instead of using a superuser account to create API keys, you can create a role with the minimum required -privileges. - -The user creating an {apm-agent} API key must have at least the `manage_own_api_key` cluster privilege -and the APM application-level privileges that it wishes to grant. -In addition, when creating an API key from the {apm-app}, -you'll need the appropriate {kib} Space and Feature privileges. - -The example below uses the {kib} {kibana-ref}/role-management-api.html[role management API] -to create a role named `apm_agent_key_role`. 
- -[source,js] ----- -POST /_security/role/apm_agent_key_role -{ - "cluster": [ "manage_own_api_key" ], - "applications": [ - { - "application":"apm", - "privileges":[ - "event:write", - "config_agent:read" - ], - "resources":[ "*" ] - }, - { - "application":"kibana-.kibana", - "privileges":[ "feature_apm.all" ], - "resources":[ "space:default" ] <1> - } - ] -} ----- -<1> This example assigns privileges for the default space. - -Assign the newly created `apm_agent_key_role` role to any user that wishes to create {apm-agent} API keys. - -[[create-an-api-key]] -[float] -=== Create an API key in the {apm-app} - -The {apm-app} has a built-in workflow that you can use to easily create and view {apm-agent} API keys. -Only API keys created in the {apm-app} will show up here. - -Using a superuser account, or a user with the role created in the previous step, -open {kib} and navigate to **{observability}** > **APM** > **Settings** > **Agent keys**. -Enter a name for your API key and select at least one privilege. - -For example, to create an API key that can be used to ingest APM events -and read agent central configuration, select `config_agent:read` and `event:write`. - -// lint ignore apm-agent -Click **Create APM Agent key** and copy the Base64 encoded API key. -You will need this for the next step, and you will not be able to view it again. - -[role="screenshot"] -image::images/apm-ui-api-key.png[{apm-app} API key] - -[[agent-api-key]] -[float] -=== Set the API key in your APM agents - -You can now apply your newly created API keys in the configuration of each of your APM agents. 
-See the relevant agent documentation for additional information: - -// Not relevant for RUM and iOS -* *Go agent*: {apm-go-ref}/configuration.html#config-api-key[`ELASTIC_APM_API_KEY`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-api-key[`ApiKey`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-api-key[`api_key`] -* *Node.js agent*: {apm-node-ref}/configuration.html#api-key[`apiKey`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-api-key[`api_key`] -* *Python agent*: {apm-py-ref}/configuration.html#config-api-key[`api_key`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-api-key[`api_key`] - -[[secret-token]] -=== Secret token - -IMPORTANT: Secret tokens are sent as plain-text, -so they only provide security when used in combination with <>. - -Define a secret token in the <>. -When defined, secret tokens are used to authorize requests to the APM Server. -Both the {apm-agent} and APM integration must be configured with the same secret token for the request to be accepted. - -To secure the communication between APM agents and the APM Server with a secret token: - -. Make sure <> is enabled -. <> -. <> - -NOTE: Secret tokens are not applicable for the RUM Agent, -as there is no way to prevent them from being publicly exposed. - -[float] -[[create-secret-token]] -=== Create a secret token - -Create or update a secret token in {fleet}. - -include::./input-apm.asciidoc[tag=edit-integration-settings] -+ -. Navigate to **Agent authorization** > **Secret token** and set the value of your token. -. Click **Save integration**. The APM Server will restart before the change takes effect. 
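Whichever credential you configure, agents ultimately present it to the APM Server in an `Authorization` HTTP header: `Bearer <token>` for secret tokens, or `ApiKey <credentials>` for API keys. A sketch of building that header (`build_auth_header` is a hypothetical helper, not part of any agent's API):

```python
from typing import Optional, Tuple

def build_auth_header(secret_token: Optional[str] = None,
                      api_key: Optional[str] = None) -> Tuple[str, str]:
    """Build the Authorization header an agent sends to the APM Server."""
    if api_key:  # base64-encoded "id:api_key" credentials
        return ("Authorization", f"ApiKey {api_key}")
    if secret_token:
        return ("Authorization", f"Bearer {secret_token}")
    raise ValueError("no credential configured; the request would be anonymous")

print(build_auth_header(secret_token="example-token"))
```

Because the credential travels as plain text in this header, it is only meaningful over a TLS-protected connection.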
- -[[configure-secret-token]] -[float] -=== Configure the secret token in your APM agents - -Each Elastic {apm-agent} has a configuration option to set the value of the secret token: - -* *Go agent*: {apm-go-ref}/configuration.html#config-secret-token[`ELASTIC_APM_SECRET_TOKEN`] -* *iOS agent*: {apm-ios-ref-v}/configuration.html#secretToken[`secretToken`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-secret-token[`secret_token`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-secret-token[`ELASTIC_APM_SECRET_TOKEN`] -* *Node.js agent*: {apm-node-ref}/configuration.html#secret-token[`Secret Token`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-secret-token[`secret_token`] -* *Python agent*: {apm-py-ref}/configuration.html#config-secret-token[`secret_token`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-secret-token[`secret_token`] - -In addition to setting the secret token, ensure the configured server URL uses `HTTPS` instead of `HTTP`: - -* *Go agent*: {apm-go-ref}/configuration.html#config-server-url[`ELASTIC_APM_SERVER_URL`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-server-urls[`server_urls`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-server-url[`ServerUrl`] -* *Node.js agent*: {apm-node-ref}/configuration.html#server-url[`serverUrl`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-server-url[`server_url`] -* *Python agent*: {apm-py-ref}/[`server_url`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-server-url[`server_url`] - - -[[anonymous-auth]] -=== Anonymous authentication - -Elastic APM agents can send unauthenticated (anonymous) events to the APM Server. -An event is considered to be anonymous if no authentication token can be extracted from the incoming request. 
-The APM Server's default response to these these requests depends on its configuration: - -[options="header"] -|==== -|Configuration |Default -|An <> or <> is configured | Anonymous requests are rejected and an authentication error is returned. -|No API key or secret token is configured | Anonymous requests are accepted by the APM Server. -|==== - -In some cases, however, it makes sense to allow both authenticated and anonymous requests. -For example, it isn't possible to authenticate requests from front-end services as -the secret token or API key can't be protected. This is the case with the Real User Monitoring (RUM) -agent running in a browser, or the iOS/Swift agent running in a user application. -However, you still likely want to authenticate requests from back-end services. -To solve this problem, you can enable anonymous authentication in the APM Server to allow the -ingestion of unauthenticated client-side APM data while still requiring authentication for server-side services. - -[float] -[[anonymous-auth-config]] -=== Configuring anonymous auth for client-side services - -[NOTE] -==== -You can only enable and configure anonymous authentication if an <> or -<> is configured. If neither are configured, these settings will be ignored. -==== - -When configuring anonymous authentication for client-side services, -there are a few configuration variables that can mitigate the impact of malicious requests to an -unauthenticated APM Server endpoint. - -Use the **Allowed anonymous agents** and **Allowed anonymous services** configs to ensure that the -`agent.name` and `service.name` of each incoming request match a specified list. - -Additionally, the APM Server can rate-limit unauthenticated requests based on the client IP address -(`client.ip`) of the request. -This allows you to specify the maximum number of requests allowed per unique IP address, per second. 
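The per-IP cap described above can be sketched as a fixed one-second window per client IP. This illustrates the idea only — it is not APM Server's actual limiter implementation:

```python
import time
from collections import defaultdict
from typing import Optional

class PerIPRateLimiter:
    """Allow at most `limit` requests per client IP per one-second window."""

    def __init__(self, limit: int):
        self.limit = limit
        # ip -> [window_start, request_count]
        self.windows = defaultdict(lambda: [0.0, 0])

    def allow(self, client_ip: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        window = self.windows[client_ip]
        if now - window[0] >= 1.0:       # start a fresh one-second window
            window[0], window[1] = now, 0
        if window[1] >= self.limit:      # budget for this IP is exhausted
            return False
        window[1] += 1
        return True
```

Each unique client IP gets its own budget, so one noisy (or malicious) client cannot consume the allowance of others — which is also why the derived `client.ip` value must be trustworthy.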
- -[float] -[[derive-client-ip]] -=== Deriving an incoming request's `client.ip` address - -The remote IP address of an incoming request might be different -from the end-user's actual IP address, for example, because of a proxy. For this reason, -the APM Server attempts to derive the IP address of an incoming request from HTTP headers. -The supported headers are parsed in the following order: - -1. `Forwarded` -2. `X-Real-Ip` -3. `X-Forwarded-For` - -If none of these headers are present, the remote address for the incoming request is used. +include::./tls-comms.asciidoc[] -[float] -[[derive-client-ip-concerns]] -==== Using a reverse proxy or load balancer +include::./api-keys.asciidoc[] -HTTP headers are easily modified; -it's possible for anyone to spoof the derived `client.ip` value by changing or setting, -for example, the value of the `X-Forwarded-For` header. -For this reason, if any of your clients are not trusted, -we recommend setting up a reverse proxy or load balancer in front of the APM Server. +include::./secret-token.asciidoc[] -Using a proxy allows you to clear any existing IP-forwarding HTTP headers, -and replace them with one set by the proxy. -This prevents malicious users from cycling spoofed IP addresses to bypass the -APM Server's rate limiting feature. +include::./anonymous-auth.asciidoc[] diff --git a/docs/secure-comms.asciidoc b/docs/secure-comms.asciidoc new file mode 100644 index 00000000000..60b665936f5 --- /dev/null +++ b/docs/secure-comms.asciidoc @@ -0,0 +1,22 @@ +[[securing-apm-server]] +== Secure communication with the {stack} + +++++ +Secure communication +++++ + +The following topics provide information about securing the APM Server +process and connecting securely to APM agents and the {stack}. 
+ +* <> +* <> + +:leveloffset: +1 +include::secure-agent-communication.asciidoc[] + +// APM privileges +include::{docdir}/legacy/feature-roles.asciidoc[] + +// APM API keys +include::{docdir}/legacy/api-keys.asciidoc[] +:leveloffset: -1 \ No newline at end of file diff --git a/docs/legacy/shared-kibana-endpoint.asciidoc b/docs/shared-kibana-endpoint.asciidoc similarity index 100% rename from docs/legacy/shared-kibana-endpoint.asciidoc rename to docs/shared-kibana-endpoint.asciidoc diff --git a/docs/tab-widgets/anonymous-auth-widget.asciidoc b/docs/tab-widgets/anonymous-auth-widget.asciidoc new file mode 100644 index 00000000000..9c4b0c06d90 --- /dev/null +++ b/docs/tab-widgets/anonymous-auth-widget.asciidoc @@ -0,0 +1,40 @@ +++++ +
+
+ + +
+
+++++ + +include::anonymous-auth.asciidoc[tag=fleet-managed] + +++++ +
+ +
+++++ \ No newline at end of file diff --git a/docs/tab-widgets/anonymous-auth.asciidoc b/docs/tab-widgets/anonymous-auth.asciidoc new file mode 100644 index 00000000000..73f6db156bf --- /dev/null +++ b/docs/tab-widgets/anonymous-auth.asciidoc @@ -0,0 +1,29 @@ +// tag::fleet-managed[] +When an <> or <> is configured, +anonymous authentication must be enabled to collect RUM data. +Set **Anonymous Agent access** to true to enable anonymous authentication. + +When configuring anonymous authentication for client-side services, +there are a few configuration variables that can mitigate the impact of malicious requests to an +unauthenticated APM Server endpoint. + +Use the **Allowed anonymous agents** and **Allowed anonymous services** configs to ensure that the +`agent.name` and `service.name` of each incoming request match a specified list. + +Additionally, the APM Server can rate-limit unauthenticated requests based on the client IP address +(`client.ip`) of the request. +This allows you to specify the maximum number of requests allowed per unique IP address, per second. +// end::fleet-managed[] + +// tag::binary[] +When an <> or <> is configured, +anonymous authentication must be enabled to collect RUM data. +To enable anonymous access, set either <> or +<> to `true`. + +Because anyone can send anonymous events to the APM Server, +additional configuration variables are available to rate limit the number anonymous events the APM Server processes; +throughput is equal to the `rate_limit.ip_limit` times the `rate_limit.event_limit`. + +See <> for a complete list of options and a sample configuration file. +// end::binary[] \ No newline at end of file diff --git a/docs/tab-widgets/api-key-widget.asciidoc b/docs/tab-widgets/api-key-widget.asciidoc new file mode 100644 index 00000000000..ab74e730025 --- /dev/null +++ b/docs/tab-widgets/api-key-widget.asciidoc @@ -0,0 +1,40 @@ +++++ +
+
+ + +
+
+++++ + +include::api-key.asciidoc[tag=fleet-managed] + +++++ +
+ +
+++++ \ No newline at end of file diff --git a/docs/tab-widgets/api-key.asciidoc b/docs/tab-widgets/api-key.asciidoc new file mode 100644 index 00000000000..a83ed72c201 --- /dev/null +++ b/docs/tab-widgets/api-key.asciidoc @@ -0,0 +1,25 @@ +// tag::fleet-managed[] +Enable API key authorization in the <>. +You should also set a limit on the number of unique API keys that APM Server allows per minute; +this value should be the number of unique API keys configured in your monitored services. +// end::fleet-managed[] + +// tag::binary[] +API keys are disabled by default. Enable and configure this feature in the `apm-server.auth.api_key` +section of the +{beatname_lc}.yml+ configuration file. + +At a minimum, you must enable API keys, +and should set a limit on the number of unique API keys that APM Server allows per minute. +Here's an example `apm-server.auth.api_key` config using 50 unique API keys: + +[source,yaml] +---- +apm-server.auth.api_key.enabled: true <1> +apm-server.auth.api_key.limit: 50 <2> +---- +<1> Enables API keys +<2> Restricts the number of unique API keys that {es} allows each minute. +This value should be the number of unique API keys configured in your monitored services. + +All other configuration options are described in <>. +// end::binary[] \ No newline at end of file diff --git a/docs/tab-widgets/directory-layout-widget.asciidoc b/docs/tab-widgets/directory-layout-widget.asciidoc new file mode 100644 index 00000000000..e92c7169342 --- /dev/null +++ b/docs/tab-widgets/directory-layout-widget.asciidoc @@ -0,0 +1,59 @@ +++++ +
+
+ + + +
+ +
+++++ + +include::directory-layout.asciidoc[tag=docker] + +++++ +
+ +
+++++ \ No newline at end of file diff --git a/docs/tab-widgets/directory-layout.asciidoc b/docs/tab-widgets/directory-layout.asciidoc new file mode 100644 index 00000000000..9586af2f626 --- /dev/null +++ b/docs/tab-widgets/directory-layout.asciidoc @@ -0,0 +1,58 @@ +// tag::zip[] + +[cols=" +
+ + +
+
+++++ + +include::no-data-indexed.asciidoc[tag=fleet-managed] + +++++ +
+ + +++++ \ No newline at end of file diff --git a/docs/tab-widgets/no-data-indexed.asciidoc b/docs/tab-widgets/no-data-indexed.asciidoc new file mode 100644 index 00000000000..57a841a6483 --- /dev/null +++ b/docs/tab-widgets/no-data-indexed.asciidoc @@ -0,0 +1,63 @@ +// tag::fleet-managed[] +**Is {agent} healthy?** + +In {kib} open **{fleet}** and find the host that is running the APM integration; +confirm that its status is **Healthy**. +If it isn't, check the {agent} logs to diagnose potential causes. +See {fleet-guide}/monitor-elastic-agent.html[Monitor {agent}s] to learn more. + +**Is APM Server happy?** + +In {kib}, open **{fleet}** and select the host that is running the APM integration. +Open the **Logs** tab and select the `elastic_agent.apm_server` dataset. +Look for any APM Server errors that could help diagnose the problem. + +**Can the {apm-agent} connect to APM Server** + +To determine if the {apm-agent} can connect to the APM Server, send requests to the instrumented service and look for lines +containing `[request]` in the APM Server logs. + +If no requests are logged, confirm that: + +. SSL isn't <>. +. The host is correct. For example, if you're using Docker, ensure a bind to the right interface (for example, set +`apm-server.host = 0.0.0.0:8200` to match any IP) and set the `SERVER_URL` setting in the {apm-agent} accordingly. + +If you see requests coming through the APM Server but they are not accepted (a response code other than `202`), +see <> to narrow down the possible causes. + +**Instrumentation gaps** + +APM agents provide auto-instrumentation for many popular frameworks and libraries. +If the {apm-agent} is not auto-instrumenting something that you were expecting, data won't be sent to the {stack}. +Reference the relevant {apm-agents-ref}/index.html[{apm-agent} documentation] for details on what is automatically instrumented. 
+// end::fleet-managed[] + +// tag::binary[] +If no data shows up in {es}, first check that the APM components are properly connected. + +To ensure that APM Server configuration is valid and it can connect to the configured output, {es} by default, +run the following commands: + +["source","sh"] +------------------------------------------------------------ +apm-server test config +apm-server test output +------------------------------------------------------------ + +To see if the agent can connect to the APM Server, send requests to the instrumented service and look for lines +containing `[request]` in the APM Server logs. + +If no requests are logged, it might be that SSL is <> or that the host is wrong. +Particularly, if you are using Docker, ensure to bind to the right interface (for example, set +`apm-server.host = 0.0.0.0:8200` to match any IP) and set the `SERVER_URL` setting in the agent accordingly. + +If you see requests coming through the APM Server but they are not accepted (response code other than `202`), consider +the response code to narrow down the possible causes (see sections below). + +Another reason for data not showing up is that the agent is not auto-instrumenting something you were expecting, check +the {apm-agents-ref}/index.html[agent documentation] for details on what is automatically instrumented. + +APM Server currently relies on {es} to create indices that do not exist. +As a result, {es} must be configured to allow {ref}/docs-index_.html#index-creation[automatic index creation] for APM indices. +// end::binary[] diff --git a/docs/tab-widgets/secret-token-widget.asciidoc b/docs/tab-widgets/secret-token-widget.asciidoc new file mode 100644 index 00000000000..aea6373e194 --- /dev/null +++ b/docs/tab-widgets/secret-token-widget.asciidoc @@ -0,0 +1,40 @@ +++++ +
+
+ + +
+
+++++ + +include::secret-token.asciidoc[tag=fleet-managed] + +++++ +
+ +
+++++ \ No newline at end of file diff --git a/docs/tab-widgets/secret-token.asciidoc b/docs/tab-widgets/secret-token.asciidoc new file mode 100644 index 00000000000..a986a73e16f --- /dev/null +++ b/docs/tab-widgets/secret-token.asciidoc @@ -0,0 +1,17 @@ +// tag::fleet-managed[] +Create or update a secret token in {fleet}. + +include::../configure/shared/input-apm.asciidoc[tag=fleet-managed-settings] ++ +. Navigate to **Agent authorization** > **Secret token** and set the value of your token. +. Click **Save integration**. The APM Server will restart before the change takes effect. +// end::fleet-managed[] + +// tag::binary[] +Set the secret token in `apm-server.yaml`: + +[source,yaml] +---- +apm-server.auth.secret_token: +---- +// end::binary[] \ No newline at end of file diff --git a/docs/tab-widgets/tls-widget.asciidoc b/docs/tab-widgets/tls-widget.asciidoc new file mode 100644 index 00000000000..b20b9b81fa0 --- /dev/null +++ b/docs/tab-widgets/tls-widget.asciidoc @@ -0,0 +1,40 @@ +++++ +
+
+ + +
+
+++++ + +include::tls.asciidoc[tag=fleet-managed] + +++++ +
+ +
+++++ \ No newline at end of file diff --git a/docs/tab-widgets/tls.asciidoc b/docs/tab-widgets/tls.asciidoc new file mode 100644 index 00000000000..11ce2247bfa --- /dev/null +++ b/docs/tab-widgets/tls.asciidoc @@ -0,0 +1,21 @@ +// tag::fleet-managed[] +Enable TLS in the APM integration settings and use the <> to set the path to the server certificate and key. +// end::fleet-managed[] + +// tag::binary[] +The following is a basic APM Server SSL config with secure communication enabled. +This will make APM Server serve HTTPS requests instead of HTTP. + +[source,yaml] +---- +apm-server.ssl.enabled: true +apm-server.ssl.certificate: "/path/to/apm-server.crt" +apm-server.ssl.key: "/path/to/apm-server.key" +---- + +A full list of configuration options is available in <>. + +TIP: If APM agents are authenticating themselves using a certificate that cannot be authenticated through known CAs (e.g. self signed certificates), use the `ssl.certificate_authorities` to set a custom CA. +This will automatically modify the `ssl.client_authentication` configuration to require authentication. + +// end::binary[] \ No newline at end of file diff --git a/docs/legacy/ssl-input.asciidoc b/docs/tls-comms.asciidoc similarity index 51% rename from docs/legacy/ssl-input.asciidoc rename to docs/tls-comms.asciidoc index f6ad9cda7ef..5e22aa9e10a 100644 --- a/docs/legacy/ssl-input.asciidoc +++ b/docs/tls-comms.asciidoc @@ -1,26 +1,21 @@ -SSL/TLS is disabled by default. Besides enabling it, you need to provide a certificate and a corresponding -private key as well. +[[agent-tls]] +=== {apm-agent} TLS communication -The following is a basic APM Server SSL config with secure communication enabled. -This will make APM Server serve HTTPS requests instead of HTTP. - -[source,yaml] ----- -apm-server.ssl.enabled: true -apm-server.ssl.certificate: "/path/to/apm-server.crt" -apm-server.ssl.key: "/path/to/apm-server.key" ----- - -A full list of configuration options is available in <>. 
+TLS is disabled by default. +When TLS is enabled for APM Server inbound communication, agents will verify the identity +of the APM Server by authenticating its certificate. -Certificate and private key can be issued by a trusted certificate authority (CA) -or <>. +When TLS is enabled, a certificate and corresponding private key are required. +The certificate and private key can either be issued by a trusted certificate authority (CA) +or be <>. -NOTE: When using a self-signed (or custom CA) certificate, communication from APM Agents will require -additional settings due to <> +[float] +[[agent-self-sign]] +=== Use a self-signed certificate -[[self-signed-cert]] -==== Creating a self-signed certificate +[float] +[[agent-self-sign-1]] +==== Step 1: Create a self-signed certificate The {es} distribution offers the `certutil` tool for the creation of self-signed certificates: @@ -31,14 +26,20 @@ location of the output zip archive containing the certificate and the private ke /ca.crt --ca-key /ca.key --pem --name localhost` 4. Extract the certificate and key from the resulting zip archive. -[[ssl-server-authentication]] -==== Server certificate authentication +[float] +[[agent-self-sign-2]] +==== Step 2: Configure the APM Server -By default, when SSL is enabled for APM Server inbound communication, agents will verify the identity -of the APM Server by authenticating its certificate. +Enable TLS and configure the APM Server to point to the extracted certificate and key: + +include::./tab-widgets/tls-widget.asciidoc[] + +[float] +[[agent-self-sign-3]] +==== Step 3: Configure APM agents When the APM server uses a certificate that is not chained to a publicly-trusted certificate -(e.g. 
self-signed), additional configuration is required in the {apm-agent}: * *Go agent*: certificate pinning through {apm-go-ref}/configuration.html#config-server-cert[`ELASTIC_APM_SERVER_CERT`] * *Python agent*: certificate pinning through {apm-py-ref}/configuration.html#config-server-cert[`server_cert`] @@ -48,8 +49,7 @@ When the APM server uses a certificate that is not chained to a publicly-trusted * *Java agent*: adding the certificate to the JVM `trustStore`. See {apm-java-ref}/ssl-configuration.html#ssl-server-authentication[APM Server authentication] for more details. -It is not recommended to disable APM Server authentication, -however it is possible through agents configuration: +We do not recommend disabling {apm-agent} verification of the server's certificate, but it is possible: * *Go agent*: {apm-go-ref}/configuration.html#config-verify-server-cert[`ELASTIC_APM_VERIFY_SERVER_CERT`] * *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-verify-server-cert[`VerifyServerCert`] @@ -59,16 +59,9 @@ however it is possible through agents configuration: * *Ruby agent*: {apm-ruby-ref}/configuration.html#config-verify-server-cert[`verify_server_cert`] * *Node.js agent*: {apm-node-ref}/configuration.html#validate-server-cert[`verifyServerCert`] -[[ssl-client-authentication]] -==== Client certificate authentication - -By default, the APM Server does not require agents to provide a certificate for authentication. -This can be changed through the `ssl.client_authentication` configuration. - -There is no dedicated support for SSL/TLS client certificate authentication in Elastic's backend agents, -so setting it up may require some additional effort. For example - see -{apm-java-ref}/ssl-configuration.html#ssl-client-authentication[Java Agent authentication]. +[float] +[[agent-client-cert]] +=== Client certificate authentication -If agents are authenticating themselves using a certificate that cannot be authenticated through known -CAs (e.g. 
self signed certificates), use the `ssl.certificate_authorities` to set a custom CA. -This will automatically modify the `ssl.client_authentication` configuration to require authentication. +APM Server does not require agents to provide a certificate for authentication, +and there is no dedicated support for SSL/TLS client certificate authentication in Elastic’s backend agents. \ No newline at end of file diff --git a/docs/troubleshoot-apm.asciidoc b/docs/troubleshoot-apm.asciidoc index ae32f63f1da..01869e762b4 100644 --- a/docs/troubleshoot-apm.asciidoc +++ b/docs/troubleshoot-apm.asciidoc @@ -1,9 +1,16 @@ [[troubleshoot-apm]] -== Troubleshooting +== Troubleshoot -This section provides solutions to <> -and <> guidance. -For additional help, see the links below. +This section provides solutions to common questions and problems, +as well as processing and performance guidance. + +* <> +* <> +* <> +* <> +* <> + +For additional help with other APM components, see the links below. [float] [[troubleshooting-docs]] @@ -40,4 +47,10 @@ visit our https://discuss.elastic.co/c/apm[discussion forum]. include::common-problems.asciidoc[] -include::processing-performance.asciidoc[] \ No newline at end of file +include::apm-server-down.asciidoc[] + +include::apm-response-codes.asciidoc[] + +include::processing-performance.asciidoc[] + +include::./legacy/copied-from-beats/docs/debugging.asciidoc[] \ No newline at end of file diff --git a/docs/upgrading-to-integration.asciidoc b/docs/upgrading-to-integration.asciidoc index be78ddb6d59..1b3a8745236 100644 --- a/docs/upgrading-to-integration.asciidoc +++ b/docs/upgrading-to-integration.asciidoc @@ -131,7 +131,7 @@ If you're adding the APM integration to a {fleet}-managed {agent}, you can use t If you're adding the APM integration to the {fleet-server}, use the policy that the {fleet-server} is running on. TIP: You'll configure the APM integration in this step. -See <> for a reference of all available settings. 
+See <> for a reference of all available settings. As long as the APM integration is configured with the same secret token or you have API keys enabled on the same host, no reconfiguration is required in your APM agents. @@ -193,7 +193,7 @@ Within minutes your data should begin appearing in the {apm-app} again. ==== Configure the APM integration You can now update settings that were removed during the upgrade. -See <> for a reference of all available settings. +See <> for a reference of all available settings. // lint ignore fleet elastic-cloud In {kib}, navigate to **Management** > **Fleet**.