The `CHANGE_POINT` command requires a platinum license.

`CHANGE_POINT` detects spikes, dips, and change points in a metric.

## Syntax

```esql
CHANGE_POINT value [ON key] [AS type_name, pvalue_name]
```

## Parameters

`value`
: The column with the metric in which you want to detect a change point.
`pvalue_name`
: The name of the output column with the p-value that indicates how extreme the change point is. If not specified, `pvalue` is used.

## Description

`CHANGE_POINT` detects spikes, dips, and change points in a metric. The command adds columns to
the table with the change point type and the p-value, which indicates how extreme the change point is.
There must be at least 22 values for change point detection. Fewer than 1,000 is preferred.
::::

## Examples

The following example detects a step change in a metric:

:::{include} ../examples/change_point.csv-spec/changePointForDocs.md
:::
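The optional `ON` and `AS` clauses can be combined. The following is a sketch, assuming a hypothetical `k8s` index with an `@timestamp` field (the index and field names are illustrative):

```esql
FROM k8s
| STATS count = COUNT() BY minute = BUCKET(@timestamp, 1 MINUTE)
| CHANGE_POINT count ON minute AS change_type, change_pvalue
```

Here `minute` is the key the values are sorted on, and the output columns are named `change_type` and `change_pvalue` instead of the defaults.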
::::
:::::

## Syntax

::::{applies-switch}

```esql
COMPLETION [column =] prompt WITH my_inference_endpoint
```

::::

## Parameters

`column`
: (Optional) The name of the output column containing the LLM's response.
`my_inference_endpoint`
: The ID of the [inference endpoint](docs-content://explore-analyze/elastic-inference/inference-api.md) to use for the task.
The inference endpoint must be configured with the `completion` task type.

## Description

The `COMPLETION` command provides a general-purpose interface for
text generation tasks using a Large Language Model (LLM) in ES|QL.
This makes it suitable for a wide range of text generation tasks, including:
- Content rewriting
- Creative generation

## Requirements

To use this command, you must deploy your LLM model in Elasticsearch as
an [inference endpoint](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put) with the
task type `completion`.

### Handling timeouts

`COMPLETION` commands may time out when processing large datasets or complex prompts. The default timeout is 10 minutes, but you can increase this limit if necessary.

If you don't want to increase the timeout limit, try the following:
* Configure your HTTP client's response timeout (Refer to [HTTP client configuration](/reference/elasticsearch/configuration-reference/networking-settings.md#_http_client_configuration))
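Reducing the amount of data processed per query can also help. A minimal sketch, assuming a hypothetical `support_tickets` index with a `description` field:

```esql
FROM support_tickets
| LIMIT 100
| COMPLETION summary = description WITH my_inference_endpoint
```

Processing a bounded number of rows keeps each request small enough to complete within the timeout.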


## Examples

The following examples show common `COMPLETION` patterns.

### Use the default output column name

If no column name is specified, the response is stored in `completion`:

```esql
ROW question = "What is Elasticsearch?"
| COMPLETION question WITH my_inference_endpoint
```

| question | completion |
|------------------------|-------------------------------------------|
| What is Elasticsearch? | A distributed search and analytics engine |

### Specify the output column name

Use `column =` to assign the response to a named column:

```esql
ROW question = "What is Elasticsearch?"
| COMPLETION answer = question WITH my_inference_endpoint
```

| question | answer |
| --- | --- |
| What is Elasticsearch? | A distributed search and analytics engine |

### Summarize documents with a prompt

Use `CONCAT` to build a prompt from field values before calling `COMPLETION`:

```esql
FROM movies
// the rest of this pipeline is a reconstruction; field names (rating, title) are illustrative
| SORT rating DESC
| LIMIT 10
| EVAL prompt = CONCAT("Summarize this movie: ", title)
| COMPLETION summary = prompt WITH my_inference_endpoint
| KEEP title, summary
```

`DISSECT` enables you to [extract structured data out of a string](/reference/query-languages/esql/esql-process-data-with-dissect-grok.md).

## Syntax

```esql
DISSECT input "pattern" [APPEND_SEPARATOR="<separator>"]
```

## Parameters

`input`
: The column that contains the string you want to structure.
If the column has multiple values, `DISSECT` will process each value.

`pattern`
: A [dissect pattern](/reference/query-languages/esql/esql-process-data-with-dissect-grok.md#esql-dissect-patterns).
`<separator>`
: A string used as the separator between appended values, when using the [append modifier](/reference/query-languages/esql/esql-process-data-with-dissect-grok.md#esql-append-modifier).

## Description

`DISSECT` enables you to [extract structured data out of a string](/reference/query-languages/esql/esql-process-data-with-dissect-grok.md).
`DISSECT` matches the string against a delimiter-based pattern, and extracts the specified keys as columns.

Refer to [Process data with `DISSECT`](/reference/query-languages/esql/esql-process-data-with-dissect-grok.md#esql-process-data-with-dissect) for the syntax of dissect patterns.

## Examples

The following examples show how to parse and convert structured strings with `DISSECT`.

### Parse a structured string

Parse a string that contains a timestamp, some text, and an IP address:

:::{include} ../examples/docs.csv-spec/basicDissect.md
:::
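The append modifier and `APPEND_SEPARATOR` described under the parameters can be sketched as follows (the input row and key name are illustrative):

```esql
ROW full_name = "john jacob jingleheimer"
| DISSECT full_name "%{+name} %{+name} %{+name}" APPEND_SEPARATOR=","
```

Each `%{+name}` match is appended to the same `name` key, joined by the separator, which should yield `john,jacob,jingleheimer`.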

### Convert output to a non-string type

By default, `DISSECT` outputs keyword string columns. To convert to another
type, use [Type conversion functions](/reference/query-languages/esql/functions-operators/type-conversion-functions.md):
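A sketch of such a conversion, parsing a date out of a string and converting it with `TO_DATETIME` (the input row is illustrative):

```esql
ROW a = "2023-01-23T12:15:00.000Z 127.0.0.1"
| DISSECT a "%{date} %{ip}"
| EVAL date = TO_DATETIME(date)
```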


The `DROP` processing command removes one or more columns.

## Syntax

```esql
DROP columns
```

## Parameters

`columns`
: A comma-separated list of columns to remove. Supports wildcards.

## Examples

The following examples show how to remove columns by name and by pattern.

### Drop a column by name

:::{include} ../examples/drop.csv-spec/height.md
:::

### Drop columns matching a wildcard pattern

Rather than specify each column by name, you can use wildcards to drop all
columns with a name that matches a pattern:
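A minimal sketch, assuming an index with several columns whose names start with `height`:

```esql
FROM employees
| DROP height*
```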

`ENRICH` enables you to add data from existing indices as new columns using an
enrich policy.

::::{tip}
Consider using `LOOKUP JOIN` instead of `ENRICH` for your use case.

Learn more:

- [`LOOKUP JOIN` overview](/reference/query-languages/esql/esql-lookup-join.md)
- [`LOOKUP JOIN` command reference](/reference/query-languages/esql/commands/lookup-join.md)
::::

## Syntax

```esql
ENRICH policy [ON match_field] [WITH [new_name1 = ]field1, [new_name2 = ]field2, ...]
```

## Parameters

`policy`
: The name of the enrich policy.
`fieldX`
: (Optional) The enrich fields from the enrich index that are added to the result as new columns.
If a column with the same name already exists, it is replaced by the new column unless the
enrich field is renamed.

`new_nameX`
: Enables you to change the name of the column that's added for each of the enrich
fields. Defaults to the enrich field name.
If a column has the same name as the new name, it will be discarded.
If a name (new or original) occurs more than once, only the rightmost duplicate
creates a new column.

## Description

`ENRICH` enables you to add data from existing indices as new columns using an
enrich policy.
Refer to [Data enrichment](/reference/query-languages/esql/esql-enrich-data.md)
for information about setting up a policy.

:::{image} /reference/query-languages/images/esql-enrich.png
:::
Before you can use `ENRICH`, you need to [create and execute an enrich policy](/reference/query-languages/esql/esql-enrich-data.md#esql-set-up-enrich-policy).
::::

## Examples

The following examples show common `ENRICH` patterns.

### Use the default match field

`ENRICH` looks for records in the [enrich index](/reference/query-languages/esql/esql-enrich-data.md#esql-enrich-index)
using the `match_field` defined in the [enrich policy](/reference/query-languages/esql/esql-enrich-data.md#esql-enrich-policy).
The input table must have a column with the same name (`language_code` in this example):

:::{include} ../examples/enrich.csv-spec/enrich.md
:::

### Match on a different field using ON

To use a column with a different name than the `match_field` defined in the
policy as the match field, use `ON <column-name>`:

:::{include} ../examples/enrich.csv-spec/enrich_on.md
:::

### Select specific enrich fields using WITH

By default, each of the enrich fields defined in the policy is added as a
column. To explicitly select the enrich fields that are added, use
`WITH <field1>, <field2>, ...`:

:::{include} ../examples/enrich.csv-spec/enrich_with.md
:::

### Rename enrich fields using WITH

Rename the columns that are added using `WITH new_name=<field1>`:

:::{include} ../examples/enrich.csv-spec/enrich_rename.md
:::

In case of name collisions, the newly created columns will override existing
columns.
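The `ON` and `WITH` clauses can be combined. A sketch using the same hypothetical `languages_policy` (column and field names are illustrative):

```esql
ROW a = "1"
| ENRICH languages_policy ON a WITH name = language_name
```

This matches on the `a` column, adds only the `language_name` enrich field, and renames it to `name`.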
The `EVAL` processing command enables you to append new columns with calculated
values.

## Syntax

```esql
EVAL [column1 =] value1[, ..., [columnN =] valueN]
```

## Parameters

`columnX`
: The column name.
`valueX`
: The value for the column. Can be a literal, an expression, or a
[function](/reference/query-languages/esql/esql-functions-operators.md#esql-functions).
Can use columns defined left of this one.

## Description

The `EVAL` processing command enables you to append new columns with calculated
values. `EVAL` supports various functions for calculating values. Refer to
[Functions](/reference/query-languages/esql/esql-functions-operators.md#esql-functions) for more information.

## Examples

The following examples show common `EVAL` patterns.

### Append a calculated column

:::{include} ../examples/eval.csv-spec/eval.md
:::
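An `EVAL` expression can reference columns defined earlier in the same command. A minimal sketch:

```esql
ROW base = 10
| EVAL doubled = base * 2, quadrupled = doubled * 2
```

The second expression reuses `doubled`, which is defined immediately to its left.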

### Overwrite an existing column

If the specified column already exists, the existing column will be dropped, and
the new column will be appended to the table:

:::{include} ../examples/eval.csv-spec/evalReplace.md
:::

### Use an expression as the column name

Specifying the output column name is optional. If not specified, the new column
name is equal to the expression. The following query adds a column named
`height*3.281`:

:::{include} ../examples/eval.csv-spec/evalUnnamedColumn.md
:::

### Reference an auto-named column in a subsequent command

Because this name contains special characters,
[it needs to be quoted](/reference/query-languages/esql/esql-syntax.md#esql-identifiers)
with backticks (```) when using it in subsequent commands:
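A sketch of quoting the auto-named column from the previous example (the `employees` index is illustrative):

```esql
FROM employees
| EVAL height*3.281
| STATS avg_height_feet = AVG(`height*3.281`)
```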