Merged
47 commits
2bf6f48
Moved ingest models into separate file
olksdr Feb 9, 2022
b5e91e8
Add ingest append processor data source
olksdr Feb 9, 2022
be3a0ee
Merge branch 'main' into feat/data-processors-1
olksdr Feb 10, 2022
f511448
Add test for ingest append processor data source
olksdr Feb 10, 2022
c0099aa
Add bytes ingest processor data source
olksdr Feb 10, 2022
0795af4
Add ID to existing data sources as computed value
olksdr Feb 10, 2022
527538b
Update docs for existing ingest processors: add URLs
olksdr Feb 10, 2022
1575dfb
Add circle ingest processort data source
olksdr Feb 10, 2022
975b5c4
Add community id ingest processor data source
olksdr Feb 11, 2022
0dfb8da
Add convert ingest processor data source
olksdr Feb 11, 2022
92c6aca
Add CSV ingest processor data source
olksdr Feb 11, 2022
e3032a5
Add date ingest processor data source
olksdr Feb 11, 2022
746056b
Add date index name ingest processor data source
olksdr Feb 11, 2022
83d5984
Add dissect ingest processor data source
olksdr Feb 11, 2022
2f171d8
Add dot expander ingest processor data source
olksdr Feb 11, 2022
94cccb0
Add drop ingest processor data source
olksdr Feb 11, 2022
23782eb
Add enrich ingest processor data source
olksdr Feb 11, 2022
2282a32
Add fail ingest processor data source
olksdr Feb 11, 2022
080998c
Add fingerprint ingest processor data source
olksdr Feb 12, 2022
23cd292
Add foreach ingest processor data source
olksdr Feb 12, 2022
6c35dcb
Add geoip ingest processor data source
olksdr Feb 12, 2022
325020f
Add grok ingest processor data source
olksdr Feb 12, 2022
acfd901
Add gsub ingest processor data source
olksdr Feb 13, 2022
1b63c9d
Add html_strip ingest processor data source
olksdr Feb 13, 2022
1f6be07
Add join ingest processor data source
olksdr Feb 13, 2022
6d06447
Add json ingest processor data source
olksdr Feb 13, 2022
8e3046f
Add kv ingest processor data source
olksdr Feb 13, 2022
664dcbd
Fix geoip processor: handle list as a Set data type
olksdr Feb 13, 2022
5758438
Add lowercase ingest processor data source
olksdr Feb 13, 2022
2eb42d5
Add network direction ingest processor data source
olksdr Feb 13, 2022
86edc07
Add pipeline ingest processor data source
olksdr Feb 13, 2022
ea45adc
Add registered domain ingest processor data source
olksdr Feb 13, 2022
893678a
Add remove ingest processor data source
olksdr Feb 13, 2022
431c6f2
Add rename ingest processor data source
olksdr Feb 13, 2022
6a5ddb7
Add script ingest processor data source
olksdr Feb 13, 2022
f9fa9f4
Add set ingest processor data source
olksdr Feb 14, 2022
7501001
Add set security user ingest processor data source
olksdr Feb 14, 2022
0e8db5c
Add sort ingest processor data source
olksdr Feb 14, 2022
d89fa58
Add split ingest processor data source
olksdr Feb 14, 2022
add83a0
Add trim ingest processor data source
olksdr Feb 14, 2022
bf18c84
Add uppercase ingest processor data source
olksdr Feb 14, 2022
5415061
Add urldecode ingest processor data source
olksdr Feb 14, 2022
a168862
Add uri parts ingest processor data source
olksdr Feb 14, 2022
24e9606
Add user agent ingest processor data source
olksdr Feb 14, 2022
9e6a046
Add record to CHANGELOG about new data sources
olksdr Feb 14, 2022
cda7068
Update ingest pipeline examples to include usage of data sources
olksdr Feb 14, 2022
61cebd7
Update docs: use links to the `master` docs
olksdr Feb 15, 2022
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -3,6 +3,7 @@
- New resource `elasticstack_elasticsearch_data_stream` to manage Elasticsearch [data streams](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams.html) ([#45](https://github.com/elastic/terraform-provider-elasticstack/pull/45))
- New resource `elasticstack_elasticsearch_ingest_pipeline` to manage Elasticsearch [ingest pipelines](https://www.elastic.co/guide/en/elasticsearch/reference/7.16/ingest.html) ([#56](https://github.com/elastic/terraform-provider-elasticstack/issues/56))
- New resource `elasticstack_elasticsearch_component_template` to manage Elasticsearch [component templates](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-component-template.html) ([#39](https://github.com/elastic/terraform-provider-elasticstack/pull/39))
- New helper data sources to create [processors](https://www.elastic.co/guide/en/elasticsearch/reference/current/processors.html) for ingest pipelines ([#67](https://github.com/elastic/terraform-provider-elasticstack/pull/67))

### Fixed
- Update only changed index settings ([#52](https://github.com/elastic/terraform-provider-elasticstack/issues/52))
59 changes: 59 additions & 0 deletions docs/data-sources/elasticsearch_ingest_processor_append.md
@@ -0,0 +1,59 @@
---
subcategory: "Ingest"
layout: ""
page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_append Data Source"
description: |-
Helper data source to create a processor which appends one or more values to an existing array if the field already exists and it is an array.
---

# Data Source: elasticstack_elasticsearch_ingest_processor_append

Helper data source which can be used to create a processor to append one or more values to an existing array if the field already exists and it is an array.
Converts a scalar to an array and appends one or more values to it if the field exists and it is a scalar. Creates an array containing the provided values if the field doesn’t exist.

See: https://www.elastic.co/guide/en/elasticsearch/reference/current/append-processor.html

## Example Usage

```terraform
provider "elasticstack" {
elasticsearch {}
}

data "elasticstack_elasticsearch_ingest_processor_append" "tags" {
field = "tags"
value = ["production", "{{{app}}}", "{{{owner}}}"]
}

resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
name = "append-ingest"

processors = [
data.elasticstack_elasticsearch_ingest_processor_append.tags.json
]
}
```
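
Where duplicate entries should not be appended, the optional `allow_duplicates` attribute from the schema below can be set to `false`. A minimal sketch, reusing the illustrative `tags` field:

```terraform
# Sketch only: appends "production" to "tags" unless it is already present.
data "elasticstack_elasticsearch_ingest_processor_append" "unique_tags" {
  field            = "tags"
  value            = ["production"]
  allow_duplicates = false
}
```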

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- **field** (String) The field to be appended to.
- **value** (List of String) The value to be appended.

### Optional

- **allow_duplicates** (Boolean) If `false`, the processor does not append values already present in the field.
- **description** (String) Description of the processor.
- **if** (String) Conditionally execute the processor
- **ignore_failure** (Boolean) Ignore failures for the processor.
- **media_type** (String) The media type for encoding value. Applies only when value is a template snippet. Must be one of `application/json`, `text/plain`, or `application/x-www-form-urlencoded`.
- **on_failure** (List of String) Handle failures for the processor.
- **tag** (String) Identifier for the processor.

### Read-Only

- **id** (String) Internal identifier of the resource
- **json** (String) JSON representation of this data source.

58 changes: 58 additions & 0 deletions docs/data-sources/elasticsearch_ingest_processor_bytes.md
@@ -0,0 +1,58 @@
---
subcategory: "Ingest"
layout: ""
page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_bytes Data Source"
description: |-
Helper data source to create a processor which converts a human readable byte value (e.g. 1kb) to its value in bytes (e.g. 1024).
---

# Data Source: elasticstack_elasticsearch_ingest_processor_bytes

Helper data source which can be used to create a processor to convert a human readable byte value (e.g. 1kb) to its value in bytes (e.g. 1024). If the field is an array of strings, all members of the array will be converted.

Supported human readable units are "b", "kb", "mb", "gb", "tb", "pb", case insensitive. An error will occur if the field is not in a supported format or the resultant value exceeds 2^63.

See: https://www.elastic.co/guide/en/elasticsearch/reference/current/bytes-processor.html

## Example Usage

```terraform
provider "elasticstack" {
elasticsearch {}
}

data "elasticstack_elasticsearch_ingest_processor_bytes" "bytes" {
field = "file.size"
}

resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
name = "bytes-ingest"

processors = [
data.elasticstack_elasticsearch_ingest_processor_bytes.bytes.json
]
}
```
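
To keep the original human-readable value and write the numeric result to a separate field, the optional `target_field` and `ignore_missing` attributes from the schema below can be combined. A minimal sketch with illustrative field names:

```terraform
# Sketch only: "file.size" keeps the human-readable value, the byte count goes to "file.size_bytes".
data "elasticstack_elasticsearch_ingest_processor_bytes" "file_size_bytes" {
  field          = "file.size"
  target_field   = "file.size_bytes"
  ignore_missing = true
}
```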

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- **field** (String) The field to convert

### Optional

- **description** (String) Description of the processor.
- **if** (String) Conditionally execute the processor
- **ignore_failure** (Boolean) Ignore failures for the processor.
- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
- **on_failure** (List of String) Handle failures for the processor.
- **tag** (String) Identifier for the processor.
- **target_field** (String) The field to assign the converted value to, by default `field` is updated in-place

### Read-Only

- **id** (String) Internal identifier of the resource
- **json** (String) JSON representation of this data source.

60 changes: 60 additions & 0 deletions docs/data-sources/elasticsearch_ingest_processor_circle.md
@@ -0,0 +1,60 @@
---
subcategory: "Ingest"
layout: ""
page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_circle Data Source"
description: |-
Helper data source to create a processor which converts circle definitions of shapes to regular polygons which approximate them.
---

# Data Source: elasticstack_elasticsearch_ingest_processor_circle

Helper data source which can be used to create a processor to convert circle definitions of shapes to regular polygons which approximate them.

See: https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-circle-processor.html

## Example Usage

```terraform
provider "elasticstack" {
elasticsearch {}
}

data "elasticstack_elasticsearch_ingest_processor_circle" "circle" {
field = "circle"
error_distance = 28.1
shape_type = "geo_shape"
}

resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
name = "circle-ingest"

processors = [
data.elasticstack_elasticsearch_ingest_processor_circle.circle.json
]
}
```

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- **error_distance** (Number) The difference between the resulting inscribed distance from center to side and the circle’s radius (measured in meters for `geo_shape`, unit-less for `shape`)
- **field** (String) The field to interpret as a circle. Either a string in WKT format or a map for GeoJSON.
- **shape_type** (String) Which field mapping type is to be used when processing the circle.

### Optional

- **description** (String) Description of the processor.
- **if** (String) Conditionally execute the processor
- **ignore_failure** (Boolean) Ignore failures for the processor.
- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
- **on_failure** (List of String) Handle failures for the processor.
- **tag** (String) Identifier for the processor.
- **target_field** (String) The field to assign the converted value to, by default `field` is updated in-place

### Read-Only

- **id** (String) Internal identifier of the resource
- **json** (String) JSON representation of this data source.

62 changes: 62 additions & 0 deletions docs/data-sources/elasticsearch_ingest_processor_community_id.md
@@ -0,0 +1,62 @@
---
subcategory: "Ingest"
layout: ""
page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_community_id Data Source"
description: |-
Helper data source to create a processor which computes the Community ID for network flow data as defined in the Community ID Specification.
---

# Data Source: elasticstack_elasticsearch_ingest_processor_community_id

Helper data source which can be used to create a processor to compute the Community ID for network flow data as defined in the [Community ID Specification](https://github.com/corelight/community-id-spec).
You can use a community ID to correlate network events related to a single flow.

The community ID processor reads network flow data from related [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/1.12) fields by default. If you use the ECS, no configuration is required.

See: https://www.elastic.co/guide/en/elasticsearch/reference/current/community-id-processor.html

## Example Usage

```terraform
provider "elasticstack" {
elasticsearch {}
}

data "elasticstack_elasticsearch_ingest_processor_community_id" "community" {}

resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
name = "community-ingest"

processors = [
data.elasticstack_elasticsearch_ingest_processor_community_id.community.json
]
}
```
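
For documents that do not follow ECS field naming, the relevant source and destination fields can be mapped explicitly via the optional attributes below. A minimal sketch, assuming hypothetical custom field names:

```terraform
# Sketch only: field names are illustrative, not ECS.
data "elasticstack_elasticsearch_ingest_processor_community_id" "custom_fields" {
  source_ip      = "client.address"
  destination_ip = "server.address"
  transport      = "network.protocol"
  target_field   = "flow.community_id"
}
```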

<!-- schema generated by tfplugindocs -->
## Schema

### Optional

- **description** (String) Description of the processor.
- **destination_ip** (String) Field containing the destination IP address.
- **destination_port** (Number) Field containing the destination port.
- **iana_number** (Number) Field containing the IANA number.
- **icmp_code** (Number) Field containing the ICMP code.
- **icmp_type** (Number) Field containing the ICMP type.
- **if** (String) Conditionally execute the processor
- **ignore_failure** (Boolean) Ignore failures for the processor.
- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
- **on_failure** (List of String) Handle failures for the processor.
- **seed** (Number) Seed for the community ID hash. Must be between 0 and 65535 (inclusive). The seed can prevent hash collisions between network domains, such as a staging and production network that use the same addressing scheme.
- **source_ip** (String) Field containing the source IP address.
- **source_port** (Number) Field containing the source port.
- **tag** (String) Identifier for the processor.
- **target_field** (String) Output field for the community ID.
- **transport** (String) Field containing the transport protocol. Used only when the `iana_number` field is not present.

### Read-Only

- **id** (String) Internal identifier of the resource
- **json** (String) JSON representation of this data source.

67 changes: 67 additions & 0 deletions docs/data-sources/elasticsearch_ingest_processor_convert.md
@@ -0,0 +1,67 @@
---
subcategory: "Ingest"
layout: ""
page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_convert Data Source"
description: |-
Helper data source to create a processor which converts a field in the currently ingested document to a different type, such as converting a string to an integer.
---

# Data Source: elasticstack_elasticsearch_ingest_processor_convert

Helper data source which can be used to create a processor to convert a field in the currently ingested document to a different type, such as converting a string to an integer. If the field value is an array, all members will be converted.

The supported types include: `integer`, `long`, `float`, `double`, `string`, `boolean`, `ip`, and `auto`.

Specifying `boolean` will set the field to `true` if its string value is equal to `true` (ignoring case), to `false` if its string value is equal to `false` (ignoring case), and will throw an exception otherwise.

Specifying `ip` will set the target field to the value of `field` if it contains a valid IPv4 or IPv6 address that can be indexed into an IP field type.

Specifying `auto` will attempt to convert the string-valued `field` into the closest non-string, non-IP type. For example, a field whose value is `"true"` will be converted to its respective boolean type: `true`. Note that `float` takes precedence over `double` in `auto` mode: a value of `"242.15"` will automatically be converted to `242.15` of type `float`. If a provided field cannot be appropriately converted, the processor will still process successfully and leave the field value as-is. In such a case, `target_field` will be updated with the unconverted field value.

See: https://www.elastic.co/guide/en/elasticsearch/reference/current/convert-processor.html

## Example Usage

```terraform
provider "elasticstack" {
elasticsearch {}
}

data "elasticstack_elasticsearch_ingest_processor_convert" "convert" {
description = "converts the content of the id field to an integer"
field = "id"
type = "integer"
}

resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
name = "convert-ingest"

processors = [
data.elasticstack_elasticsearch_ingest_processor_convert.convert.json
]
}
```
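
The `auto` behaviour described above can be combined with the optional `target_field` so the original string is preserved while the converted value is written elsewhere. A minimal sketch with illustrative field names:

```terraform
# Sketch only: "metrics.raw" keeps the original string, "metrics.value" receives the converted value.
data "elasticstack_elasticsearch_ingest_processor_convert" "auto_convert" {
  field          = "metrics.raw"
  type           = "auto"
  target_field   = "metrics.value"
  ignore_missing = true
}
```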

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- **field** (String) The field whose value is to be converted.
- **type** (String) The type to convert the existing value to

### Optional

- **description** (String) Description of the processor.
- **if** (String) Conditionally execute the processor
- **ignore_failure** (Boolean) Ignore failures for the processor.
- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
- **on_failure** (List of String) Handle failures for the processor.
- **tag** (String) Identifier for the processor.
- **target_field** (String) The field to assign the converted value to.

### Read-Only

- **id** (String) Internal identifier of the resource
- **json** (String) JSON representation of this data source.

63 changes: 63 additions & 0 deletions docs/data-sources/elasticsearch_ingest_processor_csv.md
@@ -0,0 +1,63 @@
---
subcategory: "Ingest"
layout: ""
page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_csv Data Source"
description: |-
Helper data source to create a processor which extracts fields from a CSV line out of a single text field within a document.
---

# Data Source: elasticstack_elasticsearch_ingest_processor_csv

Helper data source which can be used to create a processor to extract fields from a CSV line out of a single text field within a document. Any empty field in CSV will be skipped.

See: https://www.elastic.co/guide/en/elasticsearch/reference/current/csv-processor.html

## Example Usage

```terraform
provider "elasticstack" {
elasticsearch {}
}

data "elasticstack_elasticsearch_ingest_processor_csv" "csv" {
field = "my_field"
target_fields = ["field1", "field2"]
}

resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
name = "csv-ingest"

processors = [
data.elasticstack_elasticsearch_ingest_processor_csv.csv.json
]
}
```

If the `trim` option is enabled, any whitespace at the beginning and end of each unquoted field will be trimmed. For example, with the configuration above and `trim` disabled, a value of `A, B` will result in field `field2` having the value ` B` (with a space at the beginning); with `trim` enabled, `A, B` will result in field `field2` having the value `B` (no whitespace). Quoted fields will be left untouched.
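
A minimal sketch showing the `trim`, `separator`, and `empty_value` options described above (field names are illustrative):

```terraform
# Sketch only: trims unquoted values and fills empty CSV columns with "n/a".
data "elasticstack_elasticsearch_ingest_processor_csv" "csv_trimmed" {
  field         = "log.raw_csv"
  target_fields = ["field1", "field2"]
  separator     = ","
  trim          = true
  empty_value   = "n/a"
}
```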

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- **field** (String) The field to extract data from.
- **target_fields** (List of String) The array of fields to assign extracted values to.

### Optional

- **description** (String) Description of the processor.
- **empty_value** (String) Value used to fill empty fields, empty fields will be skipped if this is not provided.
- **if** (String) Conditionally execute the processor
- **ignore_failure** (Boolean) Ignore failures for the processor.
- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
- **on_failure** (List of String) Handle failures for the processor.
- **quote** (String) Quote used in CSV, has to be single character string
- **separator** (String) Separator used in CSV, has to be single character string.
- **tag** (String) Identifier for the processor.
- **trim** (Boolean) Trim whitespaces in unquoted fields.

### Read-Only

- **id** (String) Internal identifier of the resource
- **json** (String) JSON representation of this data source.
