diff --git a/CHANGELOG.md b/CHANGELOG.md
index 6986696ee..654220e54 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -3,6 +3,7 @@
 - New resource `elasticstack_elasticsearch_data_stream` to manage Elasticsearch [data streams](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams.html) ([#45](https://github.com/elastic/terraform-provider-elasticstack/pull/45))
 - New resource `elasticstack_elasticsearch_ingest_pipeline` to manage Elasticsearch [ingest pipelines](https://www.elastic.co/guide/en/elasticsearch/reference/7.16/ingest.html) ([#56](https://github.com/elastic/terraform-provider-elasticstack/issues/56))
 - New resource `elasticstack_elasticsearch_component_template` to manage Elasticsearch [component templates](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-component-template.html) ([#39](https://github.com/elastic/terraform-provider-elasticstack/pull/39))
+- New helper data sources to create [processors](https://www.elastic.co/guide/en/elasticsearch/reference/current/processors.html) for ingest pipelines ([#67](https://github.com/elastic/terraform-provider-elasticstack/pull/67))
 ### Fixed
 - Update only changed index settings ([#52](https://github.com/elastic/terraform-provider-elasticstack/issues/52))
diff --git a/docs/data-sources/elasticsearch_ingest_processor_append.md b/docs/data-sources/elasticsearch_ingest_processor_append.md
new file mode 100644
index 000000000..c0ff6bbb5
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_append.md
@@ -0,0 +1,59 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_append Data Source"
+description: |-
+  Helper data source to create a processor which appends one or more values to an existing array if the field already exists and it is an array.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_append
+
+Helper data source which can be used to create a processor that appends one or more values to an existing array if the field already exists and it is an array.
+Converts a scalar to an array and appends one or more values to it if the field exists and it is a scalar. Creates an array containing the provided values if the field doesn't exist.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/append-processor.html
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_append" "tags" {
+  field = "tags"
+  value = ["production", "{{{app}}}", "{{{owner}}}"]
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "append-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_append.tags.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to be appended to.
+- **value** (List of String) The value to be appended.
+
+### Optional
+
+- **allow_duplicates** (Boolean) If `false`, the processor does not append values already present in the field.
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **media_type** (String) The media type for encoding value. Applies only when value is a template snippet. Must be one of `application/json`, `text/plain`, or `application/x-www-form-urlencoded`.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
+
diff --git a/docs/data-sources/elasticsearch_ingest_processor_bytes.md b/docs/data-sources/elasticsearch_ingest_processor_bytes.md
new file mode 100644
index 000000000..7dadc4091
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_bytes.md
@@ -0,0 +1,58 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_bytes Data Source"
+description: |-
+  Helper data source to create a processor which converts a human readable byte value (e.g. 1kb) to its value in bytes (e.g. 1024).
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_bytes
+
+Helper data source which can be used to create a processor that converts a human readable byte value (e.g. 1kb) to its value in bytes (e.g. 1024). If the field is an array of strings, all members of the array will be converted.
+
+Supported human readable units are "b", "kb", "mb", "gb", "tb", and "pb" (case insensitive). An error will occur if the field is not in a supported format or the resultant value exceeds 2^63.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/bytes-processor.html
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_bytes" "bytes" {
+  field = "file.size"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "bytes-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_bytes.bytes.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to convert.
+
### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) The field to assign the converted value to, by default `field` is updated in-place.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
+
diff --git a/docs/data-sources/elasticsearch_ingest_processor_circle.md b/docs/data-sources/elasticsearch_ingest_processor_circle.md
new file mode 100644
index 000000000..69c180af4
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_circle.md
@@ -0,0 +1,60 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_circle Data Source"
+description: |-
+  Helper data source to create a processor which converts circle definitions of shapes to regular polygons which approximate them.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_circle
+
+Helper data source which can be used to create a processor that converts circle definitions of shapes to regular polygons which approximate them.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-circle-processor.html
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_circle" "circle" {
+  field          = "circle"
+  error_distance = 28.1
+  shape_type     = "geo_shape"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "circle-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_circle.circle.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **error_distance** (Number) The difference between the resulting inscribed distance from center to side and the circle's radius (measured in meters for `geo_shape`, unit-less for `shape`).
+- **field** (String) The field to interpret as a circle. Can be a string in WKT format or a GeoJSON map.
+- **shape_type** (String) Which field mapping type is to be used when processing the circle: `geo_shape` or `shape`.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) The field to assign the converted value to, by default `field` is updated in-place.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
+
diff --git a/docs/data-sources/elasticsearch_ingest_processor_community_id.md b/docs/data-sources/elasticsearch_ingest_processor_community_id.md
new file mode 100644
index 000000000..cd1e83d78
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_community_id.md
@@ -0,0 +1,62 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_community_id Data Source"
+description: |-
+  Helper data source to create a processor which computes the Community ID for network flow data as defined in the Community ID Specification.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_community_id
+
+Helper data source which can be used to create a processor that computes the Community ID for network flow data as defined in the [Community ID Specification](https://github.com/corelight/community-id-spec).
+You can use a community ID to correlate network events related to a single flow.
+
+The community ID processor reads network flow data from related [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/1.12) fields by default. If you use the ECS, no configuration is required.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/community-id-processor.html
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_community_id" "community" {}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "community-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_community_id.community.json
+  ]
+}
+```
+
+
+## Schema
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **destination_ip** (String) Field containing the destination IP address.
+- **destination_port** (Number) Field containing the destination port.
+- **iana_number** (Number) Field containing the IANA number.
+- **icmp_code** (Number) Field containing the ICMP code.
+- **icmp_type** (Number) Field containing the ICMP type.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **on_failure** (List of String) Handle failures for the processor.
+- **seed** (Number) Seed for the community ID hash. Must be between 0 and 65535 (inclusive). The seed can prevent hash collisions between network domains, such as a staging and production network that use the same addressing scheme.
+- **source_ip** (String) Field containing the source IP address.
+- **source_port** (Number) Field containing the source port.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) Output field for the community ID.
+- **transport** (String) Field containing the transport protocol. Used only when the `iana_number` field is not present.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
+
diff --git a/docs/data-sources/elasticsearch_ingest_processor_convert.md b/docs/data-sources/elasticsearch_ingest_processor_convert.md
new file mode 100644
index 000000000..61ebd63ab
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_convert.md
@@ -0,0 +1,67 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_convert Data Source"
+description: |-
+  Helper data source to create a processor which converts a field in the currently ingested document to a different type, such as converting a string to an integer.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_convert
+
+Helper data source which can be used to create a processor that converts a field in the currently ingested document to a different type, such as converting a string to an integer. If the field value is an array, all members will be converted.
+
+The supported types include: `integer`, `long`, `float`, `double`, `string`, `boolean`, `ip`, and `auto`.
+
+Specifying `boolean` will set the field to `true` if its string value is equal to `true` (ignoring case), to `false` if its string value is equal to `false` (ignoring case), or it will throw an exception otherwise.
+
+Specifying `ip` will set the target field to the value of `field` if it contains a valid IPv4 or IPv6 address that can be indexed into an IP field type.
+
+Specifying `auto` will attempt to convert the string-valued `field` into the closest non-string, non-IP type. For example, a field whose value is "true" will be converted to its respective boolean type: true. Note that `float` takes precedence over `double` in `auto` mode. A value of "242.15" will "automatically" be converted to 242.15 of type `float`. If a provided field cannot be appropriately converted, the processor will still process successfully and leave the field value as-is. In such a case, `target_field` will be updated with the unconverted field value.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/convert-processor.html
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_convert" "convert" {
+  description = "converts the content of the id field to an integer"
+  field       = "id"
+  type        = "integer"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "convert-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_convert.convert.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field whose value is to be converted.
+- **type** (String) The type to convert the existing value to.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) The field to assign the converted value to.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
+
diff --git a/docs/data-sources/elasticsearch_ingest_processor_csv.md b/docs/data-sources/elasticsearch_ingest_processor_csv.md
new file mode 100644
index 000000000..4bb12e395
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_csv.md
@@ -0,0 +1,63 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_csv Data Source"
+description: |-
+  Helper data source to create a processor which extracts fields from CSV line out of a single text field within a document.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_csv
+
+Helper data source which can be used to create a processor that extracts fields from a CSV line out of a single text field within a document. Any empty field in CSV will be skipped.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/csv-processor.html
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_csv" "csv" {
+  field         = "my_field"
+  target_fields = ["field1", "field2"]
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "csv-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_csv.csv.json
+  ]
+}
+```
+
+If the `trim` option is enabled then any whitespace at the beginning and at the end of each unquoted field will be trimmed. For example, with the configuration above and `trim` disabled, a value of `A, B` will result in field `field2` having value ` B` (with a space at the beginning). If `trim` is enabled, `A, B` will result in field `field2` having value `B` (no whitespace). Quoted fields will be left untouched.
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to extract data from.
+- **target_fields** (List of String) The array of fields to assign extracted values to.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **empty_value** (String) Value used to fill empty fields, empty fields will be skipped if this is not provided.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **on_failure** (List of String) Handle failures for the processor.
+- **quote** (String) Quote used in CSV, has to be a single character string.
+- **separator** (String) Separator used in CSV, has to be a single character string.
+- **tag** (String) Identifier for the processor.
+- **trim** (Boolean) Trim whitespace in unquoted fields.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
+
diff --git a/docs/data-sources/elasticsearch_ingest_processor_date.md b/docs/data-sources/elasticsearch_ingest_processor_date.md
new file mode 100644
index 000000000..67ed32878
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_date.md
@@ -0,0 +1,64 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_date Data Source"
+description: |-
+  Helper data source to create a processor which parses dates from fields, and then uses the date or timestamp as the timestamp for the document.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_date
+
+Helper data source which can be used to create a processor that parses dates from fields, and then uses the date or timestamp as the timestamp for the document.
+By default, the date processor adds the parsed date as a new field called `@timestamp`. You can specify a different field by setting the `target_field` configuration parameter. Multiple date formats are supported as part of the same date processor definition. They will be used sequentially to attempt parsing the date field, in the same order they were defined as part of the processor definition (see the additional sketch at the end of this page).
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/date-processor.html
+
+## Example Usage
+
+Here is an example that adds the parsed date to the `timestamp` field based on the `initial_date` field:
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_date" "date" {
+  field        = "initial_date"
+  target_field = "timestamp"
+  formats      = ["dd/MM/yyyy HH:mm:ss"]
+  timezone     = "Europe/Amsterdam"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "date-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_date.date.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to get the date from.
+- **formats** (List of String) An array of the expected date formats.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **locale** (String) The locale to use when parsing the date, relevant when parsing month names or week days.
+- **on_failure** (List of String) Handle failures for the processor.
+- **output_format** (String) The format to use when writing the date to `target_field`.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) The field that will hold the parsed date.
+- **timezone** (String) The timezone to use when parsing the date.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
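+
+## Additional Example
+
+Because the `formats` are tried in order, a single processor can handle fields whose dates arrive in mixed formats. A minimal sketch, assuming a hypothetical `event_date` field (not part of the example above):
+
+```terraform
+# Each format is attempted in order until one successfully parses the value.
+data "elasticstack_elasticsearch_ingest_processor_date" "mixed" {
+  field        = "event_date" # hypothetical source field
+  target_field = "@timestamp"
+  formats      = ["ISO8601", "UNIX_MS", "dd/MM/yyyy HH:mm:ss"]
+}
+```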
diff --git a/docs/data-sources/elasticsearch_ingest_processor_date_index_name.md b/docs/data-sources/elasticsearch_ingest_processor_date_index_name.md
new file mode 100644
index 000000000..ab4fc947e
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_date_index_name.md
@@ -0,0 +1,66 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_date_index_name Data Source"
+description: |-
+  Helper data source to create a processor which points documents to the right time-based index based on a date or timestamp field in a document by using the date math index name support.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_date_index_name
+
+The purpose of this processor is to point documents to the right time-based index based on a date or timestamp field in a document by using the date math index name support.
+
+The processor sets the `_index` metadata field with a date math index name expression based on the provided index name prefix, a date or timestamp field in the documents being processed, and the provided date rounding.
+
+First, this processor fetches the date or timestamp from a field in the document being processed. Optionally, date formatting can be configured to control how the field's value is parsed into a date. Then this date, the provided index name prefix, and the provided date rounding get formatted into a date math index name expression. Here, too, date formatting can optionally be specified to control how the date is formatted into the expression.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/date-index-name-processor.html
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_date_index_name" "date_index_name" {
+  description       = "monthly date-time index naming"
+  field             = "date1"
+  index_name_prefix = "my-index-"
+  date_rounding     = "M"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "date-index-name-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_date_index_name.date_index_name.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **date_rounding** (String) How to round the date when formatting the date into the index name.
+- **field** (String) The field to get the date or timestamp from.
+
+### Optional
+
+- **date_formats** (List of String) An array of the expected date formats for parsing dates / timestamps in the document being preprocessed.
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **index_name_format** (String) The format to be used when printing the parsed date into the index name.
+- **index_name_prefix** (String) A prefix of the index name to be prepended before the printed date.
+- **locale** (String) The locale to use when parsing the date from the document being preprocessed, relevant when parsing month names or week days.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+- **timezone** (String) The timezone to use when parsing the date and when date math index name expressions are resolved into concrete index names.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
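+
+## Additional Example
+
+Per the upstream Elasticsearch example for this processor, the configuration above would give a document with a `date1` value of `2016-04-25T12:02:01.789Z` an `_index` expression of `<my-index-{2016-04-25||/M{yyyy-MM-dd|UTC}}>`, which resolves to `my-index-2016-04-01`. When the source field is not in a default format, `date_formats` and `index_name_format` control parsing and printing. A minimal sketch, assuming a hypothetical `order_date` field and `orders-` prefix:
+
+```terraform
+data "elasticstack_elasticsearch_ingest_processor_date_index_name" "daily" {
+  field             = "order_date"   # hypothetical timestamp field
+  index_name_prefix = "orders-"      # hypothetical index name prefix
+  date_rounding     = "d"            # round down to daily indices
+  date_formats      = ["dd/MM/yyyy"] # how to parse the incoming value
+  index_name_format = "yyyy-MM-dd"   # how to print the date into the index name
+}
+```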
diff --git a/docs/data-sources/elasticsearch_ingest_processor_dissect.md b/docs/data-sources/elasticsearch_ingest_processor_dissect.md
new file mode 100644
index 000000000..6ead39063
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_dissect.md
@@ -0,0 +1,60 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_dissect Data Source"
+description: |-
+  Helper data source to create a processor which extracts structured fields out of a single text field within a document.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_dissect
+
+Similar to the Grok Processor, dissect also extracts structured fields out of a single text field within a document. However, unlike the Grok Processor, dissect does not use regular expressions. This allows dissect's syntax to be simple and, in some cases, faster than the Grok Processor.
+
+Dissect matches a single text field against a defined pattern.
+
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/dissect-processor.html
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_dissect" "dissect" {
+  field   = "message"
+  pattern = "%%{clientip} %%{ident} %%{auth} [%%{@timestamp}] \"%%{verb} %%{request} HTTP/%%{httpversion}\" %%{status} %%{size}"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "dissect-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_dissect.dissect.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to dissect.
+- **pattern** (String) The pattern to apply to the field.
+
+### Optional
+
+- **append_separator** (String) The character(s) that separate the appended fields.
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_dot_expander.md b/docs/data-sources/elasticsearch_ingest_processor_dot_expander.md
new file mode 100644
index 000000000..833e37de3
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_dot_expander.md
@@ -0,0 +1,56 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_dot_expander Data Source"
+description: |-
+  Helper data source to create a processor which expands a field with dots into an object field.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_dot_expander
+
+Expands a field with dots into an object field. This processor allows fields with dots in the name to be accessible by other processors in the pipeline. Otherwise these fields can't be accessed by any processor.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/dot-expand-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_dot_expander" "dot_expander" {
+  field = "foo.bar"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "dot-expander-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_dot_expander.dot_expander.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to expand into an object field. If set to `*`, all top-level fields will be expanded.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **on_failure** (List of String) Handle failures for the processor.
+- **override** (Boolean) Controls the behavior when there is already an existing nested object that conflicts with the expanded field.
+- **path** (String) The field that contains the field to expand.
+- **tag** (String) Identifier for the processor.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_drop.md b/docs/data-sources/elasticsearch_ingest_processor_drop.md
new file mode 100644
index 000000000..b66ab5448
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_drop.md
@@ -0,0 +1,50 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_drop Data Source"
+description: |-
+  Helper data source to create a processor which drops the document without raising any errors.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_drop
+
+Drops the document without raising any errors. This is useful to prevent the document from getting indexed based on some condition.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/drop-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_drop" "drop" {
+  if = "ctx.network_name == 'Guest'"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "drop-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_drop.drop.json
+  ]
+}
+```
+
+
+## Schema
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_enrich.md b/docs/data-sources/elasticsearch_ingest_processor_enrich.md
new file mode 100644
index 000000000..41ee0d5da
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_enrich.md
@@ -0,0 +1,64 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_enrich Data Source"
+description: |-
+  Helper data source to create a processor which enriches documents with data from another index.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_enrich
+
+The enrich processor can enrich documents with data from another index. See the enrich data section for more information about how to set this up.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-enriching-data.html and https://www.elastic.co/guide/en/elasticsearch/reference/current/enrich-processor.html
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+// the policy must exist before using this processor
+// See example at: https://www.elastic.co/guide/en/elasticsearch/reference/current/match-enrich-policy-type.html
+data "elasticstack_elasticsearch_ingest_processor_enrich" "enrich" {
+  policy_name  = "users-policy"
+  field        = "email"
+  target_field = "user"
+  max_matches  = 1
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "enrich-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_enrich.enrich.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field in the input document that matches the policy's `match_field` used to retrieve the enrichment data.
+- **policy_name** (String) The name of the enrich policy to use.
+- **target_field** (String) Field added to incoming documents to contain enrich data.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **max_matches** (Number) The maximum number of matched documents to include under the configured target field.
+- **on_failure** (List of String) Handle failures for the processor.
+- **override** (Boolean) If `true`, the processor updates pre-existing non-null-valued fields; when `false`, such fields are not touched.
+- **shape_relation** (String) A spatial relation operator used to match the `geo_shape` of incoming documents to documents in the enrich index.
+- **tag** (String) Identifier for the processor.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_fail.md b/docs/data-sources/elasticsearch_ingest_processor_fail.md
new file mode 100644
index 000000000..77eb88eab
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_fail.md
@@ -0,0 +1,55 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_fail Data Source"
+description: |-
+  Helper data source to create a processor which raises an exception.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_fail
+
+Raises an exception. This is useful when you expect a pipeline to fail and want to relay a specific message to the requester.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/fail-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_fail" "fail" {
+  if      = "ctx.tags.contains('production') != true"
+  message = "The production tag is not present, found tags: {{{tags}}}"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "fail-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_fail.fail.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **message** (String) The error message thrown by the processor.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_fingerprint.md b/docs/data-sources/elasticsearch_ingest_processor_fingerprint.md
new file mode 100644
index 000000000..11d0774dd
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_fingerprint.md
@@ -0,0 +1,57 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_fingerprint Data Source"
+description: |-
+  Helper data source to create a processor which computes a hash of the document's content.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_fingerprint
+
+Computes a hash of the document's content. You can use this hash for content fingerprinting.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/fingerprint-processor.html
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_fingerprint" "fingerprint" {
+  fields = ["user"]
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "fingerprint-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_fingerprint.fingerprint.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **fields** (List of String) Array of fields to include in the fingerprint.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true`, the processor ignores any missing `fields`. If all fields are missing, the processor silently exits without modifying the document.
+- **method** (String) The hash method used to compute the fingerprint.
+- **on_failure** (List of String) Handle failures for the processor.
+- **salt** (String) Salt value for the hash function.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) Output field for the fingerprint.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
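+
+## Additional Example
+
+The `method` and `salt` options from the schema above can pin the hash algorithm and guard against precomputed hashes. A minimal sketch, assuming hypothetical `user.name` and `user.email` fields and an illustrative salt value:
+
+```terraform
+data "elasticstack_elasticsearch_ingest_processor_fingerprint" "user_fp" {
+  fields = ["user.name", "user.email"] # hypothetical fields hashed together
+  method = "SHA-256"                   # explicit hash method instead of the default
+  salt   = "my-salt"                   # illustrative salt value
+}
+```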
diff --git a/docs/data-sources/elasticsearch_ingest_processor_foreach.md b/docs/data-sources/elasticsearch_ingest_processor_foreach.md
new file mode 100644
index 000000000..84e9499c2
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_foreach.md
@@ -0,0 +1,76 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_foreach Data Source"
+description: |-
+  Helper data source to create a processor which runs an ingest processor on each element of an array or object.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_foreach
+
+Runs an ingest processor on each element of an array or object.
+
+All ingest processors can run on array or object elements. However, if the number of elements is unknown, it can be cumbersome to process each one in the same way.
+
+The `foreach` processor lets you specify a `field` containing array or object values and a `processor` to run on each element in the field.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/foreach-processor.html
+
+
+### Access keys and values
+
+When iterating through an array or object, the foreach processor stores the current element's value in the `_ingest._value` ingest metadata field. `_ingest._value` contains the entire element value, including any child fields. You can access child field values using dot notation on the `_ingest._value` field.
+
+When iterating through an object, the foreach processor also stores the current element's key as a string in `_ingest._key`.
+
+You can access and change `_ingest._key` and `_ingest._value` in the processor.
+
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_convert" "convert" {
+  field = "_ingest._value"
+  type  = "integer"
+}
+
+data "elasticstack_elasticsearch_ingest_processor_foreach" "foreach" {
+  field     = "values"
+  processor = data.elasticstack_elasticsearch_ingest_processor_convert.convert.json
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "foreach-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_foreach.foreach.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) Field containing array or object values.
+- **processor** (String) Ingest processor to run on each element.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true`, the processor silently exits without changing the document if the `field` is `null` or missing.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_geoip.md b/docs/data-sources/elasticsearch_ingest_processor_geoip.md
new file mode 100644
index 000000000..ac7eeeac5
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_geoip.md
@@ -0,0 +1,61 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_geoip Data Source"
+description: |-
+  Helper data source to create a processor which adds information about the geographical location of an IPv4 or IPv6 address.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_geoip
+
+The geoip processor adds information about the geographical location of an IPv4 or IPv6 address.
+
+By default, the processor uses the GeoLite2 City, GeoLite2 Country, and GeoLite2 ASN GeoIP2 databases from MaxMind, shared under the CC BY-SA 4.0 license. Elasticsearch automatically downloads updates for these databases from the Elastic GeoIP endpoint: https://geoip.elastic.co/v1/database. To get download statistics for these updates, use the GeoIP stats API.
+
+If your cluster can't connect to the Elastic GeoIP endpoint or you want to manage your own updates, [see Manage your own GeoIP2 database updates](https://www.elastic.co/guide/en/elasticsearch/reference/current/geoip-processor.html#manage-geoip-database-updates).
+
+If Elasticsearch can't connect to the endpoint for 30 days, all updated databases become invalid. Elasticsearch will stop enriching documents with geoip data and will add a `tags: ["_geoip_expired_database"]` field instead.
+
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/geoip-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_geoip" "geoip" {
+  field = "ip"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "geoip-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_geoip.geoip.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to get the ip address from for the geographical lookup.
+
+### Optional
+
+- **database_file** (String) The database filename referring to a database the module ships with (GeoLite2-City.mmdb, GeoLite2-Country.mmdb, or GeoLite2-ASN.mmdb) or a custom database in the `ingest-geoip` config directory.
+- **first_only** (Boolean) If `true`, only the first found geoip data will be returned, even if `field` contains an array.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist, the processor quietly exits without modifying the document.
+- **properties** (Set of String) Controls what properties are added to the `target_field` based on the geoip lookup.
+- **target_field** (String) The field that will hold the geographical information looked up from the MaxMind database.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_grok.md b/docs/data-sources/elasticsearch_ingest_processor_grok.md
new file mode 100644
index 000000000..ed55c0f99
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_grok.md
@@ -0,0 +1,69 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_grok Data Source"
+description: |-
+  Helper data source to create a processor which extracts structured fields out of a single text field within a document.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_grok
+
+Extracts structured fields out of a single text field within a document. You choose which field to extract matched fields from, as well as the grok pattern you expect will match. A grok pattern is like a regular expression that supports aliased expressions that can be reused.
+
+This processor comes packaged with many [reusable patterns](https://github.com/elastic/elasticsearch/blob/master/libs/grok/src/main/resources/patterns).
+
+If you need help building patterns to match your logs, you will find the [Grok Debugger](https://www.elastic.co/guide/en/kibana/master/xpack-grokdebugger.html) tool quite useful! [The Grok Constructor](https://grokconstructor.appspot.com/) is also a useful tool.
+
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/grok-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_grok" "grok" {
+  field    = "message"
+  patterns = ["%%{FAVORITE_DOG:pet}", "%%{FAVORITE_CAT:pet}"]
+  pattern_definitions = {
+    FAVORITE_DOG = "beagle"
+    FAVORITE_CAT = "burmese"
+  }
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "grok-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_grok.grok.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to use for grok expression parsing.
+- **patterns** (List of String) An ordered list of grok expressions to match and extract named captures with. Returns on the first expression in the list that matches.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **ecs_compatibility** (String) Must be `disabled` or `v1`. If `v1`, the processor uses patterns with Elastic Common Schema (ECS) field names. **NOTE:** Supported only starting from version of Elasticsearch **7.16.x**.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **on_failure** (List of String) Handle failures for the processor.
+- **pattern_definitions** (Map of String) A map of pattern-name and pattern tuples defining custom patterns to be used by the current processor. Patterns matching existing names will override the pre-existing definition.
+- **tag** (String) Identifier for the processor.
+- **trace_match** (Boolean) When `true`, `_ingest._grok_match_index` will be inserted into your matched document's metadata with the index into the pattern found in `patterns` that matched.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_gsub.md b/docs/data-sources/elasticsearch_ingest_processor_gsub.md
new file mode 100644
index 000000000..513ed153a
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_gsub.md
@@ -0,0 +1,60 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_gsub Data Source"
+description: |-
+  Helper data source to create a processor which converts a string field by applying a regular expression and a replacement.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_gsub
+
+Converts a string field by applying a regular expression and a replacement. If the field is an array of strings, all members of the array will be converted. If any non-string values are encountered, the processor will throw an exception.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/gsub-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_gsub" "gsub" {
+  field       = "field1"
+  pattern     = "\\."
+  replacement = "-"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "gsub-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_gsub.gsub.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to apply the replacement to.
+- **pattern** (String) The pattern to be replaced.
+- **replacement** (String) The string to replace the matching patterns with.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) The field to assign the converted value to, by default `field` is updated in-place.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_html_strip.md b/docs/data-sources/elasticsearch_ingest_processor_html_strip.md
new file mode 100644
index 000000000..904dc5266
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_html_strip.md
@@ -0,0 +1,56 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_html_strip Data Source"
+description: |-
+  Helper data source to create a processor which removes HTML tags from the field.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_html_strip
+
+Removes HTML tags from the field. If the field is an array of strings, HTML tags will be removed from all members of the array.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/htmlstrip-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_html_strip" "html_strip" {
+  field = "foo"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "strip-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_html_strip.html_strip.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The string-valued field to remove HTML tags from.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) The field to assign the converted value to, by default `field` is updated in-place.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
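+
+## Additional Example
+
+To keep the original HTML while storing a stripped copy, `target_field` can point at a separate field. A minimal sketch, assuming a hypothetical `page_body` field:
+
+```terraform
+data "elasticstack_elasticsearch_ingest_processor_html_strip" "strip_body" {
+  field        = "page_body"      # hypothetical field containing HTML
+  target_field = "page_body_text" # stripped text; `page_body` stays unchanged
+}
+```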
diff --git a/docs/data-sources/elasticsearch_ingest_processor_join.md b/docs/data-sources/elasticsearch_ingest_processor_join.md
new file mode 100644
index 000000000..a60a699b2
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_join.md
@@ -0,0 +1,57 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_join Data Source"
+description: |-
+  Helper data source to create a processor which joins each element of an array into a single string using a separator character between each element.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_join
+
+Joins each element of an array into a single string using a separator character between each element. Throws an error when the field is not an array.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/join-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_join" "join" {
+  field     = "joined_array_field"
+  separator = "-"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "join-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_join.join.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) Field containing array values to join.
+- **separator** (String) The separator character.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) The field to assign the converted value to, by default `field` is updated in-place.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_json.md b/docs/data-sources/elasticsearch_ingest_processor_json.md
new file mode 100644
index 000000000..e29d756a6
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_json.md
@@ -0,0 +1,58 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_json Data Source"
+description: |-
+  Helper data source to create a processor which converts a JSON string into a structured JSON object.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_json
+
+Converts a JSON string into a structured JSON object.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/json-processor.html
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_json" "json_proc" {
+  field        = "string_source"
+  target_field = "json_target"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "json-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_json.json_proc.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to be parsed.
+
+### Optional
+
+- **add_to_root** (Boolean) Flag that forces the parsed JSON to be added at the top level of the document. `target_field` must not be set when this option is chosen.
+- **add_to_root_conflict_strategy** (String) When set to `replace`, root fields that conflict with fields from the parsed JSON will be overridden. When set to `merge`, conflicting fields will be merged. Only applicable if `add_to_root` is set to `true`.
+- **allow_duplicate_keys** (Boolean) When set to `true`, the JSON parser will not fail if the JSON contains duplicate keys. Instead, the last encountered value for any duplicate key wins.
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) The field that the converted structured object will be written into. Any existing content in this field will be overwritten.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_kv.md b/docs/data-sources/elasticsearch_ingest_processor_kv.md
new file mode 100644
index 000000000..2f4eca38f
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_kv.md
@@ -0,0 +1,69 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_kv Data Source"
+description: |-
+  Helper data source to create a processor which helps automatically parse messages (or specific event fields) which are of the `foo=bar` variety.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_kv
+
+This processor helps automatically parse messages (or specific event fields) which are of the `foo=bar` variety.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/kv-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_kv" "kv" {
+  field       = "message"
+  field_split = " "
+  value_split = "="
+
+  exclude_keys = ["tags"]
+  prefix       = "setting_"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "kv-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_kv.kv.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to be parsed. Supports template snippets.
+- **field_split** (String) Regex pattern to use for splitting key-value pairs.
+- **value_split** (String) Regex pattern to use for splitting the key from the value within a key-value pair.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **exclude_keys** (Set of String) List of keys to exclude from the document.
+- **if** (String) Conditionally execute the processor.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **include_keys** (Set of String) List of keys to filter and insert into the document. Defaults to including all keys.
+- **on_failure** (List of String) Handle failures for the processor.
+- **prefix** (String) Prefix to be added to extracted keys.
+- **strip_brackets** (Boolean) If `true`, strip brackets `()`, `<>`, `[]` as well as quotes `'` and `"` from extracted values.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) The field to insert the extracted keys into. Defaults to the root of the document.
Defaults to the root of the document. +- **trim_key** (String) String of characters to trim from extracted keys. +- **trim_value** (String) String of characters to trim from extracted values. + +### Read-Only + +- **id** (String) Internal identifier of the resource +- **json** (String) JSON representation of this data source. diff --git a/docs/data-sources/elasticsearch_ingest_processor_lowercase.md b/docs/data-sources/elasticsearch_ingest_processor_lowercase.md new file mode 100644 index 000000000..11d98ea82 --- /dev/null +++ b/docs/data-sources/elasticsearch_ingest_processor_lowercase.md @@ -0,0 +1,56 @@ +--- +subcategory: "Ingest" +layout: "" +page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_lowercase Data Source" +description: |- + Helper data source to create a processor which converts a string to its lowercase equivalent. +--- + +# Data Source: elasticstack_elasticsearch_ingest_processor_lowercase + +Converts a string to its lowercase equivalent. If the field is an array of strings, all members of the array will be converted. + +See: https://www.elastic.co/guide/en/elasticsearch/reference/current/lowercase-processor.html + + +## Example Usage + +```terraform +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_lowercase" "lowercase" { + field = "foo" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "lowercase-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_lowercase.lowercase.json + ] +} +``` + + +## Schema + +### Required + +- **field** (String) The field to make lowercase. + +### Optional + +- **description** (String) Description of the processor. +- **if** (String) Conditionally execute the processor +- **ignore_failure** (Boolean) Ignore failures for the processor. +- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document. +- **on_failure** (List of String) Handle failures for the processor. +- **tag** (String) Identifier for the processor. +- **target_field** (String) The field to assign the converted value to, by default `field` is updated in-place. + +### Read-Only + +- **id** (String) Internal identifier of the resource. +- **json** (String) JSON representation of this data source. diff --git a/docs/data-sources/elasticsearch_ingest_processor_network_direction.md b/docs/data-sources/elasticsearch_ingest_processor_network_direction.md new file mode 100644 index 000000000..f693822f0 --- /dev/null +++ b/docs/data-sources/elasticsearch_ingest_processor_network_direction.md @@ -0,0 +1,76 @@ +--- +subcategory: "Ingest" +layout: "" +page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_network_direction Data Source" +description: |- + Helper data source to create a processor which calculates the network direction given a source IP address, destination IP address, and a list of internal networks. +--- + +# Data Source: elasticstack_elasticsearch_ingest_processor_network_direction + +Calculates the network direction given a source IP address, destination IP address, and a list of internal networks. + +The network direction processor reads IP addresses from Elastic Common Schema (ECS) fields by default. If you use the ECS, only the `internal_networks` option must be specified. + + +One of either `internal_networks` or `internal_networks_field` must be specified. 
If `internal_networks_field` is specified, it follows the behavior specified by `ignore_missing`.
+
+### Supported named network ranges
+
+The named ranges supported for the `internal_networks` option are:
+
+* `loopback` - Matches loopback addresses in the range of 127.0.0.0/8 or ::1/128.
+* `unicast` or `global_unicast` - Matches global unicast addresses defined in RFC 1122, RFC 4632, and RFC 4291 with the exception of the IPv4 broadcast address (255.255.255.255). This includes private address ranges.
+* `multicast` - Matches multicast addresses.
+* `interface_local_multicast` - Matches IPv6 interface-local multicast addresses.
+* `link_local_unicast` - Matches link-local unicast addresses.
+* `link_local_multicast` - Matches link-local multicast addresses.
+* `private` - Matches private address ranges defined in RFC 1918 (IPv4) and RFC 4193 (IPv6).
+* `public` - Matches addresses that are not loopback, unspecified, IPv4 broadcast, link local unicast, link local multicast, interface local multicast, or private.
+* `unspecified` - Matches unspecified addresses (either the IPv4 address "0.0.0.0" or the IPv6 address "::").
+
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/network-direction-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_network_direction" "network_direction" {
+  internal_networks = ["private"]
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "network-direction-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_network_direction.network_direction.json
+  ]
+}
+```
+
+
+## Schema
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **destination_ip** (String) Field containing the destination IP address.
+- **if** (String) Conditionally execute the processor
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **internal_networks** (Set of String) List of internal networks.
+- **internal_networks_field** (String) A field on the given document to read the `internal_networks` configuration from.
+- **on_failure** (List of String) Handle failures for the processor.
+- **source_ip** (String) Field containing the source IP address.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) Output field for the network direction.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_pipeline.md b/docs/data-sources/elasticsearch_ingest_processor_pipeline.md
new file mode 100644
index 000000000..ff91cd567
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_pipeline.md
@@ -0,0 +1,75 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_pipeline Data Source"
+description: |-
+  Helper data source to create a processor which executes another pipeline.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_pipeline
+
+Executes another pipeline.
+
+The name of the current pipeline can be accessed from the `_ingest.pipeline` ingest metadata key.
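+
+For example, the key can be used to record which pipeline processed a document. This is a hedged sketch, not part of the upstream example, and it assumes `{{{_ingest.pipeline}}}` resolves in template snippets the same way as other `_ingest` metadata keys:
+
+```terraform
+# Hypothetical illustration: persist the name of the currently
+# executing pipeline on the document via a set processor.
+data "elasticstack_elasticsearch_ingest_processor_set" "pipeline_name" {
+  field = "event.pipeline"
+  value = "{{{_ingest.pipeline}}}"
+}
+```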
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/pipeline-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_append" "append_tags" {
+  field = "tags"
+  value = ["production", "{{{app}}}", "{{{owner}}}"]
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "pipeline_a" {
+  name = "pipeline_a"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_append.append_tags.json
+  ]
+}
+
+data "elasticstack_elasticsearch_ingest_processor_fingerprint" "fingerprint" {
+  fields = ["owner"]
+}
+
+// use the above defined pipeline in our configuration
+data "elasticstack_elasticsearch_ingest_processor_pipeline" "pipeline" {
+  name = elasticstack_elasticsearch_ingest_pipeline.pipeline_a.name
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "pipeline_b" {
+  name = "pipeline_b"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_pipeline.pipeline.json,
+    data.elasticstack_elasticsearch_ingest_processor_fingerprint.fingerprint.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **name** (String) The name of the pipeline to execute.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_registered_domain.md b/docs/data-sources/elasticsearch_ingest_processor_registered_domain.md
new file mode 100644
index 000000000..b300f8be4
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_registered_domain.md
@@ -0,0 +1,57 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_registered_domain Data Source"
+description: |-
+  Helper data source to create a processor which extracts the registered domain, sub-domain, and top-level domain from a fully qualified domain name.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_registered_domain
+
+Extracts the registered domain (also known as the effective top-level domain or eTLD), sub-domain, and top-level domain from a fully qualified domain name (FQDN). Uses the registered domains defined in the Mozilla Public Suffix List.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/registered-domain-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_registered_domain" "domain" {
+  field        = "fqdn"
+  target_field = "url"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "domain-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_registered_domain.domain.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) Field containing the source FQDN.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) Object field containing extracted domain components. If an empty string, the processor adds components to the document’s root.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_remove.md b/docs/data-sources/elasticsearch_ingest_processor_remove.md
new file mode 100644
index 000000000..271635c5f
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_remove.md
@@ -0,0 +1,55 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_remove Data Source"
+description: |-
+  Helper data source to create a processor which removes existing fields.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_remove
+
+Removes existing fields. If one field doesn’t exist, an exception will be thrown.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/remove-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_remove" "remove" {
+  field = ["user_agent", "url"]
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "remove-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_remove.remove.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (Set of String) Fields to be removed.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_rename.md b/docs/data-sources/elasticsearch_ingest_processor_rename.md
new file mode 100644
index 000000000..9a69b651d
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_rename.md
@@ -0,0 +1,57 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_rename Data Source"
+description: |-
+  Helper data source to create a processor which renames an existing field.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_rename
+
+Renames an existing field. If the field doesn’t exist or the new name is already used, an exception will be thrown.
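+
+If failing on a missing source field is not desirable, the `ignore_missing` flag from the schema below can be combined with the rename; a minimal sketch:
+
+```terraform
+# Sketch: rename "provider" only when it exists; documents without
+# the field pass through unmodified instead of raising an exception.
+data "elasticstack_elasticsearch_ingest_processor_rename" "safe_rename" {
+  field          = "provider"
+  target_field   = "cloud.provider"
+  ignore_missing = true
+}
+```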
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/rename-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_rename" "rename" {
+  field        = "provider"
+  target_field = "cloud.provider"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "rename-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_rename.rename.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to be renamed.
+- **target_field** (String) The new name of the field.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **on_failure** (List of String) Handle failures for the processor.
+- **tag** (String) Identifier for the processor.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_script.md b/docs/data-sources/elasticsearch_ingest_processor_script.md
new file mode 100644
index 000000000..916f98ddd
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_script.md
@@ -0,0 +1,78 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_script Data Source"
+description: |-
+  Helper data source to create a processor which runs an inline or stored script on incoming documents.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_script
+
+Runs an inline or stored script on incoming documents. The script runs in the ingest context.
+
+The script processor uses the script cache to avoid recompiling the script for each incoming document. To improve performance, ensure the script cache is properly sized before using a script processor in production.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/script-processor.html
+
+### Access source fields
+
+The script processor parses each incoming document’s JSON source fields into a set of maps, lists, and primitives. To access these fields with a Painless script, use the map access operator: `ctx['my-field']`. You can also use the shorthand `ctx.<my-field>` syntax.
+
+### Access metadata fields
+
+You can also use a script processor to access metadata fields, such as `_index`, by reading or setting them on `ctx`.
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_script" "script" {
+  description = "Extract 'tags' from 'env' field"
+  lang        = "painless"
+
+  source = <<EOF
+String[] envSplit = ctx['env'].splitOnToken(params['delimiter']);
+ArrayList tags = new ArrayList();
+tags.add(envSplit[params['position']].trim());
+ctx['tags'] = tags;
+EOF
+
+  params = jsonencode({
+    delimiter = "-"
+    position  = 1
+  })
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "script-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_script.script.json
+  ]
+}
+```
+
+
+## Schema
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **lang** (String) Script language.
+- **on_failure** (List of String) Handle failures for the processor.
+- **params** (String) Object containing parameters for the script.
+- **script_id** (String) ID of a stored script. If no `source` is specified, this parameter is required.
+- **source** (String) Inline script. If no `script_id` is specified, this parameter is required.
+- **tag** (String) Identifier for the processor.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_set.md b/docs/data-sources/elasticsearch_ingest_processor_set.md
new file mode 100644
index 000000000..fe8f5cf3b
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_set.md
@@ -0,0 +1,60 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_set Data Source"
+description: |-
+  Helper data source to create a processor which sets one field and associates it with the specified value.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_set
+
+Sets one field and associates it with the specified value. If the field already exists, its value will be replaced with the provided one.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/set-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_set" "set" {
+  field = "count"
+  value = 1
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "set-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_set.set.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to insert, upsert, or update.
+
+### Optional
+
+- **copy_from** (String) The origin field which will be copied to `field`; cannot be set together with `value`.
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor
+- **ignore_empty_value** (Boolean) If `true` and `value` is a template snippet that evaluates to `null` or the empty string, the processor quietly exits without modifying the document.
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **media_type** (String) The media type for encoding value.
+- **on_failure** (List of String) Handle failures for the processor.
+- **override** (Boolean) If `true`, the processor will update fields with pre-existing non-null values.
+- **tag** (String) Identifier for the processor.
+- **value** (String) The value to be set for the field. Supports template snippets. Only one of `value` or `copy_from` may be specified.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_set_security_user.md b/docs/data-sources/elasticsearch_ingest_processor_set_security_user.md
new file mode 100644
index 000000000..6b77d00bb
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_set_security_user.md
@@ -0,0 +1,56 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_set_security_user Data Source"
+description: |-
+  Helper data source to create a processor which sets user-related details from the current authenticated user to the current document by pre-processing the ingest.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_set_security_user
+
+Sets user-related details (such as `username`, `roles`, `email`, `full_name`, `metadata`, `api_key`, `realm` and `authentication_type`) from the current authenticated user to the current document by pre-processing the ingest. The `api_key` property exists only if the user authenticates with an API key.
It is an object containing the `id`, `name` and `metadata` (if it exists and is non-empty) fields of the API key. The `realm` property is also an object with two fields, `name` and `type`. When using API key authentication, the `realm` property refers to the realm from which the API key is created. The `authentication_type` property is a string that can take one of the values `REALM`, `API_KEY`, `TOKEN` and `ANONYMOUS`.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-node-set-security-user-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_set_security_user" "user" {
+  field      = "user"
+  properties = ["username", "realm"]
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "user-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_set_security_user.user.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to store the user information into.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **on_failure** (List of String) Handle failures for the processor.
+- **properties** (Set of String) Controls what user-related properties are added to the `field`.
+- **tag** (String) Identifier for the processor.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_sort.md b/docs/data-sources/elasticsearch_ingest_processor_sort.md
new file mode 100644
index 000000000..8e6f842a2
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_sort.md
@@ -0,0 +1,57 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_sort Data Source"
+description: |-
+  Helper data source to create a processor which sorts the elements of an array ascending or descending.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_sort
+
+Sorts the elements of an array ascending or descending. Homogeneous arrays of numbers will be sorted numerically, while arrays of strings or heterogeneous arrays of strings + numbers will be sorted lexicographically. Throws an error when the field is not an array.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/sort-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_sort" "sort" {
+  field = "array_field_to_sort"
+  order = "desc"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "sort-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_sort.sort.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to be sorted.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **on_failure** (List of String) Handle failures for the processor.
+- **order** (String) The sort order to use. Accepts `asc` or `desc`.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) The field to assign the sorted value to. By default, `field` is updated in-place.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_split.md b/docs/data-sources/elasticsearch_ingest_processor_split.md
new file mode 100644
index 000000000..2b86a5e01
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_split.md
@@ -0,0 +1,59 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_split Data Source"
+description: |-
+  Helper data source to create a processor which splits a field into an array using a separator character.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_split
+
+Splits a field into an array using a separator character. Only works on string fields.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/split-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_split" "split" {
+  field     = "my_field"
+  separator = "\\s+"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "split-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_split.split.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) The field to split.
+- **separator** (String) A regex which matches the separator, e.g. `,` or `\s+`.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.
+- **on_failure** (List of String) Handle failures for the processor.
+- **preserve_trailing** (Boolean) Preserves empty trailing fields, if any.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) The field to assign the converted value to. By default, `field` is updated in-place.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_trim.md b/docs/data-sources/elasticsearch_ingest_processor_trim.md
new file mode 100644
index 000000000..2479eee8e
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_trim.md
@@ -0,0 +1,58 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_trim Data Source"
+description: |-
+  Helper data source to create a processor which trims whitespace from a field.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_trim
+
+Trims whitespace from a field. If the field is an array of strings, all members of the array will be trimmed.
+
+**NOTE:** This only works on leading and trailing whitespace.
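+
+Since trimming also applies member-wise to arrays of strings, a common pattern (a sketch building on the split processor documented above) is to split a delimited string and then trim the resulting elements:
+
+```terraform
+# Sketch: split "a, b , c" on commas, then strip the leftover
+# leading/trailing whitespace from every array member in-place.
+data "elasticstack_elasticsearch_ingest_processor_split" "raw_tags" {
+  field     = "raw_tags"
+  separator = ","
+}
+
+data "elasticstack_elasticsearch_ingest_processor_trim" "raw_tags" {
+  field = "raw_tags"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "split_trim" {
+  name = "split-trim-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_split.raw_tags.json,
+    data.elasticstack_elasticsearch_ingest_processor_trim.raw_tags.json
+  ]
+}
+```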
+ +See: https://www.elastic.co/guide/en/elasticsearch/reference/current/trim-processor.html + + +## Example Usage + +```terraform +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_trim" "trim" { + field = "foo" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "trim-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_trim.trim.json + ] +} +``` + + +## Schema + +### Required + +- **field** (String) The string-valued field to trim whitespace from. + +### Optional + +- **description** (String) Description of the processor. +- **if** (String) Conditionally execute the processor +- **ignore_failure** (Boolean) Ignore failures for the processor. +- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document. +- **on_failure** (List of String) Handle failures for the processor. +- **tag** (String) Identifier for the processor. +- **target_field** (String) The field to assign the trimmed value to, by default `field` is updated in-place. + +### Read-Only + +- **id** (String) Internal identifier of the resource. +- **json** (String) JSON representation of this data source. diff --git a/docs/data-sources/elasticsearch_ingest_processor_uppercase.md b/docs/data-sources/elasticsearch_ingest_processor_uppercase.md new file mode 100644 index 000000000..586219a42 --- /dev/null +++ b/docs/data-sources/elasticsearch_ingest_processor_uppercase.md @@ -0,0 +1,56 @@ +--- +subcategory: "Ingest" +layout: "" +page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_uppercase Data Source" +description: |- + Helper data source to create a processor which converts a string to its uppercase equivalent. +--- + +# Data Source: elasticstack_elasticsearch_ingest_processor_uppercase + +Converts a string to its uppercase equivalent. If the field is an array of strings, all members of the array will be converted. + +See: https://www.elastic.co/guide/en/elasticsearch/reference/current/uppercase-processor.html + + +## Example Usage + +```terraform +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_uppercase" "uppercase" { + field = "foo" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "uppercase-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_uppercase.uppercase.json + ] +} +``` + + +## Schema + +### Required + +- **field** (String) The field to make uppercase. + +### Optional + +- **description** (String) Description of the processor. +- **if** (String) Conditionally execute the processor +- **ignore_failure** (Boolean) Ignore failures for the processor. +- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document. +- **on_failure** (List of String) Handle failures for the processor. +- **tag** (String) Identifier for the processor. +- **target_field** (String) The field to assign the converted value to, by default `field` is updated in-place. + +### Read-Only + +- **id** (String) Internal identifier of the resource. +- **json** (String) JSON representation of this data source. 
diff --git a/docs/data-sources/elasticsearch_ingest_processor_uri_parts.md b/docs/data-sources/elasticsearch_ingest_processor_uri_parts.md
new file mode 100644
index 000000000..43edbd7dc
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_uri_parts.md
@@ -0,0 +1,60 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_uri_parts Data Source"
+description: |-
+  Helper data source to create a processor which parses a Uniform Resource Identifier (URI) string and extracts its components as an object.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_uri_parts
+
+Parses a Uniform Resource Identifier (URI) string and extracts its components as an object. This URI object includes properties for the URI’s domain, path, fragment, port, query, scheme, user info, username, and password.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/uri-parts-processor.html
+
+
+## Example Usage
+
+```terraform
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_uri_parts" "parts" {
+  field                = "input_field"
+  target_field         = "url"
+  keep_original        = true
+  remove_if_successful = false
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "parts-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_uri_parts.parts.json
+  ]
+}
+```
+
+
+## Schema
+
+### Required
+
+- **field** (String) Field containing the URI string.
+
+### Optional
+
+- **description** (String) Description of the processor.
+- **if** (String) Conditionally execute the processor
+- **ignore_failure** (Boolean) Ignore failures for the processor.
+- **keep_original** (Boolean) If `true`, the processor copies the unparsed URI to `<target_field>.original`.
+- **on_failure** (List of String) Handle failures for the processor.
+- **remove_if_successful** (Boolean) If `true`, the processor removes the `field` after parsing the URI string. If parsing fails, the processor does not remove the `field`.
+- **tag** (String) Identifier for the processor.
+- **target_field** (String) Output field for the URI object.
+
+### Read-Only
+
+- **id** (String) Internal identifier of the resource.
+- **json** (String) JSON representation of this data source.
diff --git a/docs/data-sources/elasticsearch_ingest_processor_urldecode.md b/docs/data-sources/elasticsearch_ingest_processor_urldecode.md
new file mode 100644
index 000000000..dffd04a19
--- /dev/null
+++ b/docs/data-sources/elasticsearch_ingest_processor_urldecode.md
@@ -0,0 +1,56 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_urldecode Data Source"
+description: |-
+  Helper data source to create a processor which URL-decodes a string.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_urldecode
+
+URL-decodes a string. If the field is an array of strings, all members of the array will be decoded.
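+
+To keep the encoded original alongside the decoded value, the `target_field` option from the schema below can be used; a minimal sketch:
+
+```terraform
+# Sketch: decode url.original into url.decoded, leaving the raw
+# percent-encoded string untouched.
+data "elasticstack_elasticsearch_ingest_processor_urldecode" "decoded" {
+  field        = "url.original"
+  target_field = "url.decoded"
+}
+```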
+ +See: https://www.elastic.co/guide/en/elasticsearch/reference/current/urldecode-processor.html + + +## Example Usage + +```terraform +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_urldecode" "urldecode" { + field = "my_url_to_decode" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "urldecode-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_urldecode.urldecode.json + ] +} +``` + + +## Schema + +### Required + +- **field** (String) The field to decode + +### Optional + +- **description** (String) Description of the processor. +- **if** (String) Conditionally execute the processor +- **ignore_failure** (Boolean) Ignore failures for the processor. +- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document. +- **on_failure** (List of String) Handle failures for the processor. +- **tag** (String) Identifier for the processor. +- **target_field** (String) The field to assign the converted value to, by default `field` is updated in-place. + +### Read-Only + +- **id** (String) Internal identifier of the resource. +- **json** (String) JSON representation of this data source. diff --git a/docs/data-sources/elasticsearch_ingest_processor_user_agent.md b/docs/data-sources/elasticsearch_ingest_processor_user_agent.md new file mode 100644 index 000000000..e738b75da --- /dev/null +++ b/docs/data-sources/elasticsearch_ingest_processor_user_agent.md @@ -0,0 +1,57 @@ +--- +subcategory: "Ingest" +layout: "" +page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_user_agent Data Source" +description: |- + Helper data source to create a processor which extracts details from the user agent string a browser sends with its web requests. +--- + +# Data Source: elasticstack_elasticsearch_ingest_processor_user_agent + +The `user_agent` processor extracts details from the user agent string a browser sends with its web requests. This processor adds this information by default under the `user_agent` field. + +The ingest-user-agent module ships by default with the regexes.yaml made available by uap-java with an Apache 2.0 license. For more details see https://github.com/ua-parser/uap-core. + + +See: https://www.elastic.co/guide/en/elasticsearch/reference/current/user-agent-processor.html + + +## Example Usage + +```terraform +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_user_agent" "agent" { + field = "agent" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "agent-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_user_agent.agent.json + ] +} +``` + + +## Schema + +### Required + +- **field** (String) The field containing the user agent string. + +### Optional + +- **extract_device_type** (Boolean) Extracts device type from the user agent string on a best-effort basis. Supported only starting from Elasticsearch version **8.0** +- **ignore_missing** (Boolean) If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document. +- **properties** (Set of String) Controls what properties are added to `target_field`. +- **regex_file** (String) The name of the file in the `config/ingest-user-agent` directory containing the regular expressions for parsing the user agent string. 
+- **target_field** (String) The field that will be filled with the user agent details. + +### Read-Only + +- **id** (String) Internal identifier of the resource. +- **json** (String) JSON representation of this data source. diff --git a/docs/data-sources/elasticsearch_security_user.md b/docs/data-sources/elasticsearch_security_user.md index 8becb3c5b..cbad6de57 100644 --- a/docs/data-sources/elasticsearch_security_user.md +++ b/docs/data-sources/elasticsearch_security_user.md @@ -36,13 +36,13 @@ output "user" { ### Optional - **elasticsearch_connection** (Block List, Max: 1) Used to establish connection to Elasticsearch server. Overrides environment variables if present. (see [below for nested schema](#nestedblock--elasticsearch_connection)) -- **id** (String) The ID of this resource. ### Read-Only - **email** (String) The email of the user. - **enabled** (Boolean) Specifies whether the user is enabled. The default value is true. - **full_name** (String) The full name of the user. +- **id** (String) Internal identifier of the resource - **metadata** (String) Arbitrary metadata that you want to associate with the user. - **roles** (Set of String) A set of roles the user has. The roles determine the user’s access permissions. Default is []. diff --git a/docs/data-sources/elasticsearch_snapshot_repository.md b/docs/data-sources/elasticsearch_snapshot_repository.md index a9acc555d..5fb0d3208 100644 --- a/docs/data-sources/elasticsearch_snapshot_repository.md +++ b/docs/data-sources/elasticsearch_snapshot_repository.md @@ -62,7 +62,6 @@ output "repo_url" { ### Optional - **elasticsearch_connection** (Block List, Max: 1) Used to establish connection to Elasticsearch server. Overrides environment variables if present. (see [below for nested schema](#nestedblock--elasticsearch_connection)) -- **id** (String) The ID of this resource. ### Read-Only @@ -70,6 +69,7 @@ output "repo_url" { - **fs** (List of Object) Shared filesystem repository. Set only if the type of the fetched repo is `fs`. (see [below for nested schema](#nestedatt--fs)) - **gcs** (List of Object) Google Cloud Storage service as a repository. Set only if the type of the fetched repo is `gcs`. (see [below for nested schema](#nestedatt--gcs)) - **hdfs** (List of Object) HDFS File System as a repository. Set only if the type of the fetched repo is `hdfs`. (see [below for nested schema](#nestedatt--hdfs)) +- **id** (String) Internal identifier of the resource - **s3** (List of Object) AWS S3 as a repository. Set only if the type of the fetched repo is `s3`. (see [below for nested schema](#nestedatt--s3)) - **type** (String) Repository type. - **url** (List of Object) URL repository. Set only if the type of the fetched repo is `url`. 
(see [below for nested schema](#nestedatt--url))
diff --git a/docs/resources/elasticsearch_ingest_pipeline.md b/docs/resources/elasticsearch_ingest_pipeline.md
index c20ed74bf..521857f3c 100644
--- a/docs/resources/elasticsearch_ingest_pipeline.md
+++ b/docs/resources/elasticsearch_ingest_pipeline.md
@@ -12,6 +12,8 @@ Use ingest APIs to manage tasks and resources related to ingest pipelines and pr
 ## Example Usage
 
+You can provide your custom JSON definitions for the ingest processors:
+
 ```terraform
 provider "elasticstack" {
   elasticsearch {}
@@ -43,6 +45,31 @@ EOF
 }
 ```
 
+
+Or you can use the data sources and Terraform's declarative way of defining the ingest processors:
+
+```terraform
+data "elasticstack_elasticsearch_ingest_processor_set" "set_count" {
+  field = "count"
+  value = 1
+}
+
+data "elasticstack_elasticsearch_ingest_processor_json" "parse_string_source" {
+  field        = "string_source"
+  target_field = "json_target"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "ingest" {
+  name = "set-parse"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_set.set_count.json,
+    data.elasticstack_elasticsearch_ingest_processor_json.parse_string_source.json
+  ]
+}
+```
+
+
 ## Schema
diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_append/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_append/data-source.tf
new file mode 100644
index 000000000..72cdc2de8
--- /dev/null
+++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_append/data-source.tf
@@ -0,0 +1,16 @@
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_append" "tags" {
+  field = "tags"
+  value = ["production", "{{{app}}}", "{{{owner}}}"]
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "append-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_append.tags.json
+  ]
+}
diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_bytes/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_bytes/data-source.tf
new file mode 100644
index 000000000..cd37e351c
--- /dev/null
+++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_bytes/data-source.tf
@@ -0,0 +1,15 @@
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_bytes" "bytes" {
+  field = "file.size"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "bytes-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_bytes.bytes.json
+  ]
+}
diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_circle/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_circle/data-source.tf
new file mode 100644
index 000000000..8894348d6
--- /dev/null
+++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_circle/data-source.tf
@@ -0,0 +1,17 @@
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_circle" "circle" {
+  field          = "circle"
+  error_distance = 28.1
+  shape_type     = "geo_shape"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" {
+  name = "circle-ingest"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_circle.circle.json
+  ]
+}
diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_community_id/data-source.tf
b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_community_id/data-source.tf new file mode 100644 index 000000000..faaf6ba9b --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_community_id/data-source.tf @@ -0,0 +1,13 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_community_id" "community" {} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "community-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_community_id.community.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_convert/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_convert/data-source.tf new file mode 100644 index 000000000..c405d7e09 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_convert/data-source.tf @@ -0,0 +1,17 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_convert" "convert" { + description = "converts the content of the id field to an integer" + field = "id" + type = "integer" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "convert-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_convert.convert.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_csv/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_csv/data-source.tf new file mode 100644 index 000000000..6a6137439 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_csv/data-source.tf @@ -0,0 +1,16 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_csv" "csv" { + field = "my_field" + target_fields = ["field1", "field2"] +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "csv-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_csv.csv.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_date/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_date/data-source.tf new file mode 100644 index 000000000..bbfafe1f8 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_date/data-source.tf @@ -0,0 +1,18 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_date" "date" { + field = "initial_date" + target_field = "timestamp" + formats = ["dd/MM/yyyy HH:mm:ss"] + timezone = "Europe/Amsterdam" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "date-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_date.date.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_date_index_name/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_date_index_name/data-source.tf new file mode 100644 index 000000000..6d0122dea --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_date_index_name/data-source.tf @@ -0,0 +1,18 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_date_index_name" "date_index_name" { + description = "monthly date-time index naming" + field = "date1" + 
index_name_prefix = "my-index-" + date_rounding = "M" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "date-index-name-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_date_index_name.date_index_name.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_dissect/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_dissect/data-source.tf new file mode 100644 index 000000000..c05bae8a6 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_dissect/data-source.tf @@ -0,0 +1,16 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_dissect" "dissect" { + field = "message" + pattern = "%%{clientip} %%{ident} %%{auth} [%%{@timestamp}] \"%%{verb} %%{request} HTTP/%%{httpversion}\" %%{status} %%{size}" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "dissect-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_dissect.dissect.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_dot_expander/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_dot_expander/data-source.tf new file mode 100644 index 000000000..025562fa6 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_dot_expander/data-source.tf @@ -0,0 +1,15 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_dot_expander" "dot_expander" { + field = "foo.bar" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "dot-expander-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_dot_expander.dot_expander.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_drop/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_drop/data-source.tf new file mode 100644 index 000000000..00b7f24f7 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_drop/data-source.tf @@ -0,0 +1,15 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_drop" "drop" { + if = "ctx.network_name == 'Guest'" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "drop-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_drop.drop.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_enrich/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_enrich/data-source.tf new file mode 100644 index 000000000..38a4795d2 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_enrich/data-source.tf @@ -0,0 +1,20 @@ +provider "elasticstack" { + elasticsearch {} +} + +// the policy must exist before using this processor +// See example at: https://www.elastic.co/guide/en/elasticsearch/reference/current/match-enrich-policy-type.html +data "elasticstack_elasticsearch_ingest_processor_enrich" "enrich" { + policy_name = "users-policy" + field = "email" + target_field = "user" + max_matches = 1 +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "enrich-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_enrich.enrich.json + ] +} diff 
--git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_fail/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_fail/data-source.tf new file mode 100644 index 000000000..70ccd397c --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_fail/data-source.tf @@ -0,0 +1,16 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_fail" "fail" { + if = "ctx.tags.contains('production') != true" + message = "The production tag is not present, found tags: {{{tags}}}" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "fail-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_fail.fail.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_fingerprint/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_fingerprint/data-source.tf new file mode 100644 index 000000000..f377842f7 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_fingerprint/data-source.tf @@ -0,0 +1,15 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_fingerprint" "fingerprint" { + fields = ["user"] +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "fingerprint-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_fingerprint.fingerprint.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_foreach/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_foreach/data-source.tf new file mode 100644 index 000000000..a8db5b257 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_foreach/data-source.tf @@ -0,0 +1,21 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_convert" "convert" { + field = "_ingest._value" + type = "integer" +} + +data "elasticstack_elasticsearch_ingest_processor_foreach" "foreach" { + field = "values" + processor = data.elasticstack_elasticsearch_ingest_processor_convert.convert.json +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "foreach-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_foreach.foreach.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_geoip/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_geoip/data-source.tf new file mode 100644 index 000000000..cb69d1333 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_geoip/data-source.tf @@ -0,0 +1,15 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_geoip" "geoip" { + field = "ip" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "geoip-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_geoip.geoip.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_grok/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_grok/data-source.tf new file mode 100644 index 000000000..c26ec5794 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_grok/data-source.tf @@ -0,0 +1,20 @@ +provider "elasticstack" { + 
elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_grok" "grok" { + field = "message" + patterns = ["%%{FAVORITE_DOG:pet}", "%%{FAVORITE_CAT:pet}"] + pattern_definitions = { + FAVORITE_DOG = "beagle" + FAVORITE_CAT = "burmese" + } +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "grok-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_grok.grok.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_gsub/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_gsub/data-source.tf new file mode 100644 index 000000000..cae0557ad --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_gsub/data-source.tf @@ -0,0 +1,17 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_gsub" "gsub" { + field = "field1" + pattern = "\\." + replacement = "-" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "gsub-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_gsub.gsub.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_html_strip/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_html_strip/data-source.tf new file mode 100644 index 000000000..8fd96c0ac --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_html_strip/data-source.tf @@ -0,0 +1,15 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_html_strip" "html_strip" { + field = "foo" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "strip-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_html_strip.html_strip.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_join/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_join/data-source.tf new file mode 100644 index 000000000..d1cd6f8ba --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_join/data-source.tf @@ -0,0 +1,16 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_join" "join" { + field = "joined_array_field" + separator = "-" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "join-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_join.join.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_json/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_json/data-source.tf new file mode 100644 index 000000000..7912d9c9e --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_json/data-source.tf @@ -0,0 +1,16 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_json" "json_proc" { + field = "string_source" + target_field = "json_target" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "json-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_json.json_proc.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_kv/data-source.tf 
b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_kv/data-source.tf new file mode 100644 index 000000000..457ddbdcc --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_kv/data-source.tf @@ -0,0 +1,20 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_kv" "kv" { + field = "message" + field_split = " " + value_split = "=" + + exclude_keys = ["tags"] + prefix = "setting_" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "kv-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_kv.kv.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_lowercase/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_lowercase/data-source.tf new file mode 100644 index 000000000..002272190 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_lowercase/data-source.tf @@ -0,0 +1,15 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_lowercase" "lowercase" { + field = "foo" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "lowercase-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_lowercase.lowercase.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_network_direction/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_network_direction/data-source.tf new file mode 100644 index 000000000..7c232ec66 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_network_direction/data-source.tf @@ -0,0 +1,15 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_network_direction" "network_direction" { + internal_networks = ["private"] +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "network-direction-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_network_direction.network_direction.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_pipeline/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_pipeline/data-source.tf new file mode 100644 index 000000000..45c910ba5 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_pipeline/data-source.tf @@ -0,0 +1,34 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_append" "append_tags" { + field = "tags" + value = ["production", "{{{app}}}", "{{{owner}}}"] +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "pipeline_a" { + name = "pipeline_a" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_append.append_tags.json + ] +} + +data "elasticstack_elasticsearch_ingest_processor_fingerprint" "fingerprint" { + fields = ["owner"] +} + +// use the above defined pipeline in our configuration +data "elasticstack_elasticsearch_ingest_processor_pipeline" "pipeline" { + name = elasticstack_elasticsearch_ingest_pipeline.pipeline_a.name +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "pipeline_b" { + name = "pipeline_b" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_pipeline.pipeline.json, + data.elasticstack_elasticsearch_ingest_processor_fingerprint.fingerprint.json + ] 
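+  # note: processors are applied in the order given, so every processor of pipeline_a runs (via the pipeline processor) before the fingerprint processor is applied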
+} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_registered_domain/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_registered_domain/data-source.tf new file mode 100644 index 000000000..f638170fc --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_registered_domain/data-source.tf @@ -0,0 +1,16 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_registered_domain" "domain" { + field = "fqdn" + target_field = "url" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "domain-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_registered_domain.domain.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_remove/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_remove/data-source.tf new file mode 100644 index 000000000..40c9a929b --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_remove/data-source.tf @@ -0,0 +1,15 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_remove" "remove" { + field = ["user_agent", "url"] +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "remove-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_remove.remove.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_rename/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_rename/data-source.tf new file mode 100644 index 000000000..dbcfc1ec9 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_rename/data-source.tf @@ -0,0 +1,16 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_rename" "rename" { + field = "provider" + target_field = "cloud.provider" +} + +resource "elasticstack_elasticsearch_ingest_pipeline" "my_ingest_pipeline" { + name = "rename-ingest" + + processors = [ + data.elasticstack_elasticsearch_ingest_processor_rename.rename.json + ] +} diff --git a/examples/data-sources/elasticstack_elasticsearch_ingest_processor_script/data-source.tf b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_script/data-source.tf new file mode 100644 index 000000000..827608621 --- /dev/null +++ b/examples/data-sources/elasticstack_elasticsearch_ingest_processor_script/data-source.tf @@ -0,0 +1,29 @@ +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_script" "script" { + description = "Extract 'tags' from 'env' field" + lang = "painless" + + source = <: %v)", + name, + key, + value, + v, + err) + } + return nil + } +} diff --git a/internal/elasticsearch/ingest/processor_append_data_source.go b/internal/elasticsearch/ingest/processor_append_data_source.go new file mode 100644 index 000000000..1dad6cad5 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_append_data_source.go @@ -0,0 +1,147 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorAppend() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field to be appended to.", + Type: schema.TypeString, + Required: true, + }, + "value": { + Description: "The value to be appended. ", + Type: schema.TypeList, + Required: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "allow_duplicates": { + Description: "If `false`, the processor does not append values already present in the field.", + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "media_type": { + Description: "The media type for encoding value. Applies only when value is a template snippet. Must be one of `application/json`, `text/plain`, or `application/x-www-form-urlencoded`.", + Type: schema.TypeString, + Optional: true, + Default: "application/json", + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Appends one or more values to an existing array if the field already exists and it is an array. Converts a scalar to an array and appends one or more values to it if the field exists and it is a scalar. Creates an array containing the provided values if the field doesn’t exist. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/append-processor.html", + + ReadContext: dataSourceProcessorAppendRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorAppendRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorAppend{} + + processor.Field = d.Get("field").(string) + values := make([]string, 0) + for _, v := range d.Get("value").([]interface{}) { + values = append(values, v.(string)) + } + processor.Value = values + processor.AllowDuplicates = d.Get("allow_duplicates").(bool) + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.MediaType = d.Get("media_type").(string) + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorAppend{"append": processor}, "", " ") + if err != nil { + return diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + return diag.FromErr(err) + } + + // derive a stable resource ID from a hash of the rendered JSON, so identical definitions share an ID + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_append_data_source_test.go b/internal/elasticsearch/ingest/processor_append_data_source_test.go new file mode 100644 index 000000000..3a84ca80a --- /dev/null +++ b/internal/elasticsearch/ingest/processor_append_data_source_test.go @@ -0,0 +1,48 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorAppend(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorAppend, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_append.test", "field", "tags"), + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_append.test", "json", expectedJsonAppend), + ), + }, + }, + }) +} + +const expectedJsonAppend = `{ + "append": { + "field": "tags", + "value": ["production", "{{{app}}}", "{{{owner}}}"], + "allow_duplicates": true, + "media_type": "application/json", + "description": "Append tags to the doc", + "ignore_failure": false + } +}` + +const testAccDataSourceIngestProcessorAppend = ` +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_append" "test" { + description = "Append tags to the doc" + field = "tags" + value = ["production", "{{{app}}}", "{{{owner}}}"] + allow_duplicates = true +} +` diff --git a/internal/elasticsearch/ingest/processor_bytes_data_source.go b/internal/elasticsearch/ingest/processor_bytes_data_source.go new file mode 100644 index 000000000..bb19c80c4 --- /dev/null +++ 
b/internal/elasticsearch/ingest/processor_bytes_data_source.go @@ -0,0 +1,134 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorBytes() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field to convert", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "The field to assign the converted value to, by default `field` is updated in-place", + Type: schema.TypeString, + Optional: true, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Converts a human readable byte value (e.g. 1kb) to its value in bytes (e.g. 1024). 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/bytes-processor.html", + + ReadContext: dataSourceProcessorBytesRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorBytesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorBytes{} + + processor.Field = d.Get("field").(string) + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.IgnoreMissing = d.Get("ignore_missing").(bool) + if v, ok := d.GetOk("target_field"); ok { + processor.TargetField = v.(string) + } + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorBytes{"bytes": processor}, "", " ") + if err != nil { + return diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + return diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_bytes_data_source_test.go b/internal/elasticsearch/ingest/processor_bytes_data_source_test.go new file mode 100644 index 000000000..054ef7faa --- /dev/null +++ b/internal/elasticsearch/ingest/processor_bytes_data_source_test.go @@ -0,0 +1,42 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorBytes(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorBytes, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_bytes.test", "field", "file.size"), + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_bytes.test", "json", expectedJsonBytes), + ), + }, + }, + }) +} + +const expectedJsonBytes = `{ + "bytes": { + "field": "file.size", + "ignore_failure": false, + "ignore_missing": false + } +}` + +const testAccDataSourceIngestProcessorBytes = ` +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_bytes" "test" { + field = "file.size" +} +` diff --git a/internal/elasticsearch/ingest/processor_circle_data_source.go b/internal/elasticsearch/ingest/processor_circle_data_source.go new file mode 100644 index 000000000..686ae9253 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_circle_data_source.go @@ -0,0 +1,148 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorCircle() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The string-valued field to trim whitespace from.", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "The field to assign the converted value to, by default `field` is updated in-place", + Type: schema.TypeString, + Optional: true, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "error_distance": { + Description: "The difference between the resulting inscribed distance from center to side and the circle’s radius (measured in meters for `geo_shape`, unit-less for `shape`)", + Type: schema.TypeFloat, + Required: true, + }, + "shape_type": { + Description: "Which field mapping type is to be used when processing the circle.", + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"geo_shape", "shape"}, false), + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Converts circle definitions of shapes to regular polygons which approximate them. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-circle-processor.html", + + ReadContext: dataSourceProcessorCircleRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorCircleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorCircle{} + + processor.Field = d.Get("field").(string) + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.IgnoreMissing = d.Get("ignore_missing").(bool) + processor.ErrorDistance = d.Get("error_distance").(float64) + processor.ShapeType = d.Get("shape_type").(string) + + if v, ok := d.GetOk("target_field"); ok { + processor.TargetField = v.(string) + } + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorCircle{"circle": processor}, "", " ") + if err != nil { + return diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + return diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_circle_data_source_test.go b/internal/elasticsearch/ingest/processor_circle_data_source_test.go new file mode 100644 index 000000000..cb26c8de9 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_circle_data_source_test.go @@ -0,0 +1,46 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorCircle(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorCircle, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_circle.test", "field", "circle"), + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_circle.test", "json", expectedJsonCircle), + ), + }, + }, + }) +} + +const expectedJsonCircle = `{ + "circle": { + "field": "circle", + "error_distance": 28.1, + "shape_type": "geo_shape", + "ignore_failure": false, + "ignore_missing": false + } +}` + +const testAccDataSourceIngestProcessorCircle = ` +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_circle" "test" { + field = "circle" + error_distance = 28.1 + shape_type = "geo_shape" +} +` diff --git a/internal/elasticsearch/ingest/processor_community_id_data_source.go b/internal/elasticsearch/ingest/processor_community_id_data_source.go new file mode 100644 index 000000000..31a3c7cbc --- /dev/null +++ b/internal/elasticsearch/ingest/processor_community_id_data_source.go @@ -0,0 +1,206 @@ +package ingest + +import ( + "context" + 
"encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorCommunityId() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "source_ip": { + Description: "Field containing the source IP address.", + Type: schema.TypeString, + Optional: true, + }, + "source_port": { + Description: "Field containing the source port.", + Type: schema.TypeInt, + Optional: true, + }, + "destination_ip": { + Description: "Field containing the destination IP address.", + Type: schema.TypeString, + Optional: true, + }, + "destination_port": { + Description: "Field containing the destination port.", + Type: schema.TypeInt, + Optional: true, + }, + "iana_number": { + Description: "Field containing the IANA number.", + Type: schema.TypeInt, + Optional: true, + }, + "icmp_type": { + Description: "Field containing the ICMP type.", + Type: schema.TypeInt, + Optional: true, + }, + "icmp_code": { + Description: "Field containing the ICMP code.", + Type: schema.TypeInt, + Optional: true, + }, + "seed": { + Description: "Seed for the community ID hash. Must be between 0 and 65535 (inclusive). The seed can prevent hash collisions between network domains, such as a staging and production network that use the same addressing scheme.", + Type: schema.TypeInt, + Optional: true, + Default: 0, + ValidateFunc: validation.IntBetween(0, 65535), + }, + "transport": { + Description: "Field containing the transport protocol. Used only when the `iana_number` field is not present.", + Type: schema.TypeString, + Optional: true, + }, + "target_field": { + Description: "Output field for the community ID.", + Type: schema.TypeString, + Optional: true, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Computes the Community ID for network flow data as defined in the Community ID Specification. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/community-id-processor.html", + + ReadContext: dataSourceProcessorCommunityIdRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorCommunityIdRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorCommunityId{} + + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.IgnoreMissing = d.Get("ignore_missing").(bool) + seed := d.Get("seed").(int) + processor.Seed = &seed + + if v, ok := d.GetOk("source_ip"); ok { + processor.SourceIp = v.(string) + } + if v, ok := d.GetOk("source_port"); ok { + port := v.(int) + processor.SourcePort = &port + } + if v, ok := d.GetOk("destination_ip"); ok { + processor.DestinationIp = v.(string) + } + if v, ok := d.GetOk("destination_port"); ok { + port := v.(int) + processor.DestinationPort = &port + } + if v, ok := d.GetOk("iana_number"); ok { + processor.IanaNumber = v.(string) + } + if v, ok := d.GetOk("icmp_type"); ok { + num := v.(int) + processor.IcmpType = &num + } + if v, ok := d.GetOk("icmp_code"); ok { + num := v.(int) + processor.IcmpCode = &num + } + if v, ok := d.GetOk("transport"); ok { + processor.Transport = v.(string) + } + if v, ok := d.GetOk("target_field"); ok { + processor.TargetField = v.(string) + } + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorCommunityId{"community_id": processor}, "", " ") + if err != nil { + return diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + return diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_community_id_data_source_test.go b/internal/elasticsearch/ingest/processor_community_id_data_source_test.go new file mode 100644 index 000000000..772a0fea1 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_community_id_data_source_test.go @@ -0,0 +1,39 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorCommunityId(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorCommunityId, + Check: resource.ComposeTestCheckFunc( + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_community_id.test", "json", expectedJsonCommunityId), + ), + }, + }, + }) +} + +const expectedJsonCommunityId = `{ + "community_id": { + "seed": 0, + "ignore_failure": false, + "ignore_missing": false + } +}` + +const testAccDataSourceIngestProcessorCommunityId = ` +provider "elasticstack" { + 
elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_community_id" "test" {} +` diff --git a/internal/elasticsearch/ingest/processor_convert_data_source.go b/internal/elasticsearch/ingest/processor_convert_data_source.go new file mode 100644 index 000000000..356df78a8 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_convert_data_source.go @@ -0,0 +1,142 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorConvert() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field whose value is to be converted.", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "The field to assign the converted value to.", + Type: schema.TypeString, + Optional: true, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "type": { + Description: "The type to convert the existing value to", + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"integer", "long", "float", "double", "string", "boolean", "ip", "auto"}, true), + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Converts a field in the currently ingested document to a different type, such as converting a string to an integer. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/convert-processor.html", + + ReadContext: dataSourceProcessorConvertRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorConvertRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorConvert{} + + processor.Field = d.Get("field").(string) + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.IgnoreMissing = d.Get("ignore_missing").(bool) + processor.Type = d.Get("type").(string) + + if v, ok := d.GetOk("target_field"); ok { + processor.TargetField = v.(string) + } + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorConvert{"convert": processor}, "", " ") + if err != nil { + return diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + return diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_convert_data_source_test.go b/internal/elasticsearch/ingest/processor_convert_data_source_test.go new file mode 100644 index 000000000..517d05a49 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_convert_data_source_test.go @@ -0,0 +1,46 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorConvert(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorConvert, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_convert.test", "field", "id"), + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_convert.test", "json", expectedJsonConvert), + ), + }, + }, + }) +} + +const expectedJsonConvert = `{ + "convert": { + "description": "converts the content of the id field to an integer", + "field": "id", + "type": "integer", + "ignore_failure": false, + "ignore_missing": false + } +}` + +const testAccDataSourceIngestProcessorConvert = ` +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_convert" "test" { + description = "converts the content of the id field to an integer" + field = "id" + type = "integer" +} +` diff --git a/internal/elasticsearch/ingest/processor_csv_data_source.go b/internal/elasticsearch/ingest/processor_csv_data_source.go new file mode 100644 index 000000000..e94532ac6 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_csv_data_source.go @@ -0,0 +1,172 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + 
"github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorCSV() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field to extract data from.", + Type: schema.TypeString, + Required: true, + }, + "target_fields": { + Description: "The array of fields to assign extracted values to.", + Type: schema.TypeList, + Required: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "separator": { + Description: "Separator used in CSV, has to be single character string.", + Type: schema.TypeString, + Optional: true, + Default: ",", + }, + "quote": { + Description: "Quote used in CSV, has to be single character string", + Type: schema.TypeString, + Optional: true, + Default: `"`, + }, + "trim": { + Description: "Trim whitespaces in unquoted fields.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "empty_value": { + Description: "Value used to fill empty fields, empty fields will be skipped if this is not provided.", + Type: schema.TypeString, + Optional: true, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Extracts fields from CSV line out of a single text field within a document. Any empty field in CSV will be skipped. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/csv-processor.html", + + ReadContext: dataSourceProcessorCSVRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorCSVRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorCSV{} + + processor.Field = d.Get("field").(string) + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.IgnoreMissing = d.Get("ignore_missing").(bool) + processor.Separator = d.Get("separator").(string) + processor.Quote = d.Get("quote").(string) + processor.Trim = d.Get("trim").(bool) + + tFields := d.Get("target_fields").([]interface{}) + targets := make([]string, len(tFields)) + for i, v := range tFields { + targets[i] = v.(string) + } + processor.TargetFields = targets + + if v, ok := d.GetOk("empty_value"); ok { + processor.EmptyValue = v.(string) + } + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorCSV{"csv": processor}, "", " ") + if err != nil { + return diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + return diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_csv_data_source_test.go b/internal/elasticsearch/ingest/processor_csv_data_source_test.go new file mode 100644 index 000000000..79eb63ce2 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_csv_data_source_test.go @@ -0,0 +1,47 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorCSV(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorCSV, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_csv.test", "field", "my_field"), + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_csv.test", "json", expectedJsonCSV), + ), + }, + }, + }) +} + +const expectedJsonCSV = `{ + "csv": { + "field": "my_field", + "target_fields": ["field1", "field2"], + "separator": ",", + "trim": false, + "quote": "\"", + "ignore_failure": false, + "ignore_missing": false + } +}` + +const testAccDataSourceIngestProcessorCSV = ` +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_csv" "test" { + field = "my_field" + target_fields = ["field1", "field2"] +} +` diff --git a/internal/elasticsearch/ingest/processor_date_data_source.go b/internal/elasticsearch/ingest/processor_date_data_source.go new file mode 100644 index 
000000000..b96ad3bdf --- /dev/null +++ b/internal/elasticsearch/ingest/processor_date_data_source.go @@ -0,0 +1,166 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorDate() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field to get the date from.", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "The field that will hold the parsed date.", + Type: schema.TypeString, + Optional: true, + Default: "@timestamp", + }, + "formats": { + Description: "An array of the expected date formats.", + Type: schema.TypeList, + Required: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "timezone": { + Description: "The timezone to use when parsing the date.", + Type: schema.TypeString, + Optional: true, + Default: "UTC", + }, + "locale": { + Description: "The locale to use when parsing the date, relevant when parsing month names or week days.", + Type: schema.TypeString, + Optional: true, + Default: "ENGLISH", + }, + "output_format": { + Description: "The format to use when writing the date to `target_field`.", + Type: schema.TypeString, + Optional: true, + Default: "yyyy-MM-dd'T'HH:mm:ss.SSSXXX", + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Parses dates from fields, and then uses the date or timestamp as the timestamp for the document. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/date-processor.html", + + ReadContext: dataSourceProcessorDateRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorDateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorDate{} + + processor.Field = d.Get("field").(string) + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.Timezone = d.Get("timezone").(string) + processor.Locale = d.Get("locale").(string) + processor.OutputFormat = d.Get("output_format").(string) + + formats := d.Get("formats").([]interface{}) + res := make([]string, len(formats)) + for i, v := range formats { + res[i] = v.(string) + } + processor.Formats = res + + if v, ok := d.GetOk("target_field"); ok { + processor.TargetField = v.(string) + } + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorDate{"date": processor}, "", " ") + if err != nil { + return diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + return diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_date_data_source_test.go b/internal/elasticsearch/ingest/processor_date_data_source_test.go new file mode 100644 index 000000000..38fac0368 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_date_data_source_test.go @@ -0,0 +1,52 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorDate(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorDate, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_date.test", "field", "initial_date"), + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_date.test", "json", expectedJsonDate), + ), + }, + }, + }) +} + +const expectedJsonDate = `{ + "date": { + "field": "initial_date", + "formats": [ + "dd/MM/yyyy HH:mm:ss" + ], + "ignore_failure": false, + "locale": "ENGLISH", + "output_format": "yyyy-MM-dd'T'HH:mm:ss.SSSXXX", + "target_field": "timestamp", + "timezone": "Europe/Amsterdam" + } +} +` + +const testAccDataSourceIngestProcessorDate = ` +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_date" "test" { + field = "initial_date" + target_field = "timestamp" + formats = ["dd/MM/yyyy HH:mm:ss"] + timezone = "Europe/Amsterdam" +} +` diff --git 
a/internal/elasticsearch/ingest/processor_date_index_name_data_source.go b/internal/elasticsearch/ingest/processor_date_index_name_data_source.go new file mode 100644 index 000000000..50fc28b97 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_date_index_name_data_source.go @@ -0,0 +1,173 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorDateIndexName() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field to get the date or timestamp from.", + Type: schema.TypeString, + Required: true, + }, + "index_name_prefix": { + Description: "A prefix of the index name to be prepended before the printed date.", + Type: schema.TypeString, + Optional: true, + }, + "date_rounding": { + Description: "How to round the date when formatting the date into the index name.", + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"y", "M", "w", "d", "h", "m", "s"}, false), + }, + "date_formats": { + Description: "An array of the expected date formats for parsing dates / timestamps in the document being preprocessed.", + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "timezone": { + Description: "The timezone to use when parsing the date and when date math index supports resolves expressions into concrete index names.", + Type: schema.TypeString, + Optional: true, + Default: "UTC", + }, + "locale": { + Description: "The locale to use when parsing the date from the document being preprocessed, relevant when parsing month names or week days.", + Type: schema.TypeString, + Optional: true, + Default: "ENGLISH", + }, + "index_name_format": { + Description: "The format to be used when printing the parsed date into the index name.", + Type: schema.TypeString, + Optional: true, + Default: "yyyy-MM-dd", + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "The purpose of this processor is to point documents to the right time based index based on a date or timestamp field in a document by using the date math index name support. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/date-index-name-processor.html", + + ReadContext: dataSourceProcessorDateIndexNameRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorDateIndexNameRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorDateIndexName{} + + processor.Field = d.Get("field").(string) + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.Timezone = d.Get("timezone").(string) + processor.Locale = d.Get("locale").(string) + processor.IndexNameFormat = d.Get("index_name_format").(string) + processor.DateRounding = d.Get("date_rounding").(string) + + if v, ok := d.GetOk("date_formats"); ok { + formats := v.([]interface{}) + res := make([]string, len(formats)) + for i, v := range formats { + res[i] = v.(string) + } + processor.DateFormats = res + } + + if v, ok := d.GetOk("index_name_prefix"); ok { + processor.IndexNamePrefix = v.(string) + } + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorDateIndexName{"date_index_name": processor}, "", " ") + if err != nil { + return diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + return diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_date_index_name_data_source_test.go b/internal/elasticsearch/ingest/processor_date_index_name_data_source_test.go new file mode 100644 index 000000000..f4c4ce6d4 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_date_index_name_data_source_test.go @@ -0,0 +1,51 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorDateIndexName(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorDateIndexName, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_date_index_name.test", "field", "date1"), + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_date_index_name.test", "json", expectedJsonDateIndexName), + ), + }, + }, + }) +} + +const expectedJsonDateIndexName = `{ + "date_index_name": { + "date_rounding": "M", + "description": "monthly date-time index naming", + "field": "date1", + "ignore_failure": false, + "index_name_format": "yyyy-MM-dd", + "index_name_prefix": "my-index-", + "locale": "ENGLISH", + "timezone": "UTC" + } +} +` + +const testAccDataSourceIngestProcessorDateIndexName = ` +provider "elasticstack" { + elasticsearch {} +} 
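+# illustrative: with date_rounding = "M" and the default index_name_format "yyyy-MM-dd" (set below), a date1 value of 2016-04-25 resolves to an index name like my-index-2016-04-01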
+
+data "elasticstack_elasticsearch_ingest_processor_date_index_name" "test" {
+  description       = "monthly date-time index naming"
+  field             = "date1"
+  index_name_prefix = "my-index-"
+  date_rounding     = "M"
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_dissect_data_source.go b/internal/elasticsearch/ingest/processor_dissect_data_source.go
new file mode 100644
index 000000000..1d2686e2e
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_dissect_data_source.go
@@ -0,0 +1,140 @@
+package ingest
+
+import (
+	"context"
+	"encoding/json"
+	"strings"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/models"
+	"github.com/elastic/terraform-provider-elasticstack/internal/utils"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+)
+
+func DataSourceProcessorDissect() *schema.Resource {
+	processorSchema := map[string]*schema.Schema{
+		"id": {
+			Description: "Internal identifier of the resource",
+			Type:        schema.TypeString,
+			Computed:    true,
+		},
+		"field": {
+			Description: "The field to dissect.",
+			Type:        schema.TypeString,
+			Required:    true,
+		},
+		"pattern": {
+			Description: "The pattern to apply to the field.",
+			Type:        schema.TypeString,
+			Required:    true,
+		},
+		"append_separator": {
+			Description: "The character(s) that separate the appended fields.",
+			Type:        schema.TypeString,
+			Optional:    true,
+			Default:     "",
+		},
+		"ignore_missing": {
+			Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.",
+			Type:        schema.TypeBool,
+			Optional:    true,
+			Default:     false,
+		},
+		"description": {
+			Description: "Description of the processor.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"if": {
+			Description: "Conditionally execute the processor.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"ignore_failure": {
+			Description: "Ignore failures for the processor.",
+			Type:        schema.TypeBool,
+			Optional:    true,
+			Default:     false,
+		},
+		"on_failure": {
+			Description: "Handle failures for the processor.",
+			Type:        schema.TypeList,
+			Optional:    true,
+			MinItems:    1,
+			Elem: &schema.Schema{
+				Type:             schema.TypeString,
+				ValidateFunc:     validation.StringIsJSON,
+				DiffSuppressFunc: utils.DiffJsonSuppress,
+			},
+		},
+		"tag": {
+			Description: "Identifier for the processor.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"json": {
+			Description: "JSON representation of this data source.",
+			Type:        schema.TypeString,
+			Computed:    true,
+		},
+	}
+
+	return &schema.Resource{
+		Description: "Extracts structured fields out of a single text field within a document. See: https://www.elastic.co/guide/en/elasticsearch/reference/current/dissect-processor.html#dissect-processor",
+
+		ReadContext: dataSourceProcessorDissectRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorDissectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorDissect{}
+
+	processor.Field = d.Get("field").(string)
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+	processor.IgnoreMissing = d.Get("ignore_missing").(bool)
+	processor.Pattern = d.Get("pattern").(string)
+	processor.AppendSeparator = d.Get("append_separator").(string)
+
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorDissect{"dissect": processor}, "", " ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_dissect_data_source_test.go b/internal/elasticsearch/ingest/processor_dissect_data_source_test.go
new file mode 100644
index 000000000..039137b39
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_dissect_data_source_test.go
@@ -0,0 +1,46 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorDissect(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorDissect,
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_dissect.test", "field", "message"),
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_dissect.test", "json", expectedJsonDissect),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonDissect = `{
+ "dissect": {
+  "append_separator": "",
+  "field": "message",
+  "ignore_failure": false,
+  "ignore_missing": false,
+  "pattern": "%{clientip} %{ident} %{auth} [%{@timestamp}] \"%{verb} %{request} HTTP/%{httpversion}\" %{status} %{size}"
+ }
+}
+`
+
+const testAccDataSourceIngestProcessorDissect = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_dissect" "test" {
+  field   = "message"
+  pattern = "%%{clientip} %%{ident} %%{auth} [%%{@timestamp}] \"%%{verb} %%{request} HTTP/%%{httpversion}\" %%{status} %%{size}"
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_dot_expander_data_source.go b/internal/elasticsearch/ingest/processor_dot_expander_data_source.go
new file mode 100644
index 000000000..c701d0f11
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_dot_expander_data_source.go
@@ -0,0 +1,135 @@
+package ingest
+
+import (
+	"context"
+	"encoding/json"
+	"strings"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/models"
+	"github.com/elastic/terraform-provider-elasticstack/internal/utils"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+)
+
+func DataSourceProcessorDotExpander() *schema.Resource {
+	processorSchema := map[string]*schema.Schema{
+		"id": {
+			Description: "Internal identifier of the resource",
+			Type:        schema.TypeString,
+			Computed:    true,
+		},
+		"field": {
+			Description: "The field to expand into an object field. If set to `*`, all top-level fields will be expanded.",
+			Type:        schema.TypeString,
+			Required:    true,
+		},
+		"path": {
+			Description: "The field that contains the field to expand.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"override": {
+			Description: "Controls the behavior when there is already an existing nested object that conflicts with the expanded field.",
+			Type:        schema.TypeBool,
+			Optional:    true,
+			Default:     false,
+		},
+		"description": {
+			Description: "Description of the processor.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"if": {
+			Description: "Conditionally execute the processor.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"ignore_failure": {
+			Description: "Ignore failures for the processor.",
+			Type:        schema.TypeBool,
+			Optional:    true,
+			Default:     false,
+		},
+		"on_failure": {
+			Description: "Handle failures for the processor.",
+			Type:        schema.TypeList,
+			Optional:    true,
+			MinItems:    1,
+			Elem: &schema.Schema{
+				Type:             schema.TypeString,
+				ValidateFunc:     validation.StringIsJSON,
+				DiffSuppressFunc: utils.DiffJsonSuppress,
+			},
+		},
+		"tag": {
+			Description: "Identifier for the processor.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"json": {
+			Description: "JSON representation of this data source.",
+			Type:        schema.TypeString,
+			Computed:    true,
+		},
+	}
+
+	return &schema.Resource{
+		Description: "Expands a field with dots into an object field. See: https://www.elastic.co/guide/en/elasticsearch/reference/current/dot-expand-processor.html",
+
+		ReadContext: dataSourceProcessorDotExpanderRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorDotExpanderRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorDotExpander{}
+
+	processor.Field = d.Get("field").(string)
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+	processor.Override = d.Get("override").(bool)
+
+	if v, ok := d.GetOk("path"); ok {
+		processor.Path = v.(string)
+	}
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorDotExpander{"dot_expander": processor}, "", " ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_dot_expander_data_source_test.go b/internal/elasticsearch/ingest/processor_dot_expander_data_source_test.go
new file mode 100644
index 000000000..bb121f425
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_dot_expander_data_source_test.go
@@ -0,0 +1,43 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorDotExpander(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorDotExpander,
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_dot_expander.test", "field", "foo.bar"),
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_dot_expander.test", "json", expectedJsonDotExpander),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonDotExpander = `{
+ "dot_expander": {
+  "field": "foo.bar",
+  "ignore_failure": false,
+  "override": false
+ }
+}
+`
+
+const testAccDataSourceIngestProcessorDotExpander = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_dot_expander" "test" {
+  field = "foo.bar"
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_drop_data_source.go b/internal/elasticsearch/ingest/processor_drop_data_source.go
new file mode 100644
index 000000000..f52dafaa9
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_drop_data_source.go
@@ -0,0 +1,114 @@
+package ingest
+
+import (
+	"context"
+	"encoding/json"
+	"strings"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/models"
+	"github.com/elastic/terraform-provider-elasticstack/internal/utils"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorDrop() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Drops the document without raising any errors. See: https://www.elastic.co/guide/en/elasticsearch/reference/current/drop-processor.html", + + ReadContext: dataSourceProcessorDropRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorDropRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorDrop{} + + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorDrop{"drop": processor}, "", " ") + if err != nil { + diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_drop_data_source_test.go b/internal/elasticsearch/ingest/processor_drop_data_source_test.go new file mode 100644 index 000000000..f4558d0bd --- /dev/null +++ b/internal/elasticsearch/ingest/processor_drop_data_source_test.go @@ -0,0 +1,41 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorDrop(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorDrop, + Check: 
resource.ComposeTestCheckFunc( + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_drop.test", "json", expectedJsonDrop), + ), + }, + }, + }) +} + +const expectedJsonDrop = `{ + "drop": { + "ignore_failure": false, + "if" : "ctx.network_name == 'Guest'" + } +} +` + +const testAccDataSourceIngestProcessorDrop = ` +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_drop" "test" { + if = "ctx.network_name == 'Guest'" +} +` diff --git a/internal/elasticsearch/ingest/processor_enrich_data_source.go b/internal/elasticsearch/ingest/processor_enrich_data_source.go new file mode 100644 index 000000000..7f59e43d1 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_enrich_data_source.go @@ -0,0 +1,161 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorEnrich() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field in the input document that matches the policies match_field used to retrieve the enrichment data.", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "Field added to incoming documents to contain enrich data.", + Type: schema.TypeString, + Required: true, + }, + "policy_name": { + Description: "The name of the enrich policy to use.", + Type: schema.TypeString, + Required: true, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "override": { + Description: "If processor will update fields with pre-existing non-null-valued field. ", + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "max_matches": { + Description: "The maximum number of matched documents to include under the configured target field. ", + Type: schema.TypeInt, + Optional: true, + Default: 1, + }, + "shape_relation": { + Description: "A spatial relation operator used to match the geoshape of incoming documents to documents in the enrich index.", + Type: schema.TypeString, + Optional: true, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. 
", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "The enrich processor can enrich documents with data from another index. See: https://www.elastic.co/guide/en/elasticsearch/reference/current/enrich-processor.html", + + ReadContext: dataSourceProcessorEnrichRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorEnrichRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorEnrich{} + + processor.Field = d.Get("field").(string) + processor.TargetField = d.Get("target_field").(string) + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.IgnoreMissing = d.Get("ignore_missing").(bool) + processor.Override = d.Get("override").(bool) + processor.PolicyName = d.Get("policy_name").(string) + processor.MaxMatches = d.Get("max_matches").(int) + + if v, ok := d.GetOk("shape_relation"); ok { + processor.ShapeRelation = v.(string) + } + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorEnrich{"enrich": processor}, "", " ") + if err != nil { + diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_fail_data_source.go b/internal/elasticsearch/ingest/processor_fail_data_source.go new file mode 100644 index 000000000..1a0fd65a4 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_fail_data_source.go @@ -0,0 +1,120 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorFail() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "message": { + Description: "The error message thrown by the processor.", + Type: schema.TypeString, + Required: true, + }, + "description": { + 
Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Raises an exception. See: https://www.elastic.co/guide/en/elasticsearch/reference/current/fail-processor.html", + + ReadContext: dataSourceProcessorFailRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorFailRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorFail{} + + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.Message = d.Get("message").(string) + + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorFail{"fail": processor}, "", " ") + if err != nil { + diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_fail_data_source_test.go b/internal/elasticsearch/ingest/processor_fail_data_source_test.go new file mode 100644 index 000000000..03def390f --- /dev/null +++ b/internal/elasticsearch/ingest/processor_fail_data_source_test.go @@ -0,0 +1,43 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorFail(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorFail, + Check: resource.ComposeTestCheckFunc( + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_fail.test", "json", expectedJsonFail), + ), + }, + }, + }) +} + +const expectedJsonFail = `{ + "fail": { + "message": "The production tag is not present, found tags: {{{tags}}}", + "ignore_failure": false, + "if" : "ctx.tags.contains('production') != true" + } +} +` + +const testAccDataSourceIngestProcessorFail = ` 
+provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_fail" "test" { + if = "ctx.tags.contains('production') != true" + message = "The production tag is not present, found tags: {{{tags}}}" +} +` diff --git a/internal/elasticsearch/ingest/processor_fingerprint_data_source.go b/internal/elasticsearch/ingest/processor_fingerprint_data_source.go new file mode 100644 index 000000000..75f949b17 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_fingerprint_data_source.go @@ -0,0 +1,160 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorFingerprint() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "fields": { + Description: "Array of fields to include in the fingerprint.", + Type: schema.TypeList, + Required: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "target_field": { + Description: "Output field for the fingerprint.", + Type: schema.TypeString, + Optional: true, + Default: "fingerprint", + }, + "salt": { + Description: "Salt value for the hash function.", + Type: schema.TypeString, + Optional: true, + }, + "method": { + Description: "The hash method used to compute the fingerprint.", + Type: schema.TypeString, + Optional: true, + Default: "SHA-1", + ValidateFunc: validation.StringInSlice([]string{"MD5", "SHA-1", "SHA-256", "SHA-512", "MurmurHash3"}, false), + }, + "ignore_missing": { + Description: "If `true`, the processor ignores any missing `fields`. If all fields are missing, the processor silently exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Computes a hash of the document’s content. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/fingerprint-processor.html", + + ReadContext: dataSourceProcessorFingerprintRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorFingerprintRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorFingerprint{} + + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.IgnoreMissing = d.Get("ignore_missing").(bool) + processor.Method = d.Get("method").(string) + processor.TargetField = d.Get("target_field").(string) + + fields := d.Get("fields").([]interface{}) + flds := make([]string, len(fields)) + for i, v := range fields { + flds[i] = v.(string) + } + processor.Fields = flds + + if v, ok := d.GetOk("salt"); ok { + processor.Salt = v.(string) + } + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorFingerprint{"fingerprint": processor}, "", " ") + if err != nil { + diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_fingerprint_data_source_test.go b/internal/elasticsearch/ingest/processor_fingerprint_data_source_test.go new file mode 100644 index 000000000..b5757749b --- /dev/null +++ b/internal/elasticsearch/ingest/processor_fingerprint_data_source_test.go @@ -0,0 +1,46 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorFingerprint(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorFingerprint, + Check: resource.ComposeTestCheckFunc( + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_fingerprint.test", "json", expectedJsonFingerprint), + ), + }, + }, + }) +} + +const expectedJsonFingerprint = `{ + "fingerprint": { + "fields": [ + "user" + ], + "ignore_failure": false, + "ignore_missing": false, + "method": "SHA-1", + "target_field": "fingerprint" + } +} +` + +const testAccDataSourceIngestProcessorFingerprint = ` +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_fingerprint" "test" { + fields = ["user"] +} +` diff --git a/internal/elasticsearch/ingest/processor_foreach_data_source.go b/internal/elasticsearch/ingest/processor_foreach_data_source.go new file mode 100644 index 000000000..136cc1b8a --- /dev/null +++ b/internal/elasticsearch/ingest/processor_foreach_data_source.go @@ -0,0 +1,141 @@ +package ingest + +import ( + "context" + 
"encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorForeach() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "Field containing array or object values.", + Type: schema.TypeString, + Required: true, + }, + "processor": { + Description: "Ingest processor to run on each element.", + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + "ignore_missing": { + Description: "If `true`, the processor silently exits without changing the document if the `field` is `null` or missing.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Runs an ingest processor on each element of an array or object. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/foreach-processor.html", + + ReadContext: dataSourceProcessorForeachRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorForeachRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorForeach{} + + processor.Field = d.Get("field").(string) + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.IgnoreMissing = d.Get("ignore_missing").(bool) + + proc := d.Get("processor").(string) + tProc := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(proc)).Decode(&tProc); err != nil { + return diag.FromErr(err) + } + processor.Processor = tProc + + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorForeach{"foreach": processor}, "", " ") + if err != nil { + diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_foreach_data_source_test.go b/internal/elasticsearch/ingest/processor_foreach_data_source_test.go new file mode 100644 index 000000000..463e0a3fd --- /dev/null +++ b/internal/elasticsearch/ingest/processor_foreach_data_source_test.go @@ -0,0 +1,56 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorForeach(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorForeach, + Check: resource.ComposeTestCheckFunc( + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_foreach.test", "json", expectedJsonForeach), + ), + }, + }, + }) +} + +const expectedJsonForeach = `{ + "foreach": { + "field": "values", + "ignore_failure": false, + "ignore_missing": false, + "processor": { + "convert": { + "field": "_ingest._value", + "ignore_failure": false, + "ignore_missing": false, + "type": "integer" + } + } + } +} +` + +const testAccDataSourceIngestProcessorForeach = ` +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_convert" "test" { + field = "_ingest._value" + type = "integer" +} + +data "elasticstack_elasticsearch_ingest_processor_foreach" "test" { + field = "values" + processor = data.elasticstack_elasticsearch_ingest_processor_convert.test.json +} +` diff --git a/internal/elasticsearch/ingest/processor_geoip_data_source.go b/internal/elasticsearch/ingest/processor_geoip_data_source.go new file mode 100644 index 000000000..5693d70d1 --- 
/dev/null +++ b/internal/elasticsearch/ingest/processor_geoip_data_source.go @@ -0,0 +1,111 @@ +package ingest + +import ( + "context" + "encoding/json" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func DataSourceProcessorGeoip() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field to get the ip address from for the geographical lookup.", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "The field that will hold the geographical information looked up from the MaxMind database.", + Type: schema.TypeString, + Optional: true, + Default: "geoip", + }, + "database_file": { + Description: "The database filename referring to a database the module ships with (GeoLite2-City.mmdb, GeoLite2-Country.mmdb, or GeoLite2-ASN.mmdb) or a custom database in the `ingest-geoip` config directory.", + Type: schema.TypeString, + Optional: true, + }, + "properties": { + Description: "Controls what properties are added to the `target_field` based on the geoip lookup.", + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "first_only": { + Description: "If `true` only first found geoip data will be returned, even if field contains array.", + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "The geoip processor adds information about the geographical location of an IPv4 or IPv6 address. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/geoip-processor.html", + + ReadContext: dataSourceProcessorGeoipRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorGeoipRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorGeoip{} + + processor.IgnoreMissing = d.Get("ignore_missing").(bool) + processor.FirstOnly = d.Get("first_only").(bool) + processor.Field = d.Get("field").(string) + processor.TargetField = d.Get("target_field").(string) + + if v, ok := d.GetOk("properties"); ok { + props := v.(*schema.Set) + properties := make([]string, props.Len()) + for i, p := range props.List() { + properties[i] = p.(string) + } + processor.Properties = properties + } + + if v, ok := d.GetOk("database_file"); ok { + processor.DatabaseFile = v.(string) + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorGeoip{"geoip": processor}, "", " ") + if err != nil { + diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_geoip_data_source_test.go b/internal/elasticsearch/ingest/processor_geoip_data_source_test.go new file mode 100644 index 000000000..6d4bf96de --- /dev/null +++ b/internal/elasticsearch/ingest/processor_geoip_data_source_test.go @@ -0,0 +1,44 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorGeoip(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorGeoip, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_geoip.test", "field", "ip"), + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_geoip.test", "json", expectedJsonGeoip), + ), + }, + }, + }) +} + +const expectedJsonGeoip = `{ + "geoip": { + "field": "ip", + "first_only": true, + "ignore_missing": false, + "target_field": "geoip" + } +} +` + +const testAccDataSourceIngestProcessorGeoip = ` +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_geoip" "test" { + field = "ip" +} +` diff --git a/internal/elasticsearch/ingest/processor_grok_data_source.go b/internal/elasticsearch/ingest/processor_grok_data_source.go new file mode 100644 index 000000000..b51feb372 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_grok_data_source.go @@ -0,0 +1,176 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorGrok() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + 
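+		// Editorial note: the attributes below mirror the options of the
+		// Elasticsearch grok processor one-to-one; only `field` and
+		// `patterns` are required, everything else is optional.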
"field": { + Description: "The field to use for grok expression parsing", + Type: schema.TypeString, + Required: true, + }, + "patterns": { + Description: "An ordered list of grok expression to match and extract named captures with. Returns on the first expression in the list that matches.", + Type: schema.TypeList, + Required: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "pattern_definitions": { + Description: "A map of pattern-name and pattern tuples defining custom patterns to be used by the current processor. Patterns matching existing names will override the pre-existing definition.", + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "ecs_compatibility": { + Description: "Must be disabled or v1. If v1, the processor uses patterns with Elastic Common Schema (ECS) field names. **NOTE:** Supported only starting from version of Elasticsearch **7.16.x**.", + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"disabled", "v1"}, false), + }, + "trace_match": { + Description: "when true, `_ingest._grok_match_index` will be inserted into your matched document’s metadata with the index into the pattern found in `patterns` that matched.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Extracts structured fields out of a single text field within a document. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/grok-processor.html", + + ReadContext: dataSourceProcessorGrokRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorGrokRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorGrok{} + + processor.Field = d.Get("field").(string) + processor.TraceMatch = d.Get("trace_match").(bool) + processor.IgnoreMissing = d.Get("ignore_missing").(bool) + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + + pats := d.Get("patterns").([]interface{}) + patterns := make([]string, len(pats)) + for i, v := range pats { + patterns[i] = v.(string) + } + processor.Patterns = patterns + + if v, ok := d.GetOk("ecs_compatibility"); ok { + processor.EcsCompatibility = v.(string) + } + if v, ok := d.GetOk("pattern_definitions"); ok { + pd := v.(map[string]interface{}) + defs := make(map[string]string) + for k, p := range pd { + defs[k] = p.(string) + } + processor.PatternDefinitions = defs + } + + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorGrok{"grok": processor}, "", " ") + if err != nil { + diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_grok_data_source_test.go b/internal/elasticsearch/ingest/processor_grok_data_source_test.go new file mode 100644 index 000000000..6da39c76c --- /dev/null +++ b/internal/elasticsearch/ingest/processor_grok_data_source_test.go @@ -0,0 +1,57 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorGrok(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorGrok, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_grok.test", "field", "message"), + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_grok.test", "json", expectedJsonGrok), + ), + }, + }, + }) +} + +const expectedJsonGrok = `{ + "grok": { + "field": "message", + "ignore_failure": false, + "ignore_missing": false, + "pattern_definitions": { + "FAVORITE_CAT": "burmese", + "FAVORITE_DOG": "beagle" + }, + "patterns": [ + "%{FAVORITE_DOG:pet}", + "%{FAVORITE_CAT:pet}" + ], + "trace_match": false + } +} +` + +const testAccDataSourceIngestProcessorGrok = ` +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_grok" "test" { + 
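+  # In Terraform configuration "%%" escapes a literal "%", so Elasticsearch
+  # receives patterns such as %{FAVORITE_DOG:pet} (compare expectedJsonGrok
+  # above).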
field = "message" + patterns = ["%%{FAVORITE_DOG:pet}", "%%{FAVORITE_CAT:pet}"] + pattern_definitions = { + FAVORITE_DOG = "beagle" + FAVORITE_CAT = "burmese" + } +} +` diff --git a/internal/elasticsearch/ingest/processor_gsub_data_source.go b/internal/elasticsearch/ingest/processor_gsub_data_source.go new file mode 100644 index 000000000..df1053de4 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_gsub_data_source.go @@ -0,0 +1,147 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorGsub() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource.", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field to apply the replacement to.", + Type: schema.TypeString, + Required: true, + }, + "pattern": { + Description: "The pattern to be replaced.", + Type: schema.TypeString, + Required: true, + }, + "replacement": { + Description: "The string to replace the matching patterns with.", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "The field to assign the converted value to, by default `field` is updated in-place.", + Type: schema.TypeString, + Optional: true, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Converts a string field by applying a regular expression and a replacement. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/gsub-processor.html", + + ReadContext: dataSourceProcessorGsubRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorGsubRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorGsub{} + + processor.Field = d.Get("field").(string) + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.IgnoreMissing = d.Get("ignore_missing").(bool) + processor.Pattern = d.Get("pattern").(string) + processor.Replacement = d.Get("replacement").(string) + + if v, ok := d.GetOk("target_field"); ok { + processor.TargetField = v.(string) + } + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorGsub{"gsub": processor}, "", " ") + if err != nil { + diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_gsub_data_source_test.go b/internal/elasticsearch/ingest/processor_gsub_data_source_test.go new file mode 100644 index 000000000..4057ff282 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_gsub_data_source_test.go @@ -0,0 +1,46 @@ +package ingest_test + +import ( + "testing" + + "github.com/elastic/terraform-provider-elasticstack/internal/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccDataSourceIngestProcessorGsub(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ProviderFactories: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceIngestProcessorGsub, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_gsub.test", "field", "field1"), + CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_gsub.test", "json", expectedJsonGsub), + ), + }, + }, + }) +} + +const expectedJsonGsub = `{ + "gsub": { + "field": "field1", + "ignore_failure": false, + "ignore_missing": false, + "pattern": "\\.", + "replacement": "-" + } +}` + +const testAccDataSourceIngestProcessorGsub = ` +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_gsub" "test" { + field = "field1" + pattern = "\\." 
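+  # The HCL string "\\." unescapes to the regex \. (a literal dot), so every
+  # dot in field1 is replaced by the value below.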
+ replacement = "-" +} +` diff --git a/internal/elasticsearch/ingest/processor_html_strip_data_source.go b/internal/elasticsearch/ingest/processor_html_strip_data_source.go new file mode 100644 index 000000000..e3d41ebde --- /dev/null +++ b/internal/elasticsearch/ingest/processor_html_strip_data_source.go @@ -0,0 +1,135 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorHtmlStrip() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource.", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field to apply the replacement to.", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "The field to assign the converted value to, by default `field` is updated in-place.", + Type: schema.TypeString, + Optional: true, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Removes HTML tags from the field. 
+
+		ReadContext: dataSourceProcessorHtmlStripRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorHtmlStripRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorHtmlStrip{}
+
+	processor.Field = d.Get("field").(string)
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+	processor.IgnoreMissing = d.Get("ignore_missing").(bool)
+
+	if v, ok := d.GetOk("target_field"); ok {
+		processor.TargetField = v.(string)
+	}
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorHtmlStrip{"html_strip": processor}, "", " ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_html_strip_data_source_test.go b/internal/elasticsearch/ingest/processor_html_strip_data_source_test.go
new file mode 100644
index 000000000..a72433952
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_html_strip_data_source_test.go
@@ -0,0 +1,42 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorHtmlStrip(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorHtmlStrip,
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_html_strip.test", "field", "foo"),
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_html_strip.test", "json", expectedJsonHtmlStrip),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonHtmlStrip = `{
+  "html_strip": {
+    "field": "foo",
+    "ignore_failure": false,
+    "ignore_missing": false
+  }
+}`
+
+const testAccDataSourceIngestProcessorHtmlStrip = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_html_strip" "test" {
+  field = "foo"
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_join_data_source.go b/internal/elasticsearch/ingest/processor_join_data_source.go
new file mode 100644
index 000000000..1c20bad14
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_join_data_source.go
@@ -0,0 +1,134 @@
+package ingest
+
+import (
+	"context"
+	"encoding/json"
+	"strings"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/models"
+	"github.com/elastic/terraform-provider-elasticstack/internal/utils"
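+	// diag carries errors back to Terraform; schema and validation declare
+	// the data source contract, including the StringIsJSON check applied to
+	// on_failure entries below.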
"github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorJoin() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource.", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "Field containing array values to join.", + Type: schema.TypeString, + Required: true, + }, + "separator": { + Description: "The separator character.", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "The field to assign the converted value to, by default `field` is updated in-place.", + Type: schema.TypeString, + Optional: true, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Joins each element of an array into a single string using a separator character between each element. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/join-processor.html",
+
+		ReadContext: dataSourceProcessorJoinRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorJoinRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorJoin{}
+
+	processor.Field = d.Get("field").(string)
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+	processor.Separator = d.Get("separator").(string)
+
+	if v, ok := d.GetOk("target_field"); ok {
+		processor.TargetField = v.(string)
+	}
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorJoin{"join": processor}, "", " ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_join_data_source_test.go b/internal/elasticsearch/ingest/processor_join_data_source_test.go
new file mode 100644
index 000000000..3d0c032d8
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_join_data_source_test.go
@@ -0,0 +1,43 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorJoin(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorJoin,
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_join.test", "field", "joined_array_field"),
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_join.test", "json", expectedJsonJoin),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonJoin = `{
+  "join": {
+    "field": "joined_array_field",
+    "ignore_failure": false,
+    "separator": "-"
+  }
+}`
+
+const testAccDataSourceIngestProcessorJoin = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_join" "test" {
+  field     = "joined_array_field"
+  separator = "-"
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_json_data_source.go b/internal/elasticsearch/ingest/processor_json_data_source.go
new file mode 100644
index 000000000..321b4c775
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_json_data_source.go
@@ -0,0 +1,156 @@
+package ingest
+
+import (
+	"context"
+	"encoding/json"
+	"strings"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/models"
+	"github.com/elastic/terraform-provider-elasticstack/internal/utils"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
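+	// validation.StringInSlice (used below) constrains
+	// add_to_root_conflict_strategy to "replace" or "merge".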
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorJson() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource.", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field to be parsed.", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "The field that the converted structured object will be written into. Any existing content in this field will be overwritten.", + Type: schema.TypeString, + Optional: true, + }, + "add_to_root": { + Description: "Flag that forces the parsed JSON to be added at the top level of the document. `target_field` must not be set when this option is chosen.", + Type: schema.TypeBool, + Optional: true, + }, + "add_to_root_conflict_strategy": { + Description: "When set to `replace`, root fields that conflict with fields from the parsed JSON will be overridden. When set to `merge`, conflicting fields will be merged. Only applicable if `add_to_root` is set to `true`.", + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"replace", "merge"}, false), + }, + "allow_duplicate_keys": { + Description: "When set to `true`, the JSON parser will not fail if the JSON contains duplicate keys. Instead, the last encountered value for any duplicate key wins.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Converts a JSON string into a structured JSON object. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/json-processor.html",
+
+		ReadContext: dataSourceProcessorJsonRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorJsonRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorJson{}
+
+	processor.Field = d.Get("field").(string)
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+
+	if v, ok := d.GetOk("add_to_root_conflict_strategy"); ok {
+		processor.AddToRootConflictStrategy = v.(string)
+	}
+	if v, ok := d.GetOk("add_to_root"); ok {
+		ar := v.(bool)
+		processor.AddToRoot = &ar
+	}
+	if v, ok := d.GetOk("allow_duplicate_keys"); ok {
+		ar := v.(bool)
+		processor.AllowDuplicateKeys = &ar
+	}
+	if v, ok := d.GetOk("target_field"); ok {
+		processor.TargetField = v.(string)
+	}
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorJson{"json": processor}, "", " ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_json_data_source_test.go b/internal/elasticsearch/ingest/processor_json_data_source_test.go
new file mode 100644
index 000000000..3df2216fb
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_json_data_source_test.go
@@ -0,0 +1,43 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorJson(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorJson,
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_json.test", "field", "string_source"),
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_json.test", "json", expectedJsonJson),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonJson = `{
+  "json": {
+    "field": "string_source",
+    "ignore_failure": false,
+    "target_field": "json_target"
+  }
+}`
+
+const testAccDataSourceIngestProcessorJson = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_json" "test" {
+  field        = "string_source"
+  target_field = "json_target"
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_kv_data_source.go b/internal/elasticsearch/ingest/processor_kv_data_source.go
new file mode 100644
index 000000000..69a153f5a
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_kv_data_source.go
@@ -0,0 +1,212 @@
+package ingest
+
+import 
( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorKV() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field to be parsed. Supports template snippets.", + Type: schema.TypeString, + Required: true, + }, + "field_split": { + Description: "Regex pattern to use for splitting key-value pairs.", + Type: schema.TypeString, + Required: true, + }, + "value_split": { + Description: "Regex pattern to use for splitting the key from the value within a key-value pair.", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "The field to insert the extracted keys into. Defaults to the root of the document.", + Type: schema.TypeString, + Optional: true, + }, + "include_keys": { + Description: "List of keys to filter and insert into document. Defaults to including all keys", + Type: schema.TypeSet, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "exclude_keys": { + Description: "List of keys to exclude from document", + Type: schema.TypeSet, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "prefix": { + Description: "Prefix to be added to extracted keys.", + Type: schema.TypeString, + Optional: true, + }, + "trim_key": { + Description: "String of characters to trim from extracted keys.", + Type: schema.TypeString, + Optional: true, + }, + "trim_value": { + Description: "String of characters to trim from extracted values.", + Type: schema.TypeString, + Optional: true, + }, + "strip_brackets": { + Description: "If `true` strip brackets `()`, `<>`, `[]` as well as quotes `'` and `\"` from extracted values.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "This processor helps automatically parse messages (or specific event fields) which are of the foo=bar variety. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/kv-processor.html",
+
+		ReadContext: dataSourceProcessorKVRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorKVRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorKV{}
+
+	processor.Field = d.Get("field").(string)
+	processor.FieldSplit = d.Get("field_split").(string)
+	processor.ValueSplit = d.Get("value_split").(string)
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+	processor.IgnoreMissing = d.Get("ignore_missing").(bool)
+	processor.StripBrackets = d.Get("strip_brackets").(bool)
+
+	if v, ok := d.GetOk("include_keys"); ok {
+		kk := v.(*schema.Set)
+		keys := make([]string, kk.Len())
+		for i, k := range kk.List() {
+			keys[i] = k.(string)
+		}
+		processor.IncludeKeys = keys
+	}
+	if v, ok := d.GetOk("exclude_keys"); ok {
+		kk := v.(*schema.Set)
+		keys := make([]string, kk.Len())
+		for i, k := range kk.List() {
+			keys[i] = k.(string)
+		}
+		processor.ExcludeKeys = keys
+	}
+	if v, ok := d.GetOk("target_field"); ok {
+		processor.TargetField = v.(string)
+	}
+	if v, ok := d.GetOk("prefix"); ok {
+		processor.Prefix = v.(string)
+	}
+	if v, ok := d.GetOk("trim_key"); ok {
+		processor.TrimKey = v.(string)
+	}
+	if v, ok := d.GetOk("trim_value"); ok {
+		processor.TrimValue = v.(string)
+	}
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorKV{"kv": processor}, "", " ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_kv_data_source_test.go b/internal/elasticsearch/ingest/processor_kv_data_source_test.go
new file mode 100644
index 000000000..f10f40379
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_kv_data_source_test.go
@@ -0,0 +1,54 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorKV(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorKV,
+				Check: resource.ComposeTestCheckFunc(
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_kv.test", "json", expectedJsonKV),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonKV = `{
+  "kv": {
+    "exclude_keys": [
+      "tags"
+    ],
+    "field": "message",
+    "field_split": " ",
+    "ignore_failure": false,
+    "ignore_missing": false,
+    "prefix": "setting_",
+    "strip_brackets": false,
+    "value_split": "="
+  }
+}
+`
+
+const 
testAccDataSourceIngestProcessorKV = ` +provider "elasticstack" { + elasticsearch {} +} + +data "elasticstack_elasticsearch_ingest_processor_kv" "test" { + field = "message" + field_split = " " + value_split = "=" + + exclude_keys = ["tags"] + prefix = "setting_" +} +` diff --git a/internal/elasticsearch/ingest/processor_lowercase_data_source.go b/internal/elasticsearch/ingest/processor_lowercase_data_source.go new file mode 100644 index 000000000..fcd39c708 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_lowercase_data_source.go @@ -0,0 +1,135 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorLowercase() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource.", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field to make lowercase.", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "The field to assign the converted value to, by default `field` is updated in-place.", + Type: schema.TypeString, + Optional: true, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Converts a string to its lowercase equivalent. If the field is an array of strings, all members of the array will be converted. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/lowercase-processor.html",
+
+		ReadContext: dataSourceProcessorLowercaseRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorLowercaseRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorLowercase{}
+
+	processor.Field = d.Get("field").(string)
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+	processor.IgnoreMissing = d.Get("ignore_missing").(bool)
+
+	if v, ok := d.GetOk("target_field"); ok {
+		processor.TargetField = v.(string)
+	}
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorLowercase{"lowercase": processor}, "", " ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_lowercase_data_source_test.go b/internal/elasticsearch/ingest/processor_lowercase_data_source_test.go
new file mode 100644
index 000000000..f15d256c1
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_lowercase_data_source_test.go
@@ -0,0 +1,42 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorLowercase(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorLowercase,
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_lowercase.test", "field", "foo"),
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_lowercase.test", "json", expectedJsonLowercase),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonLowercase = `{
+  "lowercase": {
+    "field": "foo",
+    "ignore_failure": false,
+    "ignore_missing": false
+  }
+}`
+
+const testAccDataSourceIngestProcessorLowercase = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_lowercase" "test" {
+  field = "foo"
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_network_direction_data_source.go b/internal/elasticsearch/ingest/processor_network_direction_data_source.go
new file mode 100644
index 000000000..69ee3a95e
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_network_direction_data_source.go
@@ -0,0 +1,174 @@
+package ingest
+
+import (
+	"context"
+	"encoding/json"
+	"strings"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/models"
+	"github.com/elastic/terraform-provider-elasticstack/internal/utils"
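+	// Same read/marshal/hash pattern as the other processor data sources;
+	// only the model type and the schema fields differ.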
"github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorNetworkDirection() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource.", + Type: schema.TypeString, + Computed: true, + }, + "source_ip": { + Description: "Field containing the source IP address.", + Type: schema.TypeString, + Optional: true, + }, + "destination_ip": { + Description: "Field containing the destination IP address.", + Type: schema.TypeString, + Optional: true, + }, + "target_field": { + Description: "Output field for the network direction.", + Type: schema.TypeString, + Optional: true, + }, + "internal_networks": { + Description: "List of internal networks.", + Type: schema.TypeSet, + MinItems: 1, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + ConflictsWith: []string{"internal_networks_field"}, + ExactlyOneOf: []string{"internal_networks", "internal_networks_field"}, + }, + "internal_networks_field": { + Description: "A field on the given document to read the internal_networks configuration from.", + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"internal_networks"}, + ExactlyOneOf: []string{"internal_networks", "internal_networks_field"}, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Calculates the network direction given a source IP address, destination IP address, and a list of internal networks. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/network-direction-processor.html",
+
+		ReadContext: dataSourceProcessorNetworkDirectionRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorNetworkDirectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorNetworkDirection{}
+
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+	processor.IgnoreMissing = d.Get("ignore_missing").(bool)
+
+	if v, ok := d.GetOk("source_ip"); ok {
+		processor.SourceIp = v.(string)
+	}
+	if v, ok := d.GetOk("destination_ip"); ok {
+		processor.DestinationIp = v.(string)
+	}
+	if v, ok := d.GetOk("internal_networks"); ok {
+		nets := v.(*schema.Set)
+		networks := make([]string, nets.Len())
+		for i, n := range nets.List() {
+			networks[i] = n.(string)
+		}
+		processor.InternalNetworks = networks
+	}
+	if v, ok := d.GetOk("internal_networks_field"); ok {
+		processor.InternalNetworksField = v.(string)
+	}
+	if v, ok := d.GetOk("target_field"); ok {
+		processor.TargetField = v.(string)
+	}
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorNetworkDirection{"network_direction": processor}, "", " ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_network_direction_data_source_test.go b/internal/elasticsearch/ingest/processor_network_direction_data_source_test.go
new file mode 100644
index 000000000..1690c08d5
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_network_direction_data_source_test.go
@@ -0,0 +1,43 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorNetworkDirection(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorNetworkDirection,
+				Check: resource.ComposeTestCheckFunc(
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_network_direction.test", "json", expectedJsonNetworkDirection),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonNetworkDirection = `{
+  "network_direction": {
+    "ignore_failure": false,
+    "ignore_missing": true,
+    "internal_networks": [
+      "private"
+    ]
+  }
+}`
+
+const testAccDataSourceIngestProcessorNetworkDirection = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_network_direction" "test" {
+  internal_networks = ["private"]
+}
+`
diff --git 
a/internal/elasticsearch/ingest/processor_pipeline_data_source.go b/internal/elasticsearch/ingest/processor_pipeline_data_source.go new file mode 100644 index 000000000..951db4e41 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_pipeline_data_source.go @@ -0,0 +1,120 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorPipeline() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource.", + Type: schema.TypeString, + Computed: true, + }, + "name": { + Description: "The name of the pipeline to execute.", + Type: schema.TypeString, + Required: true, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Executes another pipeline. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/pipeline-processor.html",
+
+		ReadContext: dataSourceProcessorPipelineRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorPipelineRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorPipeline{}
+
+	processor.Name = d.Get("name").(string)
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorPipeline{"pipeline": processor}, "", " ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_pipeline_data_source_test.go b/internal/elasticsearch/ingest/processor_pipeline_data_source_test.go
new file mode 100644
index 000000000..6eed8b199
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_pipeline_data_source_test.go
@@ -0,0 +1,53 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorPipeline(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorPipeline,
+				Check: resource.ComposeTestCheckFunc(
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_pipeline.test", "json", expectedJsonPipeline),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonPipeline = `{
+  "pipeline": {
+    "name": "pipeline_a",
+    "ignore_failure": false
+  }
+}`
+
+const testAccDataSourceIngestProcessorPipeline = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_append" "tags" {
+  field = "tags"
+  value = ["production", "{{{app}}}", "{{{owner}}}"]
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "pipeline_a" {
+  name = "pipeline_a"
+
+  processors = [
+    data.elasticstack_elasticsearch_ingest_processor_append.tags.json
+  ]
+}
+
+data "elasticstack_elasticsearch_ingest_processor_pipeline" "test" {
+  name = elasticstack_elasticsearch_ingest_pipeline.pipeline_a.name
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_registered_domain_data_source.go b/internal/elasticsearch/ingest/processor_registered_domain_data_source.go
new file mode 100644
index 000000000..6473eb91c
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_registered_domain_data_source.go
@@ -0,0 +1,135 @@
+package ingest
+
+import (
+	"context"
+	"encoding/json"
+	"strings"
+
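+	// models defines ProcessorRegisteredDomain; utils supplies DiffJsonSuppress
+	// and the StringToHash helper used to derive the data source ID.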
"github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorRegisteredDomain() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource.", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "Field containing the source FQDN.", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "Object field containing extracted domain components. If an ``, the processor adds components to the document’s root.", + Type: schema.TypeString, + Optional: true, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Extracts the registered domain (also known as the effective top-level domain or eTLD), sub-domain, and top-level domain from a fully qualified domain name (FQDN). 
+
+		ReadContext: dataSourceProcessorRegisteredDomainRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorRegisteredDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorRegisteredDomain{}
+
+	processor.Field = d.Get("field").(string)
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+	processor.IgnoreMissing = d.Get("ignore_missing").(bool)
+
+	if v, ok := d.GetOk("target_field"); ok {
+		processor.TargetField = v.(string)
+	}
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorRegisteredDomain{"registered_domain": processor}, "", " ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_registered_domain_data_source_test.go b/internal/elasticsearch/ingest/processor_registered_domain_data_source_test.go
new file mode 100644
index 000000000..309a2d9ff
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_registered_domain_data_source_test.go
@@ -0,0 +1,45 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorRegisteredDomain(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorRegisteredDomain,
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_registered_domain.test", "field", "fqdn"),
+					resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_registered_domain.test", "target_field", "url"),
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_registered_domain.test", "json", expectedJsonRegisteredDomain),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonRegisteredDomain = `{
+  "registered_domain": {
+    "field": "fqdn",
+    "target_field": "url",
+    "ignore_failure": false,
+    "ignore_missing": false
+  }
+}`
+
+const testAccDataSourceIngestProcessorRegisteredDomain = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_registered_domain" "test" {
+  field        = "fqdn"
+  target_field = "url"
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_remove_data_source.go b/internal/elasticsearch/ingest/processor_remove_data_source.go
new file mode 100644
index 000000000..62c796037
--- /dev/null
+++ 
b/internal/elasticsearch/ingest/processor_remove_data_source.go @@ -0,0 +1,137 @@ +package ingest + +import ( + "context" + "encoding/json" + "strings" + + "github.com/elastic/terraform-provider-elasticstack/internal/models" + "github.com/elastic/terraform-provider-elasticstack/internal/utils" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorRemove() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource.", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "Fields to be removed.", + Type: schema.TypeSet, + Required: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Removes existing fields. 
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/remove-processor.html",
+
+		ReadContext: dataSourceProcessorRemoveRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorRemoveRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorRemove{}
+
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+	processor.IgnoreMissing = d.Get("ignore_missing").(bool)
+
+	fields := d.Get("field").(*schema.Set)
+	flds := make([]string, fields.Len())
+	for i, f := range fields.List() {
+		flds[i] = f.(string)
+	}
+	processor.Field = flds
+
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorRemove{"remove": processor}, "", " ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_remove_data_source_test.go b/internal/elasticsearch/ingest/processor_remove_data_source_test.go
new file mode 100644
index 000000000..142212566
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_remove_data_source_test.go
@@ -0,0 +1,41 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorRemove(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorRemove,
+				Check: resource.ComposeTestCheckFunc(
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_remove.test", "json", expectedJsonRemove),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonRemove = `{
+  "remove": {
+    "field": ["user_agent"],
+    "ignore_failure": false,
+    "ignore_missing": false
+  }
+}`
+
+const testAccDataSourceIngestProcessorRemove = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_remove" "test" {
+  field = ["user_agent"]
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_rename_data_source.go b/internal/elasticsearch/ingest/processor_rename_data_source.go
new file mode 100644
index 000000000..0d351e5ae
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_rename_data_source.go
@@ -0,0 +1,133 @@
+package ingest
+
+import (
+	"context"
+	"encoding/json"
+	"strings"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/models"
+	"github.com/elastic/terraform-provider-elasticstack/internal/utils"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
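+	// validation.StringIsJSON guards the on_failure entries declared below.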
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func DataSourceProcessorRename() *schema.Resource { + processorSchema := map[string]*schema.Schema{ + "id": { + Description: "Internal identifier of the resource.", + Type: schema.TypeString, + Computed: true, + }, + "field": { + Description: "The field to be renamed.", + Type: schema.TypeString, + Required: true, + }, + "target_field": { + Description: "The new name of the field.", + Type: schema.TypeString, + Required: true, + }, + "ignore_missing": { + Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "description": { + Description: "Description of the processor. ", + Type: schema.TypeString, + Optional: true, + }, + "if": { + Description: "Conditionally execute the processor", + Type: schema.TypeString, + Optional: true, + }, + "ignore_failure": { + Description: "Ignore failures for the processor. ", + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "on_failure": { + Description: "Handle failures for the processor.", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: utils.DiffJsonSuppress, + }, + }, + "tag": { + Description: "Identifier for the processor.", + Type: schema.TypeString, + Optional: true, + }, + "json": { + Description: "JSON representation of this data source.", + Type: schema.TypeString, + Computed: true, + }, + } + + return &schema.Resource{ + Description: "Renames an existing field. See: https://www.elastic.co/guide/en/elasticsearch/reference/current/rename-processor.html", + + ReadContext: dataSourceProcessorRenameRead, + + Schema: processorSchema, + } +} + +func dataSourceProcessorRenameRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + + processor := &models.ProcessorRename{} + + processor.Field = d.Get("field").(string) + processor.TargetField = d.Get("target_field").(string) + processor.IgnoreFailure = d.Get("ignore_failure").(bool) + processor.IgnoreMissing = d.Get("ignore_missing").(bool) + + if v, ok := d.GetOk("description"); ok { + processor.Description = v.(string) + } + if v, ok := d.GetOk("if"); ok { + processor.If = v.(string) + } + if v, ok := d.GetOk("tag"); ok { + processor.Tag = v.(string) + } + if v, ok := d.GetOk("on_failure"); ok { + onFailure := make([]map[string]interface{}, len(v.([]interface{}))) + for i, f := range v.([]interface{}) { + item := make(map[string]interface{}) + if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil { + return diag.FromErr(err) + } + onFailure[i] = item + } + processor.OnFailure = onFailure + } + + processorJson, err := json.MarshalIndent(map[string]*models.ProcessorRename{"rename": processor}, "", " ") + if err != nil { + diag.FromErr(err) + } + if err := d.Set("json", string(processorJson)); err != nil { + diag.FromErr(err) + } + + hash, err := utils.StringToHash(string(processorJson)) + if err != nil { + return diag.FromErr(err) + } + + d.SetId(*hash) + + return diags +} diff --git a/internal/elasticsearch/ingest/processor_rename_data_source_test.go b/internal/elasticsearch/ingest/processor_rename_data_source_test.go new file mode 100644 index 000000000..ae4714a69 --- /dev/null +++ b/internal/elasticsearch/ingest/processor_rename_data_source_test.go @@ -0,0 +1,44 @@ 
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorRename(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorRename,
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_rename.test", "field", "provider"),
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_rename.test", "json", expectedJsonRename),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonRename = `{
+  "rename": {
+    "field": "provider",
+    "target_field": "cloud.provider",
+    "ignore_failure": false,
+    "ignore_missing": false
+  }
+}`
+
+const testAccDataSourceIngestProcessorRename = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_rename" "test" {
+  field        = "provider"
+  target_field = "cloud.provider"
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_script_data_source.go b/internal/elasticsearch/ingest/processor_script_data_source.go
new file mode 100644
index 000000000..205dfc937
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_script_data_source.go
@@ -0,0 +1,156 @@
+package ingest
+
+import (
+	"context"
+	"encoding/json"
+	"strings"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/models"
+	"github.com/elastic/terraform-provider-elasticstack/internal/utils"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+)
+
+func DataSourceProcessorScript() *schema.Resource {
+	processorSchema := map[string]*schema.Schema{
+		"id": {
+			Description: "Internal identifier of the resource.",
+			Type:        schema.TypeString,
+			Computed:    true,
+		},
+		"lang": {
+			Description: "Script language.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"script_id": {
+			Description:   "ID of a stored script. If no `source` is specified, this parameter is required.",
+			Type:          schema.TypeString,
+			Optional:      true,
+			ConflictsWith: []string{"source"},
+			ExactlyOneOf:  []string{"script_id", "source"},
+		},
+		"source": {
+			Description:   "Inline script. If no `script_id` is specified, this parameter is required.",
+			Type:          schema.TypeString,
+			Optional:      true,
+			ConflictsWith: []string{"script_id"},
+			ExactlyOneOf:  []string{"script_id", "source"},
+		},
+		"params": {
+			Description:      "Object containing parameters for the script.",
+			Type:             schema.TypeString,
+			Optional:         true,
+			ValidateFunc:     validation.StringIsJSON,
+			DiffSuppressFunc: utils.DiffJsonSuppress,
+		},
+		"description": {
+			Description: "Description of the processor. ",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"if": {
+			Description: "Conditionally execute the processor",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"ignore_failure": {
+			Description: "Ignore failures for the processor. ",
+			Type:        schema.TypeBool,
+			Optional:    true,
+			Default:     false,
+		},
+		"on_failure": {
+			Description: "Handle failures for the processor.",
+			Type:        schema.TypeList,
+			Optional:    true,
+			MinItems:    1,
+			Elem: &schema.Schema{
+				Type:             schema.TypeString,
+				ValidateFunc:     validation.StringIsJSON,
+				DiffSuppressFunc: utils.DiffJsonSuppress,
+			},
+		},
+		"tag": {
+			Description: "Identifier for the processor.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"json": {
+			Description: "JSON representation of this data source.",
+			Type:        schema.TypeString,
+			Computed:    true,
+		},
+	}
+
+	return &schema.Resource{
+		Description: "Runs an inline or stored script on incoming documents. See: https://www.elastic.co/guide/en/elasticsearch/reference/current/script-processor.html",
+
+		ReadContext: dataSourceProcessorScriptRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorScriptRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorScript{}
+
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+
+	if v, ok := d.GetOk("lang"); ok {
+		processor.Lang = v.(string)
+	}
+	if v, ok := d.GetOk("script_id"); ok {
+		processor.ScriptId = v.(string)
+	}
+	if v, ok := d.GetOk("source"); ok {
+		processor.Source = v.(string)
+	}
+	if v, ok := d.GetOk("params"); ok {
+		params := make(map[string]interface{})
+		if err := json.NewDecoder(strings.NewReader(v.(string))).Decode(&params); err != nil {
+			return diag.FromErr(err)
+		}
+		processor.Params = params
+	}
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorScript{"script": processor}, "", "  ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_script_data_source_test.go b/internal/elasticsearch/ingest/processor_script_data_source_test.go
new file mode 100644
index 000000000..4a772f60d
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_script_data_source_test.go
@@ -0,0 +1,60 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorScript(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorScript,
+				Check: resource.ComposeTestCheckFunc(
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_script.test", "json", expectedJsonScript),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonScript = `{
+  "script": {
+    "description": "Extract 'tags' from 'env' field",
+    "ignore_failure": false,
+    "lang": "painless",
+    "params": {
+      "delimiter": "-",
+      "position": 1
+    },
+    "source": "String[] envSplit = ctx['env'].splitOnToken(params['delimiter']);\nArrayList tags = new ArrayList();\ntags.add(envSplit[params['position']].trim());\nctx['tags'] = tags;\n"
+  }
+}`
+
+const testAccDataSourceIngestProcessorScript = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_script" "test" {
+  description = "Extract 'tags' from 'env' field"
+  lang        = "painless"
+
+  source = <<EOF
+String[] envSplit = ctx['env'].splitOnToken(params['delimiter']);
+ArrayList tags = new ArrayList();
+tags.add(envSplit[params['position']].trim());
+ctx['tags'] = tags;
+EOF
+
+  params = jsonencode({
+    delimiter = "-"
+    position  = 1
+  })
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_uri_parts_data_source.go b/internal/elasticsearch/ingest/processor_uri_parts_data_source.go
new file mode 100644
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_uri_parts_data_source.go
+package ingest
+
+import (
+	"context"
+	"encoding/json"
+	"strings"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/models"
+	"github.com/elastic/terraform-provider-elasticstack/internal/utils"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+)
+
+func DataSourceProcessorUriParts() *schema.Resource {
+	processorSchema := map[string]*schema.Schema{
+		"id": {
+			Description: "Internal identifier of the resource.",
+			Type:        schema.TypeString,
+			Computed:    true,
+		},
+		"field": {
+			Description: "Field containing the URI string.",
+			Type:        schema.TypeString,
+			Required:    true,
+		},
+		"target_field": {
+			Description: "Output field for the URI object.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"keep_original": {
+			Description: "If `true`, the processor copies the unparsed URI to `<target_field>.original`.",
+			Type:        schema.TypeBool,
+			Optional:    true,
+			Default:     true,
+		},
+		"remove_if_successful": {
+			Description: "If `true`, the processor removes the `field` after parsing the URI string. If parsing fails, the processor does not remove the `field`.",
+			Type:        schema.TypeBool,
+			Optional:    true,
+			Default:     false,
+		},
+		"description": {
+			Description: "Description of the processor. ",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"if": {
+			Description: "Conditionally execute the processor",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"ignore_failure": {
+			Description: "Ignore failures for the processor. ",
+			Type:        schema.TypeBool,
+			Optional:    true,
+			Default:     false,
+		},
+		"on_failure": {
+			Description: "Handle failures for the processor.",
+			Type:        schema.TypeList,
+			Optional:    true,
+			MinItems:    1,
+			Elem: &schema.Schema{
+				Type:             schema.TypeString,
+				ValidateFunc:     validation.StringIsJSON,
+				DiffSuppressFunc: utils.DiffJsonSuppress,
+			},
+		},
+		"tag": {
+			Description: "Identifier for the processor.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"json": {
+			Description: "JSON representation of this data source.",
+			Type:        schema.TypeString,
+			Computed:    true,
+		},
+	}
+
+	return &schema.Resource{
+		Description: "Parses a Uniform Resource Identifier (URI) string and extracts its components as an object. See: https://www.elastic.co/guide/en/elasticsearch/reference/current/uri-parts-processor.html",
+
+		ReadContext: dataSourceProcessorUriPartsRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorUriPartsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorUriParts{}
+
+	processor.Field = d.Get("field").(string)
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+	processor.KeepOriginal = d.Get("keep_original").(bool)
+	processor.RemoveIfSuccessful = d.Get("remove_if_successful").(bool)
+
+	if v, ok := d.GetOk("target_field"); ok {
+		processor.TargetField = v.(string)
+	}
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorUriParts{"uri_parts": processor}, "", "  ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_uri_parts_data_source_test.go b/internal/elasticsearch/ingest/processor_uri_parts_data_source_test.go
new file mode 100644
index 000000000..3894e6521
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_uri_parts_data_source_test.go
@@ -0,0 +1,47 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorUriParts(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorUriParts,
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_uri_parts.test", "field", "input_field"),
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_uri_parts.test", "json", expectedJsonUriParts),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonUriParts = `{
+  "uri_parts": {
+    "field": "input_field",
+    "ignore_failure": false,
+    "keep_original": true,
+    "remove_if_successful": false,
+    "target_field": "url"
+  }
+}`
+
+const testAccDataSourceIngestProcessorUriParts = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_uri_parts" "test" {
+  field                = "input_field"
+  target_field         = "url"
+  keep_original        = true
+  remove_if_successful = false
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_urldecode_data_source.go b/internal/elasticsearch/ingest/processor_urldecode_data_source.go
new file mode 100644
index 000000000..0c903ce20
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_urldecode_data_source.go
@@ -0,0 +1,135 @@
+package ingest
+
+import (
+	"context"
+	"encoding/json"
+	"strings"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/models"
+	"github.com/elastic/terraform-provider-elasticstack/internal/utils"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+)
+
+func DataSourceProcessorUrldecode() *schema.Resource {
+	processorSchema := map[string]*schema.Schema{
+		"id": {
+			Description: "Internal identifier of the resource.",
+			Type:        schema.TypeString,
+			Computed:    true,
+		},
+		"field": {
+			Description: "The field to decode",
+			Type:        schema.TypeString,
+			Required:    true,
+		},
+		"target_field": {
+			Description: "The field to assign the converted value to, by default `field` is updated in-place.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"ignore_missing": {
+			Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.",
+			Type:        schema.TypeBool,
+			Optional:    true,
+			Default:     false,
+		},
+		"description": {
+			Description: "Description of the processor. ",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"if": {
+			Description: "Conditionally execute the processor",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"ignore_failure": {
+			Description: "Ignore failures for the processor. ",
+			Type:        schema.TypeBool,
+			Optional:    true,
+			Default:     false,
+		},
+		"on_failure": {
+			Description: "Handle failures for the processor.",
+			Type:        schema.TypeList,
+			Optional:    true,
+			MinItems:    1,
+			Elem: &schema.Schema{
+				Type:             schema.TypeString,
+				ValidateFunc:     validation.StringIsJSON,
+				DiffSuppressFunc: utils.DiffJsonSuppress,
+			},
+		},
+		"tag": {
+			Description: "Identifier for the processor.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"json": {
+			Description: "JSON representation of this data source.",
+			Type:        schema.TypeString,
+			Computed:    true,
+		},
+	}
+
+	return &schema.Resource{
+		Description: "URL-decodes a string. See: https://www.elastic.co/guide/en/elasticsearch/reference/current/urldecode-processor.html",
+
+		ReadContext: dataSourceProcessorUrldecodeRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorUrldecodeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorUrldecode{}
+
+	processor.Field = d.Get("field").(string)
+	processor.IgnoreFailure = d.Get("ignore_failure").(bool)
+	processor.IgnoreMissing = d.Get("ignore_missing").(bool)
+
+	if v, ok := d.GetOk("target_field"); ok {
+		processor.TargetField = v.(string)
+	}
+	if v, ok := d.GetOk("description"); ok {
+		processor.Description = v.(string)
+	}
+	if v, ok := d.GetOk("if"); ok {
+		processor.If = v.(string)
+	}
+	if v, ok := d.GetOk("tag"); ok {
+		processor.Tag = v.(string)
+	}
+	if v, ok := d.GetOk("on_failure"); ok {
+		onFailure := make([]map[string]interface{}, len(v.([]interface{})))
+		for i, f := range v.([]interface{}) {
+			item := make(map[string]interface{})
+			if err := json.NewDecoder(strings.NewReader(f.(string))).Decode(&item); err != nil {
+				return diag.FromErr(err)
+			}
+			onFailure[i] = item
+		}
+		processor.OnFailure = onFailure
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorUrldecode{"urldecode": processor}, "", "  ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_urldecode_data_source_test.go b/internal/elasticsearch/ingest/processor_urldecode_data_source_test.go
new file mode 100644
index 000000000..81436225e
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_urldecode_data_source_test.go
@@ -0,0 +1,42 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorUrldecode(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorUrldecode,
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_urldecode.test", "field", "my_url_to_decode"),
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_urldecode.test", "json", expectedJsonUrldecode),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonUrldecode = `{
+  "urldecode": {
+    "field": "my_url_to_decode",
+    "ignore_failure": false,
+    "ignore_missing": false
+  }
+}`
+
+const testAccDataSourceIngestProcessorUrldecode = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_urldecode" "test" {
+  field = "my_url_to_decode"
+}
+`
diff --git a/internal/elasticsearch/ingest/processor_user_agent_data_source.go b/internal/elasticsearch/ingest/processor_user_agent_data_source.go
new file mode 100644
index 000000000..70437312b
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_user_agent_data_source.go
@@ -0,0 +1,115 @@
+package ingest
+
+import (
+	"context"
+	"encoding/json"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/models"
+	"github.com/elastic/terraform-provider-elasticstack/internal/utils"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+)
+
+func DataSourceProcessorUserAgent() *schema.Resource {
+	processorSchema := map[string]*schema.Schema{
+		"id": {
+			Description: "Internal identifier of the resource.",
+			Type:        schema.TypeString,
+			Computed:    true,
+		},
+		"field": {
+			Description: "The field containing the user agent string.",
+			Type:        schema.TypeString,
+			Required:    true,
+		},
+		"target_field": {
+			Description: "The field that will be filled with the user agent details.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"regex_file": {
+			Description: "The name of the file in the `config/ingest-user-agent` directory containing the regular expressions for parsing the user agent string.",
+			Type:        schema.TypeString,
+			Optional:    true,
+		},
+		"properties": {
+			Description: "Controls what properties are added to `target_field`.",
+			Type:        schema.TypeSet,
+			Optional:    true,
+			MinItems:    1,
+			Elem: &schema.Schema{
+				Type: schema.TypeString,
+			},
+		},
+		"extract_device_type": {
+			Description: "Extracts device type from the user agent string on a best-effort basis. Supported only starting from Elasticsearch version **8.0**.",
+			Type:        schema.TypeBool,
+			Optional:    true,
+			Default:     false,
+		},
+		"ignore_missing": {
+			Description: "If `true` and `field` does not exist or is `null`, the processor quietly exits without modifying the document.",
+			Type:        schema.TypeBool,
+			Optional:    true,
+			Default:     false,
+		},
+		"json": {
+			Description: "JSON representation of this data source.",
+			Type:        schema.TypeString,
+			Computed:    true,
+		},
+	}
+
+	return &schema.Resource{
+		Description: "Extracts details from the user agent string a browser sends with its web requests. See: https://www.elastic.co/guide/en/elasticsearch/reference/current/user-agent-processor.html",
+
+		ReadContext: dataSourceProcessorUserAgentRead,
+
+		Schema: processorSchema,
+	}
+}
+
+func dataSourceProcessorUserAgentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	processor := &models.ProcessorUserAgent{}
+
+	processor.Field = d.Get("field").(string)
+	processor.IgnoreMissing = d.Get("ignore_missing").(bool)
+
+	if v, ok := d.GetOk("target_field"); ok {
+		processor.TargetField = v.(string)
+	}
+	if v, ok := d.GetOk("regex_file"); ok {
+		processor.RegexFile = v.(string)
+	}
+	if v, ok := d.GetOk("properties"); ok {
+		props := v.(*schema.Set)
+		properties := make([]string, props.Len())
+		for i, p := range props.List() {
+			properties[i] = p.(string)
+		}
+		processor.Properties = properties
+	}
+	if v, ok := d.GetOk("extract_device_type"); ok {
+		dev := v.(bool)
+		processor.ExtractDeviceType = &dev
+	}
+
+	processorJson, err := json.MarshalIndent(map[string]*models.ProcessorUserAgent{"user_agent": processor}, "", "  ")
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	if err := d.Set("json", string(processorJson)); err != nil {
+		return diag.FromErr(err)
+	}
+
+	hash, err := utils.StringToHash(string(processorJson))
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	d.SetId(*hash)
+
+	return diags
+}
diff --git a/internal/elasticsearch/ingest/processor_user_agent_data_source_test.go b/internal/elasticsearch/ingest/processor_user_agent_data_source_test.go
new file mode 100644
index 000000000..99f569975
--- /dev/null
+++ b/internal/elasticsearch/ingest/processor_user_agent_data_source_test.go
@@ -0,0 +1,41 @@
+package ingest_test
+
+import (
+	"testing"
+
+	"github.com/elastic/terraform-provider-elasticstack/internal/acctest"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccDataSourceIngestProcessorUserAgent(t *testing.T) {
+	resource.Test(t, resource.TestCase{
+		PreCheck:          func() { acctest.PreCheck(t) },
+		ProviderFactories: acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccDataSourceIngestProcessorUserAgent,
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr("data.elasticstack_elasticsearch_ingest_processor_user_agent.test", "field", "agent"),
+					CheckResourceJson("data.elasticstack_elasticsearch_ingest_processor_user_agent.test", "json", expectedJsonUserAgent),
+				),
+			},
+		},
+	})
+}
+
+const expectedJsonUserAgent = `{
+  "user_agent": {
+    "field": "agent",
+    "ignore_missing": false
+  }
+}`
+
+const testAccDataSourceIngestProcessorUserAgent = `
+provider "elasticstack" {
+  elasticsearch {}
+}
+
+data "elasticstack_elasticsearch_ingest_processor_user_agent" "test" {
+  field = "agent"
+}
+`
diff --git a/internal/elasticsearch/security/user_data_source.go b/internal/elasticsearch/security/user_data_source.go
index 76e8b5aae..24225de11 100644
--- a/internal/elasticsearch/security/user_data_source.go
+++ b/internal/elasticsearch/security/user_data_source.go
@@ -12,6 +12,11 @@ import (
 
 func DataSourceUser() *schema.Resource {
 	userSchema := map[string]*schema.Schema{
+		"id": {
+			Description: "Internal identifier of the resource",
+			Type:        schema.TypeString,
+			Computed:    true,
+		},
 		"username": {
 			Description: "An identifier for the user",
 			Type:        schema.TypeString,
diff --git a/internal/models/ingest.go b/internal/models/ingest.go
new file mode 100644
index 000000000..f11e8616d
--- /dev/null
+++ b/internal/models/ingest.go
@@ -0,0 +1,336 @@
+package models
+
+type IngestPipeline struct {
+	Name        string                   `json:"-"`
+	Description *string                  `json:"description,omitempty"`
+	OnFailure   []map[string]interface{} `json:"on_failure,omitempty"`
+	Processors  []map[string]interface{} `json:"processors"`
+	Metadata    map[string]interface{}   `json:"_meta,omitempty"`
+}
+
+type CommonProcessor struct {
+	Description   string                   `json:"description,omitempty"`
+	If            string                   `json:"if,omitempty"`
+	IgnoreFailure bool                     `json:"ignore_failure"`
+	OnFailure     []map[string]interface{} `json:"on_failure,omitempty"`
+	Tag           string                   `json:"tag,omitempty"`
+}
+
+type ProcessortFields struct {
+	Field         string `json:"field"`
+	TargetField   string `json:"target_field,omitempty"`
+	IgnoreMissing bool   `json:"ignore_missing"`
+}
+
+type ProcessorAppend struct {
+	CommonProcessor
+
+	Field           string   `json:"field"`
+	Value           []string `json:"value"`
+	AllowDuplicates bool     `json:"allow_duplicates"`
+	MediaType       string   `json:"media_type"`
+}
+
+type ProcessorBytes struct {
+	CommonProcessor
+	ProcessortFields
+}
+
+type ProcessorCircle struct {
+	CommonProcessor
+	ProcessortFields
+
+	ErrorDistance float64 `json:"error_distance"`
+	ShapeType     string  `json:"shape_type"`
+}
+
+type ProcessorCommunityId struct {
+	CommonProcessor
+
+	SourceIp        string `json:"source_ip,omitempty"`
+	SourcePort      *int   `json:"source_port,omitempty"`
+	DestinationIp   string `json:"destination_ip,omitempty"`
+	DestinationPort *int   `json:"destination_port,omitempty"`
+	IanaNumber      string `json:"iana_number,omitempty"`
+	IcmpType        *int   `json:"icmp_type,omitempty"`
+	IcmpCode        *int   `json:"icmp_code,omitempty"`
+	Transport       string `json:"transport,omitempty"`
+	TargetField     string `json:"target_field,omitempty"`
+	Seed            *int   `json:"seed"`
+	IgnoreMissing   bool   `json:"ignore_missing"`
+}
+
+type ProcessorConvert struct {
+	CommonProcessor
+	ProcessortFields
+
+	Type string `json:"type"`
+}
+
+type ProcessorCSV struct {
+	CommonProcessor
+
+	Field         string   `json:"field"`
+	TargetFields  []string `json:"target_fields"`
+	IgnoreMissing bool     `json:"ignore_missing"`
+	Separator     string   `json:"separator"`
+	Quote         string   `json:"quote"`
+	Trim          bool     `json:"trim"`
+	EmptyValue    string   `json:"empty_value,omitempty"`
+}
+
+type ProcessorDate struct {
+	CommonProcessor
+
+	Field        string   `json:"field"`
+	TargetField  string   `json:"target_field,omitempty"`
+	Formats      []string `json:"formats"`
+	Timezone     string   `json:"timezone,omitempty"`
+	Locale       string   `json:"locale,omitempty"`
+	OutputFormat string   `json:"output_format,omitempty"`
+}
+
+type ProcessorDateIndexName struct {
+	CommonProcessor
+
+	Field           string   `json:"field"`
+	IndexNamePrefix string   `json:"index_name_prefix,omitempty"`
+	DateRounding    string   `json:"date_rounding"`
+	DateFormats     []string `json:"date_formats,omitempty"`
+	Timezone        string   `json:"timezone,omitempty"`
+	Locale          string   `json:"locale,omitempty"`
+	IndexNameFormat string   `json:"index_name_format,omitempty"`
+}
+
+type ProcessorDissect struct {
+	CommonProcessor
+
+	Field           string `json:"field"`
+	Pattern         string `json:"pattern"`
+	AppendSeparator string `json:"append_separator"`
+	IgnoreMissing   bool   `json:"ignore_missing"`
+}
+
+type ProcessorDotExpander struct {
+	CommonProcessor
+
+	Field    string `json:"field"`
+	Path     string `json:"path,omitempty"`
+	Override bool   `json:"override"`
+}
+
+type ProcessorDrop struct {
+	CommonProcessor
+}
+
+type ProcessorEnrich struct {
+	CommonProcessor
+	ProcessortFields
+
+	PolicyName    string `json:"policy_name"`
+	Override      bool   `json:"override"`
+	MaxMatches    int    `json:"max_matches"`
+	ShapeRelation string `json:"shape_relation,omitempty"`
+}
+
+type ProcessorFail struct {
+	CommonProcessor
+
+	Message string `json:"message"`
+}
+
+type ProcessorFingerprint struct {
+	CommonProcessor
+
+	Fields        []string `json:"fields"`
+	TargetField   string   `json:"target_field,omitempty"`
+	IgnoreMissing bool     `json:"ignore_missing"`
+	Salt          string   `json:"salt,omitempty"`
+	Method        string   `json:"method,omitempty"`
+}
+
+type ProcessorForeach struct {
+	CommonProcessor
+
+	Field         string                 `json:"field"`
+	IgnoreMissing bool                   `json:"ignore_missing"`
+	Processor     map[string]interface{} `json:"processor"`
+}
+
+type ProcessorGeoip struct {
+	ProcessortFields
+
+	DatabaseFile string   `json:"database_file,omitempty"`
+	Properties   []string `json:"properties,omitempty"`
+	FirstOnly    bool     `json:"first_only"`
+}
+
+type ProcessorGrok struct {
+	CommonProcessor
+
+	Field              string            `json:"field"`
+	Patterns           []string          `json:"patterns"`
+	PatternDefinitions map[string]string `json:"pattern_definitions,omitempty"`
+	EcsCompatibility   string            `json:"ecs_compatibility,omitempty"`
+	TraceMatch         bool              `json:"trace_match"`
+	IgnoreMissing      bool              `json:"ignore_missing"`
+}
+
+type ProcessorGsub struct {
+	CommonProcessor
+	ProcessortFields
+
+	Pattern     string `json:"pattern"`
+	Replacement string `json:"replacement"`
+}
+
+type ProcessorHtmlStrip struct {
+	CommonProcessor
+	ProcessortFields
+}
+
+type ProcessorJoin struct {
+	CommonProcessor
+
+	Field       string `json:"field"`
+	Separator   string `json:"separator"`
+	TargetField string `json:"target_field,omitempty"`
+}
+
+type ProcessorJson struct {
+	CommonProcessor
+
+	Field                     string `json:"field"`
+	TargetField               string `json:"target_field,omitempty"`
+	AddToRoot                 *bool  `json:"add_to_root,omitempty"`
+	AddToRootConflictStrategy string `json:"add_to_root_conflict_strategy,omitempty"`
+	AllowDuplicateKeys        *bool  `json:"allow_duplicate_keys,omitempty"`
+}
+
+type ProcessorKV struct {
+	CommonProcessor
+	ProcessortFields
+
+	FieldSplit    string   `json:"field_split"`
+	ValueSplit    string   `json:"value_split"`
+	IncludeKeys   []string `json:"include_keys,omitempty"`
+	ExcludeKeys   []string `json:"exclude_keys,omitempty"`
+	Prefix        string   `json:"prefix,omitempty"`
+	TrimKey       string   `json:"trim_key,omitempty"`
+	TrimValue     string   `json:"trim_value,omitempty"`
+	StripBrackets bool     `json:"strip_brackets"`
+}
+
+type ProcessorLowercase struct {
+	CommonProcessor
+	ProcessortFields
+}
+
+type ProcessorNetworkDirection struct {
+	CommonProcessor
+
+	SourceIp              string   `json:"source_ip,omitempty"`
+	DestinationIp         string   `json:"destination_ip,omitempty"`
+	TargetField           string   `json:"target_field,omitempty"`
+	InternalNetworks      []string `json:"internal_networks,omitempty"`
+	InternalNetworksField string   `json:"internal_networks_field,omitempty"`
+	IgnoreMissing         bool     `json:"ignore_missing"`
+}
+
+type ProcessorPipeline struct {
+	CommonProcessor
+
+	Name string `json:"name"`
+}
+
+type ProcessorRegisteredDomain struct {
+	CommonProcessor
+	ProcessortFields
+}
+
+type ProcessorRemove struct {
+	CommonProcessor
+
+	Field         []string `json:"field"`
+	IgnoreMissing bool     `json:"ignore_missing"`
+}
+
+type ProcessorRename struct {
+	CommonProcessor
+	ProcessortFields
+}
+
+type ProcessorScript struct {
+	CommonProcessor
+
+	Lang     string                 `json:"lang,omitempty"`
+	ScriptId string                 `json:"id,omitempty"`
+	Source   string                 `json:"source,omitempty"`
+	Params   map[string]interface{} `json:"params,omitempty"`
+}
+
+type ProcessorSet struct {
+	CommonProcessor
+
+	Field            string `json:"field"`
+	Value            string `json:"value,omitempty"`
+	CopyFrom         string `json:"copy_from,omitempty"`
+	Override         bool   `json:"override"`
+	IgnoreEmptyValue bool   `json:"ignore_empty_value"`
+	MediaType        string `json:"media_type,omitempty"`
+}
+
+type ProcessorSetSecurityUser struct {
+	CommonProcessor
+
+	Field      string   `json:"field"`
+	Properties []string `json:"properties,omitempty"`
+}
+
+type ProcessorSort struct {
+	CommonProcessor
+
+	Field       string `json:"field"`
+	Order       string `json:"order,omitempty"`
+	TargetField string `json:"target_field,omitempty"`
+}
+
+type ProcessorSplit struct {
+	CommonProcessor
+	ProcessortFields
+
+	Separator        string `json:"separator"`
+	PreserveTrailing bool   `json:"preserve_trailing"`
+}
+
+type ProcessorTrim struct {
+	CommonProcessor
+	ProcessortFields
+}
+
+type ProcessorUppercase struct {
+	CommonProcessor
+	ProcessortFields
+}
+
+type ProcessorUrldecode struct {
+	CommonProcessor
+	ProcessortFields
+}
+
+type ProcessorUriParts struct {
+	CommonProcessor
+
+	Field              string `json:"field"`
+	TargetField        string `json:"target_field,omitempty"`
+	KeepOriginal       bool   `json:"keep_original"`
+	RemoveIfSuccessful bool   `json:"remove_if_successful"`
+}
+
+type ProcessorUserAgent struct {
+	ProcessortFields
+
+	RegexFile         string   `json:"regex_file,omitempty"`
+	Properties        []string `json:"properties,omitempty"`
+	ExtractDeviceType *bool    `json:"extract_device_type,omitempty"`
+}
diff --git a/internal/models/models.go b/internal/models/models.go
index 2ad83ca95..0a5d2bf8e 100644
--- a/internal/models/models.go
+++ b/internal/models/models.go
@@ -172,11 +172,3 @@ type DataStreamIndex struct {
 type TimestampField struct {
 	Name string `json:"name"`
 }
-
-type IngestPipeline struct {
-	Name        string                   `json:"-"`
-	Description *string                  `json:"description,omitempty"`
-	OnFailure   []map[string]interface{} `json:"on_failure,omitempty"`
-	Processors  []map[string]interface{} `json:"processors"`
-	Metadata    map[string]interface{}   `json:"_meta,omitempty"`
-}
diff --git a/internal/provider/provider.go b/internal/provider/provider.go
index b693f4165..69b27723a 100644
--- a/internal/provider/provider.go
+++ b/internal/provider/provider.go
@@ -65,8 +65,46 @@ func New(version string) func() *schema.Provider {
 				},
 			},
 			DataSourcesMap: map[string]*schema.Resource{
-				"elasticstack_elasticsearch_security_user":       security.DataSourceUser(),
-				"elasticstack_elasticsearch_snapshot_repository": cluster.DataSourceSnapshotRespository(),
+				"elasticstack_elasticsearch_ingest_processor_append":            ingest.DataSourceProcessorAppend(),
+				"elasticstack_elasticsearch_ingest_processor_bytes":             ingest.DataSourceProcessorBytes(),
+				"elasticstack_elasticsearch_ingest_processor_circle":            ingest.DataSourceProcessorCircle(),
+				"elasticstack_elasticsearch_ingest_processor_community_id":      ingest.DataSourceProcessorCommunityId(),
+				"elasticstack_elasticsearch_ingest_processor_convert":           ingest.DataSourceProcessorConvert(),
+				"elasticstack_elasticsearch_ingest_processor_csv":               ingest.DataSourceProcessorCSV(),
+				"elasticstack_elasticsearch_ingest_processor_date":              ingest.DataSourceProcessorDate(),
+				"elasticstack_elasticsearch_ingest_processor_date_index_name":   ingest.DataSourceProcessorDateIndexName(),
+				"elasticstack_elasticsearch_ingest_processor_dissect":           ingest.DataSourceProcessorDissect(),
+				"elasticstack_elasticsearch_ingest_processor_dot_expander":      ingest.DataSourceProcessorDotExpander(),
+				"elasticstack_elasticsearch_ingest_processor_drop":              ingest.DataSourceProcessorDrop(),
+				"elasticstack_elasticsearch_ingest_processor_enrich":            ingest.DataSourceProcessorEnrich(),
+				"elasticstack_elasticsearch_ingest_processor_fail":              ingest.DataSourceProcessorFail(),
+				"elasticstack_elasticsearch_ingest_processor_fingerprint":       ingest.DataSourceProcessorFingerprint(),
+				"elasticstack_elasticsearch_ingest_processor_foreach":           ingest.DataSourceProcessorForeach(),
+				"elasticstack_elasticsearch_ingest_processor_geoip":             ingest.DataSourceProcessorGeoip(),
+				"elasticstack_elasticsearch_ingest_processor_grok":              ingest.DataSourceProcessorGrok(),
+				"elasticstack_elasticsearch_ingest_processor_gsub":              ingest.DataSourceProcessorGsub(),
+				"elasticstack_elasticsearch_ingest_processor_html_strip":        ingest.DataSourceProcessorHtmlStrip(),
+				"elasticstack_elasticsearch_ingest_processor_join":              ingest.DataSourceProcessorJoin(),
+				"elasticstack_elasticsearch_ingest_processor_json":              ingest.DataSourceProcessorJson(),
+				"elasticstack_elasticsearch_ingest_processor_kv":                ingest.DataSourceProcessorKV(),
+				"elasticstack_elasticsearch_ingest_processor_lowercase":         ingest.DataSourceProcessorLowercase(),
+				"elasticstack_elasticsearch_ingest_processor_network_direction": ingest.DataSourceProcessorNetworkDirection(),
+				"elasticstack_elasticsearch_ingest_processor_pipeline":          ingest.DataSourceProcessorPipeline(),
+				"elasticstack_elasticsearch_ingest_processor_registered_domain": ingest.DataSourceProcessorRegisteredDomain(),
+				"elasticstack_elasticsearch_ingest_processor_remove":            ingest.DataSourceProcessorRemove(),
+				"elasticstack_elasticsearch_ingest_processor_rename":            ingest.DataSourceProcessorRename(),
+				"elasticstack_elasticsearch_ingest_processor_script":            ingest.DataSourceProcessorScript(),
+				"elasticstack_elasticsearch_ingest_processor_set":               ingest.DataSourceProcessorSet(),
+				"elasticstack_elasticsearch_ingest_processor_set_security_user": ingest.DataSourceProcessorSetSecurityUser(),
+				"elasticstack_elasticsearch_ingest_processor_sort":              ingest.DataSourceProcessorSort(),
+				"elasticstack_elasticsearch_ingest_processor_split":             ingest.DataSourceProcessorSplit(),
+				"elasticstack_elasticsearch_ingest_processor_trim":              ingest.DataSourceProcessorTrim(),
+				"elasticstack_elasticsearch_ingest_processor_uppercase":         ingest.DataSourceProcessorUppercase(),
+				"elasticstack_elasticsearch_ingest_processor_urldecode":         ingest.DataSourceProcessorUrldecode(),
+				"elasticstack_elasticsearch_ingest_processor_uri_parts":         ingest.DataSourceProcessorUriParts(),
+				"elasticstack_elasticsearch_ingest_processor_user_agent":        ingest.DataSourceProcessorUserAgent(),
+				"elasticstack_elasticsearch_security_user":                      security.DataSourceUser(),
+				"elasticstack_elasticsearch_snapshot_repository":                cluster.DataSourceSnapshotRespository(),
 			},
 			ResourcesMap: map[string]*schema.Resource{
 				"elasticstack_elasticsearch_cluster_settings": cluster.ResourceSettings(),
diff --git a/internal/utils/utils.go b/internal/utils/utils.go
index c209d3f81..80cddd80e 100644
--- a/internal/utils/utils.go
+++ b/internal/utils/utils.go
@@ -1,6 +1,7 @@
 package utils
 
 import (
+	"crypto/sha1"
 	"encoding/json"
 	"fmt"
 	"io"
@@ -160,3 +161,16 @@ func AddConnectionSchema(providedSchema map[string]*schema.Schema) {
 		},
 	}
 }
+
+// StringToHash returns the hex-encoded SHA-1 digest of s. The digest is used as a
+// stable identifier for the generated processor JSON, not for security purposes.
+func StringToHash(s string) (*string, error) {
+	h := sha1.New()
+	_, err := h.Write([]byte(s))
+	if err != nil {
+		return nil, err
+	}
+	bs := h.Sum(nil)
+	hash := fmt.Sprintf("%x", bs)
+	return &hash, nil
+}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_append.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_append.md.tmpl
new file mode 100644
index 000000000..1d9572505
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_append.md.tmpl
@@ -0,0 +1,21 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_append Data Source"
+description: |-
+  Helper data source to create a processor which appends one or more values to an existing array if the field already exists and it is an array.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_append
+
+Helper data source which can be used to create a processor that appends one or more values to an existing array if the field already exists and it is an array.
+Converts a scalar to an array and appends one or more values to it if the field exists and it is a scalar. Creates an array containing the provided values if the field doesn’t exist.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/append-processor.html
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_append/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
+
diff --git a/templates/data-sources/elasticsearch_ingest_processor_bytes.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_bytes.md.tmpl
new file mode 100644
index 000000000..ef26a59e5
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_bytes.md.tmpl
@@ -0,0 +1,22 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_bytes Data Source"
+description: |-
+  Helper data source to create a processor which converts a human readable byte value (e.g. 1kb) to its value in bytes (e.g. 1024).
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_bytes
+
+Helper data source which can be used to create a processor that converts a human readable byte value (e.g. 1kb) to its value in bytes (e.g. 1024). If the field is an array of strings, all members of the array will be converted.
+
+Supported human readable units are "b", "kb", "mb", "gb", "tb", "pb", case insensitive. An error will occur if the field is not in a supported format or the resultant value exceeds 2^63.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/bytes-processor.html
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_bytes/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
+
diff --git a/templates/data-sources/elasticsearch_ingest_processor_circle.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_circle.md.tmpl
new file mode 100644
index 000000000..4f3320884
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_circle.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_circle Data Source"
+description: |-
+  Helper data source to create a processor which converts circle definitions of shapes to regular polygons which approximate them.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_circle
+
+Helper data source which can be used to create a processor that converts circle definitions of shapes to regular polygons which approximate them.
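For illustration, a minimal pipeline using this helper might look like the sketch below. The attribute names `error_distance` and `shape_type` are taken from the `ProcessorCircle` model's JSON tags rather than from generated documentation, so treat them as assumptions:

```terraform
# Approximate a circle with an error distance of 28 metres for a geo_shape field.
data "elasticstack_elasticsearch_ingest_processor_circle" "circle" {
  field          = "circle"
  error_distance = 28.0
  shape_type     = "geo_shape"
}

resource "elasticstack_elasticsearch_ingest_pipeline" "geo" {
  name = "polygonize-circles"

  processors = [
    data.elasticstack_elasticsearch_ingest_processor_circle.circle.json
  ]
}
```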
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-circle-processor.html
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_circle/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
+
diff --git a/templates/data-sources/elasticsearch_ingest_processor_community_id.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_community_id.md.tmpl
new file mode 100644
index 000000000..cfde53513
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_community_id.md.tmpl
@@ -0,0 +1,23 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_community_id Data Source"
+description: |-
+  Helper data source to create a processor which computes the Community ID for network flow data as defined in the Community ID Specification.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_community_id
+
+Helper data source which can be used to create a processor that computes the Community ID for network flow data as defined in the [Community ID Specification](https://github.com/corelight/community-id-spec).
+You can use a community ID to correlate network events related to a single flow.
+
+The community ID processor reads network flow data from related [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/1.12) fields by default. If you use the ECS, no configuration is required.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/community-id-processor.html
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_community_id/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
+
diff --git a/templates/data-sources/elasticsearch_ingest_processor_convert.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_convert.md.tmpl
new file mode 100644
index 000000000..d7cadc6bd
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_convert.md.tmpl
@@ -0,0 +1,28 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_convert Data Source"
+description: |-
+  Helper data source to create a processor which converts a field in the currently ingested document to a different type, such as converting a string to an integer.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_convert
+
+Helper data source which can be used to create a processor that converts a field in the currently ingested document to a different type, such as converting a string to an integer. If the field value is an array, all members will be converted.
+
+The supported types include: `integer`, `long`, `float`, `double`, `string`, `boolean`, `ip`, and `auto`.
+
+Specifying `boolean` will set the field to true if its string value is equal to true (ignore case), to false if its string value is equal to false (ignore case), or it will throw an exception otherwise.
+
+Specifying `ip` will set the target field to the value of `field` if it contains a valid IPv4 or IPv6 address that can be indexed into an IP field type.
+
+Specifying `auto` will attempt to convert the string-valued `field` into the closest non-string, non-IP type. For example, a field whose value is "true" will be converted to its respective boolean type: true. Note that `float` takes precedence over `double` in `auto` mode. A value of "242.15" will "automatically" be converted to 242.15 of type `float`. If a provided field cannot be appropriately converted, the processor will still process successfully and leave the field value as-is. In such a case, `target_field` will be updated with the unconverted field value.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/convert-processor.html
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_convert/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
+
diff --git a/templates/data-sources/elasticsearch_ingest_processor_csv.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_csv.md.tmpl
new file mode 100644
index 000000000..19cb5536b
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_csv.md.tmpl
@@ -0,0 +1,22 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_csv Data Source"
+description: |-
+  Helper data source to create a processor which extracts fields from a CSV line out of a single text field within a document.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_csv
+
+Helper data source which can be used to extract fields from a CSV line out of a single text field within a document. Any empty field in CSV will be skipped.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/csv-processor.html
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_csv/data-source.tf" }}
+
+If the `trim` option is enabled, any whitespace at the beginning and end of each unquoted field will be trimmed. For example, with the configuration above and `trim` disabled, a value of `A, B` will result in field `field2` having the value ` B` (with a space at the beginning). With `trim` enabled, `A, B` will result in field `field2` having the value `B` (no whitespace). Quoted fields will be left untouched.
+
+{{ .SchemaMarkdown | trimspace }}
+
diff --git a/templates/data-sources/elasticsearch_ingest_processor_date.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_date.md.tmpl
new file mode 100644
index 000000000..6ce419f61
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_date.md.tmpl
@@ -0,0 +1,22 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_date Data Source"
+description: |-
+  Helper data source to create a processor which parses dates from fields, and then uses the date or timestamp as the timestamp for the document.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_date
+
+Helper data source which can be used to create a processor that parses dates from fields and then uses the date or timestamp as the timestamp for the document.
+By default, the date processor adds the parsed date as a new field called `@timestamp`. You can specify a different field by setting the `target_field` configuration parameter. Multiple date formats are supported as part of the same date processor definition. They will be used sequentially to attempt parsing the date field, in the same order they were defined as part of the processor definition.
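As a sketch of how this helper composes into a pipeline; the attribute names `formats`, `target_field`, and `timezone` are taken from the `ProcessorDate` model's JSON tags, so treat them as assumptions:

```terraform
# Parse initial_date and write the result to the timestamp field.
data "elasticstack_elasticsearch_ingest_processor_date" "initial" {
  field        = "initial_date"
  target_field = "timestamp"
  formats      = ["dd/MM/yyyy HH:mm:ss"]
  timezone     = "Europe/Amsterdam"
}

resource "elasticstack_elasticsearch_ingest_pipeline" "dates" {
  name = "date-ingest"

  processors = [
    data.elasticstack_elasticsearch_ingest_processor_date.initial.json
  ]
}
```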
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/date-processor.html
+
+## Example Usage
+
+Here is an example that adds the parsed date to the `timestamp` field based on the `initial_date` field:
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_date/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_date_index_name.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_date_index_name.md.tmpl
new file mode 100644
index 000000000..84b9e7529
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_date_index_name.md.tmpl
@@ -0,0 +1,23 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_date_index_name Data Source"
+description: |-
+  Helper data source to create a processor which helps to point documents to the right time-based index based on a date or timestamp field in a document by using the date math index name support.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_date_index_name
+
+The purpose of this processor is to point documents to the right time-based index based on a date or timestamp field in a document by using the date math index name support.
+
+The processor sets the `_index` metadata field with a date math index name expression based on the provided index name prefix, a date or timestamp field in the documents being processed and the provided date rounding.
+
+First, this processor fetches the date or timestamp from a field in the document being processed. Optionally, date formatting can be configured on how the field’s value should be parsed into a date. Then this date, the provided index name prefix and the provided date rounding get formatted into a date math index name expression. Optionally, date formatting can also be specified for how the date should be formatted into the date math index name expression.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/date-index-name-processor.html
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_date_index_name/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_dissect.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_dissect.md.tmpl
new file mode 100644
index 000000000..d5e78b1fc
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_dissect.md.tmpl
@@ -0,0 +1,22 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_dissect Data Source"
+description: |-
+  Helper data source to create a processor which extracts structured fields out of a single text field within a document.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_dissect
+
+Similar to the Grok Processor, dissect also extracts structured fields out of a single text field within a document. However, unlike the Grok Processor, dissect does not use regular expressions. This keeps dissect’s syntax simple and, in some cases, faster than the Grok Processor.
+
+Dissect matches a single text field against a defined pattern.
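A minimal sketch, assuming the `pattern` attribute mirrors the `ProcessorDissect` model's JSON tag. Note that `%{` starts a template directive in Terraform strings, so dissect keys must be escaped as `%%{...}`:

```terraform
data "elasticstack_elasticsearch_ingest_processor_dissect" "log" {
  field = "message"
  # Dissect keys such as %{clientip} are written as %%{...} so Terraform
  # does not interpret them as template directives.
  pattern = "%%{clientip} %%{ident} %%{auth} [%%{@timestamp}] \"%%{verb} %%{request} HTTP/%%{httpversion}\" %%{status} %%{size}"
}
```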
+
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/dissect-processor.html
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_dissect/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_dot_expander.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_dot_expander.md.tmpl
new file mode 100644
index 000000000..09197e378
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_dot_expander.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_dot_expander Data Source"
+description: |-
+  Helper data source to create a processor which expands a field with dots into an object field.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_dot_expander
+
+Expands a field with dots into an object field. This processor allows fields with dots in the name to be accessible by other processors in the pipeline. Otherwise these fields can’t be accessed by any processor.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/dot-expand-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_dot_expander/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_drop.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_drop.md.tmpl
new file mode 100644
index 000000000..9902e4d70
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_drop.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_drop Data Source"
+description: |-
+  Helper data source to create a processor which drops the document without raising any errors.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_drop
+
+Drops the document without raising any errors. This is useful to prevent the document from getting indexed based on some condition.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/drop-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_drop/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_enrich.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_enrich.md.tmpl
new file mode 100644
index 000000000..c31c70eec
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_enrich.md.tmpl
@@ -0,0 +1,19 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_enrich Data Source"
+description: |-
+  Helper data source to create a processor which enriches documents with data from another index.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_enrich
+
+The enrich processor can enrich documents with data from another index. See the enrich data section for more information about how to set this up.
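A minimal sketch, assuming an enrich policy named `users-policy` already exists in the cluster and that the attribute names follow the `ProcessorEnrich` model's JSON tags:

```terraform
# Look up the incoming email address against the pre-built users-policy
# enrich index and copy the matching document into the user field.
data "elasticstack_elasticsearch_ingest_processor_enrich" "user" {
  policy_name  = "users-policy"
  field        = "email"
  target_field = "user"
  max_matches  = 1
}
```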
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-enriching-data.html and https://www.elastic.co/guide/en/elasticsearch/reference/current/enrich-processor.html
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_enrich/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_fail.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_fail.md.tmpl
new file mode 100644
index 000000000..f4f54453c
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_fail.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_fail Data Source"
+description: |-
+  Helper data source to create a processor which raises an exception.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_fail
+
+Raises an exception. This is useful when you expect a pipeline to fail and want to relay a specific message to the requester.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/fail-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_fail/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_fingerprint.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_fingerprint.md.tmpl
new file mode 100644
index 000000000..ec74d337b
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_fingerprint.md.tmpl
@@ -0,0 +1,19 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_fingerprint Data Source"
+description: |-
+  Helper data source to create a processor which computes a hash of the document’s content.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_fingerprint
+
+Computes a hash of the document’s content. You can use this hash for content fingerprinting.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/fingerprint-processor.html
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_fingerprint/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_foreach.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_foreach.md.tmpl
new file mode 100644
index 000000000..6777bf275
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_foreach.md.tmpl
@@ -0,0 +1,34 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_foreach Data Source"
+description: |-
+  Helper data source to create a processor which runs an ingest processor on each element of an array or object.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_foreach
+
+Runs an ingest processor on each element of an array or object.
+
+All ingest processors can run on array or object elements. However, if the number of elements is unknown, it can be cumbersome to process each one in the same way.
+
+The `foreach` processor lets you specify a `field` containing array or object values and a `processor` to run on each element in the field.
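Because each helper exposes its rendered JSON, a `foreach` definition can embed another processor directly. A sketch, assuming the `processor` argument accepts the `json` output of another processor data source:

```terraform
# Uppercase every element of the tags array; inside foreach the current
# element is addressed as _ingest._value.
data "elasticstack_elasticsearch_ingest_processor_uppercase" "tag" {
  field = "_ingest._value"
}

data "elasticstack_elasticsearch_ingest_processor_foreach" "tags" {
  field     = "tags"
  processor = data.elasticstack_elasticsearch_ingest_processor_uppercase.tag.json
}
```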
+ +See: https://www.elastic.co/guide/en/elasticsearch/reference/current/foreach-processor.html + + +### Access keys and values + +When iterating through an array or object, the foreach processor stores the current element’s value in the `_ingest._value` ingest metadata field. `_ingest._value` contains the entire element value, including any child fields. You can access child field values using dot notation on the `_ingest._value` field. + +When iterating through an object, the foreach processor also stores the current element’s key as a string in `_ingest._key`. + +You can access and change `_ingest._key` and `_ingest._value` in the processor. + + + +## Example Usage + +{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_foreach/data-source.tf" }} + +{{ .SchemaMarkdown | trimspace }} diff --git a/templates/data-sources/elasticsearch_ingest_processor_geoip.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_geoip.md.tmpl new file mode 100644 index 000000000..56299df05 --- /dev/null +++ b/templates/data-sources/elasticsearch_ingest_processor_geoip.md.tmpl @@ -0,0 +1,27 @@ +--- +subcategory: "Ingest" +layout: "" +page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_geoip Data Source" +description: |- + Helper data source to create a processor which adds information about the geographical location of an IPv4 or IPv6 address. +--- + +# Data Source: elasticstack_elasticsearch_ingest_processor_geoip + +The geoip processor adds information about the geographical location of an IPv4 or IPv6 address. + +By default, the processor uses the GeoLite2 City, GeoLite2 Country, and GeoLite2 ASN GeoIP2 databases from MaxMind, shared under the CC BY-SA 4.0 license. Elasticsearch automatically downloads updates for these databases from the Elastic GeoIP endpoint: https://geoip.elastic.co/v1/database. To get download statistics for these updates, use the GeoIP stats API. + +If your cluster can’t connect to the Elastic GeoIP endpoint or you want to manage your own updates, [see Manage your own GeoIP2 database updates](https://www.elastic.co/guide/en/elasticsearch/reference/current/geoip-processor.html#manage-geoip-database-updates). + +If Elasticsearch can’t connect to the endpoint for 30 days all updated databases will become invalid. Elasticsearch will stop enriching documents with geoip data and will add tags: ["_geoip_expired_database"] field instead. + + +See: https://www.elastic.co/guide/en/elasticsearch/reference/current/geoip-processor.html + + +## Example Usage + +{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_geoip/data-source.tf" }} + +{{ .SchemaMarkdown | trimspace }} diff --git a/templates/data-sources/elasticsearch_ingest_processor_grok.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_grok.md.tmpl new file mode 100644 index 000000000..1f6aa62be --- /dev/null +++ b/templates/data-sources/elasticsearch_ingest_processor_grok.md.tmpl @@ -0,0 +1,25 @@ +--- +subcategory: "Ingest" +layout: "" +page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_grok Data Source" +description: |- + Helper data source to create a processor which extracts structured fields out of a single text field within a document. +--- + +# Data Source: elasticstack_elasticsearch_ingest_processor_grok + +Extracts structured fields out of a single text field within a document. You choose which field to extract matched fields from, as well as the grok pattern you expect will match. 
+
+This processor comes packaged with many [reusable patterns](https://github.com/elastic/elasticsearch/blob/master/libs/grok/src/main/resources/patterns).
+
+If you need help building patterns to match your logs, you will find the [Grok Debugger](https://www.elastic.co/guide/en/kibana/master/xpack-grokdebugger.html) tool quite useful! [The Grok Constructor](https://grokconstructor.appspot.com/) is also a useful tool.
+
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/grok-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_grok/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_gsub.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_gsub.md.tmpl
new file mode 100644
index 000000000..cd3ee2af2
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_gsub.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_gsub Data Source"
+description: |-
+  Helper data source to create a processor which converts a string field by applying a regular expression and a replacement.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_gsub
+
+Converts a string field by applying a regular expression and a replacement. If the field is an array of strings, all members of the array will be converted. If any non-string values are encountered, the processor will throw an exception.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/gsub-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_gsub/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_html_strip.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_html_strip.md.tmpl
new file mode 100644
index 000000000..ab8264261
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_html_strip.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_html_strip Data Source"
+description: |-
+  Helper data source to create a processor which removes HTML tags from the field.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_html_strip
+
+Removes HTML tags from the field. If the field is an array of strings, HTML tags will be removed from all members of the array.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/htmlstrip-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_html_strip/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_join.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_join.md.tmpl
new file mode 100644
index 000000000..84e2efe54
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_join.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_join Data Source"
+description: |-
+  Helper data source to create a processor which joins each element of an array into a single string using a separator character between each element.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_join
+
+Joins each element of an array into a single string using a separator character between each element. Throws an error when the field is not an array.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/join-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_join/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_json.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_json.md.tmpl
new file mode 100644
index 000000000..7ad37aadb
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_json.md.tmpl
@@ -0,0 +1,19 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_json Data Source"
+description: |-
+  Helper data source to create a processor which converts a JSON string into a structured JSON object.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_json
+
+Converts a JSON string into a structured JSON object.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/json-processor.html
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_json/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_kv.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_kv.md.tmpl
new file mode 100644
index 000000000..edc9c83af
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_kv.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_kv Data Source"
+description: |-
+  Helper data source to create a processor which helps automatically parse messages (or specific event fields) which are of the `foo=bar` variety.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_kv
+
+This processor helps automatically parse messages (or specific event fields) which are of the `foo=bar` variety.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/kv-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_kv/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_lowercase.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_lowercase.md.tmpl
new file mode 100644
index 000000000..577494137
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_lowercase.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_lowercase Data Source"
+description: |-
+  Helper data source to create a processor which converts a string to its lowercase equivalent.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_lowercase
+
+Converts a string to its lowercase equivalent. If the field is an array of strings, all members of the array will be converted.
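+
+For illustration, a minimal sketch of wiring this data source into a pipeline (the `user.keyword` field and pipeline name are hypothetical):
+
+```terraform
+# Lowercase a single field and feed the processor into a pipeline.
+data "elasticstack_elasticsearch_ingest_processor_lowercase" "keyword" {
+  field = "user.keyword"
+}
+
+resource "elasticstack_elasticsearch_ingest_pipeline" "lowercase" {
+  name       = "lowercase-ingest"
+  processors = [data.elasticstack_elasticsearch_ingest_processor_lowercase.keyword.json]
+}
+```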
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/lowercase-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_lowercase/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_network_direction.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_network_direction.md.tmpl
new file mode 100644
index 000000000..457cb8ca9
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_network_direction.md.tmpl
@@ -0,0 +1,40 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_network_direction Data Source"
+description: |-
+  Helper data source to create a processor which calculates the network direction given a source IP address, destination IP address, and a list of internal networks.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_network_direction
+
+Calculates the network direction given a source IP address, destination IP address, and a list of internal networks.
+
+The network direction processor reads IP addresses from Elastic Common Schema (ECS) fields by default. If you use the ECS, only the `internal_networks` option must be specified.
+
+
+One of either `internal_networks` or `internal_networks_field` must be specified. If `internal_networks_field` is specified, it follows the behavior specified by `ignore_missing`.
+
+### Supported named network ranges
+
+The named ranges supported for the `internal_networks` option are:
+
+* `loopback` - Matches loopback addresses in the range of 127.0.0.0/8 or ::1/128.
+* `unicast` or `global_unicast` - Matches global unicast addresses defined in RFC 1122, RFC 4632, and RFC 4291 with the exception of the IPv4 broadcast address (255.255.255.255). This includes private address ranges.
+* `multicast` - Matches multicast addresses.
+* `interface_local_multicast` - Matches IPv6 interface-local multicast addresses.
+* `link_local_unicast` - Matches link-local unicast addresses.
+* `link_local_multicast` - Matches link-local multicast addresses.
+* `private` - Matches private address ranges defined in RFC 1918 (IPv4) and RFC 4193 (IPv6).
+* `public` - Matches addresses that are not loopback, unspecified, IPv4 broadcast, link local unicast, link local multicast, interface local multicast, or private.
+* `unspecified` - Matches unspecified addresses (either the IPv4 address "0.0.0.0" or the IPv6 address "::").
+
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/network-direction-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_network_direction/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_pipeline.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_pipeline.md.tmpl
new file mode 100644
index 000000000..b9b9115c6
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_pipeline.md.tmpl
@@ -0,0 +1,22 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_pipeline Data Source"
+description: |-
+  Helper data source to create a processor which executes another pipeline.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_pipeline
+
+Executes another pipeline.
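+
+For illustration, a minimal sketch (the target pipeline name `common-enrichment` is hypothetical, and the `name` attribute is assumed to mirror the processor's option):
+
+```terraform
+# Delegate to a shared pipeline from within another pipeline.
+data "elasticstack_elasticsearch_ingest_processor_pipeline" "common" {
+  name = "common-enrichment"
+}
+```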
+
+The name of the current pipeline can be accessed from the `_ingest.pipeline` ingest metadata key.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/pipeline-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_pipeline/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_registered_domain.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_registered_domain.md.tmpl
new file mode 100644
index 000000000..976397938
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_registered_domain.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_registered_domain Data Source"
+description: |-
+  Helper data source to create a processor which extracts the registered domain, sub-domain, and top-level domain from a fully qualified domain name.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_registered_domain
+
+Extracts the registered domain (also known as the effective top-level domain or eTLD), sub-domain, and top-level domain from a fully qualified domain name (FQDN). Uses the registered domains defined in the Mozilla Public Suffix List.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/registered-domain-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_registered_domain/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_remove.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_remove.md.tmpl
new file mode 100644
index 000000000..74a76d11c
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_remove.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_remove Data Source"
+description: |-
+  Helper data source to create a processor which removes existing fields.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_remove
+
+Removes existing fields. If one field doesn’t exist, an exception will be thrown.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/remove-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_remove/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_rename.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_rename.md.tmpl
new file mode 100644
index 000000000..207c5482f
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_rename.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_rename Data Source"
+description: |-
+  Helper data source to create a processor which renames an existing field.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_rename
+
+Renames an existing field. If the field doesn’t exist or the new name is already used, an exception will be thrown.
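+
+For illustration, a minimal sketch (the field names are hypothetical):
+
+```terraform
+# Rename "provider" to the ECS-style "cloud.provider".
+data "elasticstack_elasticsearch_ingest_processor_rename" "provider" {
+  field        = "provider"
+  target_field = "cloud.provider"
+}
+```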
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/rename-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_rename/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_script.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_script.md.tmpl
new file mode 100644
index 000000000..785ea671b
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_script.md.tmpl
@@ -0,0 +1,30 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_script Data Source"
+description: |-
+  Helper data source to create a processor which runs an inline or stored script on incoming documents.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_script
+
+Runs an inline or stored script on incoming documents. The script runs in the ingest context.
+
+The script processor uses the script cache to avoid recompiling the script for each incoming document. To improve performance, ensure the script cache is properly sized before using a script processor in production.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/script-processor.html
+
+### Access source fields
+
+The script processor parses each incoming document’s JSON source fields into a set of maps, lists, and primitives. To access these fields with a Painless script, use the map access operator: `ctx['my-field']`. You can also use the shorthand `ctx.<my-field>` syntax.
+
+### Access metadata fields
+
+You can also use a script processor to access metadata fields.
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_script/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_set.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_set.md.tmpl
new file mode 100644
index 000000000..55c921ee4
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_set.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_set Data Source"
+description: |-
+  Helper data source to create a processor which sets one field and associates it with the specified value.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_set
+
+Sets one field and associates it with the specified value. If the field already exists, its value will be replaced with the provided one.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/set-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_set/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_set_security_user.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_set_security_user.md.tmpl
new file mode 100644
index 000000000..a77bcd208
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_set_security_user.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_set_security_user Data Source"
+description: |-
+  Helper data source to create a processor which sets user-related details from the current authenticated user to the current document by pre-processing the ingest.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_set_security_user
+
+Sets user-related details (such as `username`, `roles`, `email`, `full_name`, `metadata`, `api_key`, `realm` and `authentication_type`) from the current authenticated user to the current document by pre-processing the ingest. The `api_key` property exists only if the user authenticates with an API key. It is an object containing the `id`, `name` and `metadata` (if it exists and is non-empty) fields of the API key. The `realm` property is also an object with two fields, `name` and `type`. When using API key authentication, the `realm` property refers to the realm from which the API key is created. The `authentication_type` property is a string that can take the value `REALM`, `API_KEY`, `TOKEN` or `ANONYMOUS`.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-node-set-security-user-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_set_security_user/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_sort.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_sort.md.tmpl
new file mode 100644
index 000000000..de6f37a05
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_sort.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_sort Data Source"
+description: |-
+  Helper data source to create a processor which sorts the elements of an array ascending or descending.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_sort
+
+Sorts the elements of an array ascending or descending. Homogeneous arrays of numbers will be sorted numerically, while arrays of strings or heterogeneous arrays of strings + numbers will be sorted lexicographically. Throws an error when the field is not an array.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/sort-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_sort/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_split.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_split.md.tmpl
new file mode 100644
index 000000000..ed7e3764b
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_split.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_split Data Source"
+description: |-
+  Helper data source to create a processor which splits a field into an array using a separator character.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_split
+
+Splits a field into an array using a separator character. Only works on string fields.
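+
+For illustration, a minimal sketch (the field name and comma separator are hypothetical):
+
+```terraform
+# Split a comma-separated string field into an array.
+data "elasticstack_elasticsearch_ingest_processor_split" "values" {
+  field     = "my_field"
+  separator = ","
+}
+```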
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/split-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_split/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_trim.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_trim.md.tmpl
new file mode 100644
index 000000000..1c1222aa0
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_trim.md.tmpl
@@ -0,0 +1,22 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_trim Data Source"
+description: |-
+  Helper data source to create a processor which trims whitespace from a field.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_trim
+
+Trims whitespace from a field. If the field is an array of strings, all members of the array will be trimmed.
+
+**NOTE:** This only works on leading and trailing whitespace.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/trim-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_trim/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_uppercase.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_uppercase.md.tmpl
new file mode 100644
index 000000000..62a22f67d
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_uppercase.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_uppercase Data Source"
+description: |-
+  Helper data source to create a processor which converts a string to its uppercase equivalent.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_uppercase
+
+Converts a string to its uppercase equivalent. If the field is an array of strings, all members of the array will be converted.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/uppercase-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_uppercase/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_uri_parts.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_uri_parts.md.tmpl
new file mode 100644
index 000000000..98ebe3fa8
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_uri_parts.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_uri_parts Data Source"
+description: |-
+  Helper data source to create a processor which parses a Uniform Resource Identifier (URI) string and extracts its components as an object.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_uri_parts
+
+Parses a Uniform Resource Identifier (URI) string and extracts its components as an object. This URI object includes properties for the URI’s domain, path, fragment, port, query, scheme, user info, username, and password.
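+
+For illustration, a minimal sketch (the attribute names follow the processor's options and are assumptions here, as are the field names):
+
+```terraform
+# Parse a URI string and keep the original value alongside the parsed object.
+data "elasticstack_elasticsearch_ingest_processor_uri_parts" "url" {
+  field         = "input_field"
+  target_field  = "url"
+  keep_original = true
+}
+```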
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/uri-parts-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_uri_parts/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_urldecode.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_urldecode.md.tmpl
new file mode 100644
index 000000000..52a270fb3
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_urldecode.md.tmpl
@@ -0,0 +1,20 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_urldecode Data Source"
+description: |-
+  Helper data source to create a processor which URL-decodes a string.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_urldecode
+
+URL-decodes a string. If the field is an array of strings, all members of the array will be decoded.
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/urldecode-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_urldecode/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/data-sources/elasticsearch_ingest_processor_user_agent.md.tmpl b/templates/data-sources/elasticsearch_ingest_processor_user_agent.md.tmpl
new file mode 100644
index 000000000..145a66a1d
--- /dev/null
+++ b/templates/data-sources/elasticsearch_ingest_processor_user_agent.md.tmpl
@@ -0,0 +1,23 @@
+---
+subcategory: "Ingest"
+layout: ""
+page_title: "Elasticstack: elasticstack_elasticsearch_ingest_processor_user_agent Data Source"
+description: |-
+  Helper data source to create a processor which extracts details from the user agent string a browser sends with its web requests.
+---
+
+# Data Source: elasticstack_elasticsearch_ingest_processor_user_agent
+
+The `user_agent` processor extracts details from the user agent string a browser sends with its web requests. This processor adds this information by default under the `user_agent` field.
+
+The ingest-user-agent module ships by default with the `regexes.yaml` made available by uap-java with an Apache 2.0 license. For more details see https://github.com/ua-parser/uap-core.
+
+
+See: https://www.elastic.co/guide/en/elasticsearch/reference/current/user-agent-processor.html
+
+
+## Example Usage
+
+{{ tffile "examples/data-sources/elasticstack_elasticsearch_ingest_processor_user_agent/data-source.tf" }}
+
+{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/resources/elasticsearch_ingest_pipeline.md.tmpl b/templates/resources/elasticsearch_ingest_pipeline.md.tmpl
index 477ee6ae0..0a4b83f66 100644
--- a/templates/resources/elasticsearch_ingest_pipeline.md.tmpl
+++ b/templates/resources/elasticsearch_ingest_pipeline.md.tmpl
@@ -12,8 +12,16 @@ Use ingest APIs to manage tasks and resources related to ingest pipelines and pr
 
 ## Example Usage
 
+You can provide your custom JSON definitions for the ingest processors:
+
 {{ tffile "examples/resources/elasticstack_elasticsearch_ingest_pipeline/resource.tf" }}
 
+
+Or you can use data sources and the Terraform declarative way of defining the ingest processors:
+
+{{ tffile "examples/resources/elasticstack_elasticsearch_ingest_pipeline/resource2.tf" }}
+
+
 {{ .SchemaMarkdown | trimspace }}
 
 ## Import