Problem Description
I've recently enabled the `es.indices_mappings` flag to gain the ability to monitor and alert when index field limits are exceeded. Unfortunately, I've noticed that the number reported by the exporter does not appear to be accurate. I've confirmed this by testing with a sample document, observing the resulting response from Elasticsearch, and then confirming via the field capabilities API. I've outlined the steps to reproduce this issue below.

Relevant Software Versions
Reproducing the Problem
1. Create a test index
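A minimal create-index request along these lines is enough for this step (the index name `test-index` is just a placeholder):

```
PUT /test-index
```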
response:
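Something like the following, assuming the create succeeded:

```
{
  "acknowledged": true,
  "shards_acknowledged": true,
  "index": "test-index"
}
```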
2. Confirm index mapping is empty:
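Using the get-mapping API, for example:

```
GET /test-index/_mapping
```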
response:
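For a freshly created index, this comes back with an empty mappings object:

```
{
  "test-index": {
    "mappings": {}
  }
}
```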
and also via the field capabilities API:
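For instance, asking for every field:

```
GET /test-index/_field_caps?fields=*
```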
response:
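Ignoring any metadata fields, the list of mapped fields comes back empty, roughly:

```
{
  "indices": ["test-index"],
  "fields": {}
}
```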
3. Set the max number of fields in the index to 40
Take note that, by default, each dynamically mapped string field gets a text type plus a "keyword" sub-field via the "fields" property.
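The limit is set via the `index.mapping.total_fields.limit` setting:

```
PUT /test-index/_settings
{
  "index.mapping.total_fields.limit": 40
}
```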
response:
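On success, simply:

```
{
  "acknowledged": true
}
```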
and confirm the updated settings have been applied:
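For example, filtering the settings output down to the relevant key:

```
GET /test-index/_settings?filter_path=*.settings.index.mapping
```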
response:
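Which should show the new limit:

```
{
  "test-index": {
    "settings": {
      "index": {
        "mapping": {
          "total_fields": {
            "limit": "40"
          }
        }
      }
    }
  }
}
```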
4. Index a document that has exactly the max number of fields
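A document shaped like the one below produces exactly 40 mapped fields under dynamic mapping (the string and integer values themselves are arbitrary):

```
POST /test-index/_doc
{
  "data": {
    "field1": "a", "field2": "b", "field3": "c", "field4": "d",
    "field5": "e", "field6": "f", "field7": "g", "field8": "h",
    "field9": 1,
    "field10": 2
  },
  "data2": {
    "field1": "a", "field2": "b", "field3": "c", "field4": "d", "field5": "e",
    "nested_field6": {
      "field1": "a", "field2": "b", "field3": "c", "field4": "d",
      "field5": 3
    }
  }
}
```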
The breakdown explaining the 40 fields:
- The top-level fields `data`, `data2` and `data2.nested_field6` are each considered objects and each count as 1 field (3 total).
- The nested fields `data.field1` to `data.field8` each have a text type and a keyword type via the "fields" property (8 fields x 2 = 16 total).
- The `data.field9` and `data.field10` fields are typed as integers and therefore each count as one field (2 total).
- The nested fields `data2.field1` to `data2.field5` each have a text type and a keyword type via the "fields" property (5 fields x 2 = 10 total).
- The nested fields `data2.nested_field6.field1` to `data2.nested_field6.field4` each have a text type and a keyword type via the "fields" property (4 fields x 2 = 8 total).
- The `data2.nested_field6.field5` field is typed as an integer and thus counts as one field (1 total).

5. View the updated index mapping
I've excluded the mapping of the index here for brevity and instead opted to display the fields via the field capabilities API. With it, we can confirm the index has exactly 40 fields:
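The request and an abridged response look something like this; counting the entries under `fields` gives 40:

```
GET /test-index/_field_caps?fields=*

{
  "indices": ["test-index"],
  "fields": {
    "data": { "object": { "type": "object", "searchable": false, "aggregatable": false } },
    "data.field1": { "text": { "type": "text", "searchable": true, "aggregatable": false } },
    "data.field1.keyword": { "keyword": { "type": "keyword", "searchable": true, "aggregatable": true } },
    ...
  }
}
```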
6. Attempt to index a document that will exceed the max number of fields by 1 new field
In this case, the `data2.field7` field attempts to trigger a mapping update but consequently fails:
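For example, with the error response abridged (the exact wording varies by Elasticsearch version):

```
POST /test-index/_doc
{
  "data2": {
    "field7": "new value"
  }
}

{
  "error": {
    "type": "illegal_argument_exception",
    "reason": "Limit of total fields [40] has been exceeded"
  },
  "status": 400
}
```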
7. Observe the index field count reported for this index
Running the following PromQL query:
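Assuming the mappings collector's field-count metric and label are named as below:

```
elasticsearch_indices_mappings_stats_fields{index="test-index"}
```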
shows a result of 37 fields, which we know is incorrect because:

- We previously confirmed in step 5 that there were in fact 40 fields present in the mapping by using the field capabilities API.
- Attempting to add a single new field in step 6 caused a mapping error to be returned due to the limit being exceeded.
Possible Solution
I've started looking at the code and figuring out a possible fix; it seems like a matter of updating the logic in the recursive counting function for the mappings. Another option that could eventually be possible is to switch over completely to the field capabilities API, although that wouldn't be efficient in the current state. The reason is that the API call returns a single list of fields that isn't grouped by index, regardless of how many indices are specified. In the meantime, if anyone agrees that this would be a beneficial improvement later on, I suggest subscribing to the related Elastic GitHub issue, which outlines a possible API call that returns a list of unique fields for each index.
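As a rough illustration of the counting rules involved (a standalone sketch, not the exporter's actual code), a recursive walk over a mapping's `properties` tree needs to count object fields, leaf fields, and multi-fields declared under `fields`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// countFields walks a mapping's "properties" tree and counts every mapped
// field the way the total-fields limit does: each entry in "properties"
// (object or leaf) counts as one field, and each multi-field declared
// under "fields" counts as one more.
func countFields(properties map[string]interface{}) int {
	count := 0
	for _, raw := range properties {
		prop, ok := raw.(map[string]interface{})
		if !ok {
			continue
		}
		count++ // the field itself, whether or not it has an explicit "type"
		if fields, ok := prop["fields"].(map[string]interface{}); ok {
			count += len(fields) // multi-fields, e.g. the implicit "keyword"
		}
		if sub, ok := prop["properties"].(map[string]interface{}); ok {
			count += countFields(sub) // recurse into object fields
		}
	}
	return count
}

func main() {
	// A fragment of the dynamic mapping from the reproduction above.
	mapping := `{
		"data": {
			"properties": {
				"field1": {"type": "text", "fields": {"keyword": {"type": "keyword"}}},
				"field9": {"type": "long"}
			}
		}
	}`
	var properties map[string]interface{}
	if err := json.Unmarshal([]byte(mapping), &properties); err != nil {
		panic(err)
	}
	// Prints 4: data (object), field1, field1.keyword, field9.
	fmt.Println(countFields(properties))
}
```

Notably, 40 - 37 = 3 matches the three object fields in the reproduction exactly, which suggests the current logic skips entries that lack an explicit type.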
I believe I've identified and resolved the issue. I've tested with a few sample indices and compared against the field capabilities API, and the field counts seem to be accurate now. I'll follow up shortly with a pull request.