Description
Kibana version:
master
Describe the bug:
If the selected index pattern doesn't have a configured time field, Lens still displays the time filter and still uses the time filter range (and its date histogram interval) to aggregate a date field.
This leads to two main issues:
- ES errors on too many buckets generated: the ES query is issued without a time filter (because the index pattern doesn't have a default time field) and with a `date_histogram` interval relative to the configured time filter.
- You may get a valid number of buckets, but the configuration passed to Elastic Charts is wrong: the min and max domains refer to the min and max values of the time filter, but the data itself lies on a completely different domain (its full domain). In specific cases this can trigger an issue in elastic-charts that computes all the missing ticks (Generating too many buckets if the minInterval is bigger than the domain elastic-charts#517), generating a very large array of missing ticks that need to be formatted and will freeze the browser for seconds. (This second issue will also be fixed in elastic-charts.)
Steps to reproduce:
To reproduce the too many buckets issue follow these steps:
- Add the `kibana_sample_data_logs` sample data
- Create a new index pattern called `kibana_sample_data_l*` without a time field specified (select the `I don't want to use the Time Filter` option from the dropdown)
- Create a Lens visualization
- Select `Last 15 minutes` on the time filter
- Drag the `@timestamp` field into the x axis
The query sent to Elasticsearch will contain a `date_histogram` agg with a 30s interval (derived from the time filter), which is not ideal. An auto interval should be configured in that case, or we should apply the time filter to that query too.
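For illustration, the problematic request looks roughly like the following sketch (the exact body Lens generates may differ; the agg name is illustrative). Note that there is no time range filter in the query, so the 30s interval, derived from the 15-minute time filter, is applied over the full span of the data:

```
POST kibana_sample_data_l*/_search
{
  "size": 0,
  "aggs": {
    "over_time": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "30s"
      }
    }
  }
}
```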
To reproduce the freeze issue follow these steps. It's a bit long, but I wasn't able to properly reproduce it with a different data set.
- Download the following CSV: https://opensky-network.org/datasets/metadata/aircraftDatabase.csv
- Download and extract the latest Logstash 7.5.1 https://www.elastic.co/downloads/logstash
- Open the Kibana Dev Tools and add the mappings for the index we need to build:
PUT opensky_aircrafts
{
"mappings": {
"properties": {
"built": {
"type": "date",
"format": "yyyy-MM-dd"
},
"registered": {
"type": "date",
"format": "yyyy-MM-dd"
},
"reguntil": {
"type": "date",
"format": "yyyy-MM-dd"
},
"firstflightdate": {
"type": "date",
"format": "yyyy-MM-dd"
}
}
}
}
- Create a Logstash pipeline to ingest the CSV:
input {
file {
path => "/ABSOLUTE/PATH/AND/FILENAME/OF/THE/CSV"
mode => read
start_position => "beginning"
sincedb_path => "NULL"
}
}
filter {
csv {
skip_header => true
columns => ["icao24","registration","manufacturericao","manufacturername","model","typecode","serialnumber","linenumber","icaoaircrafttype","operator","operatorcallsign","operatoricao","operatoriata","owner","testreg","registered","reguntil","status","built","firstflightdate","seatconfiguration","engines","modes","adsb","acars","notes","categoryDescription"]
}
mutate {
convert => {
"acars" => "boolean"
"adsb" => "boolean"
"modes" => "boolean"
}
remove_field => ["message", "@timestamp", "path", "host", "@version"]
}
if [firstflightdate] == "" {
mutate {
remove_field => ["firstflightdate"]
}
}
if [registered] == "" {
mutate {
remove_field => ["registered"]
}
}
if [reguntil] == "" {
mutate {
remove_field => ["reguntil"]
}
}
if [built] == "" {
mutate {
remove_field => ["built"]
}
}
}
output {
elasticsearch {
hosts => ["http://localhost:9200"]
user => "elastic"
password => "changeme"
index => "opensky_aircrafts"
}
}
- Ingest the CSV with Logstash (there can be some parsing errors, but you can safely ignore them)
bin/logstash -f <path to above config>
- Check that all the data is ingested: `GET opensky_aircrafts/_count` should return a value near 460058
- Set the timepicker to `Last 15 minutes`
- Select the `opensky_aircrafts` index and drag the `built` time field into the x field (Lens will display the too many buckets error, but you can go on)
- Select count as the y field (the error is still displayed)
- Click on the `built` field, select `Customize min interval`, then select `1 year`. This removes the error (because the `date_histogram` now uses a `1 year` interval), but the min and max values of the chart are not restricted to the last `15 minutes`.
Expected behavior:
If displaying data from an index pattern without a default time field, then either:
- Avoid adding a custom min/max domain to elastic-charts and use `auto_date_histogram` to create a nice histogram without issuing a time filter, or
- If the user wants to display a date histogram, keep using the time filter range (in a way it's always a date field, even if it's not configured as the default in the index pattern).

Issues can arise when mixing multiple layers from different indices: in this case the second solution is preferable, or with the first solution you should somehow warn the user about what is going on behind the scenes, like: "since you are adding a time-sensitive index pattern, we are now limiting the NON_TIME_SENSITIVE_INDEX_P to the same selected time range".
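A possible shape for the first solution is `auto_date_histogram`, where Elasticsearch picks the interval for a target bucket count instead of Lens deriving it from the time filter (a sketch; the field, agg name, and bucket count are illustrative):

```
POST opensky_aircrafts/_search
{
  "size": 0,
  "aggs": {
    "built_over_time": {
      "auto_date_histogram": {
        "field": "built",
        "buckets": 50
      }
    }
  }
}
```

The response also reports the interval that was chosen, which could be passed to elastic-charts instead of a min/max domain computed from the time filter.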