This repository has been archived by the owner on Aug 2, 2022. It is now read-only.
[BUG] Policy is not applied to indices #448
Labels: bug
Comments
I found a bug: you cannot have more than 10 policies; any policies beyond that will not work.
Can confirm, we have a similar issue. I can see 12 policies at the moment.
@TobiasSalzmann We ended up going with 3 policies for now and tailored it on the fluentd side. Will wait for the fix to be released.
Is the bug tracked somewhere else? Maybe better to leave it open otherwise.
It's already been fixed and merged and should be in the next release.
Is this bug resolved in the new release? I didn't find it in the release notes: https://github.com/opendistro-for-elasticsearch/opendistro-build/blob/main/release-notes/opendistro-for-elasticsearch-release-notes-1.13.2.md
Describe the bug
We are creating one policy per namespace in our k8s cluster.
Each policy has an index pattern matching "namespace*".
Policies are created before fluentd is deployed.
Out of 16 namespaces, 10 of the newly created indices get attached to their respective policy, while the other 6 do not.
Moreover, if I create an index using the Dev Tools in Kibana, the policies that do apply are applied to the respective index, but the policies that do not apply cannot catch the created index. It seems that some policies are simply not operational, so to speak.
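To check whether a given index is actually managed by a policy, Open Distro's ISM plugin exposes an explain API (`GET _opendistro/_ism/explain/<index>`). A minimal sketch of building that call follows; the host, credentials, and the `mynamespace-new-000001` index name are placeholders for your environment, not values from this report:

```shell
# Hypothetical cluster endpoint and index name; adjust to your environment.
ES_HOST="https://localhost:9200"
INDEX="mynamespace-new-000001"

# The ISM explain API reports whether the index is managed and, if so,
# which policy_id is attached and what state it is in.
EXPLAIN_URL="${ES_HOST}/_opendistro/_ism/explain/${INDEX}"

# Printed as a dry run here; the -u credentials are the demo defaults, not real ones.
echo "curl -sk -u admin:admin ${EXPLAIN_URL}"
```

An index affected by this bug would show a `null` / unmanaged response here even though its name matches a policy's index pattern.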
Other plugins installed
Security
To Reproduce
Steps to reproduce the behavior:
```
@type elasticsearch
@log_level "#{ENV['OUTPUT_LOG_LEVEL']}"
type_name fluentd
include_tag_key true
hosts "#{ENV['OUTPUT_HOSTS']}"
path "#{ENV['OUTPUT_PATH']}"
scheme "#{ENV['OUTPUT_SCHEME']}"
ssl_verify "#{ENV['OUTPUT_SSL_VERIFY']}"
ssl_version "#{ENV['OUTPUT_SSL_VERSION']}"
ca_file /certs/es-root-ca.crt
client_cert /certs/elk-rest-crt.pem
client_key /certs/elk-rest-key.pem
logstash_format false
reload_connections "#{ENV['OUTPUT_RELOAD_CONNECTIONS']}"
reconnect_on_error "#{ENV['OUTPUT_RECONNECT_ON_ERROR']}"
reload_on_failure "#{ENV['OUTPUT_RELOAD_ON_FAILURE']}"
suppress_type_name "#{ENV['OUTPUT_SUPPRESS_TYPE_NAME']}"
index_name ${$.kubernetes.namespace_name}-new
index_date_pattern ""
include_timestamp true
```
```json
{
  "index_patterns": ["<<NAMESPACE>>*"],
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 2,
    "opendistro.index_state_management.policy_id": "<<POLICY>>",
    "opendistro.index_state_management.rollover_alias": "<<NAMESPACE>>-new"
  }
}
```
The above creates an alias and an index on the fly for each namespace in the k8s cluster.
The indices, template, and aliases are created and everything works smoothly, except that not all indices get assigned to their respective policy (in our case 10/16).
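The per-namespace substitution described above can be sketched as follows. This is a minimal illustration of how the `<<NAMESPACE>>` and `<<POLICY>>` tokens in the template get filled in before each template is PUT to the cluster; the `render_template` helper and the example namespace/policy names are hypothetical, not the reporter's actual tooling:

```python
import json

# Index template body with the placeholder tokens used in the report.
TEMPLATE = """{
  "index_patterns": ["<<NAMESPACE>>*"],
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 2,
    "opendistro.index_state_management.policy_id": "<<POLICY>>",
    "opendistro.index_state_management.rollover_alias": "<<NAMESPACE>>-new"
  }
}"""

def render_template(namespace: str, policy_id: str) -> dict:
    """Substitute the placeholders for one namespace and parse the result."""
    body = TEMPLATE.replace("<<NAMESPACE>>", namespace).replace("<<POLICY>>", policy_id)
    return json.loads(body)

rendered = render_template("payments", "payments-ism-policy")
print(rendered["settings"]["opendistro.index_state_management.rollover_alias"])
# -> payments-new
```

Each rendered body would then be sent to the cluster (e.g. `PUT _template/<namespace>`), one template per namespace.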
Expected behavior
Policies matching a specific index pattern should always be applied to newly created indices that match that pattern.
Screenshots
Working policy: (screenshot)
Non-working policy: (screenshot)
Additional context
This is a complicated issue to reproduce and explain; if a meeting can be scheduled, I would gladly join.