[Kubernetes] Remove extra base fields for state datastreams #8393
constanca-m merged 14 commits into elastic:main from
Conversation
Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
Since elastic-agent-autodiscover 0.6.4 (elastic/elastic-agent-autodiscover@285f0bb) we disable deployment and cronjob enrichment by default. So you need to enable those specifically in the advanced options to make the deployment and cronjob names appear.
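For standalone Elastic Agent, these advanced options live under the kubernetes provider configuration. A minimal sketch of what re-enabling the enrichment could look like (key placement is an assumption based on the provider's `add_resource_metadata` settings and may vary by version):

```yaml
providers:
  kubernetes:
    # Re-enable the enrichment that is disabled by default
    # since elastic-agent-autodiscover 0.6.4
    add_resource_metadata:
      deployment: true
      cronjob: true
```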
The annotations are not included by default. See the documentation here: https://www.elastic.co/guide/en/fleet/current/add_kubernetes_metadata-processor.html. You need to add include_annotations under the node and namespace fields. FYI, there is also this open bug that is related.
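For reference, a sketch of what enabling annotations could look like on the add_kubernetes_metadata processor (the annotation names below are placeholders; check the exact schema against the linked Fleet documentation):

```yaml
processors:
  - add_kubernetes_metadata:
      add_resource_metadata:
        node:
          # placeholder annotation keys to enrich events with
          include_annotations: ["example.com/node-annotation"]
        namespace:
          include_annotations: ["example.com/namespace-annotation"]
```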
I see this is for node labels? I can see them in Discover. Only node annotations are not there.
Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
ChrsMark left a comment
@constanca-m I would suggest moving the Changes part of the PR's description to a more obvious place (at the beginning of the description).
I have left some comments on fields that most probably should not be removed. We should check the code and ensure that we don't miss something here. Just testing that the integration works with default settings is not enough to justify removing fields.
Also, since I see you have a section Bugs and warnings with some open questions/concerns, why not first open an issue and call for feedback/discussion there instead of directly opening a PR with "risky" changes? That way a possible change is discussed carefully before moving to the actual implementation.
description: >
  Kubernetes node name
- name: node.hostname
Should this really be removed? Do we know why it is here right now?
I can see if I can find some background on this field. But even checking with kubectl describe node ..., there is nothing close to node.hostname
I would say keep it for now, as removing it might break old versions, if we are not 100% sure.
The PR that introduced this field is this one. I will be adding it again.
Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
/test
Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
I tested all of them again @ChrsMark with EA standalone, and I added the manifest I used to the description. In summary, I just added
description: >
  Kubernetes annotations map
- name: selectors.*
Selectors should stay here. According to https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/, a daemonset can have selectors.
OK, I was thinking that we might need them in the future. But as long as only services support selectors, we can remove it.
description: >
  Kubernetes annotations map
- name: selectors.*
Same for deployment: selectors should stay.
Here is the comment with the link to the implementation PR that introduces selectors.* only for state_service.
description: >
  Kubernetes annotations map
- name: selectors.*
Here is the comment with the link to the implementation PR that introduces selectors.* only for state_service
type: keyword
description: >-
  Kubernetes container image
Kubernetes annotations map (no newline at end of file)
Yes, it is the description for the annotations.* above, the other lines were removed
description: >
  Kubernetes annotations map
- name: selectors.*
Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
ChrsMark left a comment
Anything missing from this one?
If the previous comments are covered maybe resolve them so as to have a clear view of what is pending and what is not.
Proposed commit message
Remove extra base fields for state_* datastreams.

Checklist
- I have added an entry to the package's changelog.yml file.

How this was tested
- kind-config.yaml present in this repo.
- kubernetes/_dev/deploy/k8s (these are also the resources being used in testing).

Note: labels for the namespace and node exist in the testing environment used.
This is the EA standalone manifest in full.
Changes
For each data stream, the following fields were removed:
state_container
- kubernetes.selectors.*
- kubernetes.container.image

state_cronjob
- kubernetes.pod.name
- kubernetes.pod.uid
- kubernetes.pod.ip
- kubernetes.node.name
- kubernetes.node.hostname
- kubernetes.selectors.*
- kubernetes.replicaset.name
- kubernetes.deployment.name
- kubernetes.statefulset.name
- kubernetes.container.name
- kubernetes.container.image

state_daemonset
- kubernetes.pod.name
- kubernetes.pod.uid
- kubernetes.pod.ip
- kubernetes.node.name
- kubernetes.node.hostname
- kubernetes.selectors.*
- kubernetes.replicaset.name
- kubernetes.deployment.name
- kubernetes.statefulset.name
- kubernetes.container.name
- kubernetes.container.image

state_deployment
- kubernetes.pod.name
- kubernetes.pod.uid
- kubernetes.pod.ip
- kubernetes.node.name
- kubernetes.node.hostname
- kubernetes.selectors.*
- kubernetes.replicaset.name
- kubernetes.statefulset.name
- kubernetes.container.name
- kubernetes.container.image

state_job
- kubernetes.pod.name
- kubernetes.pod.uid
- kubernetes.pod.ip
- kubernetes.node.name
- kubernetes.node.hostname
- kubernetes.selectors.*
- kubernetes.replicaset.name
- kubernetes.deployment.name
- kubernetes.statefulset.name
- kubernetes.container.name
- kubernetes.container.image

state_namespace
Newly introduced, nothing to delete.

state_node
- kubernetes.pod.name
- kubernetes.pod.uid
- kubernetes.pod.ip
- kubernetes.namespace
- kubernetes.node.hostname
- kubernetes.selectors.*
- kubernetes.replicaset.name
- kubernetes.deployment.name
- kubernetes.statefulset.name
- kubernetes.container.name
- kubernetes.container.image

state_persistentvolume
- kubernetes.pod.name
- kubernetes.pod.uid
- kubernetes.pod.ip
- kubernetes.namespace
- kubernetes.node.name
- kubernetes.node.hostname
- kubernetes.selectors.*
- kubernetes.replicaset.name
- kubernetes.deployment.name
- kubernetes.statefulset.name
- kubernetes.container.name
- kubernetes.container.image

state_persistentvolumeclaim
- kubernetes.pod.name
- kubernetes.pod.uid
- kubernetes.pod.ip
- kubernetes.namespace
- kubernetes.node.name
- kubernetes.selectors.*
- kubernetes.replicaset.name
- kubernetes.deployment.name
- kubernetes.statefulset.name
- kubernetes.container.name
- kubernetes.container.image

state_pod
- kubernetes.selectors.*
- kubernetes.container.name
- kubernetes.container.image

state_replicaset
- kubernetes.pod.name
- kubernetes.pod.uid
- kubernetes.pod.ip
- kubernetes.node.name
- kubernetes.node.hostname
- kubernetes.selectors.*
- kubernetes.statefulset.name
- kubernetes.container.name
- kubernetes.container.image

state_resourcequota
- kubernetes.pod.name
- kubernetes.pod.uid
- kubernetes.pod.ip
- kubernetes.node.name
- kubernetes.selectors.*
- kubernetes.replicaset.name
- kubernetes.deployment.name
- kubernetes.statefulset.name
- kubernetes.container.name
- kubernetes.container.image

state_service
- kubernetes.pod.name
- kubernetes.pod.uid
- kubernetes.pod.ip
- kubernetes.replicaset.name
- kubernetes.deployment.name
- kubernetes.statefulset.name
- kubernetes.container.name
- kubernetes.container.image

state_statefulset
- kubernetes.pod.name
- kubernetes.pod.uid
- kubernetes.pod.ip
- kubernetes.node.name
- kubernetes.node.hostname
- kubernetes.selectors.*
- kubernetes.replicaset.name
- kubernetes.deployment.name
- kubernetes.container.name
- kubernetes.container.image

state_storageclass
- kubernetes.pod.name
- kubernetes.pod.uid
- kubernetes.pod.ip
- kubernetes.node.name
- kubernetes.node.hostname
- kubernetes.namespace
- kubernetes.selectors.*
- kubernetes.replicaset.name
- kubernetes.deployment.name
- kubernetes.statefulset.name
- kubernetes.container.name
- kubernetes.container.image

Results
The expected result is that nothing breaks and everything keeps running as before.
I built the package and updated the policy. It was updated as expected:

I also checked every dashboard and all were still working as before (not including screenshots to not overwhelm this description).