
[Kubernetes] Remove extra base fields for state datastreams#8393

Merged
constanca-m merged 14 commits into elastic:main from constanca-m:remove-base-fields
Nov 23, 2023
Conversation

Contributor

@constanca-m constanca-m commented Nov 3, 2023

Proposed commit message

  • WHAT: Remove extra base fields from all state_* data streams.
  • WHY: These data streams declare many fields that either never hold a value or are unrelated to the data stream.
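For context, each data stream in the package ships fields definitions that are compiled into its index template; this change prunes declarations that never receive a value. A hypothetical sketch of the kind of entry being removed (illustrative only, not the actual diff):

```yaml
# Hypothetical example of an "extra base field": the field is mapped for a
# state_* data stream (e.g. state_cronjob), but documents from that data
# stream never set it, so the declaration can be deleted from the data
# stream's fields files without losing any data.
- name: kubernetes.pod.uid
  type: keyword
  description: Kubernetes Pod UID (never populated by this data stream).
```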

Checklist

  • I have reviewed tips for building integrations and this pull request is aligned with them.
  • I have verified that all data streams collect metrics or logs.
  • I have added an entry to my package's changelog.yml file.
  • I have verified that Kibana version constraints are current according to guidelines.

How this was tested

  1. Created a cluster with the configuration kind-config.yaml present in this repo.
  2. Deployed all resources in kubernetes/_dev/deploy/k8s (these are also the resources being used in testing).
  3. Deployed the Elastic Agent (EA) standalone with all data streams enabled, using the following settings:
add_metadata: true
add_resource_metadata:
  deployment: true
  cronjob: true
  namespace:
    include_labels:
      - kubernetes.io/metadata.name
    include_annotations:
      - example
  node:
    include_labels:
      - beta.kubernetes.io/arch
    include_annotations:
      - node.alpha.kubernetes.io/ttl

Note: labels for the namespace and node exist in the testing environment used.
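With add_resource_metadata configured as above, events from the state_* data streams should be enriched with the selected namespace and node labels/annotations. A hand-written sketch of the expected event shape (field names, values, and the dedotting of label keys are assumptions and may differ by version):

```json
{
  "kubernetes": {
    "namespace": "default",
    "namespace_labels": { "kubernetes_io/metadata_name": "default" },
    "namespace_annotations": { "example": "some-value" },
    "node": {
      "name": "kind-control-plane",
      "labels": { "beta_kubernetes_io/arch": "amd64" },
      "annotations": { "node_alpha_kubernetes_io/ttl": "0" }
    }
  }
}
```

Checking documents in Discover for these enriched fields (and for the absence of the removed base fields) is how the change can be verified end to end.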

This is the EA standalone manifest in full.
# For more information, see https://www.elastic.co/guide/en/fleet/current/running-on-kubernetes-standalone.html
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-node-datastreams
  namespace: kube-system
  labels:
    k8s-app: elastic-agent-standalone
data:
  agent.yml: |-
    outputs:
      default:
        type: elasticsearch
        ssl.verification_mode: none
        hosts:
          - >-
            ${ES_HOST}
        username: ${ES_USERNAME}
        password: ${ES_PASSWORD}
    agent:
      monitoring:
        enabled: true
        use_output: default
        logs: true
        metrics: true
    providers.kubernetes:
      node: ${NODE_NAME}
      scope: node
      # Uncomment to enable hints support
      #hints.enabled: true
    inputs:
      - id: kubernetes-cluster-metrics
        condition: ${kubernetes_leaderelection.leader} == true
        type: kubernetes/metrics
        use_output: default
        meta:
          package:
            name: kubernetes
            version: 1.29.2
        data_stream:
          namespace: default
        streams:
          - data_stream:
              dataset: kubernetes.apiserver
              type: metrics
            metricsets:
              - apiserver
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            hosts:
              - 'https://${env.KUBERNETES_SERVICE_HOST}:${env.KUBERNETES_SERVICE_PORT}'
            period: 30s
            ssl.certificate_authorities:
              - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          - data_stream:
              dataset: kubernetes.event
              type: metrics
            metricsets:
              - event
            period: 10s
            add_metadata: true
          - data_stream:
              dataset: kubernetes.state_container
              type: metrics
            metricsets:
              - state_container
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - data_stream:
              dataset: kubernetes.state_cronjob
              type: metrics
            metricsets:
              - state_cronjob
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - data_stream:
              dataset: kubernetes.state_daemonset
              type: metrics
            metricsets:
              - state_daemonset
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - data_stream:
              dataset: kubernetes.state_deployment
              type: metrics
            metricsets:
              - state_deployment
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - data_stream:
              dataset: kubernetes.state_job
              type: metrics
            metricsets:
              - state_job
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - data_stream:
              dataset: kubernetes.state_node
              type: metrics
            metricsets:
              - state_node
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - data_stream:
              dataset: kubernetes.state_persistentvolume
              type: metrics
            metricsets:
              - state_persistentvolume
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - data_stream:
              dataset: kubernetes.state_persistentvolumeclaim
              type: metrics
            metricsets:
              - state_persistentvolumeclaim
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - data_stream:
              dataset: kubernetes.state_pod
              type: metrics
            metricsets:
              - state_pod
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - data_stream:
              dataset: kubernetes.state_replicaset
              type: metrics
            metricsets:
              - state_replicaset
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - data_stream:
              dataset: kubernetes.state_resourcequota
              type: metrics
            metricsets:
              - state_resourcequota
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - data_stream:
              dataset: kubernetes.state_service
              type: metrics
            metricsets:
              - state_service
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - data_stream:
              dataset: kubernetes.state_statefulset
              type: metrics
            metricsets:
              - state_statefulset
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - data_stream:
              dataset: kubernetes.state_storageclass
              type: metrics
            metricsets:
              - state_storageclass
            add_metadata: true
            add_resource_metadata:
              deployment: true
              cronjob: true
              namespace:
                include_labels:
                  - kubernetes.io/metadata.name
                include_annotations:
                  - example
              node:
                include_labels:
                  - beta.kubernetes.io/arch
                include_annotations:
                  - node.alpha.kubernetes.io/ttl
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            # Openshift:
            # If access to 'kube-state-metrics' goes through third-party tools, such as kube-rbac-proxy, that perform RBAC authorization
            # and/or TLS termination, consider the configuration below:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
      - id: system-logs
        type: logfile
        use_output: default
        meta:
          package:
            name: system
            version: 1.20.4
        data_stream:
          namespace: default
        streams:
          - data_stream:
              dataset: system.auth
              type: logs
            paths:
              - /var/log/auth.log*
              - /var/log/secure*
            exclude_files:
              - .gz$
            multiline:
              pattern: ^\s
              match: after
            processors:
              - add_locale: null
            ignore_older: 72h
          - data_stream:
              dataset: system.syslog
              type: logs
            paths:
              - /var/log/messages*
              - /var/log/syslog*
            exclude_files:
              - .gz$
            multiline:
              pattern: ^\s
              match: after
            processors:
              - add_locale: null
            ignore_older: 72h
      - id: windows-event-log
        type: winlog
        use_output: default
        meta:
          package:
            name: system
            version: 1.20.4
        data_stream:
          namespace: default
        streams:
          - data_stream:
              type: logs
              dataset: system.application
            condition: '${host.platform} == ''windows'''
            ignore_older: 72h
          - data_stream:
              type: logs
              dataset: system.security
            condition: '${host.platform} == ''windows'''
            ignore_older: 72h
          - data_stream:
              type: logs
              dataset: system.system
            condition: '${host.platform} == ''windows'''
            ignore_older: 72h
      # Input ID allowing Elastic Agent to track the state of this input. Must be unique.
      - id: container-log-${kubernetes.pod.name}-${kubernetes.container.id}
        type: filestream
        use_output: default
        meta:
          package:
            name: kubernetes
            version: 1.29.2
        data_stream:
          namespace: default
        streams:
          # Stream ID for this data stream allowing Filebeat to track the state of the ingested files. Must be unique.
          # Each filestream data stream creates a separate instance of the Filebeat filestream input.
          - id: container-log-${kubernetes.pod.name}-${kubernetes.container.id}
            data_stream:
              dataset: kubernetes.container_logs
              type: logs
            prospector.scanner.symlinks: true
            parsers:
              - container: ~
              # - ndjson:
              #     target: json
              # - multiline:
              #     type: pattern
              #     pattern: '^\['
              #     negate: true
              #     match: after
            paths:
              - /var/log/containers/*${kubernetes.container.id}.log
      - id: audit-log
        type: filestream
        use_output: default
        meta:
          package:
            name: kubernetes
            version: 1.29.2
        data_stream:
          namespace: default
        streams:
          - data_stream:
              dataset: kubernetes.audit_logs
              type: logs
            exclude_files:
            - .gz$
            parsers:
              - ndjson:
                  add_error_key: true
                  target: kubernetes_audit
            paths:
              - /var/log/kubernetes/kube-apiserver-audit.log
              # The default path of audit logs on Openshift:
              # - /var/log/kube-apiserver/audit.log
            processors:
            - rename:
                fields:
                - from: kubernetes_audit
                  to: kubernetes.audit
            - script:
                id: dedot_annotations
                lang: javascript
                source: |
                  function process(event) {
                    var audit = event.Get("kubernetes.audit");
                    for (var annotation in audit["annotations"]) {
                      var annotation_dedoted = annotation.replace(/\./g,'_')
                      event.Rename("kubernetes.audit.annotations."+annotation, "kubernetes.audit.annotations."+annotation_dedoted)
                    }
                    return event;
                  } function test() {
                    var event = process(new Event({ "kubernetes": { "audit": { "annotations": { "authorization.k8s.io/decision": "allow", "authorization.k8s.io/reason": "RBAC: allowed by ClusterRoleBinding \"system:kube-scheduler\" of ClusterRole \"system:kube-scheduler\" to User \"system:kube-scheduler\"" } } } }));
                    if (event.Get("kubernetes.audit.annotations.authorization_k8s_io/decision") !== "allow") {
                        throw "expected kubernetes.audit.annotations.authorization_k8s_io/decision === allow";
                    }
                  }
      - id: system-metrics
        type: system/metrics
        use_output: default
        meta:
          package:
            name: system
            version: 1.20.4
        data_stream:
          namespace: default
        streams:
          - data_stream:
              dataset: system.cpu
              type: metrics
            period: 10s
            cpu.metrics:
              - percentages
              - normalized_percentages
            metricsets:
              - cpu
          - data_stream:
              dataset: system.diskio
              type: metrics
            period: 10s
            diskio.include_devices: null
            metricsets:
              - diskio
          - data_stream:
              dataset: system.filesystem
              type: metrics
            period: 1m
            metricsets:
              - filesystem
            processors:
              - drop_event.when.regexp:
                  system.filesystem.mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)
          - data_stream:
              dataset: system.fsstat
              type: metrics
            period: 1m
            metricsets:
              - fsstat
            processors:
              - drop_event.when.regexp:
                  system.fsstat.mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)
          - data_stream:
              dataset: system.load
              type: metrics
            condition: '${host.platform} != ''windows'''
            period: 10s
            metricsets:
              - load
          - data_stream:
              dataset: system.memory
              type: metrics
            period: 10s
            metricsets:
              - memory
          - data_stream:
              dataset: system.network
              type: metrics
            period: 10s
            network.interfaces: null
            metricsets:
              - network
          - data_stream:
              dataset: system.process
              type: metrics
            period: 10s
            processes:
              - .*
            process.include_top_n.by_cpu: 5
            process.include_top_n.by_memory: 5
            process.cmdline.cache.enabled: true
            process.cgroups.enabled: false
            process.include_cpu_ticks: false
            metricsets:
              - process
          - data_stream:
              dataset: system.process_summary
              type: metrics
            period: 10s
            metricsets:
              - process_summary
          - data_stream:
              dataset: system.socket_summary
              type: metrics
            period: 10s
            metricsets:
              - socket_summary
          - data_stream:
              type: metrics
              dataset: system.uptime
            metricsets:
              - uptime
            period: 10s
      - id: kubernetes-node-metrics
        type: kubernetes/metrics
        use_output: default
        meta:
          package:
            name: kubernetes
            version: 1.29.2
        data_stream:
          namespace: default
        streams:
          - data_stream:
              dataset: kubernetes.controllermanager
              type: metrics
            metricsets:
              - controllermanager
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            hosts:
              - 'https://${kubernetes.pod.ip}:10257'
            period: 10s
            ssl.verification_mode: none
            condition: ${kubernetes.labels.component} == 'kube-controller-manager'
            # On Openshift the condition should be adjusted:
            # condition: ${kubernetes.labels.app} == 'kube-controller-manager'
          - data_stream:
              dataset: kubernetes.scheduler
              type: metrics
            metricsets:
              - scheduler
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            hosts:
              - 'https://${kubernetes.pod.ip}:10259'
            period: 10s
            ssl.verification_mode: none
            condition: ${kubernetes.labels.component} == 'kube-scheduler'
            # On Openshift the condition should be adjusted:
            # condition: ${kubernetes.labels.app} == 'openshift-kube-scheduler'
          - data_stream:
              dataset: kubernetes.proxy
              type: metrics
            metricsets:
              - proxy
            hosts:
              - 'localhost:10249'
              # On Openshift the port should be adjusted:
              # - 'localhost:29101'
            period: 10s
          - data_stream:
              dataset: kubernetes.container
              type: metrics
            metricsets:
              - container
            add_metadata: true
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            hosts:
              - 'https://${env.NODE_NAME}:10250'
            period: 10s
            ssl.verification_mode: none
            # On Openshift the ssl configuration must be replaced:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /path/to/ca-bundle.crt
          - data_stream:
              dataset: kubernetes.node
              type: metrics
            metricsets:
              - node
            add_metadata: true
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            hosts:
              - 'https://${env.NODE_NAME}:10250'
            period: 10s
            ssl.verification_mode: none
            # On Openshift the ssl configuration must be replaced:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /path/to/ca-bundle.crt
          - data_stream:
              dataset: kubernetes.pod
              type: metrics
            metricsets:
              - pod
            add_metadata: true
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            hosts:
              - 'https://${env.NODE_NAME}:10250'
            period: 10s
            ssl.verification_mode: none
            # On Openshift the ssl configuration must be replaced:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /path/to/ca-bundle.crt
          - data_stream:
              dataset: kubernetes.system
              type: metrics
            metricsets:
              - system
            add_metadata: true
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            hosts:
              - 'https://${env.NODE_NAME}:10250'
            period: 10s
            ssl.verification_mode: none
            # On Openshift the ssl configuration must be replaced:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /path/to/ca-bundle.crt
          - data_stream:
              dataset: kubernetes.volume
              type: metrics
            metricsets:
              - volume
            add_metadata: true
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            hosts:
              - 'https://${env.NODE_NAME}:10250'
            period: 10s
            ssl.verification_mode: none
            # On Openshift ssl configuration must be replaced:
            # bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            # ssl.certificate_authorities:
            #   - /path/to/ca-bundle.crt
      # Add extra input blocks here, based on conditions
      # so as to automatically identify targeted Pods and start monitoring them
      # using a predefined integration. For instance:
      #- id: redis-metrics
      #  type: redis/metrics
      #  use_output: default
      #  meta:
      #    package:
      #      name: redis
      #      version: 0.3.6
      #  data_stream:
      #    namespace: default
      #  streams:
      #    - data_stream:
      #        dataset: redis.info
      #        type: metrics
      #      metricsets:
      #        - info
      #      hosts:
      #        - '${kubernetes.pod.ip}:6379'
      #      idle_timeout: 20s
      #      maxconn: 10
      #      network: tcp
      #      period: 10s
      #      condition: ${kubernetes.labels.app} == 'redis'
---
# For more information refer to https://www.elastic.co/guide/en/fleet/current/running-on-kubernetes-standalone.html
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elastic-agent-standalone
  namespace: kube-system
  labels:
    app: elastic-agent-standalone
spec:
  selector:
    matchLabels:
      app: elastic-agent-standalone
  template:
    metadata:
      labels:
        app: elastic-agent-standalone
    spec:
      # Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
      # Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: elastic-agent-standalone
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      # Uncomment if using hints feature
      #initContainers:
      #  - name: k8s-templates-downloader
      #    image: busybox:1.28
      #    command: ['sh']
      #    args:
      #      - -c
      #      - >-
      #        mkdir -p /etc/elastic-agent/inputs.d &&
      #        wget -O - https://github.com/elastic/elastic-agent/archive/main.tar.gz | tar xz -C /etc/elastic-agent/inputs.d --strip=5 "elastic-agent-main/deploy/kubernetes/elastic-agent-standalone/templates.d"
      #    volumeMounts:
      #      - name: external-inputs
      #        mountPath: /etc/elastic-agent/inputs.d
      containers:
        - name: elastic-agent-standalone
          image: docker.elastic.co/beats/elastic-agent:8.11.0-SNAPSHOT
          args: ["-c", "/etc/elastic-agent/agent.yml", "-e"]
          env:
            # The basic authentication username used to connect to Elasticsearch
            # This user needs the privileges required to publish events to Elasticsearch.
            - name: ES_USERNAME
              value: "elastic"
            # The basic authentication password used to connect to Elasticsearch
            - name: ES_PASSWORD
              value: "changeme"
            # The Elasticsearch host to communicate with
            - name: ES_HOST
              value: "https://elasticsearch:9200"
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: STATE_PATH
              value: "/etc/elastic-agent"
            # The following ELASTIC_NETINFO:false variable will disable the netinfo.enabled option of add-host-metadata processor. This will remove fields host.ip and host.mac.  
            # For more info: https://www.elastic.co/guide/en/beats/metricbeat/current/add-host-metadata.html
            - name: ELASTIC_NETINFO
              value: "false"
          securityContext:
            runAsUser: 0
            # The following capabilities are needed for 'Defend for containers' integration (cloud-defend)
            # If you are using this integration, please uncomment these lines before applying.
            #capabilities:
            #  add:
            #    - BPF # (since Linux 5.8) allows loading of BPF programs, create most map types, load BTF, iterate programs and maps.
            #    - PERFMON # (since Linux 5.8) allows attaching of BPF programs used for performance metrics and observability operations.
            #    - SYS_RESOURCE # Allow use of special resources or raising of resource limits. Used by 'Defend for Containers' to modify 'rlimit_memlock'
            ########################################################################################
            # The following capabilities are needed for Universal Profiling.
            # More fine-grained capabilities are only available for newer Linux kernels.
            # If you are using the Universal Profiling integration, please uncomment these lines before applying.
            #procMount: "Unmasked"
            #privileged: true
            #capabilities:
            #  add:
            #    - SYS_ADMIN
          resources:
            limits:
              memory: 700Mi
            requests:
              cpu: 100m
              memory: 400Mi
          volumeMounts:
            - name: datastreams
              mountPath: /etc/elastic-agent/agent.yml
              readOnly: true
              subPath: agent.yml
            # Uncomment if using hints feature
            #- name: external-inputs
            #  mountPath: /etc/elastic-agent/inputs.d
            - name: proc
              mountPath: /hostfs/proc
              readOnly: true
            - name: cgroup
              mountPath: /hostfs/sys/fs/cgroup
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: etc-full
              mountPath: /hostfs/etc
              readOnly: true
            - name: var-lib
              mountPath: /hostfs/var/lib
              readOnly: true
            - name: sys-kernel-debug
              mountPath: /sys/kernel/debug
            - name: elastic-agent-state
              mountPath: /usr/share/elastic-agent/state
            # If you are using the Universal Profiling integration, please uncomment these lines before applying.
            #- name: universal-profiling-cache
            #  mountPath: /var/cache/Elastic
      volumes:
        - name: datastreams
          configMap:
            defaultMode: 0640
            name: agent-node-datastreams
        # Uncomment if using hints feature
        #- name: external-inputs
        #  emptyDir: {}
        - name: proc
          hostPath:
            path: /proc
        - name: cgroup
          hostPath:
            path: /sys/fs/cgroup
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # The following volumes are needed for Cloud Security Posture integration (cloudbeat)
        # If you are not using this integration, then these volumes and the corresponding
        # mounts can be removed.
        - name: etc-full
          hostPath:
            path: /etc
        - name: var-lib
          hostPath:
            path: /var/lib
        # Needed for 'Defend for containers' integration (cloud-defend) and Universal Profiling
        # If you are not using one of these integrations, then these volumes and the corresponding
        # mounts can be removed.
        - name: sys-kernel-debug
          hostPath:
            path: /sys/kernel/debug
        # Mount /var/lib/elastic-agent-managed/kube-system/state to store elastic-agent state
        # Update 'kube-system' with the namespace of your agent installation
        - name: elastic-agent-state
          hostPath:
            path: /var/lib/elastic-agent-standalone/kube-system/state
            type: DirectoryOrCreate
        # Mount required for Universal Profiling.
        # If you are using the Universal Profiling integration, please uncomment these lines before applying.
        #- name: universal-profiling-cache
        #  hostPath:
        #    path: /var/cache/Elastic
        #    type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent-standalone
subjects:
  - kind: ServiceAccount
    name: elastic-agent-standalone
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: elastic-agent-standalone
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: kube-system
  name: elastic-agent-standalone
subjects:
  - kind: ServiceAccount
    name: elastic-agent-standalone
    namespace: kube-system
roleRef:
  kind: Role
  name: elastic-agent-standalone
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: elastic-agent-standalone-kubeadm-config
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: elastic-agent-standalone
    namespace: kube-system
roleRef:
  kind: Role
  name: elastic-agent-standalone-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent-standalone
  labels:
    k8s-app: elastic-agent-standalone
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - namespaces
      - events
      - pods
      - services
      - configmaps
      # Needed for cloudbeat
      - serviceaccounts
      - persistentvolumes
      - persistentvolumeclaims
    verbs: ["get", "list", "watch"]
  # Enable this rule only if planning to use the kubernetes_secrets provider
  #- apiGroups: [""]
  #  resources:
  #  - secrets
  #  verbs: ["get"]
  - apiGroups: ["extensions"]
    resources:
      - replicasets
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources:
      - statefulsets
      - deployments
      - replicasets
      - daemonsets
    verbs: ["get", "list", "watch"]
  - apiGroups: ["batch"]
    resources:
      - jobs
      - cronjobs
    verbs: ["get", "list", "watch"]
  - apiGroups:
      - ""
    resources:
      - nodes/stats
    verbs:
      - get
  # Needed for apiserver
  - nonResourceURLs:
      - "/metrics"
    verbs:
      - get
  # Needed for cloudbeat
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources:
      - clusterrolebindings
      - clusterroles
      - rolebindings
      - roles
    verbs: ["get", "list", "watch"]
  # Needed for cloudbeat
  - apiGroups: ["policy"]
    resources:
      - podsecuritypolicies
    verbs: ["get", "list", "watch"]
  - apiGroups: [ "storage.k8s.io" ]
    resources:
      - storageclasses
    verbs: [ "get", "list", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: elastic-agent-standalone
  # Should be the namespace where elastic-agent is running
  namespace: kube-system
  labels:
    k8s-app: elastic-agent-standalone
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: elastic-agent-standalone-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: elastic-agent-standalone
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent-standalone
  namespace: kube-system
  labels:
    k8s-app: elastic-agent-standalone
---

Changes

For each data stream, the following fields were removed:

state_container
  • kubernetes.selectors.*
  • kubernetes.container.image
state_cronjob
  • kubernetes.pod.name
  • kubernetes.pod.uid
  • kubernetes.pod.ip
  • kubernetes.node.name
  • kubernetes.node.hostname
  • kubernetes.selectors.*
  • kubernetes.replicaset.name
  • kubernetes.deployment.name
  • kubernetes.statefulset.name
  • kubernetes.container.name
  • kubernetes.container.image
state_daemonset
  • kubernetes.pod.name
  • kubernetes.pod.uid
  • kubernetes.pod.ip
  • kubernetes.node.name
  • kubernetes.node.hostname
  • kubernetes.selectors.*
  • kubernetes.replicaset.name
  • kubernetes.deployment.name
  • kubernetes.statefulset.name
  • kubernetes.container.name
  • kubernetes.container.image
state_deployment
  • kubernetes.pod.name
  • kubernetes.pod.uid
  • kubernetes.pod.ip
  • kubernetes.node.name
  • kubernetes.node.hostname
  • kubernetes.selectors.*
  • kubernetes.replicaset.name
  • kubernetes.statefulset.name
  • kubernetes.container.name
  • kubernetes.container.image
state_job
  • kubernetes.pod.name
  • kubernetes.pod.uid
  • kubernetes.pod.ip
  • kubernetes.node.name
  • kubernetes.node.hostname
  • kubernetes.selectors.*
  • kubernetes.replicaset.name
  • kubernetes.deployment.name
  • kubernetes.statefulset.name
  • kubernetes.container.name
  • kubernetes.container.image
state_namespace: newly introduced, nothing to delete.
state_node
  • kubernetes.pod.name
  • kubernetes.pod.uid
  • kubernetes.pod.ip
  • kubernetes.namespace
  • kubernetes.node.hostname
  • kubernetes.selectors.*
  • kubernetes.replicaset.name
  • kubernetes.deployment.name
  • kubernetes.statefulset.name
  • kubernetes.container.name
  • kubernetes.container.image
state_persistentvolume
  • kubernetes.pod.name
  • kubernetes.pod.uid
  • kubernetes.pod.ip
  • kubernetes.namespace
  • kubernetes.node.name
  • kubernetes.node.hostname
  • kubernetes.selectors.*
  • kubernetes.replicaset.name
  • kubernetes.deployment.name
  • kubernetes.statefulset.name
  • kubernetes.container.name
  • kubernetes.container.image
state_persistentvolumeclaim
  • kubernetes.pod.name
  • kubernetes.pod.uid
  • kubernetes.pod.ip
  • kubernetes.namespace
  • kubernetes.node.name
  • kubernetes.selectors.*
  • kubernetes.replicaset.name
  • kubernetes.deployment.name
  • kubernetes.statefulset.name
  • kubernetes.container.name
  • kubernetes.container.image
state_pod
  • kubernetes.selectors.*
  • kubernetes.container.name
  • kubernetes.container.image
state_replicaset
  • kubernetes.pod.name
  • kubernetes.pod.uid
  • kubernetes.pod.ip
  • kubernetes.node.name
  • kubernetes.node.hostname
  • kubernetes.selectors.*
  • kubernetes.statefulset.name
  • kubernetes.container.name
  • kubernetes.container.image
state_resourcequota
  • kubernetes.pod.name
  • kubernetes.pod.uid
  • kubernetes.pod.ip
  • kubernetes.node.name
  • kubernetes.selectors.*
  • kubernetes.replicaset.name
  • kubernetes.deployment.name
  • kubernetes.statefulset.name
  • kubernetes.container.name
  • kubernetes.container.image
state_service
  • kubernetes.pod.name
  • kubernetes.pod.uid
  • kubernetes.pod.ip
  • kubernetes.replicaset.name
  • kubernetes.deployment.name
  • kubernetes.statefulset.name
  • kubernetes.container.name
  • kubernetes.container.image
state_statefulset
  • kubernetes.pod.name
  • kubernetes.pod.uid
  • kubernetes.pod.ip
  • kubernetes.node.name
  • kubernetes.node.hostname
  • kubernetes.selectors.*
  • kubernetes.replicaset.name
  • kubernetes.deployment.name
  • kubernetes.container.name
  • kubernetes.container.image
state_storageclass
  • kubernetes.pod.name
  • kubernetes.pod.uid
  • kubernetes.pod.ip
  • kubernetes.node.name
  • kubernetes.node.hostname
  • kubernetes.namespace
  • kubernetes.selectors.*
  • kubernetes.replicaset.name
  • kubernetes.deployment.name
  • kubernetes.statefulset.name
  • kubernetes.container.name
  • kubernetes.container.image
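The "never hold any value" claim above can be checked mechanically against exported sample documents. A minimal sketch (the field names, documents, and helper below are illustrative only, not the integration's actual mapping or real events):

```python
# Hedged sketch: find mapped fields that never hold a value in sample documents.
# All names here are illustrative, not the integration's real mapping.

def unused_fields(mapped_fields, documents):
    """Return the mapped fields that are absent or empty in every document."""
    def lookup(doc, dotted):
        current = doc
        for part in dotted.split("."):
            if not isinstance(current, dict) or part not in current:
                return None
            current = current[part]
        return current

    used = set()
    for doc in documents:
        for field in mapped_fields:
            if lookup(doc, field) not in (None, "", [], {}):
                used.add(field)
    return sorted(set(mapped_fields) - used)

docs = [
    {"kubernetes": {"statefulset": {"name": "web"}, "namespace": "default"}},
    {"kubernetes": {"statefulset": {"name": "db"}}},
]
fields = [
    "kubernetes.statefulset.name",
    "kubernetes.pod.name",       # never populated in these samples
    "kubernetes.namespace",
]
print(unused_fields(fields, docs))  # ['kubernetes.pod.name']
```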

Results

The expected result is that nothing breaks and everything keeps running as before.

I built the package and updated the policy. It was updated as expected:
Screenshot from 2023-11-03 11-59-43

I also checked every dashboard and all were still working as before (not including screenshots to not overwhelm this description).

Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
@constanca-m constanca-m requested a review from a team as a code owner November 3, 2023 11:12
@constanca-m constanca-m self-assigned this Nov 3, 2023
@elasticmachine

elasticmachine commented Nov 3, 2023

💚 Build Succeeded



Build stats

  • Start Time: 2023-11-23T10:01:51.638+0000

  • Duration: 58 min 11 sec

Test stats 🧪

Test Results

| Failed | Passed | Skipped | Total |
|--------|--------|---------|-------|
| 0      | 97     | 0       | 97    |

🤖 GitHub comments


To re-run your PR in the CI, just comment with:

  • /test : Re-trigger the build.

@elasticmachine

elasticmachine commented Nov 3, 2023

🌐 Coverage report

| Name         | Metrics % (covered/total) | Diff      |
|--------------|---------------------------|-----------|
| Packages     | 100.0% (1/1)              | 💚        |
| Files        | 100.0% (1/1)              | 💚        |
| Classes      | 100.0% (1/1)              | 💚        |
| Methods      | 96.386% (80/83)           | 👎 -3.614 |
| Lines        | 100.0% (22/22)            | 💚 1.537  |
| Conditionals | 100.0% (0/0)              | 💚        |

@gizas
Contributor

gizas commented Nov 3, 2023

Warning: kubernetes.deployment.name had no values, even though there were deployments in the cluster... Maybe there is a bug in the code, as it does not seem to be expected. I did not remove this field. Same for kubernetes.cronjob.name.

Since elastic-agent-autodiscover 0.6.4 (elastic/elastic-agent-autodiscover@285f0bb) we disable deployment and cronjob enrichment by default.

So you need to enable those specifically in the advanced options to make the deployment and cronjob names appear.
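A minimal sketch of the provider settings that turn this enrichment back on (option names follow the add_resource_metadata settings already used in the manifest in the description):

```yaml
add_resource_metadata:
  deployment: true   # re-enables kubernetes.deployment.name enrichment
  cronjob: true      # re-enables kubernetes.cronjob.name enrichment
```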

but not for node.annotations.* (same problem as kubernetes.annotations.*).

The annotations are not included by default. See documentation here: https://www.elastic.co/guide/en/fleet/current/add_kubernetes_metadata-processor.html

So you need to add include_annotations under the node and namespace fields.
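For instance, mirroring the manifest in the description (the annotation keys are just the ones present in that test environment):

```yaml
add_resource_metadata:
  namespace:
    include_annotations:
      - example
  node:
    include_annotations:
      - node.alpha.kubernetes.io/ttl
```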

FYI there is also elastic/elastic-agent#3636 open that is related

@constanca-m
Contributor Author

FYI there is also this elastic/elastic-agent#3636 open that is related

I see this is about node labels? I can see them in Discover; only node annotations are not there.

Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
Member

@ChrsMark ChrsMark left a comment

@constanca-m I would suggest moving the Changes part of the PR's description into a more obvious place (at the beginning of the description).

I have left some comments about fields that most probably should not be removed. We should check the code and ensure that we don't miss something here. Just testing that the integration works with default settings is not enough for removing stuff.

Also, since I see you have a section Bugs and warnings with some open questions/concerns, why not first open an issue and call for feedback/discussion there instead of directly opening a PR with "risky" changes? That way, a possible change is discussed carefully before moving to the actual implementation.

Comment thread packages/kubernetes/data_stream/state_pod/fields/base-fields.yml
description: >
Kubernetes node name

- name: node.hostname
Member

Should this really be removed? Do we know why it is here right now?

Contributor Author

I can see if I can find some background on this field. But even checking with kubectl describe node ..., there is nothing close to node.hostname
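For reference, the kubelet does report a hostname on the Node object, but under status.addresses rather than as a top-level field (it shows up in `kubectl describe node` under Addresses). A minimal sketch of pulling it out of a node JSON (the sample object is hypothetical, not from a real cluster):

```python
# Hedged sketch: extract the Hostname-type address from a Node object's
# status.addresses, as returned by `kubectl get node <name> -o json`.
# The sample object below is illustrative only.

def node_hostname(node):
    """Return the Hostname-type address from a Node object, if any."""
    for addr in node.get("status", {}).get("addresses", []):
        if addr.get("type") == "Hostname":
            return addr.get("address")
    return None

sample_node = {
    "metadata": {"name": "worker-1"},
    "status": {
        "addresses": [
            {"type": "InternalIP", "address": "10.0.0.4"},
            {"type": "Hostname", "address": "worker-1"},
        ]
    },
}
print(node_hostname(sample_node))  # worker-1
```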

Contributor

I would say keep it for now, as removing it might break old versions if we are not 100% sure.

Contributor Author

The PR that introduced this field is this one. I will be adding it again.

Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
@constanca-m
Contributor Author

/test

Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
@constanca-m
Contributor Author

Just testing that the integration works with default settings is not enough for removing stuff.

I tested all of them again with EA standalone, @ChrsMark, and I added the manifest I used to the description. In summary, I just added add_metadata and add_resource_metadata to all data streams. I could find values for namespace_annotations for most, but again, no values for kubernetes.annotations.* at all.

@constanca-m constanca-m requested a review from ChrsMark November 13, 2023 05:50
description: >
Kubernetes annotations map

- name: selectors.*
Contributor

Selectors should stay here

According to https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/, a daemonset can have selectors.

Contributor Author

Are you sure? @ChrsMark mentioned the implementation PR above

Contributor

OK, I was thinking that we might need them in the future. But as long as only services support that, we can remove it.
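For context on where selectors live, per the discussion above: kubernetes.selectors.* is populated only from a Service's flat spec.selector map, while workload controllers such as DaemonSets do declare selectors, but as spec.selector.matchLabels. A hedged sketch with illustrative resource names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis              # illustrative
spec:
  selector:                # flat map -> kubernetes.selectors.* (state_service)
    app: redis
  ports:
    - port: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent         # illustrative
spec:
  selector:
    matchLabels:           # selectors exist here too, but are not exported
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: example/agent:1.0
```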

description: >
Kubernetes annotations map

- name: selectors.*
Contributor

Same for deployment selectors should stay

Contributor Author

Here is the comment with the link to the implementation PR that introduces selectors.* only for state_service

Contributor

ok

description: >
Kubernetes annotations map

- name: selectors.*
Contributor

Same should stay

Contributor Author

Here is the comment with the link to the implementation PR that introduces selectors.* only for state_service

type: keyword
description: >-
Kubernetes container image
Kubernetes annotations map
Contributor

Should this line be here?

Contributor Author

Yes, it is the description for the annotations.* above; the other lines were removed.

description: >
Kubernetes annotations map

- name: selectors.*
Contributor

Should stay, I think.

Signed-off-by: constanca-m <constanca.manteigas@elastic.co>
Member

@ChrsMark ChrsMark left a comment

Anything missing from this one?
If the previous comments are covered, maybe resolve them so as to have a clear view of what is pending and what is not.

@constanca-m
Contributor Author

Anything missing from this one?

I believe nothing is missing. @ChrsMark

@gizas Do you want to add anything? I did not mark some of your comments as resolved since I don't know if there is something else you want to say

Contributor

@gizas gizas left a comment

LGTM, I also did some tests with kind locally and all seems OK.

Can I also suggest running some tests in GKE, just to see if anything changes there? I don't think it will, but just in case.

@constanca-m constanca-m merged commit c326c9f into elastic:main Nov 23, 2023
@constanca-m constanca-m deleted the remove-base-fields branch November 23, 2023 11:16