[Otel Collector] Kubernetes Attribute Processor is Not Attaching k8s Metadata to Spans/Traces #10257
Cluster Environment
Helm Charts
Problem

As mentioned in the OTel documentation, the k8sattributes processor should automatically enrich span, metric, and log resource attributes with Kubernetes metadata. I enabled the `kubernetesAttributes` preset, but no metadata is being attached. Here is my configuration:

```yaml
mode: deployment
image:
  repository: otel/opentelemetry-collector-k8s
# We only want one of these collectors - any more and we'd produce duplicate data
replicaCount: 1
presets:
  # enables the k8sclusterreceiver and adds it to the metrics pipelines
  clusterMetrics:
    enabled: true
  # enables the k8sobjectsreceiver to collect events only and adds it to the logs pipelines
  kubernetesEvents:
    enabled: true
  kubernetesAttributes:
    enabled: true
    extractAllPodLabels: true
    extractAllPodAnnotations: true
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
  - key: CriticalAddonsOnly
    operator: Exists
    effect: NoExecute
podMonitor:
  enabled: true
serviceMonitor:
  enabled: true
ports:
  metrics:
    enabled: true
config:
  processors:
    k8sattributes:
      pod_association:
        - sources:
            - from: resource_attribute
              name: k8s.pod.ip
        - sources:
            - from: resource_attribute
              name: k8s.pod.name
            - from: resource_attribute
              name: k8s.namespace.name
  exporters:
    otlp:
      endpoint: tempo-distributor.tempo.svc.cluster.local:4317
      tls:
        insecure: true
    otlphttp:
      endpoint: http://tempo-distributor.tempo.svc.cluster.local:4318
      tls:
        insecure: true
  receivers:
    otlp:
      protocols:
        http:
          endpoint: 0.0.0.0:4318
        grpc:
          endpoint: 0.0.0.0:4317
    zipkin:
      endpoint: 0.0.0.0:9411
  service:
    telemetry:
      logs:
        level: DEBUG
        development: true
        encoding: json
    pipelines:
      logs:
        exporters: [ debug ]
        processors: [ batch, k8sattributes ]
      traces:
        processors: [ batch, k8sattributes ]
        receivers: [ otlp, zipkin ]
        exporters: [ otlp, otlphttp ]
```

In this context, I have a Node backend instrumented with OTel that sends telemetry to this collector. Looking at the resulting traces, the K8s metadata is not attached to the spans.

Logs

The debug logs don't really tell me much about the traces received from the Node backend. The only kind of logs I get look like the following:

```json
{
"level": "debug",
"ts": 1717007040.3467577,
"caller": "[email protected]/otlp.go:153",
"msg": "Preparing to make HTTP request",
"kind": "exporter",
"data_type": "traces",
"name": "otlphttp",
"url": "http://tempo-distributor.tempo.svc.cluster.local:4318/v1/traces"
} {"level":"debug","ts":1717014583.9830065,"caller":"[email protected]/processor.go:122","msg":"evaluating pod identifier","kind":"processor","name":"k8sattributes","pipeline":"metrics","value":[{"Source":{"From":"resource_attribute","Name":"k8s.pod.name"},"Value":"naro-depl-98ddbd877-9gd7d"},{"Source":{"From":"resource_attribute","Name":"k8s.namespace.name"},"Value":"default"},{"Source":{"From":"","Name":""},"Value":""},{"Source":{"From":"","Name":""},"Value":""}]} DiscussionWith all said, I was expecting the k8s attribute processor to attach the metadata to the resource attributes, but that didn't seem to be the case. I'm also wondering if it has to do something with my nodes being tainted, or if the Otel collector needs to be deployed on all server nodes, or if some other configuration is needed within Tempo. Any suggestion or insights would be much appreciated! |
Replies: 1 comment
Okay, after including the following source in my `pod_association`, the processor seems to attach the k8s metadata to the resource attributes:

```yaml
processors:
  k8sattributes:
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.ip
      - sources: # this source seems to help identify the pods and the k8s metadata
          - from: resource_attribute
            name: k8s.pod.uid
      - sources:
          - from: connection
```
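For the `resource_attribute` sources to match at all, the application has to put those attributes on its resource in the first place. Below is a minimal sketch of one way to do that for the Node backend (a hypothetical container-spec snippet; the `K8S_POD_IP`/`K8S_POD_UID` env var names are placeholders I chose), using the Kubernetes Downward API together with the standard `OTEL_RESOURCE_ATTRIBUTES` SDK environment variable:

```yaml
# Hypothetical env section for the instrumented backend's container spec.
# The Downward API injects the pod's IP and UID, and OTEL_RESOURCE_ATTRIBUTES
# makes the OTel SDK attach them as resource attributes that the
# k8sattributes processor's pod_association rules can match on.
env:
  - name: K8S_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: K8S_POD_UID
    valueFrom:
      fieldRef:
        fieldPath: metadata.uid
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "k8s.pod.ip=$(K8S_POD_IP),k8s.pod.uid=$(K8S_POD_UID)"
```

The `from: connection` fallback, as I understand it, associates by the peer address of the incoming connection, so it only helps when pods send telemetry to the collector directly rather than through an intermediate proxy or load balancer.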