[9.3] [APM][Infra] Fix OTel metrics mapping in infrastructure tab (#259552)#260988

Merged
kibanamachine merged 1 commit into elastic:9.3 from kibanamachine:backport/9.3/pr-259552
Apr 2, 2026

Conversation

@kibanamachine
Contributor

Backport

This will backport the following commits from `main` to `9.3`:

Questions?

Please refer to the Backport tool documentation


## Summary

Closes elastic#256731

Fix OTel metrics in the APM Infrastructure tab so hosts, pods, and
containers display actual values instead of `N/A`. The root causes were:
(1) hosts queried `metrics.system.memory.limit`, a field that doesn't
exist in `hostmetricsreceiver` data, (2) pod and container configs
queried `_limit_utilization` fields that only exist when Kubernetes
resource limits are explicitly set — which most deployments don't have,
and (3) all OTel dataset filters only matched `event.dataset`, missing
documents indexed under `data_stream.dataset`.

## Demo

### Before

https://github.com/user-attachments/assets/63193175-7893-47fa-8a82-ff76924908fb

### After

https://github.com/user-attachments/assets/440676f7-4168-4c13-9227-1b3b6bc74e57

## Problem

For OTel entities, the Infrastructure tab in APM showed `N/A` for most
metrics even when semconv data was present in Elasticsearch:

- **Hosts**: The `metrics.system.memory.limit` field doesn't exist in
`hostmetricsreceiver` data, causing Memory Total to show `N/A`.
- **Pods**: CPU used `metrics.k8s.pod.cpu_limit_utilization` (requires
CPU limits) and memory used `metrics.k8s.pod.memory_limit_utilization`
(requires memory limits). Both return empty results when limits aren't
set.
- **Containers (K8s)**: CPU used
`metrics.k8s.container.cpu_limit_utilization` and memory used
`metrics.k8s.container.memory_limit_utilization` — same limits-only
problem.
- **Dataset filters**: All OTel paths only matched `event.dataset`,
missing documents indexed under `data_stream.dataset`.

## Field mapping changes

### Hosts

| Metric | Before | After | Why |
|---|---|---|---|
| Memory total | `metrics.system.memory.limit` | Derived: `metrics.system.memory.usage / metrics.system.memory.utilization` | `memory.limit` doesn't exist in hostmetricsreceiver; total is derived from usage and utilization ratio |
| Dataset filter | `event.dataset: "hostmetricsreceiver.otel"` | `(data_stream.dataset: "hostmetricsreceiver.otel" OR event.dataset: "hostmetricsreceiver.otel")` | Match both field locations |
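The derivation in the table above can be sketched as follows. This is an illustrative helper (the function name and guard logic are assumptions, not the actual Kibana implementation): hostmetricsreceiver emits memory usage in bytes and utilization as a 0–1 ratio, so total memory falls out as usage divided by utilization.

```typescript
// Hypothetical sketch of the "Memory total" derivation -- not the real
// Kibana code. hostmetricsreceiver emits metrics.system.memory.usage (bytes)
// and metrics.system.memory.utilization (0-1 ratio), so total = usage / ratio.
function deriveHostMemoryTotal(
  memoryUsageBytes: number,
  memoryUtilization: number // 0-1 ratio
): number | null {
  // Guard against missing data or a zero ratio, which would otherwise
  // reintroduce the N/A (or produce Infinity) the fix is trying to avoid.
  if (!Number.isFinite(memoryUsageBytes) || !(memoryUtilization > 0)) {
    return null;
  }
  return memoryUsageBytes / memoryUtilization;
}

// e.g. 4 GiB used at 50% utilization implies an 8 GiB host
```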

### Pods

| Metric | Before | After | Why |
|---|---|---|---|
| CPU | `metrics.k8s.pod.cpu_limit_utilization` | `metrics.k8s.pod.cpu.node.utilization` | `cpu_limit_utilization` requires resource limits; `cpu.node.utilization` is always emitted by kubeletstats |
| Memory | `metrics.k8s.pod.memory_limit_utilization` | `metrics.k8s.pod.memory_limit_utilization` with fallback to `metrics.k8s.pod.memory.working_set` | Queries both; prefers `memory_limit_utilization` (shown as %) when available, falls back to `memory.working_set` (shown as MB) to avoid N/A |
| Dataset filter | `event.dataset: "kubeletstatsreceiver.otel"` | `(data_stream.dataset: "kubeletstatsreceiver.otel" OR event.dataset: "kubeletstatsreceiver.otel")` | Match both field locations |
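The pod memory fallback order can be sketched like this (the interface and function names are hypothetical, chosen only to illustrate the precedence described in the table): prefer `memory_limit_utilization` rendered as a percentage, then fall back to `memory.working_set` rendered as MB, and only show `N/A` when neither field is present.

```typescript
// Illustrative sketch of the pod memory fallback -- names are assumptions,
// not the actual Kibana types.
interface PodMemoryMetrics {
  memoryLimitUtilization?: number; // metrics.k8s.pod.memory_limit_utilization (0-1)
  memoryWorkingSetBytes?: number; // metrics.k8s.pod.memory.working_set (bytes)
}

function formatPodMemory(m: PodMemoryMetrics): string {
  // Prefer the limit-based percentage when K8s resource limits are set.
  if (m.memoryLimitUtilization !== undefined) {
    return `${(m.memoryLimitUtilization * 100).toFixed(1)}%`;
  }
  // Otherwise fall back to the always-emitted working set, shown in MB.
  if (m.memoryWorkingSetBytes !== undefined) {
    return `${(m.memoryWorkingSetBytes / (1024 * 1024)).toFixed(1)} MB`;
  }
  return 'N/A';
}
```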

### Containers (K8s path)

| Metric | Before | After | Why |
|---|---|---|---|
| CPU | `metrics.k8s.container.cpu_limit_utilization` | `metrics.container.cpu.usage` | `cpu_limit_utilization` requires resource limits; `container.cpu.usage` is always emitted by kubeletstats (0–1 ratio of one CPU core) |
| Memory | `metrics.k8s.container.memory_limit_utilization` | `metrics.container.memory.working_set` | `memory_limit_utilization` requires resource limits; `memory.working_set` (bytes → MB) is always available |
| Memory unit | Always `%` for OTel | `MB` for K8s containers, `%` for Docker containers | K8s path now uses `working_set` (bytes), not a percentage |
| Dataset filter | `event.dataset: "kubeletstatsreceiver.otel"` | `(data_stream.dataset: "kubeletstatsreceiver.otel" OR event.dataset: "kubeletstatsreceiver.otel")` | Match both field locations |
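The per-path memory unit change can be sketched as below (names are assumptions for illustration): the K8s path now reads `working_set` bytes and converts to MB, while the Docker path still reads a 0–1 utilization ratio and renders a percentage, so the column unit depends on the container runtime.

```typescript
// Hypothetical sketch of the runtime-dependent memory unit -- not the
// actual Kibana implementation.
type ContainerPath = 'k8s' | 'docker';

function containerMemoryUnit(path: ContainerPath): 'MB' | '%' {
  return path === 'k8s' ? 'MB' : '%';
}

function formatContainerMemory(path: ContainerPath, value: number): string {
  return path === 'k8s'
    ? `${(value / (1024 * 1024)).toFixed(1)} MB` // working_set bytes -> MB
    : `${(value * 100).toFixed(1)}%`; // limit utilization ratio -> %
}
```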

### Containers (Docker path)

| Metric | Before | After | Why |
|---|---|---|---|
| Dataset filter | `event.dataset: "dockerstatsreceiver.otel"` | `(data_stream.dataset: "dockerstatsreceiver.otel" OR event.dataset: "dockerstatsreceiver.otel")` | Match both field locations |

## Other changes

- **Pod memory tooltip**: Added an `EuiIconTip` explaining the fallback
logic (prefers `memory_limit_utilization` as %, falls back to
`memory.working_set` as MB).
- **Pod CPU tooltip removed**: The old tooltip warned that
`cpu_limit_utilization` was optional. The new field
(`cpu.node.utilization`) is always present, making the tooltip
misleading.
- **OTel dataset filter helper**: Extracted `otelDatasetFilter()`
utility to avoid duplicating the `(data_stream.dataset OR
event.dataset)` pattern.
- **Host OTel unpack path**: Added `metricByFieldOtel` /
`unpackMetricOtel` so the host table correctly reads OTel metric
positions instead of falling through to ECS metric keys.
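The extracted filter helper might look like the sketch below. The name `otelDatasetFilter()` comes from the PR, but the signature and the KQL-string return shape are assumptions; the point is that the `(data_stream.dataset OR event.dataset)` pattern is built in one place instead of being duplicated per entity type.

```typescript
// Sketch of the extracted helper (signature and return shape assumed).
// Matches documents regardless of whether the dataset was indexed under
// data_stream.dataset or event.dataset.
function otelDatasetFilter(dataset: string): string {
  return `(data_stream.dataset: "${dataset}" OR event.dataset: "${dataset}")`;
}

// Usage per entity type, e.g.:
//   otelDatasetFilter('hostmetricsreceiver.otel')
//   otelDatasetFilter('kubeletstatsreceiver.otel')
//   otelDatasetFilter('dockerstatsreceiver.otel')
```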

## Test plan

- [x] `yarn test:jest x-pack/solutions/observability/plugins/metrics_data_access/public/components/infrastructure_node_metrics_tables/` — all 46 tests pass
- [x] Manual smoke in APM Infrastructure tab with OTel service data (e.g. `kbn-otel-demo` with EDOT Collector):
  - [ ] Hosts tab: CPU count, CPU %, Memory total, Memory % all populate
  - [x] Pods tab: CPU % populates; Memory shows % (with limits) or MB (without limits)
  - [x] Containers tab: CPU % and Memory MB populate for K8s containers

(cherry picked from commit c6485d7)
@kibanamachine kibanamachine added the backport This PR is a backport of another PR label Apr 2, 2026
@kibanamachine kibanamachine enabled auto-merge (squash) April 2, 2026 17:01
@kibanamachine kibanamachine merged commit 527a065 into elastic:9.3 Apr 2, 2026
21 checks passed
@elasticmachine
Contributor

💚 Build Succeeded

Metrics [docs]

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

| id | before | after | diff |
|---|---|---|---|
| metricsDataAccess | 81.7KB | 82.1KB | +381.0B |

cc @rmyz

rmyz added a commit that referenced this pull request Apr 3, 2026
…259552) (#261020)

# Backport

This will backport the following commits from `main` to `8.19`:
- [[APM][Infra] Fix OTel metrics mapping in infrastructure tab
(#259552)](#259552)


### Questions?
Please refer to the [Backport tool documentation](https://github.com/sorenlouv/backport)


---------

Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
