
[APM][Infra] Fix OTel metrics mapping in infrastructure tab#259552

Merged
rmyz merged 13 commits into elastic:main from rmyz:256731-apm-fix-infrastructure-tab-otel
Apr 2, 2026

Conversation

@rmyz
Contributor

@rmyz rmyz commented Mar 25, 2026

Summary

Closes #256731

Fix OTel metrics in the APM Infrastructure tab so hosts, pods, and containers display actual values instead of `N/A`. The root causes were: (1) hosts queried `metrics.system.memory.limit`, a field that doesn't exist in `hostmetricsreceiver` data, (2) pod and container configs queried `_limit_utilization` fields that only exist when Kubernetes resource limits are explicitly set, which most deployments don't have, and (3) all OTel dataset filters only matched `event.dataset`, missing documents indexed under `data_stream.dataset`.

Demo

Before

Kapture.2026-04-02.at.09.26.23.mp4

After

Kapture.2026-04-02.at.09.12.09.mp4

Problem

For OTel entities, the Infrastructure tab in APM showed N/A for most metrics even when semconv data was present in Elasticsearch:

  • Hosts: The `metrics.system.memory.limit` field doesn't exist in `hostmetricsreceiver` data, causing Memory Total to show `N/A`.
  • Pods: CPU used `metrics.k8s.pod.cpu_limit_utilization` (requires CPU limits) and memory used `metrics.k8s.pod.memory_limit_utilization` (requires memory limits). Both return empty results when limits aren't set.
  • Containers (K8s): CPU used `metrics.k8s.container.cpu_limit_utilization` and memory used `metrics.k8s.container.memory_limit_utilization`, the same limits-only problem.
  • Dataset filters: All OTel paths only matched `event.dataset`, missing documents indexed under `data_stream.dataset`.

Field mapping changes

Hosts

| Metric | Before | After | Why |
|---|---|---|---|
| Memory total | `metrics.system.memory.limit` | Derived: `metrics.system.memory.usage / metrics.system.memory.utilization` | `memory.limit` doesn't exist in hostmetricsreceiver; total is derived from usage and utilization ratio |
| Dataset filter | `event.dataset: "hostmetricsreceiver.otel"` | `(data_stream.dataset: "hostmetricsreceiver.otel" OR event.dataset: "hostmetricsreceiver.otel")` | Match both field locations |
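The derived Memory Total is simple arithmetic: total = usage / utilization. A minimal TypeScript sketch with hypothetical names (the actual change expresses this as an Elasticsearch bucket_script over the semconv fields):

```typescript
// Sketch: derive host memory total from usage and utilization.
// Hypothetical helper; the real change expresses this as an
// Elasticsearch bucket_script over the semconv fields.
function deriveMemoryTotalBytes(
  usageBytes: number | null, // metrics.system.memory.usage
  utilizationRatio: number | null // metrics.system.memory.utilization (0–1)
): number | null {
  if (usageBytes == null || utilizationRatio == null) return null;
  // Guard against division by zero, as in the follow-up commit that
  // changed the Painless equation to avoid Infinity.
  if (utilizationRatio <= 0) return null;
  return usageBytes / utilizationRatio;
}

// 4 GB used at 50% utilization implies 8 GB total.
console.log(deriveMemoryTotalBytes(4_000_000_000, 0.5)); // 8000000000
```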

Pods

| Metric | Before | After | Why |
|---|---|---|---|
| CPU | `metrics.k8s.pod.cpu_limit_utilization` | `metrics.k8s.pod.cpu.node.utilization` | `cpu_limit_utilization` requires resource limits; `cpu.node.utilization` is always emitted by kubeletstats |
| Memory | `metrics.k8s.pod.memory_limit_utilization` | `metrics.k8s.pod.memory_limit_utilization` with fallback to `metrics.k8s.pod.memory.working_set` | Queries both; prefers `memory_limit_utilization` (shown as %) when available, falls back to `memory.working_set` (shown as MB) to avoid N/A |
| Dataset filter | `event.dataset: "kubeletstatsreceiver.otel"` | `(data_stream.dataset: "kubeletstatsreceiver.otel" OR event.dataset: "kubeletstatsreceiver.otel")` | Match both field locations |
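The pod-memory fallback described above can be sketched as follows (hypothetical function and types; the real logic lives in the pod metrics config):

```typescript
// Sketch of the pod-memory fallback. Hypothetical names; the real
// config queries both fields and picks the unit accordingly.
interface PodMemoryResult {
  value: number | null;
  unit: '%' | 'MB';
}

function resolvePodMemory(
  limitUtilization: number | null, // metrics.k8s.pod.memory_limit_utilization (0–1)
  workingSetBytes: number | null // metrics.k8s.pod.memory.working_set (bytes)
): PodMemoryResult {
  // Prefer limit utilization when K8s resource limits are set.
  if (limitUtilization != null) {
    return { value: limitUtilization * 100, unit: '%' };
  }
  // Otherwise fall back to working-set bytes, shown in MB, to avoid N/A.
  if (workingSetBytes != null) {
    return { value: Math.floor(workingSetBytes / 1_000_000), unit: 'MB' };
  }
  return { value: null, unit: '%' };
}
```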

Containers (K8s path)

| Metric | Before | After | Why |
|---|---|---|---|
| CPU | `metrics.k8s.container.cpu_limit_utilization` | `metrics.container.cpu.usage` | `cpu_limit_utilization` requires resource limits; `container.cpu.usage` is always emitted by kubeletstats (0–1 ratio of one CPU core) |
| Memory | `metrics.k8s.container.memory_limit_utilization` | `metrics.container.memory.working_set` | `memory_limit_utilization` requires resource limits; `memory.working_set` (bytes → MB) is always available |
| Memory unit | Always `%` for OTel | `MB` for K8s containers, `%` for Docker containers | K8s path now uses `working_set` (bytes) not a percentage |
| Dataset filter | `event.dataset: "kubeletstatsreceiver.otel"` | `(data_stream.dataset: "kubeletstatsreceiver.otel" OR event.dataset: "kubeletstatsreceiver.otel")` | Match both field locations |
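A sketch of the K8s container conversions implied by the table: kubeletstats emits CPU as a 0–1 ratio of one core and memory as working-set bytes rendered in MB (illustrative helper, not the plugin's actual code):

```typescript
// Sketch: K8s container conversions after the change.
// Hypothetical helper; field names follow the table above.
function formatContainerMetrics(
  cpuUsageRatio: number | null, // metrics.container.cpu.usage (0–1)
  workingSetBytes: number | null // metrics.container.memory.working_set
) {
  return {
    cpuPercent: cpuUsageRatio != null ? cpuUsageRatio * 100 : null,
    memoryMb: workingSetBytes != null ? Math.floor(workingSetBytes / 1_000_000) : null,
  };
}
```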

Containers (Docker path)

| Metric | Before | After | Why |
|---|---|---|---|
| Dataset filter | `event.dataset: "dockerstatsreceiver.otel"` | `(data_stream.dataset: "dockerstatsreceiver.otel" OR event.dataset: "dockerstatsreceiver.otel")` | Match both field locations |

Other changes

  • Pod memory tooltip: Added an `EuiIconTip` explaining the fallback logic (prefers `memory_limit_utilization` as %, falls back to `memory.working_set` as MB).
  • Pod CPU tooltip removed: The old tooltip warned that `cpu_limit_utilization` was optional. The new field (`cpu.node.utilization`) is always present, making the tooltip misleading.
  • OTel dataset filter helper: Extracted an `otelDatasetFilter()` utility to avoid duplicating the `(data_stream.dataset OR event.dataset)` pattern.
  • Host OTel unpack path: Added `metricByFieldOtel` / `unpackMetricOtel` so the host table correctly reads OTel metric positions instead of falling through to ECS metric keys.
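Based on the description above, the extracted helper presumably builds a kuery clause like this (a sketch; the actual utility's signature in the metrics_data_access plugin may differ):

```typescript
// Sketch of the extracted helper: one kuery clause that matches the
// dataset under either field location. The exact signature of the
// real otelDatasetFilter() is an assumption here.
function otelDatasetFilter(dataset: string): string {
  return `(data_stream.dataset: "${dataset}" OR event.dataset: "${dataset}")`;
}

console.log(otelDatasetFilter('hostmetricsreceiver.otel'));
// (data_stream.dataset: "hostmetricsreceiver.otel" OR event.dataset: "hostmetricsreceiver.otel")
```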

Test plan

  • `yarn test:jest x-pack/solutions/observability/plugins/metrics_data_access/public/components/infrastructure_node_metrics_tables/` — all 46 tests pass
  • Manual smoke in APM Infrastructure tab with OTel service data (e.g. `kbn-otel-demo` with EDOT Collector):
    • Hosts tab: CPU count, CPU %, Memory total, Memory % all populate
    • Pods tab: CPU % populates; Memory shows % (with limits) or MB (without limits)
    • Containers tab: CPU % and Memory MB populate for K8s containers

Align semconv host metric field names and dataset filtering in the APM infrastructure host metrics table so available OTel metrics populate reliably instead of rendering N/A. Add regression coverage for the OTel query config and row transformation path.

Closes elastic#256731

Made-with: Cursor
@rmyz rmyz self-assigned this Mar 25, 2026
@rmyz rmyz added the release_note:fix, backport:version (Backport to applied version labels), and Team:obs-presentation (Focus: APM UI, Infra UI, Hosts UI, Universal Profiling, Obs Overview and left Navigation) labels Mar 25, 2026
@rmyz
Contributor Author

rmyz commented Mar 25, 2026

/ci

@rmyz
Contributor Author

rmyz commented Mar 26, 2026

/ci

@rmyz rmyz requested a review from Copilot March 26, 2026 12:32
Contributor

Copilot AI left a comment


Pull request overview

Fixes OTel host metrics rendering in the APM Infrastructure tab by aligning host semconv metric field names and using a more resilient OTel dataset filter, plus adding regression tests for the updated query/transform path.

Changes:

  • Updated OTel (semconv) host metric field constants to use system.* field names.
  • Broadened OTel dataset source filters to match either data_stream.dataset or event.dataset.
  • Added an OTel-specific metric lookup/unpack path in the host metrics table and extended unit tests to cover the new behavior.

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.

Show a summary per file

| File | Description |
|---|---|
| `x-pack/solutions/observability/plugins/metrics_data_access/public/components/infrastructure_node_metrics_tables/shared/constants.ts` | Aligns semconv host metric field names to `system.*` fields. |
| `x-pack/solutions/observability/plugins/metrics_data_access/public/components/infrastructure_node_metrics_tables/pod/use_pod_metrics_table.ts` | Broadens OTel kubelet dataset filter to include `data_stream.dataset`. |
| `x-pack/solutions/observability/plugins/metrics_data_access/public/components/infrastructure_node_metrics_tables/pod/use_pod_metrics_table.test.ts` | Updates test expectation for the new OTel kuery filter. |
| `x-pack/solutions/observability/plugins/metrics_data_access/public/components/infrastructure_node_metrics_tables/host/use_host_metrics_table.ts` | Adds OTel-specific metric unpacking and uses a resilient OTel dataset filter. |
| `x-pack/solutions/observability/plugins/metrics_data_access/public/components/infrastructure_node_metrics_tables/host/use_host_metrics_table.test.ts` | Adds regression test asserting non-null transformed values for OTel host rows and updates kuery expectation. |
| `x-pack/solutions/observability/plugins/metrics_data_access/public/components/infrastructure_node_metrics_tables/container/use_container_metrics_table.test.ts` | Updates test expectations for resilient OTel dataset filters. |
| `x-pack/solutions/observability/plugins/metrics_data_access/public/components/infrastructure_node_metrics_tables/container/container_metrics_configs.ts` | Broadens OTel dataset filters for docker/kubelet semconv containers. |
| `x-pack/solutions/observability/plugins/metrics_data_access/public/components/infrastructure_node_metrics_tables/container/container_metrics_configs.test.ts` | Updates kuery expectations to match the broader OTel dataset filters. |


@rmyz
Contributor Author

rmyz commented Mar 27, 2026

/ci

@rmyz rmyz closed this Apr 1, 2026
@rmyz rmyz reopened this Apr 2, 2026
@rmyz
Contributor Author

rmyz commented Apr 2, 2026

/ci

@rmyz rmyz requested a review from Copilot April 2, 2026 07:16
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 11 out of 11 changed files in this pull request and generated 3 comments.



@rmyz
Contributor Author

rmyz commented Apr 2, 2026

/ci

@rmyz rmyz changed the title [APM][Infra] Fix OTel host metrics mapping in infrastructure tab [APM][Infra] Fix OTel metrics mapping in infrastructure tab Apr 2, 2026
Change Painless equation from 'A / B' to 'A / (B > 0 ? B : 1)' to
prevent Infinity when system.memory.utilization is unexpectedly zero.

Made-with: Cursor
rmyz added 3 commits April 2, 2026 12:12
Made-with: Cursor
When the bucket_script equation `B > 0 ? A / B : null` skips a bucket,
the metric key is missing from the row. `makeUnpackMetric` returned
`undefined` (not `null`), bypassing the null guard and producing NaN via
`Math.floor(undefined / 1e6)`. Add nullish coalescing in the helper so
it honours its `number | null` return type, and add a regression test.

Made-with: Cursor
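The undefined-vs-null failure described in the commit message above can be reproduced in isolation (illustrative helpers, not the actual `makeUnpackMetric`):

```typescript
// When a bucket_script skips a bucket, the metric key is absent from
// the row, so an indexed lookup yields `undefined`, not `null`.
type Row = Record<string, number | null>;

// Buggy shape: a missing key leaks `undefined` past the null guard,
// and Math.floor(undefined / 1e6) yields NaN downstream.
const unpackBuggy = (row: Row, key: string) => row[key] as number | null;

// Fixed shape: nullish coalescing honours the `number | null` contract.
const unpackFixed = (row: Row, key: string): number | null => row[key] ?? null;

const row: Row = {}; // bucket was skipped
console.log(Math.floor((unpackBuggy(row, 'mem') as unknown as number) / 1e6)); // NaN
console.log(unpackFixed(row, 'mem')); // null
```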
@jennypavlova jennypavlova self-requested a review April 2, 2026 15:18
@jennypavlova
Member

/oblt-deploy

Member

@jennypavlova jennypavlova left a comment


Tested locally and it LGTM 💯

Probably unrelated: just a small note, and I'm not sure if it's related to the testing data, but the connection between the service and the host is visible in APM yet not on the host detail page (probably a data issue, so you can ignore it if it works for you, or open a new issue)

Screen.Recording.2026-04-02.at.18.10.22.mov

And some nits/questions ⬇️

Comment on lines 175 to +206
@@ -173,39 +199,37 @@ function calculateMetricAverages(rows: MetricsExplorerRow[]) {

let averageMemoryUsagePercent = null;
if (averageMemoryUsagePercentValues.length !== 0) {
averageMemoryUsagePercent = scaleUpPercentage(averageOfValues(averageMemoryUsagePercentValues));
const avg = averageOfValues(averageMemoryUsagePercentValues);
averageMemoryUsagePercent = memoryUnit === '%' ? scaleUpPercentage(avg) : Math.floor(avg);
}
return {
averageCpuUsagePercent,
averageMemoryUsagePercent,
};
}

function collectMetricValues(rows: MetricsExplorerRow[]) {
const averageCpuUsagePercentValues: number[] = [];
const averageMemoryUsagePercentValues: number[] = [];

rows.forEach((row) => {
const { averageCpuUsagePercent, averageMemoryUsagePercent } = unpackMetrics(row);

if (averageCpuUsagePercent !== null) {
averageCpuUsagePercentValues.push(averageCpuUsagePercent);
}

if (averageMemoryUsagePercent !== null) {
averageMemoryUsagePercentValues.push(averageMemoryUsagePercent);
}
});

return {
averageCpuUsagePercentValues,
averageMemoryUsagePercentValues,
};
return { averageCpuUsagePercent, averageMemoryUsagePercent, memoryUnit };
Member


NIT: We can optimize a bit: extract the unpackRow selection out of the loop so it isn't re-evaluated on every iteration, destructure, and change some `let`s to `const`. It is a NIT, so it's an optional change, wdyt?

Suggested change
```typescript
function calculateMetricAverages(
  rows: MetricsExplorerRow[],
  isOtel: boolean
): Omit<PodNodeMetricsRow, 'id' | 'name'> {
  const unpackRow = isOtel ? unpackMetricsOtel : unpackMetrics;
  const averageCpuUsagePercentValues: number[] = [];
  const averageMemoryUsagePercentValues: number[] = [];
  let memoryUnit: PodNodeMetricsRow['memoryUnit'] = '%';
  for (const row of rows) {
    const {
      averageCpuUsagePercent,
      averageMemoryUsagePercent,
      memoryUnit: rowMemoryUnit,
    } = unpackRow(row);
    if (averageCpuUsagePercent !== null) {
      averageCpuUsagePercentValues.push(averageCpuUsagePercent);
    }
    if (averageMemoryUsagePercent !== null) {
      averageMemoryUsagePercentValues.push(averageMemoryUsagePercent);
    }
    memoryUnit = rowMemoryUnit;
  }
  const averageCpuUsagePercent =
    averageCpuUsagePercentValues.length === 0
      ? null
      : scaleUpPercentage(averageOfValues(averageCpuUsagePercentValues));
  const averageMemoryUsagePercent =
    averageMemoryUsagePercentValues.length === 0
      ? null
      : memoryUnit === '%'
      ? scaleUpPercentage(averageOfValues(averageMemoryUsagePercentValues))
      : Math.floor(averageOfValues(averageMemoryUsagePercentValues));
  return { averageCpuUsagePercent, averageMemoryUsagePercent, memoryUnit };
}
```

const memoryBytes = unpackMetricOtel(row, SEMCONV_K8S_POD_MEMORY_WORKING_SET);
return {
averageCpuUsagePercent: cpuUtilization,
averageMemoryUsagePercent: memoryBytes != null ? memoryBytes / 1_000_000 : null,
Member


Q: Should we check that memoryBytes is a number instead (or try to cast it in case it's a string) so we can be sure the operation afterwards is safe?

@rmyz rmyz merged commit c6485d7 into elastic:main Apr 2, 2026
27 checks passed
@kibanamachine
Contributor

Starting backport for target branches: 8.19, 9.2, 9.3

https://github.com/elastic/kibana/actions/runs/23911799505

kibanamachine pushed a commit to kibanamachine/kibana that referenced this pull request Apr 2, 2026
…259552)

(cherry picked from commit c6485d7)
kibanamachine pushed a commit to kibanamachine/kibana that referenced this pull request Apr 2, 2026
…259552)

(cherry picked from commit c6485d7)
@kibanamachine
Contributor

💔 Some backports could not be created

| Branch | Result |
|---|---|
| 8.19 | Backport failed because of merge conflicts |
| 9.2 | |
| 9.3 | |

Note: Successful backport PRs will be merged automatically after passing CI.

Manual backport

To create the backport manually run:

node scripts/backport --pr 259552

Questions ?

Please refer to the Backport tool documentation

@rmyz rmyz deleted the 256731-apm-fix-infrastructure-tab-otel branch April 2, 2026 18:27
kibanamachine added a commit that referenced this pull request Apr 2, 2026
…59552) (#260988)

# Backport

This will backport the following commits from `main` to `9.3`:
- [[APM][Infra] Fix OTel metrics mapping in infrastructure tab
(#259552)](#259552)

<!--- Backport version: 9.6.6 -->

### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sorenlouv/backport)

|\n| Dataset filter | `event.dataset: \"hostmetricsreceiver.otel\"`
|\n`(data_stream.dataset: \"hostmetricsreceiver.otel\" OR
event.dataset:\n\"hostmetricsreceiver.otel\")` | Match both field
locations |\n\n### Pods\n\n| Metric | Before | After | Why
|\n|---|---|---|---|\n| CPU | `metrics.k8s.pod.cpu_limit_utilization`
|\n`metrics.k8s.pod.cpu.node.utilization` |
`cpu_limit_utilization`\nrequires resource limits;
`cpu.node.utilization` is always emitted by\nkubeletstats |\n| Memory |
`metrics.k8s.pod.memory_limit_utilization`
|\n`metrics.k8s.pod.memory_limit_utilization` with fallback
to\n`metrics.k8s.pod.memory.working_set` | Queries both;
prefers\n`memory_limit_utilization` (shown as %) when available, falls
back to\n`memory.working_set` (shown as MB) to avoid N/A |\n| Dataset
filter | `event.dataset: \"kubeletstatsreceiver.otel\"`
|\n`(data_stream.dataset: \"kubeletstatsreceiver.otel\" OR
event.dataset:\n\"kubeletstatsreceiver.otel\")` | Match both field
locations |\n\n### Containers (K8s path)\n\n| Metric | Before | After |
Why |\n|---|---|---|---|\n| CPU |
`metrics.k8s.container.cpu_limit_utilization`
|\n`metrics.container.cpu.usage` | `cpu_limit_utilization`
requires\nresource limits; `container.cpu.usage` is always emitted by
kubeletstats\n(0–1 ratio of one CPU core) |\n| Memory |
`metrics.k8s.container.memory_limit_utilization`
|\n`metrics.container.memory.working_set` |
`memory_limit_utilization`\nrequires resource limits;
`memory.working_set` (bytes → MB) is always\navailable |\n| Memory unit
| Always `%` for OTel | `MB` for K8s containers, `%` for\nDocker
containers | K8s path now uses `working_set` (bytes) not a\npercentage
|\n| Dataset filter | `event.dataset: \"kubeletstatsreceiver.otel\"`
|\n`(data_stream.dataset: \"kubeletstatsreceiver.otel\" OR
event.dataset:\n\"kubeletstatsreceiver.otel\")` | Match both field
locations |\n\n### Containers (Docker path)\n\n| Metric | Before | After
| Why |\n|---|---|---|---|\n| Dataset filter | `event.dataset:
\"dockerstatsreceiver.otel\"` |\n`(data_stream.dataset:
\"dockerstatsreceiver.otel\" OR
event.dataset:\n\"dockerstatsreceiver.otel\")` | Match both field
locations |\n\n## Other changes\n\n- **Pod memory tooltip**: Added an
`EuiIconTip` explaining the fallback\nlogic (prefers
`memory_limit_utilization` as %, falls back to\n`memory.working_set` as
MB).\n- **Pod CPU tooltip removed**: The old tooltip warned
that\n`cpu_limit_utilization` was optional. The new
field\n(`cpu.node.utilization`) is always present, making the
tooltip\nmisleading.\n- **OTel dataset filter helper**: Extracted
`otelDatasetFilter()`\nutility to avoid duplicating the
`(data_stream.dataset OR\nevent.dataset)` pattern.\n- **Host OTel unpack
path**: Added `metricByFieldOtel` /\n`unpackMetricOtel` so the host
table correctly reads OTel metric\npositions instead of falling through
to ECS metric keys.\n\n## Test plan\n\n- [x] `yarn
test:jest\nx-pack/solutions/observability/plugins/metrics_data_access/public/components/infrastructure_node_metrics_tables/`\n—
all 46 tests pass\n- [x] Manual smoke in APM Infrastructure tab with
OTel service data\n(e.g. `kbn-otel-demo` with EDOT Collector):\n - [ ]
Hosts tab: CPU count, CPU %, Memory total, Memory % all populate\n- [x]
Pods tab: CPU % populates; Memory shows % (with limits) or MB\n(without
limits)\n - [x] Containers tab: CPU % and Memory MB populate for K8s
containers","sha":"c6485d753760eac356c74a253cbd440d7ef83227"}},"sourceBranch":"main","suggestedTargetBranches":[],"targetPullRequestStates":[{"branch":"main","label":"v9.4.0","branchLabelMappingKey":"^v9.4.0$","isSourceBranch":true,"state":"MERGED","url":"https://github.com/elastic/kibana/pull/259552","number":259552,"mergeCommit":{"message":"[APM][Infra]
Fix OTel metrics mapping in infrastructure tab (#259552)\n\n##
Summary\n\nCloses #256731\n\nFix OTel metrics in the APM Infrastructure
tab so hosts, pods, and\ncontainers display actual values instead of
`N/A`. The root causes were:\n(1) hosts queried
`metrics.system.memory.limit`, a field that doesn't\nexist in
`hostmetricsreceiver` data, (2) pod and container configs\nqueried
`_limit_utilization` fields that only exist when Kubernetes\nresource
limits are explicitly set — which most deployments don't have,\nand (3)
all OTel dataset filters only matched `event.dataset`,
missing\ndocuments indexed under `data_stream.dataset`.\n\n##
Demo\n\n###
Before\n\n\nhttps://github.com/user-attachments/assets/63193175-7893-47fa-8a82-ff76924908fb\n\n\n###
After\n\n\nhttps://github.com/user-attachments/assets/440676f7-4168-4c13-9227-1b3b6bc74e57\n\n\n##
Problem\n\nFor OTel entities, the Infrastructure tab in APM showed `N/A`
for most\nmetrics even when semconv data was present in
Elasticsearch:\n\n- **Hosts**: The `metrics.system.memory.limit` field
doesn't exist in\n`hostmetricsreceiver` data, causing Memory Total to
show `N/A`.\n- **Pods**: CPU used
`metrics.k8s.pod.cpu_limit_utilization` (requires\nCPU limits) and
memory used `metrics.k8s.pod.memory_limit_utilization`\n(requires memory
limits). Both return empty results when limits aren't\nset.\n-
**Containers (K8s)**: CPU
used\n`metrics.k8s.container.cpu_limit_utilization` and memory
used\n`metrics.k8s.container.memory_limit_utilization` — same
limits-only\nproblem.\n- **Dataset filters**: All OTel paths only
matched `event.dataset`,\nmissing documents indexed under
`data_stream.dataset`.\n\n## Field mapping changes\n\n### Hosts\n\n|
Metric | Before | After | Why |\n|---|---|---|---|\n| Memory total |
`metrics.system.memory.limit` | Derived:\n`metrics.system.memory.usage /
metrics.system.memory.utilization` |\n`memory.limit` doesn't exist in
hostmetricsreceiver; total is derived\nfrom usage and utilization ratio
|\n| Dataset filter | `event.dataset: \"hostmetricsreceiver.otel\"`
|\n`(data_stream.dataset: \"hostmetricsreceiver.otel\" OR
event.dataset:\n\"hostmetricsreceiver.otel\")` | Match both field
locations |\n\n### Pods\n\n| Metric | Before | After | Why
|\n|---|---|---|---|\n| CPU | `metrics.k8s.pod.cpu_limit_utilization`
|\n`metrics.k8s.pod.cpu.node.utilization` |
`cpu_limit_utilization`\nrequires resource limits;
`cpu.node.utilization` is always emitted by\nkubeletstats |\n| Memory |
`metrics.k8s.pod.memory_limit_utilization`
|\n`metrics.k8s.pod.memory_limit_utilization` with fallback
to\n`metrics.k8s.pod.memory.working_set` | Queries both;
prefers\n`memory_limit_utilization` (shown as %) when available, falls
back to\n`memory.working_set` (shown as MB) to avoid N/A |\n| Dataset
filter | `event.dataset: \"kubeletstatsreceiver.otel\"`
|\n`(data_stream.dataset: \"kubeletstatsreceiver.otel\" OR
event.dataset:\n\"kubeletstatsreceiver.otel\")` | Match both field
locations |\n\n### Containers (K8s path)\n\n| Metric | Before | After |
Why |\n|---|---|---|---|\n| CPU |
`metrics.k8s.container.cpu_limit_utilization`
|\n`metrics.container.cpu.usage` | `cpu_limit_utilization`
requires\nresource limits; `container.cpu.usage` is always emitted by
kubeletstats\n(0–1 ratio of one CPU core) |\n| Memory |
`metrics.k8s.container.memory_limit_utilization`
|\n`metrics.container.memory.working_set` |
`memory_limit_utilization`\nrequires resource limits;
`memory.working_set` (bytes → MB) is always\navailable |\n| Memory unit
| Always `%` for OTel | `MB` for K8s containers, `%` for\nDocker
containers | K8s path now uses `working_set` (bytes) not a\npercentage
|\n| Dataset filter | `event.dataset: \"kubeletstatsreceiver.otel\"`
|\n`(data_stream.dataset: \"kubeletstatsreceiver.otel\" OR
event.dataset:\n\"kubeletstatsreceiver.otel\")` | Match both field
locations |\n\n### Containers (Docker path)\n\n| Metric | Before | After
| Why |\n|---|---|---|---|\n| Dataset filter | `event.dataset:
\"dockerstatsreceiver.otel\"` |\n`(data_stream.dataset:
\"dockerstatsreceiver.otel\" OR
event.dataset:\n\"dockerstatsreceiver.otel\")` | Match both field
locations |\n\n## Other changes\n\n- **Pod memory tooltip**: Added an
`EuiIconTip` explaining the fallback\nlogic (prefers
`memory_limit_utilization` as %, falls back to\n`memory.working_set` as
MB).\n- **Pod CPU tooltip removed**: The old tooltip warned
that\n`cpu_limit_utilization` was optional. The new
field\n(`cpu.node.utilization`) is always present, making the
tooltip\nmisleading.\n- **OTel dataset filter helper**: Extracted
`otelDatasetFilter()`\nutility to avoid duplicating the
`(data_stream.dataset OR\nevent.dataset)` pattern.\n- **Host OTel unpack
path**: Added `metricByFieldOtel` /\n`unpackMetricOtel` so the host
table correctly reads OTel metric\npositions instead of falling through
to ECS metric keys.\n\n## Test plan\n\n- [x] `yarn
test:jest\nx-pack/solutions/observability/plugins/metrics_data_access/public/components/infrastructure_node_metrics_tables/`\n—
all 46 tests pass\n- [x] Manual smoke in APM Infrastructure tab with
OTel service data\n(e.g. `kbn-otel-demo` with EDOT Collector):\n - [ ]
Hosts tab: CPU count, CPU %, Memory total, Memory % all populate\n- [x]
Pods tab: CPU % populates; Memory shows % (with limits) or MB\n(without
limits)\n - [x] Containers tab: CPU % and Memory MB populate for K8s
containers","sha":"c6485d753760eac356c74a253cbd440d7ef83227"}}]}]
BACKPORT-->
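The derived host memory total described in the PR can be sketched as follows. This is an illustrative snippet, not the actual Kibana implementation; the function name and guard behavior are assumptions.

```typescript
// Illustrative sketch (not the actual Kibana code) of the derived host
// memory total: hostmetricsreceiver emits system.memory.usage (bytes) and
// system.memory.utilization (a 0-1 ratio) but no system.memory.limit, so
// total memory is reconstructed as usage / utilization.
function derivedMemoryTotal(
  usageBytes: number,
  utilization: number
): number | undefined {
  // Guard against a zero or non-finite ratio, which would otherwise
  // divide by zero; the table falls back to N/A in that case.
  if (!Number.isFinite(utilization) || utilization <= 0) {
    return undefined;
  }
  return usageBytes / utilization;
}

// 8 GiB in use at 50% utilization implies a 16 GiB host.
console.log(derivedMemoryTotal(8 * 1024 ** 3, 0.5));
// prints 17179869184
```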
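The PR names an `otelDatasetFilter()` helper for the `(data_stream.dataset OR event.dataset)` pattern; a minimal sketch of what such a helper might return is shown below. The signature and return shape are assumptions, not the helper's actual Kibana interface.

```typescript
// Hypothetical shape of an Elasticsearch bool query that matches a dataset
// recorded under either data_stream.dataset or event.dataset.
interface TermClause {
  term: Record<string, string>;
}

interface OtelDatasetFilter {
  bool: {
    should: TermClause[];
    minimum_should_match: number;
  };
}

// Build one filter clause per field location so documents indexed under
// either field are matched.
function otelDatasetFilter(dataset: string): OtelDatasetFilter {
  return {
    bool: {
      should: [
        { term: { 'data_stream.dataset': dataset } },
        { term: { 'event.dataset': dataset } },
      ],
      minimum_should_match: 1,
    },
  };
}

// Example: the pod/container queries would filter on kubeletstats data.
const filter = otelDatasetFilter('kubeletstatsreceiver.otel');
console.log(JSON.stringify(filter.bool.should.map((c) => Object.keys(c.term)[0])));
// prints ["data_stream.dataset","event.dataset"]
```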
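The pod memory fallback (prefer `memory_limit_utilization` as %, fall back to `memory.working_set` as MB) can be sketched like this. The field names come from the PR's tables; the function and types are hypothetical, for illustration only.

```typescript
// Hypothetical sketch of the pod memory fallback: prefer the 0-1
// k8s.pod.memory_limit_utilization ratio (rendered as %), which only exists
// when resource limits are set, and fall back to the always-emitted
// k8s.pod.memory.working_set (bytes, rendered as MB).
interface PodMemorySample {
  memoryLimitUtilization?: number; // present only when limits are set
  memoryWorkingSetBytes?: number; // always emitted by kubeletstats
}

interface PodMemoryCell {
  value: number;
  unit: '%' | 'MB';
}

function podMemoryCell(sample: PodMemorySample): PodMemoryCell | undefined {
  if (sample.memoryLimitUtilization !== undefined) {
    return { value: sample.memoryLimitUtilization * 100, unit: '%' };
  }
  if (sample.memoryWorkingSetBytes !== undefined) {
    return { value: sample.memoryWorkingSetBytes / (1024 * 1024), unit: 'MB' };
  }
  return undefined; // nothing to show: the table renders N/A
}

// With limits set: a 0.5 ratio renders as 50%.
console.log(podMemoryCell({ memoryLimitUtilization: 0.5 }));
// Without limits: a 128 MiB working set renders as 128 MB.
console.log(podMemoryCell({ memoryWorkingSetBytes: 128 * 1024 * 1024 }));
```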

Co-authored-by: Sergi Romeu <sergi.romeu@elastic.co>
kibanamachine added a commit that referenced this pull request Apr 2, 2026
…59552) (#260987)

# Backport

This will backport the following commits from `main` to `9.2`:
- [[APM][Infra] Fix OTel metrics mapping in infrastructure tab (#259552)](#259552)

<!--- Backport version: 9.6.6 -->

### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sorenlouv/backport)


Co-authored-by: Sergi Romeu <sergi.romeu@elastic.co>
@rmyz
Contributor Author

rmyz commented Apr 2, 2026

💚 All backports created successfully

Status Branch Result
8.19

Note: Successful backport PRs will be merged automatically after passing CI.

Questions ?

Please refer to the Backport tool documentation

rmyz added a commit that referenced this pull request Apr 3, 2026
…259552) (#261020)

# Backport

This will backport the following commits from `main` to `8.19`:
- [[APM][Infra] Fix OTel metrics mapping in infrastructure tab (#259552)](#259552)

<!--- Backport version: 10.2.0 -->

### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sorenlouv/backport)

<!--BACKPORT
event.dataset:\n\"dockerstatsreceiver.otel\")` | Match both field
locations |\n\n## Other changes\n\n- **Pod memory tooltip**: Added an
`EuiIconTip` explaining the fallback\nlogic (prefers
`memory_limit_utilization` as %, falls back to\n`memory.working_set` as
MB).\n- **Pod CPU tooltip removed**: The old tooltip warned
that\n`cpu_limit_utilization` was optional. The new
field\n(`cpu.node.utilization`) is always present, making the
tooltip\nmisleading.\n- **OTel dataset filter helper**: Extracted
`otelDatasetFilter()`\nutility to avoid duplicating the
`(data_stream.dataset OR\nevent.dataset)` pattern.\n- **Host OTel unpack
path**: Added `metricByFieldOtel` /\n`unpackMetricOtel` so the host
table correctly reads OTel metric\npositions instead of falling through
to ECS metric keys.\n\n## Test plan\n\n- [x] `yarn
test:jest\nx-pack/solutions/observability/plugins/metrics_data_access/public/components/infrastructure_node_metrics_tables/`\n—
all 46 tests pass\n- [x] Manual smoke in APM Infrastructure tab with
OTel service data\n(e.g. `kbn-otel-demo` with EDOT Collector):\n - [ ]
Hosts tab: CPU count, CPU %, Memory total, Memory % all populate\n- [x]
Pods tab: CPU % populates; Memory shows % (with limits) or MB\n(without
limits)\n - [x] Containers tab: CPU % and Memory MB populate for K8s
containers","sha":"c6485d753760eac356c74a253cbd440d7ef83227","branchLabelMapping":{"^v9.4.0$":"main","^v(\\d+).(\\d+).\\d+$":"$1.$2"}},"sourcePullRequest":{"labels":["release_note:fix","backport:all-open","v9.4.0","Team:obs-presentation","v9.3.3","v9.2.8"],"title":"[APM][Infra]
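The derived Memory Total for hosts is simple arithmetic on two hostmetricsreceiver fields: if `usage` bytes and the `utilization` ratio are both present, total is their quotient. A minimal sketch of that derivation (the helper name and null-handling are illustrative, not the exact Kibana code):

```typescript
// Illustrative only: derive total memory from hostmetricsreceiver fields,
// since metrics.system.memory.limit does not exist in that data.
//   total = metrics.system.memory.usage / metrics.system.memory.utilization
// Returns null when either input is missing or the ratio is zero/invalid,
// so callers can render N/A instead of Infinity.
function deriveMemoryTotalBytes(
  usageBytes: number | undefined,
  utilizationRatio: number | undefined
): number | null {
  if (usageBytes === undefined || utilizationRatio === undefined) return null;
  if (!Number.isFinite(utilizationRatio) || utilizationRatio <= 0) return null;
  return usageBytes / utilizationRatio;
}
```

For example, a host reporting 4 GiB used at 25% utilization derives a 16 GiB total.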
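The extracted `otelDatasetFilter()` helper mentioned above can be sketched as a small Elasticsearch bool-query builder; the helper name comes from this PR, but the exact return shape below is an assumption:

```typescript
// Sketch of the otelDatasetFilter() utility (shape is assumed, not the exact
// Kibana implementation): match a dataset under either field location,
// data_stream.dataset or event.dataset.
interface TermClause {
  term: Record<string, string>;
}

interface BoolFilter {
  bool: { should: TermClause[]; minimum_should_match: number };
}

function otelDatasetFilter(dataset: string): BoolFilter {
  return {
    bool: {
      should: [
        { term: { 'data_stream.dataset': dataset } },
        { term: { 'event.dataset': dataset } },
      ],
      // At least one of the two term clauses must match.
      minimum_should_match: 1,
    },
  };
}
```

Each OTel path (hosts, pods, K8s containers, Docker containers) can then call this with its receiver dataset, e.g. `otelDatasetFilter('hostmetricsreceiver.otel')`, instead of repeating the two-clause pattern.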

---------

Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
rmyz added a commit to rmyz/kibana that referenced this pull request Apr 13, 2026
Hoist unpackRow in calculateMetricAverages, use for..of and const
for averages per jennypavlova review on elastic#259552.

Guard working_set byte conversion with Number.isFinite before
dividing to MB, per review question on elastic#259552.

Related: elastic#259552
Made-with: Cursor
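The `Number.isFinite` guard from this follow-up commit can be sketched as follows; the helper name is hypothetical, only the guard-before-divide pattern comes from the commit message:

```typescript
const BYTES_PER_MB = 1024 * 1024;

// Sketch of the review follow-up: guard the working_set byte value with
// Number.isFinite before dividing to MB, so a missing or malformed metric
// (undefined, NaN, Infinity) yields null instead of propagating into the
// table. Helper name is illustrative.
function workingSetToMb(workingSetBytes: unknown): number | null {
  return typeof workingSetBytes === 'number' && Number.isFinite(workingSetBytes)
    ? workingSetBytes / BYTES_PER_MB
    : null;
}
```

`Number.isFinite` (unlike the global `isFinite`) does no coercion, so string values from a loosely typed aggregation result are also rejected rather than silently converted.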
kibanamachine pushed a commit to kibanamachine/kibana that referenced this pull request Apr 14, 2026
kibanamachine added a commit that referenced this pull request Apr 14, 2026
#262708) (#262939)

# Backport

This will backport the following commits from `main` to `9.3`:
- [[Metrics] Pod OTEL metrics table: review follow-ups from #259552
(#262708)](#262708)

<!--- Backport version: 9.6.6 -->

### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sorenlouv/backport)


Co-authored-by: Sergi Romeu <sergi.romeu@elastic.co>
kibanamachine added a commit that referenced this pull request Apr 14, 2026
#262708) (#262940)

# Backport

This will backport the following commits from `main` to `9.4`:
- [[Metrics] Pod OTEL metrics table: review follow-ups from #259552
(#262708)](#262708)

<!--- Backport version: 9.6.6 -->

### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sorenlouv/backport)


Co-authored-by: Sergi Romeu <sergi.romeu@elastic.co>
kibanamachine added a commit that referenced this pull request Apr 14, 2026
… (#262708) (#262938)

# Backport

This will backport the following commits from `main` to `8.19`:
- [[Metrics] Pod OTEL metrics table: review follow-ups from #259552
(#262708)](#262708)

<!--- Backport version: 9.6.6 -->

### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sorenlouv/backport)


Co-authored-by: Sergi Romeu <sergi.romeu@elastic.co>
tfcmarques pushed a commit to tfcmarques/kibana that referenced this pull request Apr 14, 2026

Labels

backport:all-open Backport to all branches that could still receive a release release_note:fix Team:obs-presentation Focus: APM UI, Infra UI, Hosts UI, Universal Profiling, Obs Overview and left Navigation v8.19.14 v9.2.8 v9.3.3 v9.4.0

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Some APM Infrastructure Tab Metrics not showing for OTel entities

5 participants