
[Security Solution] Try out patched Zod v4#264106

Closed
maximpn wants to merge 1 commit into elastic:main from maximpn:check-zod-patch

Conversation

@maximpn
Contributor

@maximpn maximpn commented Apr 17, 2026

Summary

This PR contains Zod v4 patched in gsoldevila/zod#1.

@elasticmachine
Contributor

🤖 Jobs for this PR can be triggered through checkboxes. 🚧

ℹ️ To trigger the CI, please tick the checkbox below 👇

  • Click to trigger kibana-pull-request for this PR!
  • Click to trigger kibana-deploy-project-from-pr for this PR!
  • Click to trigger kibana-deploy-cloud-from-pr for this PR!
  • Click to trigger kibana-entity-store-performance-from-pr for this PR!
  • Click to trigger kibana-storybooks-from-pr for this PR!

@elasticmachine
Contributor

💛 Build succeeded, but was flaky

Failed CI Steps

Test Failures

  • [job] [logs] FTR Configs #19 / Endpoint plugin @ess @serverless @skipInServerlessMKI Endpoint policy response api GET /api/endpoint/policy_response "before all" hook for "should return one policy response for an id"
  • [job] [logs] Jest Integration Tests #9 / workflow with wait step when duration is short should have correct workflow duration

Metrics [docs]

Module Count

Fewer modules lead to faster build times

| id | before | after | diff |
|---|---|---|---|
| stackConnectors | 889 | 955 | +66 |
| workflowsManagement | 1582 | 1648 | +66 |
| total | | | +132 |

Any counts in public APIs

Total count of every `any`-typed public API. Target amount is 0. Run `node scripts/build_api_docs --plugin [yourplugin] --stats any` for more detailed information.

| id | before | after | diff |
|---|---|---|---|
| @kbn/inference-langchain | 0 | 1 | +1 |

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

| id | before | after | diff |
|---|---|---|---|
| stackConnectors | 1.7MB | 1.7MB | +2.6KB |
| workflowsManagement | 2.3MB | 2.3MB | +2.6KB |
| total | | | +5.3KB |

Page load bundle

Size of the bundles that are downloaded on every page load. Target size is below 100kb

| id | before | after | diff |
|---|---|---|---|
| kbnUiSharedDeps-npmDll | 7.3MB | 7.3MB | +6.4KB |

History

cc @maximpn

@sdesalas
Member

sdesalas commented Apr 18, 2026

Local testing: does this improve heap memory?

The patch by @gsoldevila included in this PR shows a marked improvement in memory consumption while profiling prebuilt rule installation with kibana-puppetter-scripts on my local machine.

Note that peak usage drops by around 41 MB and the average by almost 100 MB. Since this runs locally in --dev mode, the figures differ from prod, but they are a clear indication of a big improvement. On the second (right) data set, GC also appears less aggressive, since more memory is available to the process (due to the lower overall baseline) and there is a larger memory budget to work with. This suggests that peak memory could drop even further (by more than 41 MB) if we reduced max_old_space_size.

(Chart: main vs zod-patch memory profiles)

Test setup

  • Local Kibana was started in --dev mode with memory profiling enabled (KBN_MEM_PROFILE / --mem-profile), producing kibana-memory-profile-*.csv and kibana.output.*.txt per run.
  • 1400 in the folder names refers to NODE_OPTIONS="--max_old_space_size=1400" (1400 MiB V8 old-space limit).

Datasets (3 runs each)

| Branch / work | Folders per run |
|---|---|
| main | main-1400-round1, main-1400-round2, main-1400-round3 |
| zod-patch (current work) | zod-patch-1400-round1, zod-patch-1400-round2, zod-patch-1400-round3 |

Source: .vscode/stats/<folder>/kibana-memory-profile-*.csv

Method

  • Metric: heap_used from the CSV (bytes).
  • First 30 seconds excluded relative to the first row’s timestamp_ms (startup/warmup omitted); all remaining samples in the ~5 minute profile window are included.
  • Peak = maximum heap_used after the cutoff; average = arithmetic mean of heap_used after the cutoff.
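The method above can be sketched in a few lines of Python. This is a minimal sketch: the `timestamp_ms` and `heap_used` column names match the CSVs described above, while the function name and defaults are illustrative.

```python
import csv


def peak_and_avg(csv_path, cutoff_ms=30_000):
    """Compute peak and mean heap_used (MiB) after a warmup cutoff.

    Assumes the CSV has `timestamp_ms` and `heap_used` (bytes) columns;
    the cutoff is measured relative to the first row's timestamp.
    """
    samples = []
    with open(csv_path, newline="") as f:
        first_ts = None
        for row in csv.DictReader(f):
            ts = float(row["timestamp_ms"])
            if first_ts is None:
                first_ts = ts
            # Keep only samples at or past the cutoff (startup/warmup omitted).
            if ts - first_ts >= cutoff_ms:
                samples.append(float(row["heap_used"]))
    mib = 1024 * 1024
    return max(samples) / mib, sum(samples) / len(samples) / mib
```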

Results (after 30 s cutoff)

main (main-1400-round*)

| Run | Peak heap_used | Avg heap_used |
|---|---|---|
| round1 | 1235.66 MiB | 1121.35 MiB |
| round2 | 1257.13 MiB | 1100.33 MiB |
| round3 | 1242.56 MiB | 1120.50 MiB |
| Mean of 3 runs | 1245.12 MiB | 1114.06 MiB |

zod-patch (zod-patch-1400-round*)

| Run | Peak heap_used | Avg heap_used |
|---|---|---|
| round1 | 1205.73 MiB | 1019.57 MiB |
| round2 | 1216.71 MiB | 1022.62 MiB |
| round3 | 1188.32 MiB | 1022.64 MiB |
| Mean of 3 runs | 1203.59 MiB | 1021.61 MiB |

Which set is more memory efficient?

zod-patch is more memory efficient on this data: lower peak and lower average heap_used in every run versus main, and lower means across the three runs (~41 MiB lower mean peak, ~92 MiB lower mean average heap after the 30 s cutoff).
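As a sanity check, the quoted deltas follow directly from the per-run tables above. This is a quick arithmetic sketch using only the numbers from the two results tables.

```python
# Per-run values copied from the results tables above (MiB).
main_peak = [1235.66, 1257.13, 1242.56]
main_avg = [1121.35, 1100.33, 1120.50]
patch_peak = [1205.73, 1216.71, 1188.32]
patch_avg = [1019.57, 1022.62, 1022.64]


def mean(xs):
    return sum(xs) / len(xs)


# Differences between the mean-of-3-runs figures:
# ~41 MiB lower mean peak, ~92 MiB lower mean average heap.
peak_delta = mean(main_peak) - mean(patch_peak)
avg_delta = mean(main_avg) - mean(patch_avg)
```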

Logs

Console captures (kibana.output.*.txt alongside each CSV) were spot-checked for heap OOM / allocation failed style messages; none were found in these runs. For deeper log comparison (errors, warnings counts), diff the kibana.output files per folder.
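That spot-check could be automated along these lines. This is a hypothetical sketch: the glob pattern mirrors the `.vscode/stats/<folder>/kibana.output.*.txt` layout described above, and the keyword list is an assumption about what OOM-style messages would look like.

```python
import glob
import re

# Assumed keywords for "heap OOM / allocation failed"-style messages.
OOM_PATTERN = re.compile(r"heap out of memory|allocation failed", re.IGNORECASE)


def find_oom_lines(pattern=".vscode/stats/*/kibana.output.*.txt"):
    """Return (path, line number, line) for every OOM-style console message."""
    hits = []
    for path in glob.glob(pattern):
        with open(path, errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if OOM_PATTERN.search(line):
                    hits.append((path, lineno, line.strip()))
    return hits
```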

@maximpn
Contributor Author

maximpn commented Apr 21, 2026

Closing in favor of #263121.

@maximpn maximpn closed this Apr 21, 2026