
release-1.x branch unit test memory consumption increase #1421

Closed
davidgamero opened this issue Nov 15, 2023 · 4 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@davidgamero
Contributor

This issue is to track the growth in memory usage when running npm test, which has led to increasing the Node heap limit as a stopgap.

I was able to make some progress isolating the GC memory issue; maybe we should open a separate issue to track it.

By executing `nyc mocha` with `--node-command inspect` to enable debugging, I was able to connect a memory profiler and capture a GC snapshot and an allocation timeline.
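For anyone reproducing this, another way to get an inspector socket is to run mocha directly under Node (a sketch, not this repo's exact invocation; the mocha path is an assumption):

```shell
# Start mocha under the inspector and pause until a profiler attaches.
# Then open chrome://inspect (or your IDE's debugger) to take heap
# snapshots and record an allocation timeline during the test run.
node --inspect-brk ./node_modules/.bin/mocha
```

Comparing two heap snapshots taken a few test files apart is usually enough to spot retained `SourceMapConsumer` instances.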

The majority of the growing heap usage seems to come from `SourceMapConsumer` in `node_modules/source-map/lib/source-map-consumer.js`, at the bottom of a fairly long call stack that includes `@babel/core`.

I found meteor/meteor#9568, a potentially relevant issue with encouraging similarities.

By setting `nyc.instrument=false` in package.json on line 51, memory never exceeded 600MB, while re-enabling that line eventually balloons usage up to 12GB, which was the most I was willing to test.
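For reference, the workaround looks roughly like this in package.json (a sketch; the surrounding nyc config in this repo may differ):

```json
{
  "nyc": {
    "instrument": false
  }
}
```

Note that with instrumentation disabled the coverage numbers are no longer meaningful, so this is only useful as a diagnostic toggle, not a fix.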

I will continue investigating an upgrade of our indirect `source-map` dependency so we can hopefully get this resolved.

@mstruebing
Member

Note: once this is resolved, we should be able to remove this line: https://github.com/kubernetes-client/javascript/blob/release-1.x/.github/workflows/test.yml#L28
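For context, the stopgap in the workflow is presumably something along these lines (hypothetical; the actual line in test.yml may differ, see the link above):

```yaml
env:
  # Stopgap: raise the V8 old-space limit so the test run does not
  # hit the default heap ceiling. Remove once the source-map leak
  # is fixed.
  NODE_OPTIONS: --max-old-space-size=8192
```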

@jgielstra

jgielstra commented Nov 28, 2023

FYI, I think nyc is no longer maintained.
The last commit was ~3 years ago and there isn't any recent community activity.
We've migrated to c8.
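A migration is typically just a matter of swapping the wrapper in the test script; a hypothetical sketch (the script name and lack of extra flags are assumptions, not taken from this repo):

```json
{
  "scripts": {
    "test": "c8 mocha"
  }
}
```

Because c8 reads V8's built-in coverage output instead of instrumenting sources, it avoids the babel/source-map code path implicated here entirely.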

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 26, 2024
@mstruebing
Member

This can be closed; it was resolved in #1480.
