[ty] Add separate CI job for memory usage stats #19134
Conversation
crates/ty/src/lib.rs (outdated)

```diff
     ExitStatus::Failure
 } else {
     ExitStatus::Success
+if std::env::var("TY_MEMORY_REPORT").as_deref() == Ok("mypy_primer") {
```
This is a hack for now; we should have a way to silence diagnostics completely (but keep the memory usage report).
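A minimal sketch of the gate being discussed, assuming the `TY_MEMORY_REPORT` check from the diff above (the helper name is hypothetical, not the actual function in the codebase):

```rust
use std::env;

// Hypothetical helper: diagnostics are silenced only when the
// TY_MEMORY_REPORT environment variable is set to "mypy_primer",
// matching the condition shown in the diff above.
fn should_silence_diagnostics() -> bool {
    env::var("TY_MEMORY_REPORT").as_deref() == Ok("mypy_primer")
}

fn main() {
    env::set_var("TY_MEMORY_REPORT", "mypy_primer");
    println!("silence diagnostics: {}", should_silence_diagnostics());
}
```

Comparing `as_deref()` against `Ok("mypy_primer")` avoids allocating and treats both an unset variable and any other value as "don't silence".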
```
flake8
sphinx
prefect
trio
```
Somewhat arbitrarily chosen, open to suggestions.
sympy might be a good addition here, I think -- we have very high memory usage on it already
```diff
 // the exact numbers across runs and compute the difference, but we don't have
 // the infrastructure for that currently.
-const BASE: f64 = 1.1;
+const BASE: f64 = 1.05;
```
I lowered the threshold because single-threaded runs should be completely deterministic, and the jump between buckets is currently quite high for large numbers.
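To illustrate the "jump between buckets" point, here is a hypothetical sketch of exponential bucketing (the actual rounding code may differ): if values are rounded to the nearest integer power of `BASE`, the absolute gap between adjacent buckets grows with the value, so a smaller base shrinks that gap.

```rust
// Hypothetical sketch: round a memory measurement (in MB) to the nearest
// integer power of `base`. Not the actual implementation in the codebase.
fn bucket(value_mb: f64, base: f64) -> f64 {
    base.powf(value_mb.log(base).round())
}

fn main() {
    // Near 500 MB, adjacent buckets are ~10% apart with BASE = 1.1
    // but only ~5% apart with BASE = 1.05.
    for base in [1.1, 1.05] {
        let lo = bucket(500.0, base);
        let hi = lo * base; // the next bucket up
        println!(
            "base {base}: bucket {lo:.1} MB, next {hi:.1} MB, gap {:.1} MB",
            hi - lo
        );
    }
}
```

With these assumptions, the gap near 500 MB is roughly 49 MB at base 1.1 but only about 25 MB at base 1.05, which matches the motivation for lowering the threshold.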
We could also just run the memory usage report in the same job? If we choose smaller projects, that shouldn't add significant time, and we'd avoid having to use another depot runner and duplicate the setup work. We could also skip mypy-primer for that run and get exact numbers.
AlexWaygood left a comment
thank you!
AlexWaygood left a comment
Nice!
I'm a bit surprised that the memory usage results are non-deterministic when running multi-threaded (to such a large degree). The only two reasons I can think of are:

Do we know which of the two is causing the flakes?
```diff
     tracing::warn!("No python files found under the given path(s)");
 }

+// TODO: We should have an official flag to silence workspace diagnostics.
```
At first I didn't understand what feature is needed to replace this environment variable usage, but I think I understand now. What we need is a --quiet or similar flag. Would you mind opening an issue in the ty repository for adding a flag to suppress diagnostic printing (and mentioning Ruff's --quiet and --silent flags)?
Created astral-sh/ty#772
This is partly due to our rounding strategy being affected by very small changes if we get unlucky with the numbers. Now that we have a separate job, we could just get the exact numbers manually. That said, on my machine I was seeing variance of ~10 MB on projects with total memory usage in the hundreds of MB. It's possible that AST garbage collection is the culprit.
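The rounding sensitivity described here can be demonstrated with a hypothetical round-to-nearest-power-of-`BASE` scheme (a sketch under that assumption, not the actual implementation): two measurements only ~10 MB apart can land in different buckets when they straddle a bucket boundary.

```rust
// Hypothetical sketch of rounding to the nearest integer power of `base`;
// returning the integer exponent makes bucket identity easy to compare.
fn bucket_exponent(value_mb: f64, base: f64) -> i64 {
    value_mb.log(base).round() as i64
}

fn main() {
    // With base 1.05, a bucket boundary falls near 503 MB, so 498 MB and
    // 508 MB round to different buckets even though they differ by 10 MB.
    let a = bucket_exponent(498.0, 1.05);
    let b = bucket_exponent(508.0, 1.05);
    println!("498 MB -> bucket {a}, 508 MB -> bucket {b}");
}
```

This is the "unlucky with the numbers" case: the same 10 MB of run-to-run variance is invisible in the middle of a bucket but flips the reported value at a boundary.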
Summary
As discussed in #19059.