
Fix TYPE_CHECKING stub defaults in envs.py to match actual runtime defaults #35645

Merged
vllm-bot merged 1 commit into vllm-project:main from lin-shh:fix/envs-default-value-inconsistencies on Mar 3, 2026
Conversation

lin-shh (Contributor) commented Mar 1, 2026

Summary

The TYPE_CHECKING block in vllm/envs.py provides stubs used by IDE tools and static type-checkers (e.g. mypy/pyright). However, three variables had stub default values that differed from the actual defaults returned by their runtime lambdas in the environment_variables dict:

| Variable | Stub default (wrong) | Runtime lambda default (correct) |
| --- | --- | --- |
| `VLLM_USAGE_SOURCE` | `""` | `"production"` |
| `VLLM_CPU_OMP_THREADS_BIND` | `""` | `"auto"` |
| `VLLM_USE_BYTECODE_HOOK` | `False` | `True` (from `int("1")`) |

Note: VLLM_MAIN_CUDA_VERSION looks inconsistent at first glance (the stub says `str = "12.9"` while the lambda is `os.getenv(..., "").lower() or "12.9"`), but because the empty string returned for an unset variable is falsy, the `or "12.9"` fallback fires and the effective default is the same `"12.9"`, so it does not need fixing.
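To make the note concrete, here is a minimal sketch of why the two expressions agree when the variable is unset (this mirrors the expression quoted above, not the verbatim envs.py source):

```python
# When VLLM_MAIN_CUDA_VERSION is unset, os.getenv(..., "") returns "",
# which is falsy, so the `or "12.9"` fallback supplies the value,
# matching the stub's default.
import os

os.environ.pop("VLLM_MAIN_CUDA_VERSION", None)  # simulate an unset variable
value = os.getenv("VLLM_MAIN_CUDA_VERSION", "").lower() or "12.9"
assert value == "12.9"
```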

Changes

Updated the three stub defaults to match the actual runtime behavior so that IDEs, type-checkers, and documentation reflect correct values.
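For reference, the edit amounts to something like the following inside the `if TYPE_CHECKING:` block (a sketch of the diff; the surrounding context in envs.py may differ):

```diff
-    VLLM_USAGE_SOURCE: str = ""
+    VLLM_USAGE_SOURCE: str = "production"
-    VLLM_CPU_OMP_THREADS_BIND: str = ""
+    VLLM_CPU_OMP_THREADS_BIND: str = "auto"
-    VLLM_USE_BYTECODE_HOOK: bool = False
+    VLLM_USE_BYTECODE_HOOK: bool = True
```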

Test Plan

  • No runtime logic changed; this is a documentation/stub fix only.
  • Verify with grep that the stub values now match the lambda defaults (an import-based spot check is sketched below).
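A minimal spot check, assuming vllm is importable, the three variables are unset, and vllm.envs resolves attribute access through its environment_variables lambdas:

```python
# Minimal spot check: the runtime defaults should equal the new stub values.
import os

for name in ("VLLM_USAGE_SOURCE", "VLLM_CPU_OMP_THREADS_BIND", "VLLM_USE_BYTECODE_HOOK"):
    os.environ.pop(name, None)  # make sure we observe the defaults

import vllm.envs as envs

assert envs.VLLM_USAGE_SOURCE == "production"
assert envs.VLLM_CPU_OMP_THREADS_BIND == "auto"
assert envs.VLLM_USE_BYTECODE_HOOK  # truthy: int("1") is 1
print("stub defaults match runtime defaults")
```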

…faults

The TYPE_CHECKING block in envs.py provides stubs for IDE autocompletion and
type-checking, but three variables had default values inconsistent with their
actual lambda defaults in the environment variables dict:

- VLLM_USAGE_SOURCE: stub said "" but lambda defaults to "production"
- VLLM_CPU_OMP_THREADS_BIND: stub said "" but lambda defaults to "auto"
- VLLM_USE_BYTECODE_HOOK: stub said False but lambda defaults to True (via "1")

Update the stubs to match the runtime behavior so that type checkers and IDE
tooling reflect the correct default values.

Made-with: Cursor
github-actions bot commented Mar 1, 2026

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default; they only run fastcheck CI, a small but essential subset of tests meant to catch errors quickly.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

gemini-code-assist bot left a comment

Code Review

This pull request correctly aligns the default values of several environment variable stubs within the TYPE_CHECKING block in vllm/envs.py with their actual runtime defaults. The changes for VLLM_USAGE_SOURCE, VLLM_CPU_OMP_THREADS_BIND, and VLLM_USE_BYTECODE_HOOK are accurate and improve the developer experience by ensuring static analysis tools and IDEs reflect the correct behavior. The changes are well-contained and have no impact on runtime logic. I have no further comments.

lin-shh (Contributor, Author) commented Mar 1, 2026

@yewentao256 Hi! Could you take a look at this PR when you get a chance? It fixes inconsistencies between the TYPE_CHECKING stub defaults and actual runtime defaults in envs.py. Thanks!

yewentao256 (Member) left a comment


LGTM, thanks for the work!

yewentao256 added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Mar 1, 2026
vllm-bot merged commit 8fa68a8 into vllm-project:main on Mar 3, 2026
51 of 53 checks passed
Copilot AI pushed a commit to machov/vllm that referenced this pull request Mar 10, 2026
avinashsingh77 pushed a commit to avinashsingh77/vllm that referenced this pull request Mar 12, 2026
wendyliu235 pushed a commit to wendyliu235/vllm-public that referenced this pull request Mar 18, 2026
…faults (vllm-project#35645)

Signed-off-by: wendyliu235 <wenjun.liu@intel.com>

Labels

ready ONLY add when PR is ready to merge/full CI is needed


3 participants