
[Platform] Deprecate seed_everything#31659

Merged
vllm-bot merged 1 commit into vllm-project:main from wangxiyuan:deprecate_seed_everything
Jan 5, 2026

Conversation

@wangxiyuan
Contributor

@wangxiyuan wangxiyuan commented Jan 4, 2026

Purpose

The seed_everything platform interface is effectively useless: no platform overrides it, and its implementation is always the same:

random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)

Let's deprecate this interface to make the platform interface cleaner.
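A minimal sketch of what the replacement could look like: a consolidated set_random_seed helper plus a deprecation shim for the old hook. This is illustrative only (the guarded numpy/torch imports and the exact warning text are assumptions, not the PR's actual code); the real helper lives in vllm.utils.torch_utils after this change.

```python
import random
import warnings


def set_random_seed(seed: int) -> None:
    """Seed every RNG the engine relies on (sketch of the consolidated helper)."""
    random.seed(seed)
    try:  # numpy/torch may be absent in a minimal environment (assumption)
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)  # seeds CPU and all CUDA devices
    except ImportError:
        pass


def seed_everything(seed: int) -> None:
    """Deprecated platform hook: forward to set_random_seed with a warning."""
    warnings.warn(
        "seed_everything is deprecated; use set_random_seed instead",
        DeprecationWarning,
        stacklevel=2,
    )
    set_random_seed(seed)
```

Because torch.manual_seed already covers all devices, a single call to the helper replaces the three per-call-site lines shown above.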

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@mergify

mergify bot commented Jan 4, 2026

Documentation preview: https://vllm--31659.org.readthedocs.build/en/31659/

@mergify mergify bot added documentation Improvements or additions to documentation multi-modality Related to multi-modality (#4194) nvidia v1 tpu Related to Google TPUs labels Jan 4, 2026
@wangxiyuan wangxiyuan force-pushed the deprecate_seed_everything branch from 3016790 to 6a73893 Compare January 4, 2026 03:11
@mergify mergify bot added the cpu Related to CPU backends label Jan 4, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request deprecates the seed_everything method from the platform interface and centralizes random seed setting into a new set_random_seed utility function in vllm.utils.torch_utils. This is a good cleanup that simplifies the platform interface. The changes involve updating numerous test files and internal utilities to use the new function.

I've identified a critical issue where the refactoring of set_random_seed is incomplete, which will break the build. I've also found several instances of redundant seed setting in the test files after the changes. Please see my comments for details on how to address these issues.

I was unable to create individual review comments, so my feedback is consolidated below.

vllm/model_executor/utils.py (13-16)

critical

Moving set_random_seed to vllm.utils.torch_utils is a good refactoring. However, this change is incomplete as it doesn't update the call sites of this function. This will cause import errors and break the build.

The following files need to be updated to import set_random_seed from vllm.utils.torch_utils instead of vllm.model_executor.utils:

  • vllm/engine/llm_engine.py
  • vllm/worker/worker_base.py

Please update these files to complete the refactoring.

tests/kernels/attention/test_lightning_attn.py (125-126)

high

The call to set_random_seed(42) on line 127 already sets the seed for torch on all devices. These calls to torch.manual_seed and torch.cuda.manual_seed_all are redundant and can be removed.

tests/kernels/attention/test_lightning_attn.py (168-169)

high

The call to set_random_seed(42) on line 170 already sets the seed for torch on all devices. These calls to torch.manual_seed and torch.cuda.manual_seed_all are redundant and can be removed.

tests/kernels/attention/test_lightning_attn.py (232-233)

high

The call to set_random_seed(42) on line 234 already sets the seed for torch on all devices. These calls to torch.manual_seed and torch.cuda.manual_seed_all are redundant and can be removed.

tests/kernels/quantization/test_mxfp4_qutlass.py (210-211)

high

The call to set_random_seed(0) on line 209 already sets the seed for numpy and torch. These subsequent calls are redundant and can be removed.

tests/kernels/quantization/test_nvfp4_qutlass.py (198-199)

high

The call to set_random_seed(0) on line 197 already sets the seed for numpy and torch. These subsequent calls are redundant and can be removed.
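To illustrate why the flagged calls are redundant: re-seeding with the same value immediately after a combined helper is a no-op. A minimal stand-in using only the stdlib random module (hypothetical; the real vLLM helper also seeds numpy and torch, which is exactly why the extra torch.manual_seed / torch.cuda.manual_seed_all calls in the tests can be dropped):

```python
import random


def set_random_seed(seed: int) -> None:
    # Stand-in for the consolidated helper: one call seeds everything.
    random.seed(seed)


set_random_seed(42)
first = [random.random() for _ in range(3)]

set_random_seed(42)
random.seed(42)  # redundant: set_random_seed already seeded this RNG
second = [random.random() for _ in range(3)]

# The redundant re-seed changes nothing about the generated sequence.
assert first == second
```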

@DarkLight1337
Member

DarkLight1337 commented Jan 4, 2026

Doesn't seed_everything include setting random.seed and np.random.seed as well? How can we replace it with only torch.manual_seed?

Member

@DarkLight1337 DarkLight1337 left a comment


Edit: Contrary to the PR description, set_random_seed handles the other modules as well so it is fine

@github-project-automation github-project-automation bot moved this to Ready in NVIDIA Jan 4, 2026
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) January 4, 2026 07:52
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Jan 4, 2026
@wangxiyuan
Contributor Author

@DarkLight1337 My fault. Updated the commit message.

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
auto-merge was automatically disabled January 4, 2026 09:02

Head branch was pushed to by a user without write access

@wangxiyuan wangxiyuan force-pushed the deprecate_seed_everything branch from 6a73893 to 728a8fc Compare January 4, 2026 09:02
Member

@yewentao256 yewentao256 left a comment


LGTM, thanks for the work!

@vllm-bot vllm-bot merged commit bb4337b into vllm-project:main Jan 5, 2026
61 of 64 checks passed
@github-project-automation github-project-automation bot moved this from Ready to Done in NVIDIA Jan 5, 2026
LucasWilkinson pushed a commit to neuralmagic/vllm that referenced this pull request Jan 6, 2026
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
yugong333 pushed a commit to yugong333/vllm that referenced this pull request Jan 9, 2026
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
akh64bit pushed a commit to akh64bit/vllm that referenced this pull request Jan 16, 2026
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
dsuhinin pushed a commit to dsuhinin/vllm that referenced this pull request Jan 21, 2026
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: dsuhinin <suhinin.dmitriy@gmail.com>
ItzDEXX pushed a commit to ItzDEXX/vllm that referenced this pull request Feb 19, 2026
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>

Labels

cpu Related to CPU backends documentation Improvements or additions to documentation multi-modality Related to multi-modality (#4194) nvidia ready ONLY add when PR is ready to merge/full CI is needed tpu Related to Google TPUs v1

Projects

Status: Done

4 participants