
[Chore] Try remove init_cached_hf_modules#31786

Merged
Isotr0py merged 2 commits into vllm-project:main from DarkLight1337:cleanup-executor
Jan 7, 2026

Conversation

@DarkLight1337
Member

@DarkLight1337 DarkLight1337 commented Jan 6, 2026

Purpose

See if we can remove `init_cached_hf_modules` now, which would simplify the initialization of the worker wrapper inside the executor.

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
@DarkLight1337 DarkLight1337 added the ready-run-all-tests (Trigger CI with all tests for wide-ranging PRs) label Jan 6, 2026
@mergify mergify bot added the v1 and tpu (Related to Google TPUs) labels Jan 6, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the initialization of WorkerWrapperBase by removing the vllm_config parameter from its constructor and instead passing it during the init_worker method call. This change is propagated across various executor implementations (UniProcExecutor, MultiprocExecutor, RayExecutor, and test executors) where WorkerWrapperBase is instantiated. Additionally, the init_cached_hf_modules function and its calls are removed from vllm/utils/import_utils.py and worker initialization logic in gpu_worker.py and tpu_worker.py, indicating a change in how Hugging Face modules are handled. The execute_model method in WorkerWrapperBase is also simplified by removing *args and **kwargs.
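As an illustrative sketch of the refactor the review describes (the class layout and field names here are simplified stand-ins, apart from `WorkerWrapperBase`, `init_worker`, and `vllm_config`, which the review mentions), the config moves from construction time to worker-initialization time:

```python
from dataclasses import dataclass


@dataclass
class VllmConfig:
    # Hypothetical stand-in for vLLM's real VllmConfig.
    model: str = "dummy-model"


class WorkerWrapperBase:
    """Sketch of the post-refactor wrapper: no config in __init__."""

    def __init__(self, rpc_rank: int = 0):
        self.rpc_rank = rpc_rank
        self.vllm_config: VllmConfig | None = None

    def init_worker(self, vllm_config: VllmConfig) -> None:
        # The config now arrives here, when the executor initializes
        # the worker, rather than through the constructor.
        self.vllm_config = vllm_config


# Executors can now construct the wrapper without threading the
# config through, and supply it in one place during init.
wrapper = WorkerWrapperBase(rpc_rank=0)
wrapper.init_worker(VllmConfig(model="facebook/opt-125m"))
print(wrapper.vllm_config.model)
```

This is one common pattern for the change described: deferring a heavyweight argument to an explicit `init` call keeps the constructor trivially picklable for multiprocess and Ray executors.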

@DarkLight1337 DarkLight1337 requested a review from hmellor January 6, 2026 08:56
@DarkLight1337
Member Author

Seems ok to remove

@Isotr0py Isotr0py merged commit aafd4d2 into vllm-project:main Jan 7, 2026
137 checks passed
@DarkLight1337 DarkLight1337 deleted the cleanup-executor branch January 7, 2026 04:43
yugong333 pushed a commit to yugong333/vllm that referenced this pull request Jan 9, 2026
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
wangxiyuan pushed a commit to vllm-project/vllm-ascend that referenced this pull request Jan 13, 2026
### What this PR does / why we need it?
Upgrade vllm commit to 0109 (bde38c11df0ea066a740efe9b77fff5418be45df)

1. remove `init_cached_hf_modules` due to
vllm-project/vllm#31786
2. fix spec_decode e2e test due to breakage from
vllm-project/vllm#29821
3. fix `vllm.v1.attention.backends.utils` due to
vllm-project/vllm#31891
4. fix `self.seq_lens - query_lens` on same device due to
vllm-project/vllm#31773
5. skip model_runner_v2 e2e test due to `'_OpNamespace' '_C' object has
no attribute 'get_cuda_view_from_cpu_tensor'`

- vLLM version: v0.13.0
- vLLM main:
vllm-project/vllm@2f4e654

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
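Item 4 above refers to a device-mismatch pattern: elementwise arithmetic between tensors that live on different devices raises an error in tensor frameworks. The usual fix is to move one operand onto the other's device before the arithmetic. A self-contained sketch of that pattern (a fake tensor class stands in for a real framework tensor; this is not the actual vllm-ascend code):

```python
from dataclasses import dataclass


@dataclass
class FakeTensor:
    # Minimal stand-in for a framework tensor: values plus a device tag.
    values: list
    device: str

    def to(self, device: str) -> "FakeTensor":
        # Real frameworks copy the data across devices; here we just retag.
        return FakeTensor(self.values, device)

    def __sub__(self, other: "FakeTensor") -> "FakeTensor":
        # Mirrors the framework behavior that caused the bug: cross-device
        # arithmetic is an error.
        if self.device != other.device:
            raise RuntimeError("tensors are on different devices")
        return FakeTensor(
            [a - b for a, b in zip(self.values, other.values)], self.device
        )


seq_lens = FakeTensor([8, 16, 32], device="npu:0")
query_lens = FakeTensor([1, 2, 4], device="cpu")

# The fix: align devices before the subtraction.
diff = seq_lens - query_lens.to(seq_lens.device)
print(diff.values)  # -> [7, 14, 28]
```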
aipaes pushed a commit to aipaes/vllm-ascend that referenced this pull request Jan 15, 2026
akh64bit pushed a commit to akh64bit/vllm that referenced this pull request Jan 16, 2026
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
dsuhinin pushed a commit to dsuhinin/vllm that referenced this pull request Jan 21, 2026
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: dsuhinin <suhinin.dmitriy@gmail.com>
starmountain1997 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Jan 31, 2026
starmountain1997 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Jan 31, 2026
ItzDEXX pushed a commit to ItzDEXX/vllm that referenced this pull request Feb 19, 2026
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026

Labels

ready-run-all-tests (Trigger CI with all tests for wide-ranging PRs), tpu (Related to Google TPUs), v1

3 participants