
[Model] Handle trust_remote_code for transformers backend#32194

Merged
Isotr0py merged 1 commit into vllm-project:main from DarkLight1337:handle-dynamic-module-trust on Jan 13, 2026
Conversation

Member

@DarkLight1337 DarkLight1337 commented Jan 12, 2026

Purpose

Add trust_remote_code argument to try_get_class_from_dynamic_module and handle it accordingly.
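The change described above can be sketched roughly as follows. This is a hedged, self-contained stand-in, not the actual code from `vllm/transformers_utils/dynamic_module.py`: the signature, the `loader` parameter, and the fallback behavior are all illustrative assumptions about how a `trust_remote_code` gate might be threaded through such a helper.

```python
# Hypothetical sketch of a trust_remote_code-aware dynamic-module lookup.
# The real helper delegates to transformers' dynamic module utilities;
# here `loader` is an injectable stand-in so the sketch is runnable.

from typing import Callable, Optional


def try_get_class_from_dynamic_module(
    class_ref: str,
    repo_id: str,
    *,
    trust_remote_code: bool = False,
    loader: Optional[Callable[[str, str], type]] = None,
) -> Optional[type]:
    """Return the dynamically loaded class, or None if loading is
    not permitted or fails."""
    if not trust_remote_code:
        # Without an explicit opt-in, refuse to execute remote code.
        return None
    try:
        return (loader or _default_loader)(class_ref, repo_id)
    except Exception:
        # "try" semantics: swallow loading failures and signal with None.
        return None


def _default_loader(class_ref: str, repo_id: str) -> type:
    # Placeholder for the real dynamic-module import machinery.
    raise ImportError(f"cannot load {class_ref} from {repo_id}")
```

The key point of the PR is that callers can no longer trigger remote code execution implicitly; the flag must be passed down and checked before any dynamic import happens.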

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
@DarkLight1337 DarkLight1337 added this to the v0.14.0 milestone Jan 12, 2026
@DarkLight1337 DarkLight1337 added the ready ONLY add when PR is ready to merge/full CI is needed label Jan 12, 2026
@mergify mergify bot added the new-model Requests to new models label Jan 12, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The pull request correctly propagates the trust_remote_code argument to the try_get_class_from_dynamic_module function. However, there is a critical issue in vllm/transformers_utils/dynamic_module.py where resolve_trust_remote_code is explicitly called with hardcoded has_local_code=False and has_remote_code=True. This bypasses the internal logic of transformers.dynamic_module_utils.get_class_from_dynamic_module, which already handles these parameters correctly. This redundancy can lead to incorrect security assessments, especially when loading models from local paths, and should be addressed.
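To make the reviewer's concern concrete, here is a simplified model of the decision `resolve_trust_remote_code` makes. This is an illustrative reconstruction, not transformers' actual implementation (the real function also handles interactive prompts); the logic shown is only what the review comment implies about the `has_local_code`/`has_remote_code` parameters.

```python
# Simplified, hypothetical model of transformers'
# resolve_trust_remote_code decision logic.

def resolve_trust_remote_code(trust_remote_code, model_name,
                              has_local_code, has_remote_code):
    if trust_remote_code is None:
        if has_local_code:
            # Code already on disk: no remote-trust opt-in needed.
            return False
        if has_remote_code:
            raise ValueError(
                f"{model_name} requires trust_remote_code=True")
        return False
    return bool(trust_remote_code)
```

Under this model, hardcoding `has_local_code=False` and `has_remote_code=True` at the call site forces the error branch whenever `trust_remote_code` is unset, even for a model loaded from a local path whose code needs no remote trust, which is the incorrect security assessment the review flags.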

@Isotr0py Isotr0py merged commit 78d13ea into vllm-project:main Jan 13, 2026
59 of 60 checks passed
@github-project-automation github-project-automation bot moved this from Todo to Done in Transformers backend Jan 13, 2026
@DarkLight1337 DarkLight1337 deleted the handle-dynamic-module-trust branch January 13, 2026 03:13
TomerBN-Nvidia pushed a commit to TomerBN-Nvidia/vllm that referenced this pull request Jan 13, 2026
…ject#32194)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Tomer Natan <tbarnatan@computelab-frontend-8.nvidia.com>
sammysun0711 pushed a commit to sammysun0711/vllm that referenced this pull request Jan 16, 2026
…ject#32194)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
akh64bit pushed a commit to akh64bit/vllm that referenced this pull request Jan 16, 2026
…ject#32194)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
dsuhinin pushed a commit to dsuhinin/vllm that referenced this pull request Jan 21, 2026
…ject#32194)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: dsuhinin <suhinin.dmitriy@gmail.com>
npanpaliya pushed a commit to odh-on-pz/vllm-cpu that referenced this pull request Feb 16, 2026
…ject/vllm#32194)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
npanpaliya pushed a commit to odh-on-pz/vllm-cpu that referenced this pull request Feb 16, 2026
- [Misc] Implement `TokenizerLike.convert_tokens_to_ids`
(vllm-project/vllm#31796)
  [INFERENG-4151](https://issues.redhat.com/browse/INFERENG-4151)
- [Bug] Revert torch warning fix (vllm-project/vllm#31585)
  [INFERENG-4152](https://issues.redhat.com/browse/INFERENG-4152)
- [Bug] Fix AttributeError: `ColumnParallelLinear` object has no
attribute `weight_scale_inv` (vllm-project/vllm#30823)
  [INFERENG-4153](https://issues.redhat.com/browse/INFERENG-4153)
- Avoid `opencv-python-headless==4.13.0.90`, it's broken. See
opencv/opencv-python#1183
- [Bugfix] Handle mistral tokenizer in get_hf_processor
(vllm-project/vllm#31817)
  [INFERENG-4151](https://issues.redhat.com/browse/INFERENG-4151)
- [Bugfix] Fix Whisper/encoder-decoder GPU memory leak
vllm-project/vllm#32789
- [Model] Handle `trust_remote_code` for transformers backend
(vllm-project/vllm#32194) (fixes
GHSA-2pc9-4j83-qjmr)
- [Bugfix] CUDA: fix segfault by bumping numba to `numba==0.63.1`
([AIPCC-9384](https://issues.redhat.com/browse/AIPCC-9384))
- [Bugfix] pin `mistral_common==1.8.5` to avoid crash with Voxtral
([INFERENG-4154](https://issues.redhat.com/browse/INFERENG-4154))
- [Bugfix] fix tokenizer loading for mistral models
(vllm-project/vllm#33175)
  [INFERENG-4151](https://issues.redhat.com/browse/INFERENG-4151)
@hmellor hmellor removed the new-model Requests to new models label Feb 19, 2026
@mergify mergify bot added the new-model Requests to new models label Feb 19, 2026
ItzDEXX pushed a commit to ItzDEXX/vllm that referenced this pull request Feb 19, 2026
…ject#32194)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Shafi-Hussain pushed a commit to odh-on-pz/vllm-cpu that referenced this pull request Mar 17, 2026
…ject/vllm#32194)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

Labels

new-model Requests to new models ready ONLY add when PR is ready to merge/full CI is needed

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

3 participants