Fix bug hf_token argument to LLM in Python SDK ignored in vllm.transformer_utils.config#31974

Closed
benglewis wants to merge 0 commits into vllm-project:main from benglewis:main

Conversation

@benglewis

@benglewis benglewis commented Jan 8, 2026

Purpose

Fix the hf_token argument not being passed through to transformers' AutoConfig correctly. Fixes #31894

Test Plan

Pass an hf_token and load a gated model that requires one via the vLLM Python API, as sketched below.
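
A minimal sketch of this check (the gated model name and token value are placeholders, not part of the PR):

from vllm import LLM

# Placeholder gated model and token; any gated repo the token can access works.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    hf_token="hf_xxx",  # explicit token that should now reach AutoConfig
)

outputs = llm.generate("Hello, my name is")
print(outputs[0].outputs[0].text)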

Test Result

WIP


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to fix a bug where the hf_token argument was ignored when loading model configurations, which is crucial for accessing gated models. The changes correctly propagate the hf_token in several places. However, the fix is incomplete. I've identified two critical spots in vllm/transformers_utils/config.py where the hf_token is still not being used, which would cause the intended functionality to fail. My review provides details on how to complete the fix.

Comment on lines 129 to 134
trust_remote_code: bool,
revision: str | None = None,
code_revision: str | None = None,
hf_token: str | None = None,
**kwargs,
) -> tuple[dict, PretrainedConfig]:

critical

The hf_token is not consistently passed to all Hugging Face Hub API calls within this function. While it's correctly passed to PretrainedConfig.get_config_dict, it's missing from the calls to config_class.from_pretrained and AutoConfig.from_pretrained.

This will cause authentication issues when loading a gated model's config if it falls into one of those paths. The new test test_get_config_passes_hf_token will likely fail because of this.

To fix this, you should pass token=hf_token or _get_hf_token() to these calls as well:

# ...
        if model_type in _CONFIG_REGISTRY:
            config_class = _CONFIG_REGISTRY[model_type]
            config = config_class.from_pretrained(
                model,
                revision=revision,
                code_revision=code_revision,
                trust_remote_code=trust_remote_code,
                token=hf_token or _get_hf_token(),
                **kwargs,
            )
        else:
            try:
                kwargs = _maybe_update_auto_config_kwargs(kwargs, model_type=model_type)
                config = AutoConfig.from_pretrained(
                    model,
                    trust_remote_code=trust_remote_code,
                    revision=revision,
                    code_revision=code_revision,
                    token=hf_token or _get_hf_token(),
                    **kwargs,
                )
# ...
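
Assuming _get_hf_token() resolves the ambient Hugging Face credentials (for example the HF_TOKEN environment variable or a cached login), the token=hf_token or _get_hf_token() pattern gives an explicitly supplied token precedence and only falls back to the ambient credentials when no explicit token is passed.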

Comment on lines 1008 to 1012
trust_remote_code: bool,
revision: str | None = None,
config_format: str | ConfigFormat = "auto",
hf_token: str | None = None,
) -> GenerationConfig | None:

critical

The hf_token is not passed to the GenerationConfig.from_pretrained call within this function. This could lead to authentication failures when trying to download the generation config for a gated model.

You should pass the token to this call:

    try:
        return GenerationConfig.from_pretrained(
            model,
            revision=revision,
            token=hf_token or _get_hf_token(),
        )

@github-actions

github-actions bot commented Jan 8, 2026

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors.

You can ask your reviewers to trigger select CI tests on top of the fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

@fon60

fon60 commented Jan 11, 2026

Hi @benglewis,
I really appreciate you making this fix, because I'm facing the same issue with a couple of models that require being logged in; even if I download the models using the HF CLI, vLLM will not start because of this configuration issue. Will you be able to finish this PR?
You are so close to providing this fix 🙏
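
A possible interim workaround, assuming huggingface_hub picks up the ambient HF_TOKEN environment variable (which may not cover every failure mode described above), is to set the token before vLLM initializes:

import os

# Placeholder token; set before vLLM is constructed so downstream Hugging Face
# calls that fall back to ambient credentials can find it.
os.environ["HF_TOKEN"] = "hf_xxx"

from vllm import LLM

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # gated model (placeholder)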


Labels

bug Something isn't working

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug] hf_token argument to LLM in Python SDK ignored in vllm.transformer_utils.config

2 participants