Fix bug: hf_token argument to LLM in Python SDK ignored in vllm.transformer_utils.config #31974

benglewis wants to merge this change into vllm-project:main

Conversation
Code Review
This pull request aims to fix a bug where the `hf_token` argument was ignored when loading model configurations, which is crucial for accessing gated models. The changes correctly propagate the `hf_token` in several places. However, the fix is incomplete: I've identified two critical spots in `vllm/transformers_utils/config.py` where the `hf_token` is still not being used, which would cause the intended functionality to fail. My review provides details on how to complete the fix.
    trust_remote_code: bool,
    revision: str | None = None,
    code_revision: str | None = None,
    hf_token: str | None = None,
    **kwargs,
) -> tuple[dict, PretrainedConfig]:
The `hf_token` is not consistently passed to all Hugging Face Hub API calls within this function. While it is correctly passed to `PretrainedConfig.get_config_dict`, it is missing from the calls to `config_class.from_pretrained` and `AutoConfig.from_pretrained`.

This will cause authentication failures when a gated model's config is loaded through either of those paths, and the new test `test_get_config_passes_hf_token` will likely fail because of it.

To fix this, pass `token=hf_token or _get_hf_token()` to these calls as well:
# ...
if model_type in _CONFIG_REGISTRY:
    config_class = _CONFIG_REGISTRY[model_type]
    config = config_class.from_pretrained(
        model,
        revision=revision,
        code_revision=code_revision,
        trust_remote_code=trust_remote_code,
        token=hf_token or _get_hf_token(),
        **kwargs,
    )
else:
    try:
        kwargs = _maybe_update_auto_config_kwargs(kwargs, model_type=model_type)
        config = AutoConfig.from_pretrained(
            model,
            trust_remote_code=trust_remote_code,
            revision=revision,
            code_revision=code_revision,
            token=hf_token or _get_hf_token(),
            **kwargs,
        )
# ...
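For illustration, a minimal sketch of what the regression test mentioned above could look like. The patch target, the `get_config(model, trust_remote_code, ..., hf_token=...)` signature, and the assumption that nothing else in `get_config` touches the network are simplifications based on this diff, not the PR's actual test:

```python
from unittest.mock import MagicMock, patch

from vllm.transformers_utils.config import get_config


def test_get_config_passes_hf_token():
    # Stub out the Hub call so no network access or real credentials are
    # needed, then assert the caller-supplied token is forwarded to it.
    with patch(
        "vllm.transformers_utils.config.AutoConfig.from_pretrained",
        return_value=MagicMock(),
    ) as mock_from_pretrained:
        get_config("org/gated-model", trust_remote_code=False, hf_token="hf_abc123")
        assert mock_from_pretrained.call_args.kwargs["token"] == "hf_abc123"
```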
    trust_remote_code: bool,
    revision: str | None = None,
    config_format: str | ConfigFormat = "auto",
    hf_token: str | None = None,
) -> GenerationConfig | None:
The `hf_token` is not passed to the `GenerationConfig.from_pretrained` call within this function, which could lead to authentication failures when downloading the generation config for a gated model.

You should pass the token to this call as well:
try:
    return GenerationConfig.from_pretrained(
        model,
        revision=revision,
        token=hf_token or _get_hf_token(),
    )
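For context, the `hf_token or _get_hf_token()` pattern in these suggestions makes an explicitly passed token win over ambient credentials. A rough sketch of that precedence, assuming `_get_hf_token()` delegates to the standard `huggingface_hub` token lookup (the helper below is illustrative, not vLLM's code):

```python
from huggingface_hub import get_token


def resolve_hf_token(hf_token: str | None) -> str | None:
    # A token passed explicitly by the caller always takes precedence.
    if hf_token:
        return hf_token
    # Otherwise fall back to ambient credentials: the HF_TOKEN environment
    # variable or a token cached by `huggingface-cli login`.
    return get_token()
```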
👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run `fastcheck` CI. You ask your reviewers to trigger select CI tests on top of `fastcheck` CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add `ready` label to the PR.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai. 🚀
Hi @benglewis,
Purpose

Fix the `hf_token` argument not being passed through to `transformers`' `AutoConfig` correctly. Fixes #31894

Test Plan
Try passing an `hf_token` and loading a gated model which requires an `hf_token` via the vLLM Python code, as sketched below.
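A minimal sketch of that manual check; the model name and token are placeholders, and the account behind the token must already have been granted access to the gated repo:

```python
from vllm import LLM

# Before this fix, the token passed here was dropped on the
# config-loading path, so fetching a gated model's config failed.
llm = LLM(
    model="org/some-gated-model",  # placeholder gated repo
    hf_token="hf_xxx",             # placeholder read token
)
print(llm.generate("Hello!")[0].outputs[0].text)
```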
Test Result

WIP
Essential Elements of an Effective PR Description Checklist

(Optional) The necessary documentation update, such as updating `supported_models.md` and `examples` for a new model.