Fix wrong use of `getattr` to get dict value #232
Merged
WoosukKwon merged 1 commit into vllm-project:main on Jun 25, 2023
Conversation
metacryptom (Contributor, Author) commented:

Should use `kwargs.get("use_fast", True)` to get the dict parameter's value.
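For context, a minimal sketch of the bug class this PR fixes (the function and parameter names below are illustrative, not the exact vLLM code): `getattr` looks up attributes on the dict object itself rather than its keys, so it always falls through to the default, while `dict.get` performs the intended key lookup.

```python
# Illustrative sketch; names are hypothetical, not the exact vLLM code.

def get_tokenizer_buggy(**kwargs):
    # Bug: getattr looks up *attributes* on the dict object, not its keys,
    # so this returns the default (True) even when use_fast=False was passed.
    return getattr(kwargs, "use_fast", True)

def get_tokenizer_fixed(**kwargs):
    # Fix: dict.get performs a key lookup, honoring the caller's value.
    return kwargs.get("use_fast", True)

assert get_tokenizer_buggy(use_fast=False) is True   # silently wrong
assert get_tokenizer_fixed(use_fast=False) is False  # correct
```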
WoosukKwon approved these changes on Jun 25, 2023.

WoosukKwon (Collaborator) left a comment:
@metacryptom Nice catch! Thanks for fixing the bug.
michaelfeil pushed a commit to michaelfeil/vllm that referenced this pull request on Jul 1, 2023.
hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request on Feb 13, 2024.
mht-sharma pushed a commit to mht-sharma/vllm that referenced this pull request on Oct 30, 2024.
wuhuikx pushed a commit to wuhuikx/vllm that referenced this pull request on Mar 27, 2025:
### What this PR does / why we need it?
Install `wget` to fix image build. Backport of vllm-project/vllm-ascend#231.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
CI passed
--------
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
yma11 pushed a commit to yma11/vllm that referenced this pull request on Jul 1, 2025:
…ect#232)
* use quant scheme from config.json
* update logic
* make var name more accurate
iwooook pushed a commit to moreh-dev/vllm that referenced this pull request on Nov 29, 2025:
…uired (vllm-project#232)

Extend vLLM v0 by adding automatic compat-sampling fallbacks to device sampling. This offers substantial performance benefits when not all requests require advanced features like structured outputs.

`always_compat_sampling` should now never be set in production. Whenever something otherwise unsupported is detected, such as:
* structured outputs
* non-greedy sampling for models that don't support it on device
* non-uniform top-k / top-p on host, or with a model that doesn't support it

the system will switch to host compat sampling. This also means that we never override the temperature, p, or k to force them to be uniform across the batch.

Also contains a fix for vllm-project#229 (it will fall back to compat sampling if non-uniform sampling is not supported on device or device sampling is disabled).
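The fallback decision described in that commit message might look roughly like the following minimal Python sketch. All names (`SamplingParams` fields, `needs_host_compat_sampling`, `device_supports_nonuniform`) are hypothetical illustrations of the idea, not the actual code in that fork.

```python
# Hypothetical sketch of a per-batch fallback check; not the actual vLLM code.
from dataclasses import dataclass

@dataclass
class SamplingParams:
    temperature: float = 1.0
    top_k: int = -1          # -1 means top-k disabled
    top_p: float = 1.0
    structured_output: bool = False

def needs_host_compat_sampling(batch: list[SamplingParams],
                               device_supports_nonuniform: bool) -> bool:
    """Return True if any request in the batch forces the host compat path."""
    # Structured outputs are assumed to be handled on the host.
    if any(p.structured_output for p in batch):
        return True
    # Non-uniform top-k / top-p across the batch needs device support.
    uniform = len({(p.top_k, p.top_p) for p in batch}) <= 1
    if not uniform and not device_supports_nonuniform:
        return True
    return False

# Example: mixed top-k values with no device support trigger the fallback.
batch = [SamplingParams(top_k=50), SamplingParams(top_k=10)]
print(needs_host_compat_sampling(batch, device_supports_nonuniform=False))  # True
```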
dtrifiro added a commit to dtrifiro/vllm that referenced this pull request on Dec 11, 2025:
SUMMARY:
* update `docker-bake.hcl` to drop spurious tags; also update the quay repository to partition by accelerator.
* rename `docker-bake.hcl` to `docker-bake-release.hcl`
* add `docker-bake-accept.hcl` to generate an image tagged by GHA `run_id` and git commit sha. This will enable us to debug the images as well as the wheels. These images will be placed into `automation-vllm`.

TEST PLAN: I'll be pushing images from this branch.
ivanium pushed a commit to ivanium/vllm that referenced this pull request on Apr 25, 2026:
Signed-off-by: Woosuk Kwon <woosuk@inferact.ai>