tokenizer: Add fastokens support #41741
Merged
DarkLight1337 merged 3 commits into vllm-project:main on May 7, 2026
Conversation
Contributor
Documentation preview: https://vllm--41741.org.readthedocs.build/en/41741/
Contributor
Code Review
This pull request introduces a new tokenizer_backend configuration option, allowing users to choose between the default Hugging Face tokenizers library and the fastokens Rust backend for BPE tokenizers. The implementation includes documentation updates, CLI and API argument additions, and logic to apply fastokens monkey-patches when enabled. I have no feedback to provide.
Signed-off-by: AlonKejzman <alonkeizman@gmail.com>
Force-pushed a3e6c01 to 376ee65
BugenZhao reviewed on May 6, 2026
Signed-off-by: AlonKejzman <alonkeizman@gmail.com>
Force-pushed d2e121c to 51c9ac4
DarkLight1337 approved these changes on May 7, 2026
Collaborator
@AlonKejzman when I launch gptoss with fastokens, I get this error.
libinta pushed a commit to libinta/vllm that referenced this pull request on May 8, 2026
Signed-off-by: AlonKejzman <alonkeizman@gmail.com>
Co-authored-by: Roger Wang <hey@rogerw.io>
Signed-off-by: Libin Tang <libin.tang@intel.com>
Purpose
Adds a new `--tokenizer-backend` argument that selects the engine powering the Hugging Face tokenizer. Two values are supported:

- `huggingface` (default): the standard tokenizers library; current behavior.
- `fastokens`: uses the fastokens backend.

`tokenizer_backend` is orthogonal to `tokenizer_mode`: it only takes effect when the resolved mode is `"hf"`. Non-HF modes (`mistral`, `deepseek_v32`, etc.) ignore it and continue to use their own tokenizer engines.

The `fastokens` package is imported lazily; if it isn't installed, a clear `ImportError` is raised only when the user opts in.

No existing issue; this is opened as a feature addition. I searched open PRs for tokenizer-backend / fastokens and found no duplicate work.
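The lazy-import behavior described above can be sketched as follows. This is an illustrative sketch, not the actual vLLM code; the function name `resolve_tokenizer_backend` and the exact error message are assumptions:

```python
def resolve_tokenizer_backend(backend: str = "huggingface") -> str:
    """Validate the tokenizer backend choice, importing fastokens lazily.

    Hypothetical helper: names and messages are illustrative, not vLLM's.
    """
    if backend == "huggingface":
        # Default path: the standard tokenizers library, current behavior.
        return "huggingface"
    if backend == "fastokens":
        try:
            # Imported only when the user explicitly opts in, so users who
            # never select this backend don't need the package installed.
            import fastokens  # noqa: F401
        except ImportError as e:
            raise ImportError(
                "tokenizer_backend='fastokens' requires the fastokens package"
            ) from e
        return "fastokens"
    raise ValueError(f"unknown tokenizer_backend: {backend!r}")
```

The point of the pattern is that the opt-in backend is a soft dependency: the import cost and the install requirement are paid only by users who pass `--tokenizer-backend fastokens`.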
Test Plan
Test Result
1 - All tests passed except for those behind gated repos
2 - OK
3 - Same GSM8K scores (~0.86), 10% reduction in TTFT on 32K prompt with 30K shared prefix
4 - OK
5 - OK
Essential Elements of an Effective PR Description Checklist
`supported_models.md` and `examples` for a new model.