Updating the incremental de-tokenizer #18840
ArthurZucker wants to merge 1 commit into vllm-project:main
Conversation
@ArthurZucker sorry for taking so long to respond here. Thanks ever so much for this, the changes look great. If I understand correctly, this adds the ability to initialize the decode stream with a list of token ids, and also to perform "batch" incremental detokenization across a number of requests at once (avoiding multiple py->rust calls)? We could exploit the first of these immediately; the second should also be useful but will need some refactoring of how our output processor is structured.

Re: different tokenizers in the same batch, yes this is possible in theory but would be a rare/niche case, so it should be fine to handle on the vLLM side, i.e. by having a separate DecodeStream per tokenizer.

We did hit an issue where some tokenizers can infrequently behave in ways that violate assumptions made by the DecodeStream logic, leaving it in a broken state. We are working around this now by replacing the DecodeStream with an empty one when that happens, but that wouldn't be an option if we were using them in a batched mode (see #19449).
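For concreteness, here's a minimal sketch of the pattern described above; this is illustrative, not vLLM's actual output-processor code, and the `"gpt2"` model and `step_one` helper are stand-ins:

```python
from tokenizers import Tokenizer
from tokenizers.decoders import DecodeStream

tokenizer = Tokenizer.from_pretrained("gpt2")  # stand-in model
stream = DecodeStream(skip_special_tokens=True)


def step_one(token_id: int) -> str:
    """Incrementally decode one token, resetting the stream on failure."""
    global stream
    try:
        return stream.step(tokenizer, token_id) or ""
    except Exception:
        # Rare tokenizer behaviour can violate DecodeStream's assumptions and
        # leave it broken (see #19449); swapping in a fresh, empty stream
        # loses prefix context for this request but lets decoding continue.
        stream = DecodeStream(skip_special_tokens=True)
        return stream.step(tokenizer, token_id) or ""


text = "".join(step_one(t) for t in tokenizer.encode("Hello world").ids)
```

Note the workaround only works because each request owns its stream; with a single batched stream shared across requests, throwing the stream away would corrupt every request's state at once.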
Closing as I won't have time to finish this, @njhill, but we shipped the ability to init with a sequence and update with a sequence!
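The shipped shape is roughly as below; the method names `from_ids` and `step_batch` are guesses from this thread, not confirmed signatures, so check huggingface/tokenizers#1780 for the real API:

```python
# Hypothetical sketch only: the exact names shipped in
# huggingface/tokenizers#1780 may differ from `from_ids` / `step_batch`.
from tokenizers import Tokenizer
from tokenizers.decoders import DecodeStream

tokenizer = Tokenizer.from_pretrained("gpt2")  # stand-in model
prompt_ids = tokenizer.encode("Hello,").ids

# (1) Init the stream with an existing sequence, e.g. an already-tokenized prompt.
stream = DecodeStream.from_ids(prompt_ids, skip_special_tokens=True)

# (2) Update with a sequence of newly generated tokens in one py->rust call.
new_ids = tokenizer.encode(" world!").ids
delta = stream.step_batch(tokenizer, new_ids)  # decoded text for just the new tokens
```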
It was requested a long time ago, not sure if it is still relevant.
The related PR: huggingface/tokenizers#1780
I'm actually unsure how often you'd end up with a different tokenizer, but I'll update the API to support passing a tokenizer per request ID if needed.
I'm kinda trying to find the API that fits the most use cases, happy to hear what you think!
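As a rough illustration of the per-request-ID idea (the class and method names here are invented, not part of either library): the caller keeps a `DecodeStream` per request and looks up that request's tokenizer on each step, so mixed-tokenizer batches stay correct while the common single-tokenizer case pays nothing extra.

```python
# Illustrative only: a per-request registry so requests using different
# tokenizers each get their own DecodeStream, as suggested above.
from tokenizers import Tokenizer
from tokenizers.decoders import DecodeStream


class PerRequestDetokenizer:
    def __init__(self) -> None:
        self._streams: dict[str, DecodeStream] = {}
        self._tokenizers: dict[str, Tokenizer] = {}

    def add_request(self, request_id: str, tokenizer: Tokenizer) -> None:
        self._tokenizers[request_id] = tokenizer
        self._streams[request_id] = DecodeStream(skip_special_tokens=True)

    def step(self, request_id: str, token_id: int) -> str:
        stream = self._streams[request_id]
        return stream.step(self._tokenizers[request_id], token_id) or ""
```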