Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
```diff
- outputs = llm.generate(prompt_token_ids=prompt_ids,
-                        sampling_params=sampling_params)
+ outputs = llm.generate(prompts=prompts, sampling_params=sampling_params)
```
It looks like you simply removed the alternative input. We still support prompt token ID inputs, but they have to be wrapped in a dictionary passed to `prompts`. You can check the type annotations for more information.
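A minimal sketch of what the reviewer describes: token-ID inputs are wrapped in a dict (vLLM's `TokensPrompt` form, keyed by `prompt_token_ids`) and passed through the `prompts` parameter instead of the deprecated `prompt_token_ids` keyword. The token IDs and model name below are hypothetical, and the actual `generate` call is left commented out since it requires model weights:

```python
# Hypothetical token-ID prompts (e.g. from a tokenizer).
prompt_ids = [[1, 2, 3, 4], [5, 6, 7]]

# Each prompt becomes a dict with a "prompt_token_ids" key,
# matching vLLM's TokensPrompt input type.
prompts = [{"prompt_token_ids": ids} for ids in prompt_ids]

# With a real model this would be:
#   from vllm import LLM, SamplingParams
#   llm = LLM(model="...")
#   outputs = llm.generate(prompts=prompts,
#                          sampling_params=SamplingParams())
print(prompts[0])
```

This keeps `generate` with a single `prompts` entry point while still supporting pre-tokenized input.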
Closing as superseded by #18800
vllm/examples/offline_inference/eagle.py:132: DeprecationWarning: The keyword arguments {'prompt_token_ids'} are deprecated and will be removed in a future update. Please use the 'prompts' parameter instead.