[V1][PP] Fix intermediate tensor values#13417
Conversation
WoosukKwon
left a comment
Awesome! Thanks for the fix!
Signed-off-by: Cody Yu <hao.yu.cody@gmail.com>
Signed-off-by: Louis Ulmer <ulmerlouis@gmail.com>
#13353 cached the intermediate tensors to make CUDA graphs work. However, we forgot to copy the values of the intermediate tensors into the cached tensors. This PR fixes that.
With this PR and #13339, I've verified that PP produces correct outputs on a single node.
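A minimal sketch of the pattern being fixed, with hypothetical names (`IntermediateTensorCache`, `get`) that are not vLLM's actual API: CUDA graph replay requires fixed buffer addresses, so intermediate tensors are cached per batch size; on a cache hit, the fresh values must be copied into the cached buffers rather than returning them stale.

```python
import torch


class IntermediateTensorCache:
    """Hypothetical sketch: reuse fixed intermediate-tensor buffers
    (as CUDA graph capture requires), copying fresh values in on reuse."""

    def __init__(self):
        self._cache: dict[int, dict[str, torch.Tensor]] = {}

    def get(self, new_tensors: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
        num_tokens = next(iter(new_tensors.values())).shape[0]
        cached = self._cache.get(num_tokens)
        if cached is None:
            # First use at this size: allocate the fixed buffers that a
            # CUDA graph would capture.
            cached = {k: v.clone() for k, v in new_tensors.items()}
            self._cache[num_tokens] = cached
        else:
            # The fix: copy the fresh values into the cached buffers.
            # Without this copy, the graph replays against stale data.
            for k, v in new_tensors.items():
                cached[k].copy_(v)
        return cached
```

The key point is `copy_` on reuse: the buffer identity stays stable across steps (so graph replay remains valid), while the contents are refreshed each step.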
cc @WoosukKwon @ruisearch42