Conversation
Code Review
This pull request adds support for Decode Context Parallelism (DCP) with FP8 KV cache in Multi-head Latent Attention (MLA). The changes involve removing assertions that previously blocked this combination, replacing a specialized cache-gathering operation with a more general one that handles dequantization, and updating the decode path to correctly process FP8 tensors with DCP. The changes appear correct and consistent with the goal of enabling this feature. I have not found any high- or critical-severity issues in this pull request.
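For illustration, a minimal sketch of what a gather that also handles dequantization could look like (the name `gather_and_maybe_dequant`, the per-tensor `scale`, and the cache layout are assumptions for this sketch, not the actual vLLM kernels):

```python
import torch

FP8_DTYPE = torch.float8_e4m3fn  # assumed storage dtype of the FP8 KV cache

def gather_and_maybe_dequant(kv_cache: torch.Tensor,
                             block_ids: torch.Tensor,
                             scale: torch.Tensor,
                             out_dtype: torch.dtype = torch.bfloat16) -> torch.Tensor:
    """Gather the cache blocks needed by this DCP rank; if the cache is
    stored in FP8, dequantize to the compute dtype on the way out."""
    gathered = kv_cache[block_ids]  # (num_blocks, block_size, head_dim)
    if kv_cache.dtype == FP8_DTYPE:
        # Per-tensor scale for simplicity; real caches may scale per block.
        return gathered.to(out_dtype) * scale.to(out_dtype)
    return gathered.to(out_dtype)
```

A bf16 cache passes through unchanged, which is why one general op can replace the specialized gather.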
This pull request has merge conflicts that must be resolved before it can be merged.
@LucasWilkinson Are you still working on this? Otherwise I can open a separate compatibility PR where DCP keeps working as it currently does (in bf16) and quantizes to fp8 where needed. That would unlock usage of fp8 KV cache with DCP, leaving the optimization of the DCP comms for later. That's the idea in principle at least; I haven't fully tested it out (see the sketch below). Any preferences? This would synergize with #34597 on expanding Triton MLA to fp8 KV.
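To make the compatibility idea concrete, here is a rough sketch (the helper `dcp_gather_kv_bf16` and the per-tensor `scale` are hypothetical, not existing vLLM DCP helpers): the cache stays stored in fp8, but each rank dequantizes its shard to bf16 before the DCP all-gather, so the comms and attention path stay as they are today.

```python
import torch
import torch.distributed as dist

def dcp_gather_kv_bf16(local_kv_fp8: torch.Tensor,
                       scale: torch.Tensor,
                       group: dist.ProcessGroup | None = None) -> torch.Tensor:
    """Dequantize this rank's FP8 KV shard to bf16, then all-gather it
    across the DCP group so comms and attention math stay in bf16."""
    # Hypothetical per-tensor scale; real kernels may use finer granularity.
    local_bf16 = local_kv_fp8.to(torch.bfloat16) * scale.to(torch.bfloat16)
    world_size = dist.get_world_size(group)
    chunks = [torch.empty_like(local_bf16) for _ in range(world_size)]
    dist.all_gather(chunks, local_bf16, group=group)
    # The sequence (token) dimension is sharded across DCP ranks.
    return torch.cat(chunks, dim=0)
```

The tradeoff is that the all-gather moves bf16 rather than fp8, i.e. roughly 2x the traffic; shrinking the comms is exactly the optimization deferred to later.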
I added a new rebased PR that supports this feature + tests: #34795 |
FIX #32010
Signed-off-by: Lucas Wilkinson lwilkins@redhat.com