This PR improves FlashMLA performance on the CPU for token generation (TG) with long contexts. The same strategy should also improve FA performance for GQA models, but something is not quite right there, so I have enabled it only for MLA for now.
Here is a performance comparison between the main branch and this PR for DeepSeek-Lite on a Ryzen-7950X CPU:
For TG, the main branch parallelizes the `V*softmax(K*Q)` computation along the heads, so given enough threads, the `K*Q` operation computed by each thread becomes a GEMV, which is notoriously memory bound. In this PR, parallelization is done along the K-cache entries, so the `K*Q` portion computed by each thread is a GEMM, which is faster. But this requires one additional thread synchronization before combining the results of the threads. My guess is that this extra barrier leads to the observed slightly lower performance for short contexts (where, with the main branch implementation, `K*Q` is fast despite being a GEMV).

To put the above table into perspective, TG speed with a context of 16k tokens is around 10 t/s without MLA and FA for this model on this CPU.
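To make the extra combine step concrete, here is a minimal, self-contained sketch of the split-KV idea; this is not the actual kernel from this PR, and the names (`PartialOut`, `attn_chunk`, `merge_partials`) are illustrative only. Each "thread" scans its slice of the K-cache for one query, keeping an un-normalized output accumulator together with a running max and exp-sum; after the barrier, the per-thread partials are merged into the final softmax-normalized result.

```cpp
// Sketch of split-KV attention for a single query vector (assumed layout:
// row-major K and V, one head). Each chunk of the K-cache yields a partial
// result that can be merged exactly, which is what the extra barrier enables.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct PartialOut {
    std::vector<float> o;  // un-normalized weighted sum of V rows
    float m;               // max logit seen in this chunk
    float s;               // sum of exp(logit - m) in this chunk
};

// Attention over K/V rows [begin, end) for one query q of dimension d.
static PartialOut attn_chunk(const float *q, const float *K, const float *V,
                             int d, int begin, int end) {
    PartialOut p{std::vector<float>(d, 0.0f), -INFINITY, 0.0f};
    for (int i = begin; i < end; ++i) {
        float logit = 0.0f;
        for (int j = 0; j < d; ++j) logit += q[j] * K[i*d + j];
        float m_new = std::max(p.m, logit);
        float scale = std::exp(p.m - m_new);     // rescale previous accumulators
        float w     = std::exp(logit - m_new);
        for (int j = 0; j < d; ++j) p.o[j] = p.o[j]*scale + w*V[i*d + j];
        p.s = p.s*scale + w;
        p.m = m_new;
    }
    return p;
}

// The step behind the extra synchronization: combine per-thread partials into
// the final, properly normalized attention output.
static std::vector<float> merge_partials(const std::vector<PartialOut> &parts, int d) {
    float m = -INFINITY;
    for (const auto &p : parts) m = std::max(m, p.m);
    std::vector<float> o(d, 0.0f);
    float s = 0.0f;
    for (const auto &p : parts) {
        float scale = std::exp(p.m - m);
        for (int j = 0; j < d; ++j) o[j] += scale * p.o[j];
        s += scale * p.s;
    }
    for (int j = 0; j < d; ++j) o[j] /= s;
    return o;
}

int main() {
    const int d = 4, n_kv = 8, n_chunks = 2;
    std::vector<float> q(d), K(n_kv*d), V(n_kv*d);
    for (int j = 0; j < d; ++j) q[j] = 0.1f*(j+1);
    for (int i = 0; i < n_kv*d; ++i) { K[i] = std::sin(0.3f*i); V[i] = std::cos(0.2f*i); }

    std::vector<PartialOut> parts;
    for (int c = 0; c < n_chunks; ++c)   // one chunk per "thread", done serially here
        parts.push_back(attn_chunk(q.data(), K.data(), V.data(), d,
                                   c*n_kv/n_chunks, (c+1)*n_kv/n_chunks));
    std::vector<float> out = merge_partials(parts, d);

    for (int j = 0; j < d; ++j) printf("%8.5f ", out[j]);
    printf("\n");
    return 0;
}
```

The merge only needs each chunk's running max and exp-sum, so splitting along the K-cache gives each thread a contiguous GEMM-shaped block of work at the cost of that one extra combine pass.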