[rl] Fix CI loss=0 and logprob=NaN#3232
Merged
wwwjn merged 1 commit into pytorch:main on May 6, 2026
Conversation
tianyu-l reviewed on May 5, 2026
```python
# After pytorch/pytorch#179760, FA2 also accepts num_splits and
# auto-selects num_splits>1 for paged KV, which can produce NaN.
# Always set num_splits=1 for FA2 with paged KV.
if is_in_batch_invariant_mode() or current_flash_attention_impl() != "FA3":
```
Contributor
Always using num_splits=1 doesn't sound like a fix.
Do we know what the root cause is? Are we tracking it / creating issues? At least add a TODO here?
Contributor (Author)
> Do we know what the root cause is?

Not yet. I raised this error to @liangel-02; let me add an issue to track this regression.
Contributor
Can we use something like `== "FA2"` instead of `!= "FA3"`? I don't know what the set of options is.
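For context, here is a minimal sketch (not the actual patch) of how the guard under discussion could be wired up. `flash_attn_with_kvcache` and its `num_splits` kwarg come from the flash-attn package; `q`, `k_cache`, `v_cache`, and `cache_seqlens` are placeholder inputs, and the exact call site in this PR is an assumption:

```python
# Minimal sketch, assuming the helpers shown in the diff are in scope.
# Force a single-split kernel whenever we are not on FA3: per the comment
# above, FA2 auto-selects num_splits > 1 for paged KV, which can produce NaN.
from flash_attn import flash_attn_with_kvcache

extra_kwargs = {}
if is_in_batch_invariant_mode() or current_flash_attention_impl() != "FA3":
    extra_kwargs["num_splits"] = 1  # disable the split-KV heuristic

out = flash_attn_with_kvcache(
    q,                 # (batch, seqlen_q, nheads, headdim)
    k_cache, v_cache,  # paged KV cache tensors
    cache_seqlens=cache_seqlens,
    **extra_kwargs,
)
```

Forcing `num_splits=1` likely trades some decode throughput for determinism, which is consistent with the `is_in_batch_invariant_mode()` condition in the same guard.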
Force-pushed from 0d00d62 to 25b35f9
wwwjn commented on May 6, 2026
tianyu-l approved these changes on May 6, 2026
tianyu-l (Contributor) left a comment:
Please address the comments.
3 Fixes:
`export VLLM_USE_FLASHINFER_SAMPLER=0`. We need `VLLM_USE_FLASHINFER_SAMPLER=0` because "[Perf] Enable FlashInfer top-k/top-p sampler by default" (vllm-project/vllm#40376) landed Apr. 29. Our CI environment does not have nvcc installed, so FlashInfer kernels cannot be JIT-compiled.
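A hedged sketch of how this fix could be applied inside a Python test harness rather than the shell: set the variable before vLLM is imported so the ordering is unambiguous (the module-level placement and the harness itself are assumptions, not this PR's code):

```python
# Minimal sketch: disable the FlashInfer top-k/top-p sampler in CI.
# Our CI image has no nvcc, so FlashInfer kernels cannot be JIT-compiled.
import os

# setdefault lets an explicit shell-level export still take precedence.
os.environ.setdefault("VLLM_USE_FLASHINFER_SAMPLER", "0")

from vllm import LLM, SamplingParams  # import after the env var is set
```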
export VLLM_USE_FLASHINFER_SAMPLER=0. We will need VLLM_USE_FLASHINFER_SAMPLER=0 because [Perf] Enable FlashInfer top-k/top-p sampler by default vllm-project/vllm#40376 landed Apr. 29. For our CI environment, we didn't install nvcc so it won't support FlashInfer to be JIT compiled.