fix: correct PDL parameter handling in RopeQuantize kernel #1982
Conversation
Walkthrough

This PR introduces PDL (Programmatic Dependent Launch) support to the RoPE quantization pipeline by adding an `enable_pdl` parameter that is threaded from the Python API through the C++ binding down to the CUDA kernel launch configuration.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Test/Bench as Test or Benchmark
    participant Python as Python API<br/>(rope.py)
    participant Binding as C++ Binding<br/>(rope_binding.cu)
    participant Kernel as CUDA Kernel<br/>(rope.cu)
    participant Launch as Launch Config<br/>(pos_enc.cuh)

    Test/Bench->>Python: rope_quantize_fp8(..., enable_pdl)
    activate Python
    Python->>Python: _rope_quantize(..., enable_pdl)
    Python->>Binding: rope_quantize(..., enable_pdl)
    activate Binding
    Binding->>Kernel: rope_quantize(..., enable_pdl)
    activate Kernel
    Kernel->>Launch: Prepare launch config with enable_pdl
    activate Launch
    Launch->>Launch: Set programmaticStreamSerializationAllowed<br/>= enable_pdl ? 1 : 0
    Note over Launch: Single cudaLaunchAttribute<br/>always supplied
    deactivate Launch
    Kernel->>Kernel: Launch RopeQuantize kernel<br/>with attribute config
    deactivate Kernel
    deactivate Binding
    deactivate Python
```
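To make the final step of the diagram concrete, here is a minimal CUDA sketch of a PDL-aware launch: a single `cudaLaunchAttributeProgrammaticStreamSerialization` attribute is always supplied, and only its value toggles with `enable_pdl`. The kernel and helper names are placeholders, not FlashInfer's actual code in `pos_enc.cuh`.

```cuda
#include <cuda_runtime.h>

// Placeholder standing in for the real RopeQuantize kernel.
__global__ void RopeQuantizeStub(float* out) { out[0] = 1.0f; }

// Launch helper: the attribute lives at the same scope as the launch call,
// so the pointer held by the config stays valid while cudaLaunchKernelEx runs.
cudaError_t LaunchWithOptionalPDL(float* out, cudaStream_t stream, bool enable_pdl) {
  cudaLaunchAttribute attr[1];
  attr[0].id = cudaLaunchAttributeProgrammaticStreamSerialization;
  attr[0].val.programmaticStreamSerializationAllowed = enable_pdl ? 1 : 0;

  cudaLaunchConfig_t config = {};
  config.gridDim = dim3(1);
  config.blockDim = dim3(32);
  config.dynamicSmemBytes = 0;
  config.stream = stream;
  config.attrs = attr;   // single attribute, always supplied
  config.numAttrs = 1;

  return cudaLaunchKernelEx(&config, RopeQuantizeStub, out);
}
```

Always supplying the attribute and toggling only its value keeps `config.attrs` pointing at live storage regardless of the flag, which is the shape of the fix this PR describes.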
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning) · ✅ Passed checks (2 passed)
Summary of Changes

Hello @cicirori, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request addresses critical issues related to the handling of Programmatic Dependent Launch (PDL) within the RoPE quantization kernel.

Highlights
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
Code Review
This pull request correctly addresses a critical bug in the RopeQuantize kernel related to PDL parameter handling. The primary issue was a dangling pointer caused by incorrect variable scope in the CUDA kernel launch configuration, which has been resolved effectively. The new enable_pdl parameter has been consistently propagated through the Python API, CUDA bindings, and kernel implementation. Additionally, comprehensive updates to the benchmarks and tests have been included to validate the fix and measure its performance impact. The changes are well-executed and improve the robustness of the kernel.
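As an illustration of the scope bug the review refers to, here is a hypothetical reconstruction (the actual FlashInfer code differs): when the attribute array is declared inside the `if (enable_pdl)` block, `config.attrs` dangles by the time the kernel is launched.

```cuda
#include <cuda_runtime.h>

__global__ void DummyKernel() {}

// Hypothetical bug pattern, for illustration only.
cudaError_t BuggyLaunch(cudaStream_t stream, bool enable_pdl) {
  cudaLaunchConfig_t config = {};
  config.gridDim = dim3(1);
  config.blockDim = dim3(32);
  config.stream = stream;
  if (enable_pdl) {
    cudaLaunchAttribute attr[1];  // lifetime ends at the closing brace below
    attr[0].id = cudaLaunchAttributeProgrammaticStreamSerialization;
    attr[0].val.programmaticStreamSerializationAllowed = 1;
    config.attrs = attr;          // stores a pointer to block-local stack storage
    config.numAttrs = 1;
  }  // attr is destroyed here, so config.attrs dangles whenever enable_pdl is true
  return cudaLaunchKernelEx(&config, DummyKernel);
}
```

Hoisting the attribute to the launch scope and always supplying it with a value of `enable_pdl ? 1 : 0`, as in the sketch after the sequence diagram above, removes the dangling pointer.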
yzh119 left a comment:
Thanks for the bugfix @cicirori !

📌 Description
1. Fixed Parameter Alignment
The `stream` parameter was being passed to the wrong position in the `RopeQuantize` function call due to the missing `enable_pdl` parameter; SGLang would hang before this PR. Added the `enable_pdl` parameter to the function signature and properly aligned all parameters.
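A minimal sketch of the alignment problem, with an invented argument list; the real `RopeQuantize` wrapper in FlashInfer takes many more parameters. The point is purely positional: once `enable_pdl` exists in the signature, the trailing `stream` argument lands in the right slot.

```cuda
#include <cuda_runtime.h>

// Hypothetical signature after the fix: enable_pdl precedes the stream.
void RopeQuantize(void* q_in, void* q_out, float scale,
                  bool enable_pdl, cudaStream_t stream);

// Before the fix, caller and callee disagreed about whether enable_pdl was in
// the parameter list, so the trailing stream argument was received in the
// wrong position and never reached the kernel launch (hence the hang).
// After the fix, signature and call site agree:
//   RopeQuantize(q_in, q_out, scale, enable_pdl, stream);
```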
2. Fixed PDL Launch Configuration

With `enable_pdl=true`, the kernel would throw CUDA errors due to incorrect PDL attribute handling. The launch-attribute setup now follows the existing pattern in `csrc/fmhaReduction.cu`.

🔍 Related Issues
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- I have installed the hooks with `pre-commit install`.
- I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

🧪 Tests
- Tests are passing (`unittest`, etc.).

Reviewer Notes
Summary by CodeRabbit
New Features
Tests