Conversation

@bkryu
Collaborator

@bkryu bkryu commented Nov 7, 2025

📌 Description

tests/moe/test_trtllm_gen_routed_fused_moe.py was newly added in #2049 but lacks an SM architecture check, which causes unit test failures on non-SM10X devices.

This PR adds the missing skips.
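The skip decision itself reduces to comparing the major compute-capability version against 10. A minimal, GPU-free sketch of that predicate (the helper name `is_supported_sm` is ours; the actual test calls `flashinfer.utils.get_compute_capability` on a CUDA device and then `pytest.skip`):

```python
def is_supported_sm(capability, supported_majors=(10,)):
    """Return True when the (major, minor) compute capability is SM10X."""
    major, _minor = capability
    return major in supported_majors

# SM90 (Hopper) would be skipped; SM100 and SM103 (Blackwell) run.
print(is_supported_sm((9, 0)))   # False
print(is_supported_sm((10, 0)))  # True
print(is_supported_sm((10, 3)))  # True
```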

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Tests
    • Added GPU compute capability checks to MOE tests. Tests are now skipped on unsupported hardware, requiring SM100 or SM103 GPUs to execute.

@bkryu
Collaborator Author

bkryu commented Nov 7, 2025

/bot run

@coderabbitai
Contributor

coderabbitai bot commented Nov 7, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

The pull request adds GPU compute capability checks to test functions in the TRTLLM MOE test suite. Tests are conditionally skipped on GPUs with SM versions other than SM100/SM103 (Blackwell-class GPUs) by detecting the compute capability and triggering early returns when requirements are not met.
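SM labels map directly onto CUDA compute-capability tuples, which is why checking only the major version covers both supported chips. A small illustration (the `sm_name` helper is ours, not from the PR):

```python
def sm_name(capability):
    """Format a (major, minor) CUDA compute capability as an SM label."""
    major, minor = capability
    return f"SM{major}{minor}"

# Both Blackwell-class parts share major version 10, so the guard only
# needs to compare capability[0] against 10.
print(sm_name((10, 0)))  # SM100
print(sm_name((10, 3)))  # SM103
print(sm_name((9, 0)))   # SM90 (Hopper), skipped by the guard
```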

Changes

Cohort / File(s) Summary
Compute capability guards
tests/moe/test_trtllm_gen_routed_fused_moe.py
Adds get_compute_capability import and runtime GPU SM version checks to skip test functions on non-SM100/SM103 devices with informational messages

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~5 minutes

  • Simple skip condition additions with minimal logic density
  • Single file affected with consistent, repetitive pattern
  • No behavioral changes to core test logic

Possibly related PRs

Suggested reviewers

  • yzh119

Poem

🐰 A skippy little hop so fine,
When SM's not quite in line,
Blackwell GPUs make the grade,
While others fade to skip-parade,
Tests now wise, forever mine! ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check — ❓ Inconclusive: The PR description addresses the core issue (missing SM arch check) and indicates that skips are being added, but it is vague about what exactly is implemented. Clarify which skips are added, confirm which test functions are affected, and complete the Related Issues section by linking PR #2049 for context.
✅ Passed checks (1 passed)
  • Title check — ✅ Passed: The title clearly and concisely describes the main change: adding SM architecture checks to skip unsupported devices in the newly added trtllm MoE test.

Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist
Contributor

Summary of Changes

Hello @bkryu, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue where a recently introduced TRT-LLM MoE test was causing failures on GPU devices with unsupported Streaming Multiprocessor (SM) architectures. By incorporating a compute capability check, the test now gracefully skips execution on non-SM10X GPUs, ensuring test stability and preventing unnecessary failures on incompatible hardware.

Highlights

  • Test Skipping Logic: Introduced a mechanism to skip the test_trtllm_gen_routed_fused_moe test on GPU architectures that are not SM10X (e.g., SM100, SM103).
  • Dependency Import: Added an import for get_compute_capability from flashinfer.utils to determine the GPU's compute architecture.
  • Bug Fix: Resolved unit test failures occurring on non-SM10X devices, which were caused by the newly added TRT-LLM MoE test lacking an SM architecture check.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review: /gemini review — Performs a code review for the current pull request in its current state.
  • Pull Request Summary: /gemini summary — Provides a summary of the current pull request in its current state.
  • Comment: @gemini-code-assist — Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help: /gemini help — Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@flashinfer-bot
Collaborator

GitLab MR !119 has been created, and the CI pipeline #38051844 is currently running. I'll report back once the pipeline job completes.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly adds a check to skip a new MoE test on unsupported SM architectures, fixing a test failure bug. My review suggests a refactoring to improve the implementation of this check. Instead of an inline skip, I recommend using a reusable pytest marker. This improves maintainability by avoiding code duplication and adheres to pytest best practices, making the test's requirements more explicit.

Comment on lines +65 to +67
compute_capability = get_compute_capability(torch.device(device="cuda"))
if compute_capability[0] not in [10]:
    pytest.skip("These tests are only guaranteed to work on SM100 and SM103 GPUs.")

Severity: medium

This compute capability check is duplicated from tests/moe/test_trtllm_gen_fused_moe.py. To improve maintainability and follow pytest best practices, it's better to use a custom marker with skipif. This avoids runtime checks inside the test body and makes the requirement explicit in the test declaration.

You could define a marker in a shared conftest.py file:

# tests/conftest.py or a shared test utility file
import pytest
import torch

sm10x_only = pytest.mark.skipif(
    not torch.cuda.is_available() or torch.cuda.get_device_capability()[0] != 10,
    reason="Test requires SM 10.x architecture (e.g., SM100, SM103)"
)

Then apply it to the test function. This would allow you to remove these lines, and also the get_compute_capability import on line 39.

# tests/moe/test_trtllm_gen_routed_fused_moe.py
# from tests.conftest import sm10x_only

@sm10x_only
@pytest.mark.parametrize(...)
# ...
def test_trtllm_gen_routed_fused_moe(...):
    # No need for the manual skip check here
    torch.manual_seed(42)
    # ...

This approach is cleaner, reusable, and evaluated at test collection time.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
tests/moe/test_trtllm_gen_routed_fused_moe.py (2)

39-40: Consolidate imports from flashinfer.utils.

Line 31 already imports from flashinfer.utils. Consider consolidating both imports into a single statement for better organization.

Apply this diff:

-from flashinfer.utils import device_support_pdl
+from flashinfer.utils import device_support_pdl, get_compute_capability

-from flashinfer.utils import get_compute_capability
-

65-67: Consider using the same device for capability check and test execution.

The compute capability is checked on a generic cuda device (line 65), but the test executes on cuda:0 (line 69). While this works in most cases, using the same device for both ensures consistency, especially in multi-GPU systems.

Apply this diff:

+    device = torch.device("cuda:0")
-    compute_capability = get_compute_capability(torch.device(device="cuda"))
+    compute_capability = get_compute_capability(device)
     if compute_capability[0] not in [10]:
         pytest.skip("These tests are only guaranteed to work on SM100 and SM103 GPUs.")
     torch.manual_seed(42)
-    device = torch.device("cuda:0")
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between adcc5dd and 793d89a.

📒 Files selected for processing (1)
  • tests/moe/test_trtllm_gen_routed_fused_moe.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/moe/test_trtllm_gen_routed_fused_moe.py (1)
flashinfer/utils.py (1)
  • get_compute_capability (252-255)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Deploy Docs

@bkryu bkryu marked this pull request as draft November 7, 2025 03:00
@bkryu bkryu self-assigned this Nov 7, 2025
@bkryu bkryu marked this pull request as ready for review November 7, 2025 05:54
@bkryu bkryu requested review from IwakuraRein and yzh119 November 7, 2025 05:54
@flashinfer-bot
Collaborator

[SUCCESS] Pipeline #38051844: 13/17 passed

@yzh119 yzh119 merged commit 36d2463 into flashinfer-ai:main Nov 7, 2025
4 checks passed
@bkryu bkryu deleted the test_trtllm_moe_skip branch November 7, 2025 23:43