Merged
2 changes: 2 additions & 0 deletions tests/gdn/test_decode_delta_rule.py
@@ -23,6 +23,8 @@
import torch
import pytest

pytestmark = pytest.mark.skip(reason="Temporarily skipped due to CI failures.")
Contributor
medium

While skipping tests can be a temporary solution for CI failures, it's crucial to track this technical debt. Please add a reference to a tracking issue in the reason string to ensure these tests are re-enabled once the underlying problem is fixed.

Suggested change
pytestmark = pytest.mark.skip(reason="Temporarily skipped due to CI failures.")
pytestmark = pytest.mark.skip(reason="Temporarily skipped due to CI failures. See issue #XYZ for tracking.")

Contributor

⚠️ Potential issue | 🟠 Major

Prefer a targeted skipif using flashinfer.utils rather than an unconditional blanket skip; also add a tracking reference.

Three concerns:

  1. Coding guideline: Per project rules, test files must use flashinfer.utils functions (get_compute_capability, is_sm90a_supported, etc.) to skip tests on unsupported GPU architectures. An unconditional pytest.mark.skip silently drops the tests on every machine and every CI run, including ones where they would pass. If the root cause is that the tests fail on a specific architecture, convert this to a skipif:
πŸ’‘ Architecture-conditional skip (guideline-compliant alternative)
-pytestmark = pytest.mark.skip(reason="Temporarily skipped due to CI failures.")
+from flashinfer.utils import get_compute_capability as _get_cc
+import torch as _torch
+pytestmark = pytest.mark.skipif(
+    _get_cc(_torch.device("cuda"))[0] not in [9, 10, 11, 12],
+    reason="GDN decode requires SM90+ (temporarily skipped on non-SM90 CI runners).",
+)
  2. Import execution: pytestmark only suppresses test execution, not module-level code. Lines 28–44 (importing reference_delta_rule, flashinfer.gdn_decode, and flashinfer.utils) still run during pytest collection. If the CI failures are import-time errors (e.g., a missing compiled kernel), the skip won't prevent a collection error β€” the module would need a try/except ImportError guard around those imports, or a conftest.py-level collect_ignore.

  3. No tracking reference: "Temporarily skipped due to CI failures." lacks a link to the issue or ticket tracking the root cause, making it hard to know when it is safe to re-enable.

As per coding guidelines: "Test files must use flashinfer.utils functions (get_compute_capability, is_sm90a_supported, etc.) to skip tests on unsupported GPU architectures."
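
The import-guard idea in point 2 can be sketched in a self-contained way. Here `some_missing_gpu_lib` is a placeholder for the real dependency (e.g. `flashinfer.gdn_decode`), and the tracking-issue wording is hypothetical; the point is that a failed import becomes a module-wide `skipif` instead of a collection error:

```python
import pytest

# Guard the module-level import: if the compiled dependency is absent,
# record that fact instead of letting collection blow up with ImportError.
try:
    import some_missing_gpu_lib  # placeholder for the real dependency
    HAVE_LIB = True
except ImportError:
    HAVE_LIB = False

# Skip every test in this module only when the import actually failed,
# and point at a tracking reference so the skip has an owner.
pytestmark = pytest.mark.skipif(
    not HAVE_LIB,
    reason="dependency import failed; see tracking issue before re-enabling.",
)
```

Unlike an unconditional `pytest.mark.skip`, this pattern still runs the tests on machines where the import succeeds.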

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/gdn/test_decode_delta_rule.py` at line 26, Replace the unconditional
pytestmark skip with a targeted pytest.mark.skipif that uses flashinfer.utils
helpers (e.g., get_compute_capability and is_sm90a_supported) to detect
unsupported GPU architectures and skip only when appropriate; wrap the
module-level imports of reference_delta_rule, flashinfer.gdn_decode, and
flashinfer.utils in a try/except ImportError (or guard with an explicit
collect-time check) so import-time failures don’t break test collection, and
update the skip reason to include a tracking reference (issue/ticket ID) so the
intent and owner are recorded.


try:
    from .reference_delta_rule import decode_delta_rule, verify_delta_rule
except ImportError: