[CYTHON] Fix stream passing bug #68

Merged
tqchen merged 1 commit into apache:main from tqchen:stream
Sep 28, 2025

Conversation

@tqchen
Member

@tqchen tqchen commented Sep 28, 2025

This PR fixes a bug in stream passing which breaks expected stream passing behavior. It also adds a regression case via load_inline_cuda to guard against this issue (needs a CUDA environment to run at the moment).
@gemini-code-assist
Contributor

Summary of Changes

Hello @tqchen, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a bug in the Cython-based Foreign Function Interface (FFI) that broke the handling and passing of CUDA streams. The fix ensures that device contexts are correctly identified, preventing issues with stream management. To guard against future regressions, test cases have been introduced for CUDA stream passing, validating the behavior with both default and explicitly managed PyTorch streams. Additionally, the package version has been updated to reflect these changes.

Highlights

  • Stream Passing Bug Fix: Corrected a bug in the Cython FFI where the device context check for stream passing incorrectly used device_id instead of device_type, leading to unexpected stream behavior (a sketch of the corrected check follows this list).
  • Regression Test for CUDA Stream Passing: Added a new regression test case using load_inline_cuda to validate correct CUDA stream passing. This test now explicitly passes a raw_stream argument and verifies that the stream used within the CUDA kernel matches the provided stream.
  • Expanded Test Coverage: The new regression test includes scenarios for both default (raw stream 0) and explicit PyTorch CUDA streams, ensuring robust validation of the stream passing mechanism.
  • Version Update: The package version has been incremented from 0.1.0b10 to 0.1.0b11 in pyproject.toml and python/tvm_ffi/__init__.py.
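
The following is a minimal sketch of the bug class named in the first highlight, written in plain Python rather than the actual Cython FFI code. The DLDevice shape and the device_type codes follow the DLPack convention; the function names are illustrative, not the real tvm-ffi internals.

```python
from dataclasses import dataclass

# DLPack device_type codes: 0 means "unset", kDLCPU = 1, kDLCUDA = 2.
kDLCPU = 1
kDLCUDA = 2

@dataclass
class DLDevice:
    device_type: int = 0
    device_id: int = 0

def has_device_ctx_buggy(dev: DLDevice) -> bool:
    # Buggy check: any device with index 0 (e.g. cuda:0) is wrongly
    # treated as "no device context", so the stream is not forwarded.
    return dev.device_id != 0

def has_device_ctx_fixed(dev: DLDevice) -> bool:
    # Fixed check: every real device has a nonzero device_type, so
    # cuda:0 is recognized and its stream can be passed through.
    return dev.device_type != 0

cuda0 = DLDevice(device_type=kDLCUDA, device_id=0)  # cuda:0
assert not has_device_ctx_buggy(cuda0)  # the bug: context looks unset
assert has_device_ctx_fixed(cuda0)      # the fix: context is detected
```

Since device 0 is the most common CUDA device, the device_id check silently dropped the user's stream in exactly the default configuration, which is why the regression test below matters.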

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request addresses a bug in stream passing within the Cython FFI layer by switching from checking device_id to the more robust device_type when determining whether a device context has been established. This is a good improvement for correctness. The PR also introduces a valuable regression test for CUDA stream handling, which validates behavior for both default and user-created streams. Additionally, the package version is updated. My review includes one suggestion to remove a redundant check in the new test code to improve its clarity. (A sketch of the stream-handle behavior the test relies on follows below.)
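
Below is a hedged sketch of the stream-handle behavior the regression test builds on, restricted to the PyTorch side. The exact load_inline_cuda call from the PR is not reproduced here since its signature is not shown in this thread; the sketch only demonstrates the raw handles the test would pass through the FFI, for both the default-stream and explicit-stream scenarios.

```python
import torch

def check_raw_stream_handles() -> None:
    # Needs a CUDA environment to run, as the PR description notes.
    if not torch.cuda.is_available():
        return
    # Default-stream scenario: with the legacy default stream, PyTorch
    # reports a raw handle of 0, matching the "raw stream 0" case in
    # the test coverage highlight.
    assert torch.cuda.current_stream().cuda_stream == 0
    # Explicit-stream scenario: the raw handle is nonzero, and this is
    # the value the fixed FFI path should forward to the kernel launch.
    s = torch.cuda.Stream()
    with torch.cuda.stream(s):
        assert torch.cuda.current_stream().cuda_stream == s.cuda_stream

check_raw_stream_handles()
```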

@tqchen tqchen merged commit fde8dab into apache:main Sep 28, 2025
7 checks passed
yzh119 added a commit to flashinfer-ai/flashinfer that referenced this pull request Sep 29, 2025

## 📌 Description

The codegen logic for PyTorch and TVM can be unified after #1641, and
this PR cleans up the related codegen functions in tvm_bindings.

Other changes:
1. update tvm-ffi to 0.1.0b11 to incorporate
apache/tvm-ffi#67 and
apache/tvm-ffi#68
2. rename source files: `_ops.cu` and `_pybind.cu` are renamed to
`_binding.cu`
3. remove torch-related header includes/library linking in the ninja files
(#1642 (comment))
4. remove the use of `use_torch_stream` in unit tests; it is no longer
required after apache/tvm-ffi#68

## 🔍 Related Issues

#1641 

## Reviewer Notes

cc @MasterJH5574 please let us know what changes we need to make to
help you bump to the latest version of FlashInfer in MLC.
@tqchen tqchen deleted the stream branch October 3, 2025 00:36