
flashinfer-bench: add auto workload dump in decorator#2305

Closed
yyihuang wants to merge 5 commits into flashinfer-ai:main from yyihuang:decorator_dump_workload

Conversation

@yyihuang
Collaborator

@yyihuang yyihuang commented Jan 7, 2026

📌 Description

  • Dump API arguments to safetensor files, organized by API name (directory name) and parameter name (safetensor key).
  • Safetensor files are named by uuid.
  • Move the remaining workflow to flashinfer-trace (JSON generation depends on the schema in flashinfer-trace).

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

@coderabbitai
Contributor

coderabbitai bot commented Jan 7, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist
Contributor

Summary of Changes

Hello @yyihuang, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the flashinfer_api decorator by integrating an automatic workload dumping mechanism. When enabled via environment variables, the decorator will capture and save the input arguments, particularly tensors, of decorated functions to files. This feature is designed to facilitate benchmarking and replay of API calls, providing a structured way to collect and analyze workload data without modifying the original function calls.

Highlights

  • Workload Dumping Feature: Introduced automatic workload dumping for benchmarking within the flashinfer_api decorator, allowing for the capture and saving of function input arguments.
  • Configuration via Environment Variables: Added new environment variables FLASHINFER_BENCH_LOG to enable/disable workload dumping and FLASHINFER_BENCH_LOG_DIR to specify the output directory for dumped data.
  • Zero Overhead Clarification: The flashinfer_api decorator now maintains truly zero overhead only when both API logging (FLASHINFER_LOGLEVEL=0) and benchmark logging (FLASHINFER_BENCH_LOG=0) are disabled.
  • Data Format: Workloads are dumped using safetensors for tensor arguments and JSONL files for associated metadata, facilitating structured replay and analysis.
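The env-var gating described in these highlights might look like the sketch below. The flag parsing and the stubbed `_dump_workload` are assumptions for illustration; the point is that reading FLASHINFER_BENCH_LOG once at import time lets the disabled path return the original function with no per-call checks, matching the zero-overhead claim.

```python
import functools
import os

# Flags are read once at import time; per the PR, the decorator is only truly
# zero-overhead when both FLASHINFER_LOGLEVEL=0 and FLASHINFER_BENCH_LOG=0.
_BENCH_LOG_ENABLED = os.environ.get("FLASHINFER_BENCH_LOG", "0") != "0"
_BENCH_LOG_DIR = os.environ.get("FLASHINFER_BENCH_LOG_DIR", "./flashinfer_bench")

def _dump_workload(func, func_name, args, kwargs):
    """Stub: the real version writes tensors to safetensors and metadata to JSONL."""

def flashinfer_api(func):
    if not _BENCH_LOG_ENABLED:
        return func  # disabled path: hand back the original function untouched

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            # Dump BEFORE execution so a crashing call still leaves a trace.
            _dump_workload(func, func.__name__, args, kwargs)
        except Exception:
            pass  # dumping must never break the decorated API
        return func(*args, **kwargs)

    return wrapper
```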


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a workload dumping feature for benchmarking, controlled by new environment variables. The changes correctly update the decorator's docstrings and control flow to support this new feature. However, the implementation appears to be incomplete. The core function _dump_workload is called but not defined, which will cause a runtime error when the feature is enabled. This critical issue needs to be addressed. Additionally, tests for this new functionality are missing.

# Dump workload BEFORE execution (crash-safe) if bench logging is enabled
if _BENCH_LOG_ENABLED:
    try:
        _dump_workload(f, func_name, args, kwargs)

critical

The function _dump_workload is called here, but it is not defined anywhere in the file. This will cause a NameError at runtime when FLASHINFER_BENCH_LOG is enabled. Please add the implementation for the _dump_workload function.

@yyihuang
Collaborator Author

yyihuang commented Jan 8, 2026

Closed. #2206 implements the requested feature.

@yyihuang yyihuang closed this Jan 8, 2026
