Improve sampling benchmarks #2374
Summary of Changes: This pull request expands the FlashInfer benchmarking framework with dedicated routines for evaluating the performance of various sampling strategies: sampling from probability distributions, Top-P (nucleus) and Top-K sampling, their combined application, and utility functions for probability renormalization and logit masking. This lets users assess the efficiency of these critical components in large language model inference workflows.
Code Review
This pull request introduces comprehensive benchmark tests for various sampling routines in FlashInfer, which is a great addition for performance tracking. The changes include a new sampling.py routine file, updates to the main benchmark script and utilities to integrate the new tests, and documentation updates in the README.md.
The code is well-structured, but I have a few suggestions to improve maintainability and correctness:
- Refactor duplicated code in flashinfer_benchmark_utils.py for defining supported compute capabilities.
- Adhere to PEP 8 naming conventions for functions in the new sampling.py file.
- Add a reference check to the testTopPRenormProbs benchmark for correctness validation.
Details are in the specific comments. Overall, this is a solid contribution.
    # SAMPLING - supported on all architectures
    "sampling_from_probs": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
    "top_p_sampling_from_probs": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
    "top_k_sampling_from_probs": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
    "top_k_top_p_sampling_from_probs": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
    "top_k_renorm_probs": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
    "top_p_renorm_probs": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
    "top_k_mask_logits": {
        "7.5": ["cuda"],
        "8.0": ["cuda"],
        "8.6": ["cuda"],
        "8.9": ["cuda"],
        "9.0": ["cuda"],
        "10.0": ["cuda"],
        "10.3": ["cuda"],
        "12.0": ["cuda"],
    },
}
There's a lot of code duplication here for defining the supported compute capabilities for sampling routines. All sampling routines share the same support matrix. To improve maintainability and reduce code duplication, you can define the support dictionary once and reuse it for all sampling routines. A dictionary comprehension can make this more concise.
# SAMPLING - supported on all architectures
**{
routine: {
"7.5": ["cuda"],
"8.0": ["cuda"],
"8.6": ["cuda"],
"8.9": ["cuda"],
"9.0": ["cuda"],
"10.0": ["cuda"],
"10.3": ["cuda"],
"12.0": ["cuda"],
}
for routine in benchmark_apis["sampling"]
},
}

if args.routine == "sampling_from_probs":
    return testSamplingFromProbs(args)
if args.routine == "top_p_sampling_from_probs":
    return testTopPSamplingFromProbs(args)
if args.routine == "top_k_sampling_from_probs":
    return testTopKSamplingFromProbs(args)
if args.routine == "top_k_top_p_sampling_from_probs":
    return testTopKTopPSamplingFromProbs(args)
if args.routine == "top_k_renorm_probs":
    return testTopKRenormProbs(args)
if args.routine == "top_p_renorm_probs":
    return testTopPRenormProbs(args)
if args.routine == "top_k_mask_logits":
    return testTopKMaskLogits(args)
raise ValueError(f"Unsupported routine: {args.routine}")
The function names in this file (testSamplingFromProbs, testTopPSamplingFromProbs, etc.) do not follow the PEP 8 style guide, which recommends snake_case for function names. For consistency with the rest of the Python ecosystem and to improve readability, please rename these functions and their definitions. For example, testSamplingFromProbs should be test_sampling_from_probs.
Suggested change, before:

if args.routine == "sampling_from_probs":
    return testSamplingFromProbs(args)
if args.routine == "top_p_sampling_from_probs":
    return testTopPSamplingFromProbs(args)
if args.routine == "top_k_sampling_from_probs":
    return testTopKSamplingFromProbs(args)
if args.routine == "top_k_top_p_sampling_from_probs":
    return testTopKTopPSamplingFromProbs(args)
if args.routine == "top_k_renorm_probs":
    return testTopKRenormProbs(args)
if args.routine == "top_p_renorm_probs":
    return testTopPRenormProbs(args)
if args.routine == "top_k_mask_logits":
    return testTopKMaskLogits(args)
raise ValueError(f"Unsupported routine: {args.routine}")

Suggested change, after:

if args.routine == "sampling_from_probs":
    return test_sampling_from_probs(args)
if args.routine == "top_p_sampling_from_probs":
    return test_top_p_sampling_from_probs(args)
if args.routine == "top_k_sampling_from_probs":
    return test_top_k_sampling_from_probs(args)
if args.routine == "top_k_top_p_sampling_from_probs":
    return test_top_k_top_p_sampling_from_probs(args)
if args.routine == "top_k_renorm_probs":
    return test_top_k_renorm_probs(args)
if args.routine == "top_p_renorm_probs":
    return test_top_p_renorm_probs(args)
if args.routine == "top_k_mask_logits":
    return test_top_k_mask_logits(args)
raise ValueError(f"Unsupported routine: {args.routine}")
benchmarks/routines/sampling.py (outdated)
def testTopPRenormProbs(args):
    """Test top_p_renorm_probs API.

    This test:
    1. Generates random probability distributions
    2. Runs top_p_renorm_probs (renormalize by top-p thresholding)
    3. Measures performance metrics

    Args:
        args: Parsed command line arguments containing test configuration

    Returns:
        list: List of dictionaries containing performance results
    """
The testTopPRenormProbs function is missing a reference check (refcheck) to validate the correctness of the implementation. Other similar test functions in this file, like testTopKRenormProbs, include this check. Adding a reference implementation using PyTorch and comparing the results would increase confidence in the benchmark's correctness. You can find an example of a PyTorch reference implementation for top-p in tests/utils/test_sampling.py.
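As a sketch of what such a refcheck could compare against, here is one common CPU formulation of top-p renormalization: keep the smallest set of highest-probability tokens whose cumulative mass reaches top_p, zero the rest, and renormalize. Plain Python is used here for clarity; the actual refcheck would operate on torch tensors, and tie-breaking at the threshold may differ from FlashInfer's kernel, so the comparison should use a tolerance.

```python
def top_p_renorm_probs_ref(probs, top_p):
    """Reference top-p renormalization over a single probability vector."""
    # Visit tokens in descending probability order.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept = [0.0] * len(probs)
    cumulative = 0.0
    for i in order:
        kept[i] = probs[i]
        cumulative += probs[i]
        # Stop once the kept set's cumulative mass reaches top_p.
        if cumulative >= top_p:
            break
    total = sum(kept)
    return [p / total for p in kept]

# Example: with top_p=0.5 only the two largest entries survive.
out = top_p_renorm_probs_ref([0.1, 0.4, 0.3, 0.2], 0.5)
```

A benchmark refcheck would then assert the kernel output matches this reference within a small tolerance per row.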
Hi @vincentzed, would you mind checking the following files:
and seeing whether there are some components we can reuse?
Signed-off-by: vincentzed <207368749+vincentzed@users.noreply.github.com>

style check

Signed-off-by: vincentzed <207368749+vincentzed@users.noreply.github.com>

minor style change

Signed-off-by: vincentzed <207368749+vincentzed@users.noreply.github.com>

more

Signed-off-by: vincentzed <207368749+vincentzed@users.noreply.github.com>
df66f58 to fddeca5
I added some refactoring in 5e5e811. Cmd:

cd /sgl-workspace/sglang/flashinfer/benchmarks && \
FLASHINFER_DISABLE_VERSION_CHECK=1 bash -c '
for r in sampling_from_probs top_p_sampling_from_probs top_k_sampling_from_probs top_k_top_p_sampling_from_probs; do
  for b in 1 2 4 8 16 32 64 128 256 512 1024 2048 4096 8192; do
    echo "=== $r batch_size=$b ==="
    python flashinfer_benchmark.py --routine "$r" --batch_size "$b" -v
  done
done
'

Result:

=== sampling_from_probs batch_size=1 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=1, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.019 ms; std 0.009 ms; achieved tflops 0.007 TFLOPs/sec; achieved tb_per_sec 0.027 TB/sec
=== sampling_from_probs batch_size=2 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=2, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.028 ms; std 0.009 ms; achieved tflops 0.009 TFLOPs/sec; achieved tb_per_sec 0.036 TB/sec
=== sampling_from_probs batch_size=4 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=4, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.033 ms; std 0.007 ms; achieved tflops 0.015 TFLOPs/sec; achieved tb_per_sec 0.061 TB/sec
=== sampling_from_probs batch_size=8 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=8, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.039 ms; std 0.003 ms; achieved tflops 0.026 TFLOPs/sec; achieved tb_per_sec 0.106 TB/sec
=== sampling_from_probs batch_size=16 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=16, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.040 ms; std 0.002 ms; achieved tflops 0.052 TFLOPs/sec; achieved tb_per_sec 0.208 TB/sec
=== sampling_from_probs batch_size=32 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=32, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.041 ms; std 0.001 ms; achieved tflops 0.100 TFLOPs/sec; achieved tb_per_sec 0.398 TB/sec
=== sampling_from_probs batch_size=64 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=64, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.042 ms; std 0.001 ms; achieved tflops 0.195 TFLOPs/sec; achieved tb_per_sec 0.779 TB/sec
=== sampling_from_probs batch_size=128 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=128, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.044 ms; std 0.001 ms; achieved tflops 0.372 TFLOPs/sec; achieved tb_per_sec 1.489 TB/sec
=== sampling_from_probs batch_size=256 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=256, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.069 ms; std 0.002 ms; achieved tflops 0.479 TFLOPs/sec; achieved tb_per_sec 1.914 TB/sec
=== sampling_from_probs batch_size=512 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=512, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.108 ms; std 0.003 ms; achieved tflops 0.611 TFLOPs/sec; achieved tb_per_sec 2.443 TB/sec
=== sampling_from_probs batch_size=1024 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=1024, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.182 ms; std 0.002 ms; achieved tflops 0.722 TFLOPs/sec; achieved tb_per_sec 2.890 TB/sec
=== sampling_from_probs batch_size=2048 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=2048, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.329 ms; std 0.004 ms; achieved tflops 0.799 TFLOPs/sec; achieved tb_per_sec 3.196 TB/sec
=== sampling_from_probs batch_size=4096 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=4096, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.627 ms; std 0.006 ms; achieved tflops 0.837 TFLOPs/sec; achieved tb_per_sec 3.350 TB/sec
=== sampling_from_probs batch_size=8192 ===
[INFO] args = Namespace(routine='sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=8192, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 1.217 ms; std 0.006 ms; achieved tflops 0.863 TFLOPs/sec; achieved tb_per_sec 3.454 TB/sec
=== top_p_sampling_from_probs batch_size=1 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=1, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.171 ms; std 0.056 ms; achieved tflops 0.002 TFLOPs/sec; achieved tb_per_sec 0.003 TB/sec
=== top_p_sampling_from_probs batch_size=2 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=2, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.181 ms; std 0.054 ms; achieved tflops 0.003 TFLOPs/sec; achieved tb_per_sec 0.006 TB/sec
=== top_p_sampling_from_probs batch_size=4 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=4, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.200 ms; std 0.066 ms; achieved tflops 0.005 TFLOPs/sec; achieved tb_per_sec 0.010 TB/sec
=== top_p_sampling_from_probs batch_size=8 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=8, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.212 ms; std 0.058 ms; achieved tflops 0.010 TFLOPs/sec; achieved tb_per_sec 0.019 TB/sec
=== top_p_sampling_from_probs batch_size=16 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=16, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.292 ms; std 0.098 ms; achieved tflops 0.014 TFLOPs/sec; achieved tb_per_sec 0.028 TB/sec
=== top_p_sampling_from_probs batch_size=32 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=32, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.339 ms; std 0.077 ms; achieved tflops 0.024 TFLOPs/sec; achieved tb_per_sec 0.048 TB/sec
=== top_p_sampling_from_probs batch_size=64 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=64, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.385 ms; std 0.070 ms; achieved tflops 0.043 TFLOPs/sec; achieved tb_per_sec 0.085 TB/sec
=== top_p_sampling_from_probs batch_size=128 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=128, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.404 ms; std 0.053 ms; achieved tflops 0.081 TFLOPs/sec; achieved tb_per_sec 0.163 TB/sec
=== top_p_sampling_from_probs batch_size=256 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=256, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.526 ms; std 0.072 ms; achieved tflops 0.125 TFLOPs/sec; achieved tb_per_sec 0.250 TB/sec
=== top_p_sampling_from_probs batch_size=512 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=512, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.817 ms; std 0.072 ms; achieved tflops 0.161 TFLOPs/sec; achieved tb_per_sec 0.321 TB/sec
=== top_p_sampling_from_probs batch_size=1024 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=1024, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 1.379 ms; std 0.054 ms; achieved tflops 0.190 TFLOPs/sec; achieved tb_per_sec 0.381 TB/sec
=== top_p_sampling_from_probs batch_size=2048 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=2048, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 2.538 ms; std 0.136 ms; achieved tflops 0.207 TFLOPs/sec; achieved tb_per_sec 0.414 TB/sec
=== top_p_sampling_from_probs batch_size=4096 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=4096, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 4.750 ms; std 0.041 ms; achieved tflops 0.221 TFLOPs/sec; achieved tb_per_sec 0.442 TB/sec
=== top_p_sampling_from_probs batch_size=8192 ===
[INFO] args = Namespace(routine='top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=8192, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 10.388 ms; std 0.068 ms; achieved tflops 0.202 TFLOPs/sec; achieved tb_per_sec 0.405 TB/sec
=== top_k_sampling_from_probs batch_size=1 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=1, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 1.489 ms; std 0.438 ms; achieved tflops 0.000 TFLOPs/sec; achieved tb_per_sec 0.000 TB/sec
=== top_k_sampling_from_probs batch_size=2 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=2, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 1.951 ms; std 0.571 ms; achieved tflops 0.000 TFLOPs/sec; achieved tb_per_sec 0.001 TB/sec
=== top_k_sampling_from_probs batch_size=4 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=4, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 2.153 ms; std 0.496 ms; achieved tflops 0.000 TFLOPs/sec; achieved tb_per_sec 0.001 TB/sec
=== top_k_sampling_from_probs batch_size=8 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=8, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 2.503 ms; std 0.496 ms; achieved tflops 0.001 TFLOPs/sec; achieved tb_per_sec 0.002 TB/sec
=== top_k_sampling_from_probs batch_size=16 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=16, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 2.628 ms; std 0.351 ms; achieved tflops 0.002 TFLOPs/sec; achieved tb_per_sec 0.003 TB/sec
=== top_k_sampling_from_probs batch_size=32 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=32, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 2.780 ms; std 0.423 ms; achieved tflops 0.003 TFLOPs/sec; achieved tb_per_sec 0.006 TB/sec
=== top_k_sampling_from_probs batch_size=64 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=64, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 2.815 ms; std 0.293 ms; achieved tflops 0.006 TFLOPs/sec; achieved tb_per_sec 0.012 TB/sec
=== top_k_sampling_from_probs batch_size=128 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=128, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 3.173 ms; std 0.305 ms; achieved tflops 0.010 TFLOPs/sec; achieved tb_per_sec 0.021 TB/sec
=== top_k_sampling_from_probs batch_size=256 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=256, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 4.514 ms; std 0.459 ms; achieved tflops 0.015 TFLOPs/sec; achieved tb_per_sec 0.029 TB/sec
=== top_k_sampling_from_probs batch_size=512 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=512, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 7.481 ms; std 0.587 ms; achieved tflops 0.018 TFLOPs/sec; achieved tb_per_sec 0.035 TB/sec
=== top_k_sampling_from_probs batch_size=1024 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=1024, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 13.516 ms; std 0.932 ms; achieved tflops 0.019 TFLOPs/sec; achieved tb_per_sec 0.039 TB/sec
=== top_k_sampling_from_probs batch_size=2048 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=2048, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 25.172 ms; std 1.728 ms; achieved tflops 0.021 TFLOPs/sec; achieved tb_per_sec 0.042 TB/sec
=== top_k_sampling_from_probs batch_size=4096 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=4096, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 47.695 ms; std 2.669 ms; achieved tflops 0.022 TFLOPs/sec; achieved tb_per_sec 0.044 TB/sec
=== top_k_sampling_from_probs batch_size=8192 ===
[INFO] args = Namespace(routine='top_k_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=8192, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 91.673 ms; std 3.163 ms; achieved tflops 0.023 TFLOPs/sec; achieved tb_per_sec 0.046 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=1 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=1, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.209 ms; std 0.032 ms; achieved tflops 0.002 TFLOPs/sec; achieved tb_per_sec 0.002 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=2 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=2, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.225 ms; std 0.044 ms; achieved tflops 0.003 TFLOPs/sec; achieved tb_per_sec 0.005 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=4 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=4, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.239 ms; std 0.062 ms; achieved tflops 0.006 TFLOPs/sec; achieved tb_per_sec 0.009 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=8 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=8, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.247 ms; std 0.062 ms; achieved tflops 0.012 TFLOPs/sec; achieved tb_per_sec 0.017 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=16 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=16, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.329 ms; std 0.083 ms; achieved tflops 0.019 TFLOPs/sec; achieved tb_per_sec 0.025 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=32 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=32, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.377 ms; std 0.073 ms; achieved tflops 0.033 TFLOPs/sec; achieved tb_per_sec 0.044 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=64 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=64, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.446 ms; std 0.048 ms; achieved tflops 0.055 TFLOPs/sec; achieved tb_per_sec 0.074 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=128 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=128, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.502 ms; std 0.052 ms; achieved tflops 0.098 TFLOPs/sec; achieved tb_per_sec 0.131 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=256 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=256, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 0.752 ms; std 0.060 ms; achieved tflops 0.131 TFLOPs/sec; achieved tb_per_sec 0.175 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=512 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=512, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 1.226 ms; std 0.067 ms; achieved tflops 0.161 TFLOPs/sec; achieved tb_per_sec 0.214 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=1024 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=1024, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 2.112 ms; std 0.052 ms; achieved tflops 0.187 TFLOPs/sec; achieved tb_per_sec 0.249 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=2048 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=2048, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 4.009 ms; std 0.072 ms; achieved tflops 0.197 TFLOPs/sec; achieved tb_per_sec 0.262 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=4096 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=4096, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 7.743 ms; std 0.067 ms; achieved tflops 0.204 TFLOPs/sec; achieved tb_per_sec 0.271 TB/sec
=== top_k_top_p_sampling_from_probs batch_size=8192 ===
[INFO] args = Namespace(routine='top_k_top_p_sampling_from_probs', no_cuda_graph=False, use_cupti=False, use_cuda_events=False, refcheck=False, allow_output_mismatch=False, random_seed=42, verbose=1, output_path=None, num_iters=30, dry_run_iters=5, case_tag=None, generate_repro_command=False, repro_command='', batch_size=8192, vocab_size=128256, input_dtype='float32', top_p=0.9, top_k=50, no_deterministic=False, backends=['cuda'])
[INFO] Running testTopKTopPSamplingFromProbs
[INFO] FlashInfer version: 0.6.2
[PERF] cuda :: median time 15.173 ms; std 0.066 ms; achieved tflops 0.208 TFLOPs/sec; achieved tb_per_sec 0.277 TB/sec |
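For context on what the top_k_top_p_sampling_from_probs routine above measures: it combines top-k and top-p (nucleus) filtering before drawing a token. Below is a minimal NumPy sketch of the filter-and-renormalize semantics only — the function names and tie-breaking behavior are my own, and the optimized CUDA kernel does not perform an explicit full sort like this reference does.

```python
import numpy as np

def top_k_top_p_renorm(probs: np.ndarray, top_k: int, top_p: float) -> np.ndarray:
    """Keep the top_k most probable tokens, intersect with the nucleus
    (smallest prefix whose cumulative mass reaches top_p), zero out the
    rest, and renormalize to a distribution."""
    order = np.argsort(probs)[::-1]          # token ids, most probable first
    topk_mask = np.zeros(probs.shape, dtype=bool)
    topk_mask[order[:top_k]] = True
    cum = np.cumsum(probs[order])            # cumulative mass in sorted order
    cutoff = int(np.searchsorted(cum, top_p)) + 1
    nucleus_mask = np.zeros(probs.shape, dtype=bool)
    nucleus_mask[order[:cutoff]] = True
    kept = np.where(topk_mask & nucleus_mask, probs, 0.0)
    return kept / kept.sum()

def sample_from_probs(probs: np.ndarray, rng: np.random.Generator) -> int:
    """Draw one token id from a (renormalized) distribution."""
    return int(rng.choice(probs.size, p=probs))
```

For example, with probs = [0.5, 0.3, 0.1, 0.1], top_k=2, and top_p=0.9, the filtered distribution is [0.625, 0.375, 0, 0].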
Hi @vincentzed, I have #2484, which expands microbenchmark support to RoPE and sampling kernels. It seems I inadvertently covered the sampling APIs this PR covers. Please let me know if you have any concerns about #2484.
Sure, feel free to copy any useful code or ideas.
Actually, maybe #2484 can add: |
This is a good suggestion. I've added refchecks to applicable sampling APIs in the PR. For CUDA graphs, it was necessary to provide the random seed and offset so that we don't need to sample them. |
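The CUDA-graph point above comes down to supplying the random draw from outside the captured region, since an RNG call inside the graph would replay identically. A toy inverse-CDF sketch (the function name is mine, not a FlashInfer API) showing how a caller-provided uniform variate replaces an internal RNG call:

```python
import numpy as np

def sample_with_external_uniform(probs: np.ndarray, u: float) -> int:
    """Inverse-CDF sampling where the uniform variate `u` in [0, 1) is
    passed in by the caller (e.g. derived from a fixed seed plus a
    per-step offset) instead of being drawn inside the kernel, so that
    replays of a captured graph remain well-defined."""
    cdf = np.cumsum(probs)
    # first index whose cumulative mass reaches u * total
    return int(np.searchsorted(cdf, u * cdf[-1]))
```

The same u always yields the same token for a given distribution, which is exactly the property a refcheck under CUDA graph replay needs.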
📌 Description
Later, we will also add benchmarks for flashinfer.topk, since the only tests in the codebase are in tests/utils/test_topk.py and no performance data is tracked.

Motivation: sgl-project/sglang#17243 and other analyses, to see whether sampling can be improved (it still accounts for a relatively trivial share of time).
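Since the description mentions benchmarking flashinfer.topk next, here is a hedged CPU reference for the semantics such a benchmark would typically validate — the helper name is mine, the tie ordering is unspecified, and the actual flashinfer.topk signature may differ.

```python
import numpy as np

def topk_ref(values: np.ndarray, k: int):
    """Return the k largest values and their indices, largest first.
    np.argpartition finds the top-k set in O(n); only those k entries
    are then sorted."""
    idx = np.argpartition(values, -k)[-k:]       # top-k indices, unordered
    idx = idx[np.argsort(values[idx])[::-1]]     # order them descending
    return values[idx], idx
```

A kernel benchmark would compare the device result against this (up to tie ordering) while timing only the device call.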
🔍 Related Issues
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- Installed pre-commit by running pip install pre-commit (or used your preferred method).
- Installed the hooks with pre-commit install.
- Ran pre-commit run --all-files and fixed any reported issues.

🧪 Tests
- Tests have been added or updated as needed, and all relevant tests pass (unittest, etc.).

Reviewer Notes