
Commit da8975a

Merge branch 'hotfix-2056' into fix_flaky_xqa_test
2 parents: 7504aa2 + 68f105b

36 files changed: +2845 -460 lines

README.md

Lines changed: 24 additions & 2 deletions
@@ -15,12 +15,12 @@ Kernel Library for LLM Serving
 [![Build Status](https://ci.tlcpack.ai/job/flashinfer-ci/job/main/badge/icon)](https://ci.tlcpack.ai/job/flashinfer-ci/job/main/)
 [![Documentation](https://github.com/flashinfer-ai/flashinfer/actions/workflows/build-doc.yml/badge.svg)](https://github.com/flashinfer-ai/flashinfer/actions/workflows/build-doc.yml)
 
-
 FlashInfer is a library and kernel generator for Large Language Models that provides high-performance implementations of LLM GPU kernels such as FlashAttention, SparseAttention, PageAttention, Sampling, and more. FlashInfer focuses on LLM serving and inference, and delivers state-of-the-art performance across diverse scenarios.
 
 Check our [v0.2 release blog](https://flashinfer.ai/2024/12/16/flashinfer-v02-release.html) for new features!
 
 The core features of FlashInfer include:
+
 1. **Efficient Sparse/Dense Attention Kernels**: Efficient single/batch attention for sparse (paged)/dense KV storage on CUDA Cores and Tensor Cores (both FA2 and FA3 templates). The vector-sparse attention can achieve 90% of the bandwidth of dense kernels at the same problem size.
 2. **Load-Balanced Scheduling**: FlashInfer decouples the `plan`/`run` stages of attention computation, scheduling variable-length inputs in the `plan` stage to alleviate the load-imbalance issue.
 3. **Memory Efficiency**: FlashInfer offers [Cascade Attention](https://docs.flashinfer.ai/api/cascade.html#flashinfer.cascade.MultiLevelCascadeAttentionWrapper) for hierarchical KV-Cache, implements Head-Query fusion for accelerating Grouped-Query Attention, and provides efficient kernels for low-precision attention and fused-RoPE attention for compressed KV-Cache.
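
To make the `plan`/`run` split in feature 2 concrete, below is a minimal sketch of batch decode over a paged KV-cache. The wrapper and method names follow the documented `BatchDecodeWithPagedKVCacheWrapper` API, but the page-table layout, tensor sizes, and keyword arguments are illustrative assumptions; check the documentation for the exact signature in your installed version.

```python
# Sketch (illustrative sizes): decoupled plan/run for batch decode over a paged KV-cache.
import torch
import flashinfer

batch_size, pages_per_req, page_size = 4, 4, 16
num_qo_heads, num_kv_heads, head_dim = 32, 8, 128

workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda")
wrapper = flashinfer.BatchDecodeWithPagedKVCacheWrapper(workspace, "NHD")

# Page-table metadata: here every request owns `pages_per_req` full pages.
kv_page_indptr = torch.arange(
    0, (batch_size + 1) * pages_per_req, pages_per_req, dtype=torch.int32, device="cuda"
)
kv_page_indices = torch.arange(batch_size * pages_per_req, dtype=torch.int32, device="cuda")
kv_last_page_len = torch.full((batch_size,), page_size, dtype=torch.int32, device="cuda")

# plan(): run the load-balancing scheduler once on the batch's shape metadata ...
wrapper.plan(
    kv_page_indptr, kv_page_indices, kv_last_page_len,
    num_qo_heads, num_kv_heads, head_dim, page_size,
    pos_encoding_mode="NONE", q_data_type=torch.float16,
)

# ... run(): execute the attention kernel on the actual tensors; the same plan can
# be reused across layers that share the page-table layout.
q = torch.randn(batch_size, num_qo_heads, head_dim, dtype=torch.float16, device="cuda")
kv_cache = torch.randn(
    batch_size * pages_per_req, 2, page_size, num_kv_heads, head_dim,
    dtype=torch.float16, device="cuda",
)
o = wrapper.run(q, kv_cache)  # [batch_size, num_qo_heads, head_dim]
```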
@@ -31,6 +31,7 @@ The core features of FlashInfer include:
 FlashInfer supports PyTorch, TVM, and C++ (header-only) APIs, and can be easily integrated into existing projects.
 
 ## News
+
 - [Mar 10, 2025] [Blog Post](https://flashinfer.ai/2025/03/10/sampling.html) Sorting-Free GPU Kernels for LLM Sampling, which explains the design of sampling kernels in FlashInfer.
 - [Mar 1, 2025] Check out FlashInfer's [intra-kernel profiler](https://github.com/flashinfer-ai/flashinfer/tree/main/profiler) for visualizing the timeline of each threadblock in GPU kernels.
 - [Dec 16, 2024] [Blog Post](https://flashinfer.ai/2024/12/16/flashinfer-v02-release.html) FlashInfer 0.2 - Efficient and Customizable Kernels for LLM Inference Serving
@@ -51,11 +52,13 @@ pip install flashinfer-python
 ```
 
 **Package Options:**
+
 - **flashinfer-python**: Core package that compiles/downloads kernels on first use
 - **flashinfer-cubin**: Pre-compiled kernel binaries for all supported GPU architectures
 - **flashinfer-jit-cache**: Pre-built kernel cache for specific CUDA versions
 
 **For faster initialization and offline usage**, install the optional packages to have most kernels pre-compiled:
+
 ```bash
 pip install flashinfer-python flashinfer-cubin
 # JIT cache package (replace cu129 with your CUDA version: cu128, cu129, or cu130)
@@ -75,22 +78,25 @@ python -m pip install -v .
 ```
 
 **For development**, install in editable mode:
+
 ```bash
 python -m pip install --no-build-isolation -e . -v
 ```
 
 **Build optional packages:**
 
 `flashinfer-cubin`:
+
 ```bash
 cd flashinfer-cubin
 python -m build --no-isolation --wheel
 python -m pip install dist/*.whl
 ```
 
 `flashinfer-jit-cache` (customize `FLASHINFER_CUDA_ARCH_LIST` for your target GPUs):
+
 ```bash
-export FLASHINFER_CUDA_ARCH_LIST="7.5 8.0 8.9 10.0a 10.3a 11.0a 12.0f"
+export FLASHINFER_CUDA_ARCH_LIST="7.5 8.0 8.9 9.0a 10.0a 10.3a 11.0a 12.0f"
 cd flashinfer-jit-cache
 python -m build --no-isolation --wheel
 python -m pip install dist/*.whl
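
The hunk above extends `FLASHINFER_CUDA_ARCH_LIST` with `9.0a` (Hopper). If you only need the JIT cache for the GPUs in one machine, you can trim the list to the architectures actually present; the snippet below is a small helper using PyTorch's standard CUDA introspection calls to see which entries matter.

```python
# List the compute capability of each local GPU so you can trim
# FLASHINFER_CUDA_ARCH_LIST to the architectures you actually target
# (e.g. compute capability 9.0 corresponds to the "9.0a" Hopper entry).
import torch

for idx in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(idx)
    name = torch.cuda.get_device_name(idx)
    print(f"GPU {idx}: {name} -> compute capability {major}.{minor}")
```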
@@ -120,6 +126,7 @@ flashinfer show-config
 ```
 
 This command displays:
+
 - FlashInfer version and installed packages (flashinfer-python, flashinfer-cubin, flashinfer-jit-cache)
 - PyTorch and CUDA version information
 - Environment variables and artifact paths
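
If the `flashinfer` CLI is not on your PATH, a rough Python-side equivalent is sketched below; it assumes the package exposes `__version__`, as current releases do.

```python
# Quick sanity check from Python instead of the `flashinfer show-config` CLI.
import torch
import flashinfer

print("flashinfer:", flashinfer.__version__)
print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
```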
@@ -162,6 +169,20 @@ o = flashinfer.single_prefill_with_kv_cache(q, k, v, causal=False) # prefill attention
 
 Check out the [documentation](https://docs.flashinfer.ai/) for usage of batch decode/append/prefill kernels and shared-prefix cascading kernels.
 
+## API Logging
+
+FlashInfer provides comprehensive API logging for debugging. Enable it using environment variables:
+
+```bash
+# Enable logging (levels: 0=off (default), 1=basic, 3=detailed, 5=statistics)
+export FLASHINFER_LOGLEVEL=3
+
+# Set log destination (stdout (default), stderr, or a file path)
+export FLASHINFER_LOGDEST=stdout
+```
+
+For detailed information about logging levels, configuration, and advanced features, see [LOGGING.md](LOGGING.md).
+
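As a usage sketch for the new logging section, the same configuration can also be set from inside a Python entry point. Whether the variables are read at import time or lazily is version-dependent (an assumption here), so setting them before importing flashinfer is the conservative choice; the file destination below is illustrative.

```python
import os

# Illustrative: pick detailed logging and write it to a file instead of stdout.
# Set these before importing flashinfer so the logger sees them regardless of
# when the library reads its configuration.
os.environ["FLASHINFER_LOGLEVEL"] = "3"
os.environ["FLASHINFER_LOGDEST"] = "./flashinfer_api.log"

import flashinfer  # noqa: E402  (import after configuring the environment)
```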
 ## Custom Attention Variants
 
 Starting from FlashInfer v0.2, users can customize their own attention variants with additional parameters. For more details, refer to our [JIT examples](https://github.com/flashinfer-ai/flashinfer/blob/main/tests/utils/test_jit_example.py).
@@ -173,6 +194,7 @@ FlashInfer currently provides support for NVIDIA SM architectures 75 and higher
 ## Adoption
 
 We are thrilled to share that FlashInfer is being adopted by many cutting-edge projects, including but not limited to:
+
 - [MLC-LLM](https://github.com/mlc-ai/mlc-llm)
 - [Punica](https://github.com/punica-ai/punica)
 - [SGLang](https://github.com/sgl-project/sglang)
