Merged
2 changes: 1 addition & 1 deletion python/sglang/multimodal_gen/README.md
@@ -65,7 +65,7 @@ For more usage examples (e.g. OpenAI compatible API, server mode), check [cli.md

## Contributing

All contributions are welcome.
All contributions are welcome. The contribution guide is available [here](https://github.com/sgl-project/sglang/tree/main/python/sglang/multimodal_gen/docs/contributing.md).

## Acknowledgement

46 changes: 46 additions & 0 deletions python/sglang/multimodal_gen/docs/contributing.md
@@ -0,0 +1,46 @@
# Contributing to SGLang Diffusion

This guide outlines the requirements for contributing to the SGLang Diffusion module (`sglang.multimodal_gen`).

## 1. Commit Message Convention

We follow a structured commit message format to maintain a clean history.

**Format:**
```text
[diffusion] <scope>: <subject>
```

**Examples:**
- `[diffusion] cli: add --perf-dump-path argument`
- `[diffusion] scheduler: fix deadlock in batch processing`
- `[diffusion] model: support Stable Diffusion 3.5`

**Rules:**
- **Prefix**: Always start with `[diffusion]`.
- **Scope** (Optional): `cli`, `scheduler`, `model`, `pipeline`, `docs`, etc.
- **Subject**: Imperative mood, short and clear (e.g., "add feature" not "added feature").
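The prefix, scope, and subject rules above can be checked mechanically. The sketch below is a hypothetical helper, not part of the repository, that validates a first line against the `[diffusion] <scope>: <subject>` format; the imperative-mood check is a rough heuristic.

```python
import re

# Hypothetical validator for the "[diffusion] <scope>: <subject>" convention.
# The scope is optional and, when present, is a lowercase word such as
# cli, scheduler, model, pipeline, or docs.
COMMIT_RE = re.compile(r"^\[diffusion\](?: ([a-z_]+):)? (.+)$")

def check_commit_message(message: str) -> bool:
    """Return True if the first line of the message follows the convention."""
    first_line = message.splitlines()[0] if message else ""
    match = COMMIT_RE.match(first_line)
    if not match:
        return False
    subject = match.group(2)
    # Rough heuristic for imperative mood: reject common past-tense openers
    # ("added feature" should be "add feature").
    past_tense = ("added ", "fixed ", "updated ", "changed ")
    return not subject.lower().startswith(past_tense)

print(check_commit_message("[diffusion] cli: add --perf-dump-path argument"))  # True
print(check_commit_message("[diffusion] scheduler: fixed deadlock"))  # False
```

A check like this could run as a local `commit-msg` git hook, but the rules in prose above remain the source of truth.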

## 2. Performance Reporting

For PRs that impact **latency**, **throughput**, or **memory usage**, you **should** provide a performance comparison report.

### How to Generate a Report

1. **Baseline**: on the unmodified main branch, run the benchmark for a single generation task
```bash
$ sglang generate --model-path <model> --prompt "A benchmark prompt" --perf-dump-path baseline.json
```

2. **New**: with your changes applied, run the same benchmark, keeping all `server_args` and `sampling_params` unchanged
```bash
$ sglang generate --model-path <model> --prompt "A benchmark prompt" --perf-dump-path new.json
```

3. **Compare**: run the compare script, which will print a Markdown table to the console
```bash
$ python python/sglang/multimodal_gen/benchmarks/compare_perf.py baseline.json new.json
### Performance Comparison Report
...
```
4. **Paste**: paste the table into the PR description
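To make the comparison step concrete, here is a minimal sketch of what a compare script can do with two perf dumps. The metric names (`latency_s`, `throughput`) and the flat JSON layout are assumptions for illustration only; the actual `compare_perf.py` in the repository is authoritative.

```python
import json

def render_comparison(baseline: dict, new: dict) -> str:
    """Render a Markdown table of per-metric deltas between two perf dumps.

    Assumes each dump is a flat JSON object of numeric metrics, e.g.
    {"latency_s": 4.2, "throughput": 1.9}; the key names are hypothetical.
    """
    rows = ["| Metric | Baseline | New | Delta |", "|---|---|---|---|"]
    # Only compare metrics present in both dumps.
    for key in sorted(baseline.keys() & new.keys()):
        old_v, new_v = baseline[key], new[key]
        delta = (new_v - old_v) / old_v * 100 if old_v else 0.0
        rows.append(f"| {key} | {old_v:.3f} | {new_v:.3f} | {delta:+.1f}% |")
    return "\n".join(rows)

# Example with inline JSON standing in for baseline.json / new.json:
baseline = json.loads('{"latency_s": 2.0, "throughput": 1.0}')
new = json.loads('{"latency_s": 1.5, "throughput": 1.2}')
print(render_comparison(baseline, new))
```

The resulting table renders directly in a PR description, which is why step 4 is a plain copy-and-paste.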