Add B300 config: kimi-k2.5-int4-vllm#1057
Conversation
Thanks for the contribution! For vLLM & SGLang, please ensure that your recipe is similar to the official vLLM recipes and/or the SGLang cookbook. If it is not, please create a PR there first before we can merge your PR into the master branch. Let's ensure that the documentation is first class so that the entire ML community can benefit from your hard work! Thank you.
PR authors are responsible for ensuring that all GitHub Actions jobs fully pass after merging. Failures are often just flakes, and simply re-running the failed jobs will fix them. If re-running failed jobs is attempted, PR authors are responsible for ensuring they pass. See GitHub's docs on re-running failed jobs: https://docs.github.com/en/actions/how-tos/manage-workflow-runs/re-run-workflows-and-jobs#re-running-failed-jobs-in-a-workflow If additional help is needed, PR authors can reach out to core maintainers over Slack.
- "Add Kimi-K2.5 INT4 B300 vLLM benchmark"
- "Image: vllm/vllm-openai:v0.15.1"
- "At the time of submission, https://docs.vllm.ai/projects/recipes/en/latest/moonshotai/Kimi-K2.5.html does not have a B300-specific recipe, so this reuses the existing Kimi-K2.5 INT4 B200 vLLM recipe as-is"
pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1057
🟡 The new `kimik2.5-int4-b300-vllm` entry in `perf-changelog.yaml` uses a placeholder `pull/XXXX` instead of the actual PR number. Please replace `XXXX` with `1057` before merging.
Extended reasoning...
The new perf-changelog.yaml entry added by this PR (line 1414) contains a placeholder URL: https://github.com/SemiAnalysisAI/InferenceX/pull/XXXX. This placeholder was never replaced with the actual PR number, which is known at submission time to be 1057.
How it manifests: Any tooling or human reader that tries to follow the changelog link will land on a nonexistent GitHub URL, making it impossible to trace back what PR introduced the kimik2.5-int4-b300-vllm benchmark config.
Code path: The diff shows the entry was added at the bottom of perf-changelog.yaml with pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXXX. The author appears to have copied a template and forgot to substitute the PR number.
Why existing code doesn't prevent it: There is no CI validation enforcing that pr-link values contain a real PR number rather than a placeholder. The file is plain YAML with no schema enforcement on link format.
Impact: Low functional impact — the benchmark config itself is correct. However, the changelog entry becomes untraceable: downstream consumers, auditors, or developers reviewing history cannot click through to understand what changed, why the B200 recipe was reused, or who approved it. Changelog hygiene matters for a public benchmarking project.
Fix: Replace XXXX with 1057 on line 1414:
pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1057

Step-by-step proof:
- PR #1057 "Add B300 config: kimi-k2.5-int4-vllm" is opened.
- The diff adds a new block to perf-changelog.yaml ending with `pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXXX`.
- Navigating to https://github.com/SemiAnalysisAI/InferenceX/pull/XXXX returns a 404/invalid URL; the page does not exist.
- The correct URL https://github.com/SemiAnalysisAI/InferenceX/pull/1057 resolves to this very PR.
- Note: seven other pre-existing entries in the file also use `pull/XXX` placeholders (lines 12, 19, 315, 790, 818, 855, 872), but those are pre-existing issues unrelated to this PR. This PR introduces one new instance of this pattern that is immediately fixable.
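As the review notes, no CI validation currently rejects placeholder pr-link values. A minimal sketch of such a check is below; this is a hypothetical script, not something that exists in the repo, and the `pr-link` key name is taken from the diff while the entry structure is assumed to be a flat list of mappings:

```python
import re

# Hypothetical CI guard: flag perf-changelog entries whose pr-link still
# carries a pull/XXX... placeholder instead of a real PR number.
# (No such check exists in the repo today; entry shape is assumed.)
PLACEHOLDER = re.compile(r"/pull/X+$")

def find_placeholder_links(entries):
    """Return pr-link values that still end in a pull/XXX placeholder."""
    return [
        entry["pr-link"]
        for entry in entries
        if PLACEHOLDER.search(entry.get("pr-link", ""))
    ]
```

Wired into CI, the script would parse `perf-changelog.yaml` with a YAML loader, pass the resulting list of entries to `find_placeholder_links`, and exit non-zero when the returned list is non-empty.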
Force-pushed from a5585b8 to 69b9cfb.
At the time of submission, the vLLM Kimi-K2.5 recipes page (https://docs.vllm.ai/projects/recipes/en/latest/moonshotai/Kimi-K2.5.html) does not have a B300-specific recipe, so this config reuses the existing Kimi-K2.5 INT4 B200 vLLM recipe as-is until B300-specific tuning is available.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Align with the standard B300 vLLM image used by other B300 vLLM configs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Force-pushed from 69b9cfb to 7119ead.
…1267)

* Add B300 config: kimi-k2.5-int4-vllm (vLLM 0.20.0 + TP=4/EP=1 sweep)

- New `kimik2.5-int4-b300-vllm` config with the corresponding `benchmarks/single_node/kimik2.5_int4_b300.sh` launch script (mirrors the existing INT4 B200 vLLM recipe; the upstream vLLM Kimi-K2.5 recipes page does not yet ship B300-specific tuning).
- Image: `vllm/vllm-openai:v0.20.0-cu130`. The original draft (#1057, reverted in #1070, reopened as #1071) carried `v0.19.0` while we waited on a working release; 0.20.0 has now shipped.
- Search-space per (ISL, OSL): the existing TP=8 sweep plus a new TP=4 / EP=1 entry covering the lower-TP / expert-parallel variant on the same B300 nodes.

Supersedes #1071: opening fresh from main since the merge base had drifted (the b200 schema migrated from `seq-len-configs` to `scenarios.fixed-seq-len`) and the user preferred a clean reopen over a rebase.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* perf-changelog: move kimik2.5-int4-b300-vllm entry to bottom

AGENTS.md requires new perf-changelog entries to be appended to the end of the file (oldest at top, newest at bottom). The original commit prepended the new entry above PR #95; move it after the current last entry (PR #1265) to satisfy the convention.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
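The search-space change above can be pictured as a config fragment. The sketch below is purely illustrative: the field spellings (`isl`, `osl`, `tp`, `ep`) and the nesting under `scenarios.fixed-seq-len` are guesses modeled on the schema names mentioned in the commit message, not the repo's actual schema.

```yaml
# Hypothetical sketch only; real key names in the InferenceX configs may differ.
scenarios:
  fixed-seq-len:
    - isl: 1024
      osl: 1024
      sweep:
        - { tp: 8, ep: 1 }   # existing TP=8 sweep point
        - { tp: 4, ep: 1 }   # new lower-TP / expert-parallel variant
```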
Summary

- `kimik2.5-int4-b300-vllm` benchmark config and the corresponding `benchmarks/single_node/kimik2.5_int4_b300.sh` launch script
- Image: `vllm/vllm-openai:v0.15.1` (same as B200), runner: `b300`, same TP=8 and concurrency 4-64 search-space as B200

Test plan

- Run the `kimik2.5-int4-b300-vllm` single-node benchmark on a B300 node and confirm the server starts, the benchmark completes, and a result file is produced

🤖 Generated with Claude Code