21 changes: 21 additions & 0 deletions .github/configs/nvidia-master.yaml
@@ -2007,6 +2007,27 @@ kimik2.5-int4-b200-vllm:
search-space:
- { tp: 8, conc-start: 4, conc-end: 64 }

# NOTE: At the time of submission, https://docs.vllm.ai/projects/recipes/en/latest/moonshotai/Kimi-K2.5.html
# does not have a B300-specific recipe, so this config reuses the existing
# Kimi-K2.5 INT4 B200 vLLM recipe as-is until B300-specific tuning is available.
kimik2.5-int4-b300-vllm:
image: vllm/vllm-openai:v0.19.0-cu130
model: moonshotai/Kimi-K2.5
model-prefix: kimik2.5
runner: b300
precision: int4
framework: vllm
multinode: false
seq-len-configs:
- isl: 1024
osl: 1024
search-space:
- { tp: 8, conc-start: 4, conc-end: 64 }
- isl: 8192
osl: 1024
search-space:
- { tp: 8, conc-start: 4, conc-end: 64 }

kimik2.5-int4-h200-vllm:
image: vllm/vllm-openai:v0.16.0
model: moonshotai/Kimi-K2.5
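For orientation, here is how one of the search-space entries above might expand into individual benchmark runs. This is a hedged sketch only: the actual sweep logic lives in the harness, and the assumption that concurrency doubles from conc-start to conc-end is ours, not something stated in the config.

  # Hypothetical expansion of one search-space entry (not taken from the repo):
  # assume the harness doubles concurrency from conc-start to conc-end.
  TP=8
  CONC_START=4
  CONC_END=64
  conc=$CONC_START
  while (( conc <= CONC_END )); do
    echo "run: tp=$TP conc=$conc isl=1024 osl=1024"
    (( conc *= 2 ))
  done
  # Prints one run each for conc = 4, 8, 16, 32, 64.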
81 changes: 81 additions & 0 deletions benchmarks/single_node/kimik2.5_int4_b300.sh
@@ -0,0 +1,81 @@
#!/usr/bin/env bash

# NOTE: At the time of submission, https://docs.vllm.ai/projects/recipes/en/latest/moonshotai/Kimi-K2.5.html
# does not have a B300-specific recipe, so this script reuses the existing
# Kimi-K2.5 INT4 B200 vLLM recipe as-is until B300-specific tuning is available.

source "$(dirname "$0")/../benchmark_lib.sh"

check_env_vars \
MODEL \
TP \
CONC \
ISL \
OSL \
MAX_MODEL_LEN \
RANDOM_RANGE_RATIO \
RESULT_FILENAME

if [[ -n "$SLURM_JOB_ID" ]]; then
echo "JOB $SLURM_JOB_ID running on $SLURMD_NODENAME"
fi

hf download "$MODEL"

nvidia-smi

export PYTHONNOUSERSITE=1
export VLLM_USE_FLASHINFER_MOE_INT4=1

SERVER_LOG=/workspace/server.log
PORT=${PORT:-8888}

if [ "${EVAL_ONLY}" = "true" ]; then
setup_eval_context
MAX_MODEL_LEN="$EVAL_MAX_MODEL_LEN"
fi
# Start GPU monitoring (power, temperature, clocks every second)
start_gpu_monitor

set -x
vllm serve $MODEL --host 0.0.0.0 --port $PORT \
--gpu-memory-utilization 0.95 \
--tensor-parallel-size $TP \
--max-model-len $MAX_MODEL_LEN \
--max-num-seqs $CONC \
--reasoning-parser kimi_k2 \
--tool-call-parser kimi_k2 \
--compilation_config.pass_config.fuse_allreduce_rms true \
--trust-remote-code \
--disable-log-requests \
--no-enable-prefix-caching > $SERVER_LOG 2>&1 &

SERVER_PID=$!

# Wait for server to be ready
wait_for_server_ready --port "$PORT" --server-log "$SERVER_LOG" --server-pid "$SERVER_PID"

pip install -q datasets pandas

run_benchmark_serving \
--model "$MODEL" \
--port "$PORT" \
--backend vllm \
--input-len "$ISL" \
--output-len "$OSL" \
--random-range-ratio "$RANDOM_RANGE_RATIO" \
--num-prompts $(( CONC * 10 )) \
--max-concurrency "$CONC" \
--result-filename "$RESULT_FILENAME" \
--result-dir /workspace/ \
--trust-remote-code

# After throughput, run evaluation only if RUN_EVAL is true
if [ "${RUN_EVAL}" = "true" ]; then
run_eval --framework lm-eval --port "$PORT"
append_lm_eval_summary
fi

# Stop GPU monitoring
stop_gpu_monitor
set +x
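For readers who want to reproduce a single point of the sweep outside the CI harness, a hypothetical standalone invocation of the script above could look like the following. The MAX_MODEL_LEN and RANDOM_RANGE_RATIO values are illustrative guesses, not values taken from the harness, which normally exports all of these variables itself.

  # Illustrative only: run the 1024/1024 case at the lowest concurrency.
  MODEL=moonshotai/Kimi-K2.5 \
  TP=8 CONC=4 ISL=1024 OSL=1024 \
  MAX_MODEL_LEN=4096 RANDOM_RANGE_RATIO=0.8 \
  RESULT_FILENAME=kimik2.5_int4_b300_tp8_conc4.json \
  bash benchmarks/single_node/kimik2.5_int4_b300.sh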
8 changes: 8 additions & 0 deletions perf-changelog.yaml
@@ -1468,3 +1468,11 @@
- "Image: vllm/vllm-openai:v0.19.0-cu130"
- "At the time of submission, https://docs.vllm.ai/projects/recipes/en/latest/moonshotai/Kimi-K2.5.html does not have a B300-specific recipe, so this reuses the existing Kimi-K2.5 FP4 B200 vLLM recipe as-is"
pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1056

- config-keys:
- kimik2.5-int4-b300-vllm
description:
- "Add Kimi-K2.5 INT4 B300 vLLM benchmark"
- "Image: vllm/vllm-openai:v0.19.0-cu130"
- "At the time of submission, https://docs.vllm.ai/projects/recipes/en/latest/moonshotai/Kimi-K2.5.html does not have a B300-specific recipe, so this reuses the existing Kimi-K2.5 INT4 B200 vLLM recipe as-is"
pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1057
🟡 The new kimik2.5-int4-b300-vllm entry in perf-changelog.yaml uses a placeholder pull/XXXX instead of the actual PR number. Please replace XXXX with 1057 before merging.

Extended reasoning...

The new perf-changelog.yaml entry added by this PR (line 1414) contains a placeholder URL: https://github.com/SemiAnalysisAI/InferenceX/pull/XXXX. This placeholder was never replaced with the actual PR number, which is known at submission time to be 1057.

How it manifests: Any tooling or human reader that tries to follow the changelog link will land on a nonexistent GitHub URL, making it impossible to trace back what PR introduced the kimik2.5-int4-b300-vllm benchmark config.

Code path: The diff shows the entry was added at the bottom of perf-changelog.yaml with pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXXX. The author appears to have copied a template and forgot to substitute the PR number.

Why existing code doesn't prevent it: There is no CI validation enforcing that pr-link values contain a real PR number rather than a placeholder. The file is plain YAML with no schema enforcement on link format.
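If the project wanted to close that gap, a small pre-merge check along these lines would catch placeholder links. This is only a sketch, not something present in the repository; the script and regex are assumptions.

  #!/usr/bin/env bash
  # Hypothetical CI guard (not in the repo): fail if any pr-link in
  # perf-changelog.yaml does not end in a numeric PR id.
  set -euo pipefail
  CHANGELOG=${1:-perf-changelog.yaml}
  bad=$(grep -nE '^[[:space:]]*pr-link:' "$CHANGELOG" | grep -vE 'pull/[0-9]+[[:space:]]*$' || true)
  if [[ -n "$bad" ]]; then
    echo "Placeholder or malformed pr-link entries found:" >&2
    echo "$bad" >&2
    exit 1
  fi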

Impact: Low functional impact — the benchmark config itself is correct. However, the changelog entry becomes untraceable: downstream consumers, auditors, or developers reviewing history cannot click through to understand what changed, why the B200 recipe was reused, or who approved it. Changelog hygiene matters for a public benchmarking project.

Fix: Replace XXXX with 1057 on line 1414:

  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1057

Step-by-step proof:

  1. PR #1057 is opened with the title "Add B300 config: kimi-k2.5-int4-vllm".
  2. The diff adds a new block to perf-changelog.yaml ending with pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXXX.
  3. Navigating to https://github.com/SemiAnalysisAI/InferenceX/pull/XXXX returns a 404/invalid URL — the page does not exist.
  4. The correct URL https://github.com/SemiAnalysisAI/InferenceX/pull/1057 resolves to this very PR.
  5. Note: seven other pre-existing entries in the file also use pull/XXX placeholders (lines 12, 19, 315, 790, 818, 855, 872), but those are pre-existing issues unrelated to this PR. This PR introduces one new instance of this pattern that is immediately fixable.
