
[recipe,sglang] feat: add Truncated importance sampling + sglang recipe#4462

Open
eternally-z wants to merge 5 commits into verl-project:main from meituan-search:recipe/flashRL_sglang

Conversation

@eternally-z (Contributor) commented Dec 9, 2025

What does this PR do?

This PR introduces a recipe for Truncated Importance Sampling (TIS) rollout using SGLang as the inference engine.

Previous TIS configurations and experiments primarily used vLLM as the inference engine. For the detailed principles regarding TIS, please refer to PR #2953. We present the first TIS rollout recipe using SGLang and provide the experimental results below.
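
As a brief sketch of the correction (PR #2953 has the full derivation; the notation below is mine): token-level TIS reweights each token's policy-gradient term by the importance ratio between the training policy $\pi_\theta$ and the rollout policy $\pi_{\mathrm{rollout}}$ produced by the inference engine, truncated at a cap $C$:

```latex
\nabla_\theta J(\theta) \;\approx\;
\mathbb{E}\!\left[\sum_{t}
  \min\!\left(\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\mathrm{rollout}}(a_t \mid s_t)},\; C\right)
  \hat{A}_t \,\nabla_\theta \log \pi_\theta(a_t \mid s_t)\right]
```

Truncating the ratio at $C$ bounds the variance introduced by the train/rollout mismatch (here, BF16 training vs. FP8-quantized rollout).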

We added a patch in SGLang that applies flash-rl to quantization, supporting various FP8 granularities. Please refer to SGLang PRs 9650 and 15440 (implementation) and 14870 (logic abstraction).

Configuration

  • Model: Qwen/Qwen3-8B-Base
  • Training Recipe: DAPO
  • Training Dataset: DAPO-Math-17k
  • Quantization Scheme: dynamic blockwise fp8
  • Validation: AIME-2024
  • Prompt batch size: 32, with n=16 samples per prompt
  • Rollout batch size: 32 × 3 × 16
  • train_batch_size & ppo_mini_batch_size: 32
  • Token-level TIS with cap C=2
  • Hardware: 8 × H20 GPUs, running veRL
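
The token-level TIS setting above (cap C=2) can be illustrated with a minimal, framework-free sketch; the helper name and pure-Python style are illustrative, not verl's actual implementation:

```python
import math

def token_level_tis_weights(train_logprobs, rollout_logprobs, cap=2.0):
    """Per-token truncated importance weights min(pi_train / pi_rollout, cap).

    Both arguments are per-token log-probabilities of the sampled tokens:
    one list from the BF16 training policy, one from the (possibly
    FP8-quantized) rollout policy of the inference engine.
    """
    return [min(math.exp(t - r), cap)
            for t, r in zip(train_logprobs, rollout_logprobs)]

# Where the two policies agree the weight is 1; where the rollout policy
# underestimates a token's probability, the ratio is truncated at the cap
# instead of blowing up the gradient.
train = [math.log(0.5), math.log(0.9), math.log(0.4)]
roll  = [math.log(0.5), math.log(0.1), math.log(0.4)]
print(token_level_tis_weights(train, roll))  # -> [1.0, 2.0, 1.0]
```

Each weight multiplies the corresponding token's loss term, so tokens where the rollout engine diverged most from the training policy contribute at most C times their nominal gradient.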

Experiment results

@Wilboludriver @AniZpZ


Observations

Accuracy of Quantization: Increasing val scores and reasonable response logs (not shown here) confirm that training precision remains intact under Blockwise FP8 Rollout. In contrast, per-channel FP8 quantization leads to significant precision degradation during the generation phase.

Effects on Training: With the same number of training steps, FP8 Rollout yields longer response lengths and consistently higher AIME-2024 val scores compared to BF16. This performance boost might be attributed to noise introduced by FP8; further investigation using mismatch metrics is required.

Gen. Throughput: FP8 Rollout initially exhibits higher throughput than BF16 but is overtaken in later stages. Profiling is currently underway to identify the bottleneck.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a new training recipe for FlashRL with SGLang. The main addition is the run_flashrl_sglang.sh script. My review focuses on improving the robustness of this script. I've identified a critical issue in the Ray cluster startup logic that could lead to silent failures and hard-to-debug problems. I've also pointed out a potential issue with an unquoted command-line argument that could affect script portability across different shell environments. The proposed changes will make the script more reliable and prevent unexpected behavior.

Comment on lines +37 to +57
    echo "Ray server is not running, starting new Ray cluster..."
    # Start new Ray cluster with 8 GPUs
    ray start --head \
        --port=${RAY_PORT} \
        --dashboard-port=${RAY_DASHBOARD_PORT} \
        --num-gpus=8 \
        --dashboard-host=0.0.0.0 \
        --temp-dir=${RAY_TEMP_DIR} \
        --disable-usage-stats || true
    echo "Ray cluster started successfully on localhost:${RAY_PORT}"

    # Wait a moment for Ray to fully initialize
    echo "Waiting for Ray cluster to be ready..."
    sleep 5

    # Verify Ray is actually running
    if ray status --address="localhost:${RAY_PORT}" >/dev/null 2>&1; then
        echo "Ray cluster is ready and accessible."
        RAY_RUNNING=true
    else
        echo "Warning: Ray cluster may not be fully ready yet, but continuing..."
        RAY_RUNNING=false
    fi
critical

The Ray cluster startup logic is not robust and could lead to hard-to-debug failures:

  1. Error Suppression: The || true on line 45 suppresses the exit code from ray start. If ray start fails, the script will continue, causing later commands to fail unexpectedly.
  2. Unreliable Wait: Using a fixed sleep 5 is not a reliable way to wait for the cluster to initialize.
  3. Ignoring Failures: The script continues even if the cluster is not ready. The RAY_RUNNING variable is set but never used, so the script will attempt to run the Python application regardless of the cluster's status.

The script should fail fast if the Ray cluster cannot be started and should reliably wait for it to be ready.

    echo "Ray server is not running, starting new Ray cluster..."
    # Start new Ray cluster with 8 GPUs. The script will exit if this fails due to `set -e`.
    ray start --head \
        --port=${RAY_PORT} \
        --dashboard-port=${RAY_DASHBOARD_PORT} \
        --num-gpus=8 \
        --dashboard-host=0.0.0.0 \
        --temp-dir=${RAY_TEMP_DIR} \
        --disable-usage-stats

    # Wait for Ray to be ready with a timeout
    echo "Waiting for Ray cluster to be ready..."
    for i in {1..15}; do
        if ray status --address="localhost:${RAY_PORT}" >/dev/null 2>&1; then
            echo "Ray cluster is ready and accessible."
            break
        fi
        if [[ $i -eq 15 ]]; then
            echo "Error: Timed out waiting for Ray cluster to start on localhost:${RAY_PORT}." >&2
            exit 1
        fi
        sleep 2
    done

reward_model.overlong_buffer.enable=True \
reward_model.overlong_buffer.len=4096 \
reward_model.overlong_buffer.penalty_factor=1.0 \
trainer.logger=['console'] \
high

Unquoted list assignments for Hydra can be fragile and may be misinterpreted by different shells, leading to errors. It is safer to quote the value to ensure it's passed as a single string to the Python script.

Suggested change:
- trainer.logger=['console'] \
+ trainer.logger="['console']" \

@eternally-z changed the title from "[recipe,sglang] feat: add Flash RL + sglang recipe" to "[recipe,sglang] feat: add Truncated importance sampling + sglang recipe" on Dec 9, 2025
Co-authored-by: AniZpZ <aniz1905@gmail.com>
Co-authored-by: Wilboludriver <wilbolu@outlook.com>
@eternally-z force-pushed the recipe/flashRL_sglang branch from e36aab2 to 027bde5 on December 9, 2025 at 11:15
@wuxibin89 (Collaborator)

Better move the script to examples/rollout_correction folder?

@wuxibin89 requested a review from szrlee on December 9, 2025 at 12:42
@wuxibin89 (Collaborator)

With TIS the mismatch between per-channel quantized FP8 rollout and native BF16 rollout gets alleviated, which can be also inferred from the rewards curves.

Do you implement sglang fp8 rollout? We already have an implementation in #4415

@szrlee (Collaborator) commented Dec 9, 2025

@eternally-z please take a look at https://verl.readthedocs.io/en/latest/algo/rollout_corr.html

@eternally-z (Contributor, Author)

@eternally-z please take a look at https://verl.readthedocs.io/en/latest/algo/rollout_corr.html

Thanks for catching this. The API usage error was due to my local environment running an outdated version of verl. I have updated the code to match the latest API and pushed the fix.

@AniZpZ (Contributor) commented Dec 10, 2025

With TIS the mismatch between per-channel quantized FP8 rollout and native BF16 rollout gets alleviated, which can be also inferred from the rewards curves.

Do you implement sglang fp8 rollout? We already have an implementation in #4415

We have a PR that adds a patch in SGLang applying flash-rl to SGLang quantization, supporting various FP8 quantization granularities while remaining completely non-intrusive to verl.

@eternally-z (Contributor, Author)

Better move the script to examples/rollout_correction folder?

Thanks for the suggestion. The script has been moved to examples/rollout_correction/.

@CLAassistant commented Dec 15, 2025

CLA assistant check
All committers have signed the CLA.

5 participants