fix: migrate VerlBackend to new EngineWorker path (verl 0.7.1) #483
Merged
listar2000 merged 6 commits into main on Apr 4, 2026
Conversation
The loss function wrapper was receiving a raw OmegaConf `DictConfig` (with struct mode ON) from the `VerlBackend`, but `ppo_loss` expects a Python `ActorConfig` dataclass with runtime-only fields like `global_batch_info`. Use `omega_conf_to_dataclass()` to bridge the gap, mirroring what Verl's own engine worker does at initialization.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
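The DictConfig-vs-dataclass mismatch described above can be sketched in plain Python. This is an illustration only: `StructConfig` mimics an OmegaConf `DictConfig` with struct mode ON (new keys are rejected), and `to_actor_config` stands in for `omega_conf_to_dataclass()`; none of these are Verl's actual classes.

```python
from dataclasses import dataclass, field
from typing import Any

class StructConfig(dict):
    """Mimics an OmegaConf DictConfig with struct mode ON: unknown keys are rejected."""

    def __setitem__(self, key, value):
        if key not in self:
            raise KeyError(f"struct mode is on, cannot add key: {key}")
        super().__setitem__(key, value)

@dataclass
class ActorConfig:
    clip_ratio: float = 0.2
    # runtime-only field the loss function needs but the YAML config never defines
    global_batch_info: dict[str, Any] = field(default_factory=dict)

def to_actor_config(cfg: StructConfig) -> ActorConfig:
    # stand-in for omega_conf_to_dataclass(): the dataclass carries the
    # runtime-only field with its default, so downstream code can mutate it
    return ActorConfig(**cfg)

raw = StructConfig({"clip_ratio": 0.2})
try:
    raw["global_batch_info"] = {}  # struct mode rejects the new key
except KeyError:
    pass
actor_cfg = to_actor_config(raw)          # conversion succeeds
actor_cfg.global_batch_info["dp_size"] = 8  # now freely writable
```

The point of the conversion is exactly this asymmetry: the struct-mode mapping refuses new keys at runtime, while the dataclass declares them up front with defaults.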
This was referenced Apr 4, 2026
JasonWei05 added a commit that referenced this pull request on Apr 6, 2026
* fix(trainer): supplement dfed770 by adding missing update_weights in sdk trainer to fix vllm engine weight loss and Ascend PositionEmbedding OOB error
* Fix norm_adv_by_std_in_grpo read from algorithm not stepwise_advantage
  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* Add multi-server support to MCPEnvironment
* fix: update verl import paths for verl 0.7.1+ compatibility

  verl 0.7.1 refactored fully_async_policy.ray_trainer into separation.ray_trainer (PR verl-project/verl#5184). Update imports:
  - FullyAsyncRayPPOTrainer → SeparateRayPPOTrainer
  - FullyAsyncAgentLoopManager → AgentLoopManager
  - fully_async_policy.fully_async_main → separation.utils

  Fixes #470
  Signed-off-by: Lidang-Jiang <lidangjiang@gmail.com>
* test: add import path verification tests for verl 0.7.1
  Signed-off-by: Lidang-Jiang <lidangjiang@gmail.com>
* additional fixes of sdk trainer
* fix: migrate VerlBackend to new EngineWorker path (verl 0.7.1) (#483)
  * fix: make VerlBackend work with new engine workers only
  * fix code tool and reward import issues
  * lazy import of autoprocessor
  * fix: convert OmegaConf config to ActorConfig dataclass in CustomPPOLoss

    The loss function wrapper was receiving a raw OmegaConf DictConfig (with struct mode ON) from the VerlBackend, but ppo_loss expects a Python ActorConfig dataclass with runtime-only fields like global_batch_info. Use omega_conf_to_dataclass() to bridge the gap, mirroring what Verl's own engine worker does at initialization.
    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  * turn assertion into forced conversion for the non-disable legacy worker setting
  Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add hf_template tokenize_and_mask method + verl SFTTrainer compat

  1. RLLMSFTDataset.__init__ now accepts processor and max_samples kwargs, matching verl's create_sft_dataset() call signature. Without this, using RLLMSFTDataset as custom_cls with verl's SFTTrainer(config) crashes with a TypeError.
  2. Add an hf_template tokenization method that uses tokenizer.apply_chat_template() directly instead of rLLM's ChatTemplateParser. The existing cumulative/stepwise methods render tool calls as JSON-in-XML, which is wrong for models with a native XML tool-call format (e.g. Qwen3-Coder). The hf_template method produces the model's native format. Config: data.rllm.tokenize_and_mask_method: hf_template
* fix: handle signal.signal ValueError in non-main threads (#484)

  A module-level `signal.signal(signal.SIGALRM, timeout_handler)` raises `ValueError: signal only works in main thread` when taco.py is imported in Ray worker threads (common during GRPO training with verl). Wrap it in try/except so the module can be safely imported from any thread. The timeout handler is only functional in the main thread regardless.
* fix: resolve CI failures — E501 lint, tinker test deps, disable Claude actions (#486)
  * fix: resolve CI failures — E501 lint errors, tinker test deps, disable Claude actions
    - Fix all E501 (line > 200 chars) violations across ~45 Python files by wrapping long lines using standard Python continuation patterns
    - Add per-file-ignores in pyproject.toml for 31 prompt/string-heavy files where long lines are intentional (agent prompts, system instructions)
    - Add --extra dev to the tinker CI workflow so pytest is available
    - Disable the Claude Code and Claude Code Review workflows due to a credential issue
    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  * remove unused var
  * keep fixing
  * fix: run pre-commit on changed files only for PRs, show diffs on failure
    - For pull requests: use --from-ref/--to-ref to only check files changed in the PR, matching local developer behavior
    - For pushes to main: keep --all-files as a safety net
    - Add --show-diff-on-fail so CI output shows exactly what needs fixing
    - Add fetch-depth: 0 so git history is available for ref comparison
    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* style: auto-format 21 files to fix ruff-format pre-commit failures (#487)
  * style: auto-format 21 files with ruff-format to fix pre-commit failures
    Apply ruff-format to pre-existing formatting issues across the codebase:
    - Wrap long lines (dicts, function signatures, string literals)
    - Collapse short multi-line forms that fit on one line
    - Add missing trailing newline (conftest.py)
    - Expand __all__ lists for readability
    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  * fix: exclude notebooks from the ruff-format pre-commit hook
    The ruff-format hook was missing the .ipynb exclusion that the ruff lint hook already had, causing pre-commit to fail on notebook formatting.
    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Integrate fully async training to UnifiedTrainer (#481)
  * init new feature on unified fully async design
  * add coordinator control and refactor queue
  * cherrypick Kyle's async design refinements from kyle/deepresearch
    Adopts core async architecture improvements: BufferedEpisodeGroup with EpisodeGroupAccumulator, simplified SyncCoordinator with throttle and pause/resume, fire-and-forget generation loop, streaming gradient accumulation, and a weight-sync gate mechanism on RolloutEngine.
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  * Refactor chat parser and migrate experimental rollout to engine (#435)
  * start refactoring
  * revert chat template parser and override tinker parser test
  * revert and fix chat parser test
  * refactor tinker engine to use tinker parser
  * deprecate bypass renderer mentions
  * move experimental rollout out
  * dump changes to rollout_engine into main file
  * refactor base rollout engine class to standardize gating behaviors
  * make tinker backend fully compatible
  * merge Kyle's fork
  * bump vllm, deepcopy msgs in Step's post_init
  * [wip] make fully-async unified trainer compatible with agent flow engines
  * fix staleness throttling
  * enforce concurrency across engines
  * fix fully async, refactor metrics
  * revert engine/rollout to main, restore experimental/rollout engines
    Move enhanced rollout engines (tinker, verl, completer, types) back to rllm/experimental/rollout/ and revert rllm/engine/rollout/ to match main. Fix import paths in experimental code and tinker backend/transform.
    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  * revert TinkerChatTemplateParser and parser changes for separate PR
    Revert parser files to main (tinker_parser.py, conftest, tests, __init__, chat_template_parser, utils). Revert tinker_engine to main's ChatTemplateParser approach, keeping only super().__init__() and the _get_model_response rename. Also restore pyproject.toml to main.
    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  * revert bypass_render_with_parser and tinker parser-related changes
    Revert config, docs, examples, and rollout files that referenced bypass_render_with_parser (now staying in tinker_engine since we reverted to main's ChatTemplateParser approach). Clean up tinker_backend to only retain async-related changes.
    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  * remove engine/gateway-level gate mechanism
    The per-request gate on RolloutEngine is unnecessary:
    - partial_rollout=True: verl handles abort/resume at the server level, Tinker hot-swaps weights in place
    - partial_rollout=False: coordination happens at the task dispatch level (coordinator pause/resume), not per request
    Remove close_gate/open_gate/wait_for_gate/wait_for_drain from RolloutEngine, GatewayManager, and the model-gateway proxy/server/client. Remove needs_weight_sync_gate from BackendProtocol.
    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  * refactor: move task tracking to coordinator, revert validation rename, cleanup
    - Move _in_flight_tasks tracking from UnifiedTrainer to SyncCoordinator
    - Add epoch start/end hooks to the async generation loop
    - Remove dead _EPISODE_STRIP_KEYS constants from buffer
    - Revert is_validation rename in engine/ (defer to a future PR)
    - Restore rllm-model-gateway/ to main
    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  * restore load_balancer assertion in verl_engine, revert tool_base to main
    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  * fix: add future annotations to rollout_engine for TYPE_CHECKING imports
    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  * style: fix ruff lint and format issues on the unified-fully-async branch
    Auto-fixed import sorting, unused imports, and formatting across 13 files. Manual fixes: TYPE_CHECKING import for tqdm in buffer.py, isinstance union syntax in metrics.py, moved logger below imports in unified_trainer.py, split a long log line.
    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  Co-authored-by: listar2000 <35262801+listar2000@users.noreply.github.com>
  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(verl): disable vllm compile cache to work around corruption bug (#490)

---
Signed-off-by: Lidang-Jiang <lidangjiang@gmail.com>
Co-authored-by: ZhihaoSun <bitszh3271@163.com>
Co-authored-by: Zakir Jiwani <108548454+JiwaniZakir@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: taivu1998 <46636857+taivu1998@users.noreply.github.com>
Co-authored-by: Lidang-Jiang <lidangjiang@gmail.com>
Co-authored-by: Kyle Montgomery <54512765+kylemontgomery1@users.noreply.github.com>
Co-authored-by: listar2000 <35262801+listar2000@users.noreply.github.com>
Co-authored-by: yifannnwu <yifannn.wu@gmail.com>
Co-authored-by: Yifan Wu <17992118+yifannnwu@users.noreply.github.com>
Co-authored-by: Bryan Lu <55512809+luyuzhe111@users.noreply.github.com>
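The signal-handler fix (#484) in the commit list above guards a module-level `SIGALRM` registration so the module imports cleanly from any thread. A minimal POSIX-only sketch of that pattern (function names illustrative, not the actual taco.py code):

```python
import signal
import threading

def timeout_handler(signum, frame):
    raise TimeoutError("solution evaluation timed out")

def install_alarm_handler() -> bool:
    # signal.signal() raises ValueError when called outside the main thread
    # of the main interpreter, so guard the registration; the alarm-based
    # timeout simply becomes a no-op in worker threads that import the module.
    try:
        signal.signal(signal.SIGALRM, timeout_handler)
        return True
    except ValueError:
        return False

# importing from a non-main thread (as Ray workers do) no longer crashes
results = []
t = threading.Thread(target=lambda: results.append(install_alarm_handler()))
t.start()
t.join()
```

Without the try/except, the `ValueError` propagates out of the import and takes the whole worker down, even though the worker never needed the alarm in the first place.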
Summary
Migrates the unified trainer's `VerlBackend` to Verl v0.7.1's new EngineWorker path (`use_legacy_worker_impl: disable`), which uses TensorDict + no-padding format instead of raw `DataProto`. This is a follow-up to #474, which applied the same adaptation to the older `agent_workflow_trainer`.

Key changes:

* `validate_config` now requires `trainer.use_legacy_worker_impl=disable`. The legacy `DataProto`-based worker path is no longer supported by `VerlBackend`.
* `process_backend_batch` converts `DataProto → TensorDict → no-padding` once and reuses the result for `compute_log_prob`, `compute_ref_log_prob`, and `critic.infer_batch`. Results are round-tripped back to padded `DataProto` via `no_padding_2_padding`. This follows the exact pattern from Verl's own `ray_trainer.py`.
* The new worker path handles training batching itself (`mini_batch_size`, `epochs`, `seed`, etc.), eliminating the need for `_pad_dataproto_for_megatron_training` and the re-padding step in `update_policy`. This also fixes the prior issue where duplicate-padded samples participated in the training loss.
* `set_loss_fn` RPC: the old `patch_verl_actor_for_loss_override` targeted the legacy `DataParallelPPOActor` class (which doesn't exist in the new EngineWorker path). Replaced with a `CustomPPOLoss` callable sent to remote workers via Verl's public `set_loss_fn` (`Dispatch.ONE_TO_ALL`) API, so no Verl fork or monkey-patch is needed.
* `CustomPPOLoss.__init__` now calls `omega_conf_to_dataclass()` to convert the raw OmegaConf DictConfig (which has struct mode ON and lacks runtime-only fields like `global_batch_info`) into the proper Python dataclass. This mirrors what Verl's engine worker does at `engine_workers.py:505`.
* `compute_advantage_verl` now guards the empty `non_last_step_batch` case. Previously, workflows where all trajectories have exactly one step (solver-judge, single-turn QA) crashed with `RuntimeError: stack expects a non-empty TensorList` when using `use_rllm=false`.
* `temperature` in `meta_info`: previously, temperature reached the batch only accidentally through `compute_log_prob`'s return value union. Now set explicitly, matching Verl's own trainer.
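The empty-partition guard described above can be sketched without torch: when every trajectory in the batch has exactly one step, the non-last-step partition is empty, and stacking it (`torch.stack` on an empty list) is what raised `stack expects a non-empty TensorList`. Plain dicts stand in for tensors here, and all names are illustrative rather than the actual `compute_advantage_verl` code.

```python
def split_by_last_step(steps):
    """Partition a batch of steps by their is_last_step flag."""
    last = [s for s in steps if s["is_last_step"]]
    non_last = [s for s in steps if not s["is_last_step"]]
    return last, non_last

# an all-last-step batch, e.g. single-turn QA or solver-judge workflows
batch = [{"is_last_step": True, "reward": 1.0}]
last, non_last = split_by_last_step(batch)
if non_last:  # the guard: only stack/broadcast when there is something to stack
    pass      # broadcast last-step advantages onto intermediate steps here
```

Without the `if non_last:` branch, the broadcast step would unconditionally stack an empty list and crash on exactly the workflows where there is nothing to broadcast.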
* `CustomPPOLoss` config conversion verified: `omega_conf_to_dataclass` produces `ActorConfig` with `global_batch_info: {}`
* `CustomPPOLoss` cloudpickle round-trip verified (serializable for Ray RPC)
* `compute_advantage_verl` with an all-True `is_last_step` batch succeeds

🤖 Generated with Claude Code
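The round-trip item in the test plan checks that the loss callable survives serialization, since it must be shipped to remote workers over an RPC. A minimal sketch of that kind of check, using stdlib `pickle` as a stand-in for cloudpickle and a hypothetical callable class (the real `CustomPPOLoss` carries a full `ActorConfig`):

```python
import pickle

class LossCallable:
    """Hypothetical stand-in: a loss callable configured once on the driver."""

    def __init__(self, clip_ratio: float):
        self.clip_ratio = clip_ratio

    def __call__(self, ratio: float, advantage: float) -> float:
        # clipped PPO surrogate on a single scalar, for illustration only
        lo, hi = 1.0 - self.clip_ratio, 1.0 + self.clip_ratio
        clipped = min(max(ratio, lo), hi)
        return -min(ratio * advantage, clipped * advantage)

loss_fn = LossCallable(clip_ratio=0.2)
# the round-trip a worker-bound RPC payload must survive
restored = pickle.loads(pickle.dumps(loss_fn))
```

With stdlib pickle the class must be importable at module level; cloudpickle relaxes that, which is why it is the usual choice for shipping callables to Ray workers.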