
fix: migrate VerlBackend to new EngineWorker path (verl 0.7.1)#483

Merged
listar2000 merged 6 commits into main from fix/verl-new-engine
Apr 4, 2026
Conversation


@listar2000 listar2000 commented Apr 4, 2026

Summary

Migrates the unified trainer's VerlBackend to Verl v0.7.1's new EngineWorker path (use_legacy_worker_impl: disable), which uses TensorDict + no-padding format instead of raw DataProto. This is a follow-up to #474, which applied the same adaptation to the older agent_workflow_trainer.

Key changes:

  • Enforce new EngineWorker path: validate_config now requires trainer.use_legacy_worker_impl=disable. The legacy DataProto-based worker path is no longer supported by VerlBackend.
  • TensorDict + no-padding for all worker calls: process_backend_batch converts DataProto → TensorDict → no-padding once and reuses the result for compute_log_prob, compute_ref_log_prob, and critic.infer_batch. Results are round-tripped back to padded DataProto via no_padding_2_padding. This follows the exact pattern from Verl's own ray_trainer.py.
  • Remove manual re-padding before actor/critic update: The new workers handle micro-batching internally via metadata (mini_batch_size, epochs, seed, etc.), eliminating the need for _pad_dataproto_for_megatron_training and the re-padding step in update_policy. This also fixes the prior issue where duplicate-padded samples participated in training loss.
  • Replace legacy monkey patch with set_loss_fn RPC: The old patch_verl_actor_for_loss_override targeted the legacy DataParallelPPOActor class (which doesn't exist in the new EngineWorker path). Replaced with a CustomPPOLoss callable sent to remote workers via Verl's public set_loss_fn(Dispatch.ONE_TO_ALL) API — no Verl fork or monkey-patch needed.
  • Convert OmegaConf config to ActorConfig dataclass: CustomPPOLoss.__init__ now calls omega_conf_to_dataclass() to convert the raw OmegaConf DictConfig (which has struct mode ON and lacks runtime-only fields like global_batch_info) into the proper Python dataclass. This mirrors what Verl's engine worker does at engine_workers.py:505.
  • Fix crash on single-step trajectories: compute_advantage_verl now guards the empty non_last_step_batch case. Previously, workflows where all trajectories have exactly one step (solver-judge, single-turn QA) crashed with RuntimeError: stack expects a non-empty TensorList when using use_rllm=false.
  • Explicit temperature in meta_info: Previously, temperature reached the batch only accidentally through compute_log_prob's return value union. Now set explicitly, matching Verl's own trainer.
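The single-step crash fix above can be illustrated with a minimal sketch. This is plain Python, not verl's actual code: compute_advantage_verl operates on TensorDict batches of tensors, and the helper names below are illustrative stand-ins.

```python
# Minimal sketch of the guard added for the all-last-step case. The names
# (split_by_last_step, compute_advantage_sketch) are illustrative stand-ins;
# verl's real code stacks tensors, not Python lists.

def split_by_last_step(is_last_step):
    """Partition step indices into last-step / non-last-step groups."""
    last = [i for i, flag in enumerate(is_last_step) if flag]
    non_last = [i for i, flag in enumerate(is_last_step) if not flag]
    return last, non_last

def compute_advantage_sketch(values, is_last_step):
    last, non_last = split_by_last_step(is_last_step)
    # Before the fix, the non-last-step group was stacked unconditionally,
    # which raised (cf. torch's "stack expects a non-empty TensorList")
    # whenever every trajectory had exactly one step. Guard the empty case:
    non_last_values = [values[i] for i in non_last] if non_last else []
    return [values[i] for i in last], non_last_values
```

With an all-True is_last_step batch, the guarded version returns an empty non-last-step group instead of raising.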

Test plan

  • CustomPPOLoss config conversion verified: omega_conf_to_dataclass produces ActorConfig with global_batch_info: {}
  • CustomPPOLoss cloudpickle round-trip verified (serializable for Ray RPC)
  • Crash fix verified: compute_advantage_verl with all-True is_last_step batch succeeds
  • Ruff lint passes on all changed files
  • End-to-end training with solver-judge workflow (requires GPU)
  • End-to-end training with single-turn workflow (requires GPU)
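The config-conversion behavior checked above can be sketched with plain dataclasses. ActorConfigSketch and to_dataclass are illustrative stand-ins for verl's ActorConfig and omega_conf_to_dataclass (which additionally handles OmegaConf struct mode); only the "runtime-only field defaults to {}" behavior comes from the PR.

```python
# Illustrative sketch: convert a raw config mapping into a dataclass that
# carries runtime-only fields with defaults. Stand-ins, not verl's API.
from dataclasses import dataclass, field, fields

@dataclass
class ActorConfigSketch:
    clip_ratio: float = 0.2
    entropy_coeff: float = 0.0
    # Runtime-only field absent from the raw config. A struct-mode OmegaConf
    # DictConfig would refuse to set it, which is why conversion to a plain
    # dataclass is needed before the loss function runs.
    global_batch_info: dict = field(default_factory=dict)

def to_dataclass(raw_cfg: dict) -> ActorConfigSketch:
    known = {f.name for f in fields(ActorConfigSketch)}
    return ActorConfigSketch(**{k: v for k, v in raw_cfg.items() if k in known})

cfg = to_dataclass({"clip_ratio": 0.3, "entropy_coeff": 0.01})
```

The runtime-only field gets its default_factory value, mirroring the verified `global_batch_info: {}` result.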

🤖 Generated with Claude Code

listar2000 and others added 6 commits April 2, 2026 23:29
The loss function wrapper was receiving a raw OmegaConf DictConfig
(with struct mode ON) from the VerlBackend, but ppo_loss expects a
Python ActorConfig dataclass with runtime-only fields like
global_batch_info. Use omega_conf_to_dataclass() to bridge the gap,
mirroring what Verl's own engine worker does at initialization.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@listar2000 listar2000 merged commit ec8bd7a into main Apr 4, 2026
0 of 3 checks passed
JasonWei05 added a commit that referenced this pull request Apr 6, 2026
* fix(trainer): supplement dfed770 by adding missing update_weights in sdk trainer to fix vllm engine weight loss and Ascend PositionEmbedding OOB error

* Fix norm_adv_by_std_in_grpo read from algorithm not stepwise_advantage

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Add multi-server support to MCPEnvironment

* fix: update verl import paths for verl 0.7.1+ compatibility

verl 0.7.1 refactored fully_async_policy.ray_trainer into
separation.ray_trainer (PR verl-project/verl#5184). Update imports:

- FullyAsyncRayPPOTrainer → SeparateRayPPOTrainer
- FullyAsyncAgentLoopManager → AgentLoopManager
- fully_async_policy.fully_async_main → separation.utils

Fixes #470

Signed-off-by: Lidang-Jiang <lidangjiang@gmail.com>

* test: add import path verification tests for verl 0.7.1

Signed-off-by: Lidang-Jiang <lidangjiang@gmail.com>

* additional fixes of sdk trainer

* fix: migrate VerlBackend to new EngineWorker path (verl 0.7.1) (#483)

* fix: make VerlBackend work with new engine workers only

* fix code tool and reward import issues

* lazy import of autoprocessor

* fix: convert OmegaConf config to ActorConfig dataclass in CustomPPOLoss

The loss function wrapper was receiving a raw OmegaConf DictConfig
(with struct mode ON) from the VerlBackend, but ppo_loss expects a
Python ActorConfig dataclass with runtime-only fields like
global_batch_info. Use omega_conf_to_dataclass() to bridge the gap,
mirroring what Verl's own engine worker does at initialization.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* turn assertion into force conversion for non-disable legacy worker setting

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add hf_template tokenize_and_mask method + verl SFTTrainer compat

1. RLLMSFTDataset.__init__ now accepts processor and max_samples kwargs,
   matching verl's create_sft_dataset() call signature. Without this,
   using RLLMSFTDataset as custom_cls with verl's SFTTrainer(config)
   crashes with TypeError.

2. Add hf_template tokenization method that uses tokenizer.apply_chat_template()
   directly instead of rLLM's ChatTemplateParser. The existing cumulative/stepwise
   methods render tool calls as JSON-in-XML, which is wrong for models with native
   XML tool call format (e.g. Qwen3-Coder). The hf_template method produces the
   model's native format.

   Config: data.rllm.tokenize_and_mask_method: hf_template
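Independent of which renderer produces the text, the tokenize-and-mask idea reduces to building a loss mask that is nonzero only on assistant tokens. A hedged stand-alone sketch (the encode callback stands in for tokenizer.apply_chat_template; nothing here is rLLM's actual signature):

```python
# Hedged sketch of tokenize-and-mask: tokens from assistant turns get
# loss_mask=1, everything else 0. `encode` stands in for a real tokenizer;
# all names here are illustrative.

def tokenize_and_mask(messages, encode):
    input_ids, loss_mask = [], []
    for msg in messages:
        ids = encode(msg["role"], msg["content"])
        train_on = 1 if msg["role"] == "assistant" else 0
        input_ids.extend(ids)
        loss_mask.extend([train_on] * len(ids))
    return input_ids, loss_mask

# Toy "tokenizer": one token per character, so the mask is easy to inspect.
toy_encode = lambda role, content: [ord(c) for c in content]
msgs = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "ok!"},
]
ids, mask = tokenize_and_mask(msgs, toy_encode)
```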

* fix: handle signal.signal ValueError in non-main threads (#484)

Module-level `signal.signal(signal.SIGALRM, timeout_handler)` raises
`ValueError: signal only works in main thread` when taco.py is imported
in Ray worker threads (common during GRPO training with verl).

Wrap in try/except so the module can be safely imported from any thread.
The timeout handler is only functional in the main thread regardless.
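A runnable sketch of the guard (POSIX-only, since SIGALRM does not exist on Windows; timeout_handler follows the name above, the rest is illustrative):

```python
# Sketch of the fix: signal.signal only works in the main thread of the main
# interpreter, so wrap registration in try/except and report success.
import signal
import threading

def timeout_handler(signum, frame):
    raise TimeoutError("code execution timed out")

def install_alarm_handler():
    """Register SIGALRM if possible; return whether registration succeeded."""
    try:
        signal.signal(signal.SIGALRM, timeout_handler)
        return True
    except ValueError:
        # Raised outside the main thread (e.g. in Ray worker threads);
        # degrade gracefully instead of crashing the module import.
        return False

results = {}
worker = threading.Thread(
    target=lambda: results.update(worker=install_alarm_handler())
)
worker.start()
worker.join()
results["main"] = install_alarm_handler()
```

The worker-thread call returns False where the unguarded version would have raised at import time.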

* fix: resolve CI failures — E501 lint, tinker test deps, disable Claude actions (#486)

* fix: resolve CI failures — E501 lint errors, tinker test deps, disable Claude actions

- Fix all E501 (line > 200 chars) violations across ~45 Python files by
  wrapping long lines using standard Python continuation patterns
- Add per-file-ignores in pyproject.toml for 31 prompt/string-heavy files
  where long lines are intentional (agent prompts, system instructions)
- Add --extra dev to tinker CI workflow so pytest is available
- Disable Claude Code and Claude Code Review workflows due to credential issue

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* remove unused var

* keep fixing

* fix: run pre-commit on changed files only for PRs, show diffs on failure

- For pull requests: use --from-ref/--to-ref to only check files changed
  in the PR, matching local developer behavior
- For pushes to main: keep --all-files as a safety net
- Add --show-diff-on-fail so CI output shows exactly what needs fixing
- Add fetch-depth: 0 so git history is available for ref comparison
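The steps above can be sketched as a workflow fragment. This is an illustration, not the repo's actual workflow file: the job name, step layout, and base-ref expression are assumptions, while --from-ref/--to-ref, --all-files, and --show-diff-on-fail are real pre-commit flags.

```yaml
# Illustrative GitHub Actions fragment, not the repository's actual config.
jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history so --from-ref/--to-ref can resolve
      - run: pip install pre-commit
      - name: Lint changed files on PRs, everything on pushes to main
        run: |
          if [ "${{ github.event_name }}" = "pull_request" ]; then
            pre-commit run --show-diff-on-fail \
              --from-ref "origin/${{ github.base_ref }}" --to-ref HEAD
          else
            pre-commit run --all-files --show-diff-on-fail
          fi
```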

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: auto-format 21 files to fix ruff-format pre-commit failures (#487)

* style: auto-format 21 files with ruff-format to fix pre-commit failures

Apply ruff-format to pre-existing formatting issues across the codebase:
- Wrap long lines (dicts, function signatures, string literals)
- Collapse short multi-line forms that fit on one line
- Add missing trailing newline (conftest.py)
- Expand __all__ lists for readability

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: exclude notebooks from ruff-format pre-commit hook

The ruff-format hook was missing the .ipynb exclusion that the ruff lint
hook already had, causing pre-commit to fail on notebook formatting.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Integrate fully async training to UnifiedTrainer (#481)

* init new feature on unified fully async design

* add coordinator control and refactor queue

* cherrypick Kyle's async design refinements from kyle/deepresearch

Adopts core async architecture improvements: BufferedEpisodeGroup with
EpisodeGroupAccumulator, simplified SyncCoordinator with throttle and
pause/resume, fire-and-forget generation loop, streaming gradient
accumulation, and weight sync gate mechanism on RolloutEngine.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Refactor chat parser and migrate experimental rollout to engine (#435)

* start refactoring

* revert chat template parser and override tinker parser test

* revert and fix chat parser test

* refactor tinker engine to use tinker parser

* deprecate bypass renderer mentions

* move experimental rollout out

* dump changes to rollout_engine into main file

* refactor base rollout engine class to standardize gating behaviors

* make tinker backend fully compatible

* merge Kyle's fork

* bump vllm, deepcopy msgs in Step's post_init

* [wip] make fully-async unified trainer compatible with agent flow engines

* fix staleness throttling

* enforce concurrency across engines

* fix fully async, refactor metrics

* revert engine/rollout to main, restore experimental/rollout engines

Move enhanced rollout engines (tinker, verl, completer, types) back to
rllm/experimental/rollout/ and revert rllm/engine/rollout/ to match main.
Fix import paths in experimental code and tinker backend/transform.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* revert TinkerChatTemplateParser and parser changes for separate PR

Revert parser files to main (tinker_parser.py, conftest, tests, __init__,
chat_template_parser, utils). Revert tinker_engine to main's ChatTemplateParser
approach, keeping only super().__init__() and _get_model_response rename.
Also restore pyproject.toml to main.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* revert bypass_render_with_parser and tinker parser-related changes

Revert config, docs, examples, and rollout files that referenced
bypass_render_with_parser (now staying in tinker_engine since we
reverted to main's ChatTemplateParser approach). Clean up tinker_backend
to only retain async-related changes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* remove engine/gateway-level gate mechanism

The per-request gate on RolloutEngine is unnecessary:
- partial_rollout=True: verl handles abort/resume at server level,
  Tinker hot-swaps weights in place
- partial_rollout=False: coordination happens at task dispatch level
  (coordinator pause/resume), not per-request

Remove close_gate/open_gate/wait_for_gate/wait_for_drain from
RolloutEngine, GatewayManager, and model-gateway proxy/server/client.
Remove needs_weight_sync_gate from BackendProtocol.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: move task tracking to coordinator, revert validation rename, cleanup

- Move _in_flight_tasks tracking from UnifiedTrainer to SyncCoordinator
- Add epoch start/end hooks to async generation loop
- Remove dead _EPISODE_STRIP_KEYS constants from buffer
- Revert is_validation rename in engine/ (defer to future PR)
- Restore rllm-model-gateway/ to main

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* restore load_balancer assertion in verl_engine, revert tool_base to main

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add future annotations to rollout_engine for TYPE_CHECKING imports

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: fix ruff lint and format issues on unified-fully-async branch

Auto-fixed import sorting, unused imports, and formatting across 13 files.
Manual fixes: TYPE_CHECKING import for tqdm in buffer.py, isinstance union
syntax in metrics.py, moved logger below imports in unified_trainer.py,
split long log line.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: listar2000 <35262801+listar2000@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* fix(verl): disable vllm compile cache to work around corruption bug (#490)

---------

Signed-off-by: Lidang-Jiang <lidangjiang@gmail.com>
Co-authored-by: ZhihaoSun <bitszh3271@163.com>
Co-authored-by: Zakir Jiwani <108548454+JiwaniZakir@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: taivu1998 <46636857+taivu1998@users.noreply.github.com>
Co-authored-by: Lidang-Jiang <lidangjiang@gmail.com>
Co-authored-by: Kyle Montgomery <54512765+kylemontgomery1@users.noreply.github.com>
Co-authored-by: listar2000 <35262801+listar2000@users.noreply.github.com>
Co-authored-by: yifannnwu <yifannn.wu@gmail.com>
Co-authored-by: Yifan Wu <17992118+yifannnwu@users.noreply.github.com>
Co-authored-by: Bryan Lu <55512809+luyuzhe111@users.noreply.github.com>
@listar2000 listar2000 deleted the fix/verl-new-engine branch April 22, 2026 22:06