Integrate fully async training to UnifiedTrainer#481

Merged
listar2000 merged 26 commits into main from unified-fully-async on Apr 5, 2026
Conversation


kylemontgomery1 (Collaborator) commented Apr 3, 2026

Summary

Adds fully async training to UnifiedTrainer, currently only for tinker backend.

Type of change

  • Feature
  • Fix
  • Docs
  • Refactor
  • Example / Project
  • Infra / CI

What changed

  • New SyncCoordinator, TrajectoryGroupBuffer, and MetricsAggregator for async pipeline
  • Async training loop in UnifiedTrainer with concurrent generation/training and coordinator-managed weight sync
  • on_policy_updated hook added to protocol for weight sync/checkpointing
  • Countdown examples for sync and async unified tinker training
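
The concurrent generation/training shape described above can be sketched with plain asyncio primitives. This is a minimal illustration only: the loop names, the bounded queue standing in for TrajectoryGroupBuffer, and the pause/resume event standing in for SyncCoordinator's weight-sync control are assumptions, not the PR's actual implementation.

```python
import asyncio

# Minimal sketch of concurrent generation/training. The real
# SyncCoordinator / TrajectoryGroupBuffer are richer; all names and
# logic here are illustrative.
async def generation_loop(buffer, resume, n_groups):
    for i in range(n_groups):
        await resume.wait()              # coordinator can pause rollouts
        await buffer.put(f"group-{i}")   # completed trajectory group

async def training_loop(buffer, resume, n_steps):
    trained = []
    for _ in range(n_steps):
        group = await buffer.get()
        resume.clear()                   # pause generation for weight sync
        trained.append(group)            # (train step + weight sync here)
        resume.set()                     # resume off-policy generation
    return trained

async def main():
    buffer = asyncio.Queue(maxsize=2)    # bounds rollout staleness
    resume = asyncio.Event()
    resume.set()
    gen = asyncio.create_task(generation_loop(buffer, resume, 4))
    trained = await training_loop(buffer, resume, 4)
    await gen
    return trained

print(asyncio.run(main()))
```

The bounded queue is what keeps generation from racing arbitrarily far ahead of training, which is the same role a staleness throttle plays in a fully async trainer.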

Validation

  • pre-commit run --all-files
  • Targeted tests: pytest ...
  • Manual validation performed
  • Not run (reason below)

Validation details:
Entire repo needs linting. Tested both paths on wandb: https://wandb.ai/agentica/rllm-countdown

Breaking changes / migration notes

None

Docs / examples

  • Not needed
  • Updated docs
  • Updated examples
  • Follow-up docs needed

Related issues / PRs

  • Fixes #
  • Related to #
  • Stacked on / depends on #

Screenshots / logs

listar2000 and others added 16 commits March 6, 2026 17:57
Adopts core async architecture improvements: BufferedEpisodeGroup with
EpisodeGroupAccumulator, simplified SyncCoordinator with throttle and
pause/resume, fire-and-forget generation loop, streaming gradient
accumulation, and weight sync gate mechanism on RolloutEngine.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* start refactoring

* revert chat template parser and override tinker parser test

* revert and fix chat parser test

* refactor tinker engine to use tinker parser

* deprecate bypass renderer mentions

* move experimental rollout out
@listar2000
Collaborator

Hi @kylemontgomery1, just want to point you to a few recent PRs to main on the (non-fully-async) Verl support in rLLM: #474 #483

These two PRs help rLLM support the new engine worker (for VerlBackend, we no longer support the legacy workers, i.e. we now force self.config.trainer.use_legacy_worker_impl == "disable").

I expect the fully async Verl support to take a parallel, and hopefully non-conflicting, path or backend. But just in case -- maybe check how these recent PRs might or might not disrupt your ongoing work, and if so I can help take a look at where the conflicts happen.

kylemontgomery1 and others added 7 commits April 4, 2026 11:56
Resolve conflict: keep rllm/experimental/rollout/rollout_engine.py from main
(was deleted on this branch, modified on main)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Move enhanced rollout engines (tinker, verl, completer, types) back to
rllm/experimental/rollout/ and revert rllm/engine/rollout/ to match main.
Fix import paths in experimental code and tinker backend/transform.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Revert parser files to main (tinker_parser.py, conftest, tests, __init__,
chat_template_parser, utils). Revert tinker_engine to main's ChatTemplateParser
approach, keeping only super().__init__() and _get_model_response rename.
Also restore pyproject.toml to main.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Revert config, docs, examples, and rollout files that referenced
bypass_render_with_parser (now staying in tinker_engine since we
reverted to main's ChatTemplateParser approach). Clean up tinker_backend
to only retain async-related changes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The per-request gate on RolloutEngine is unnecessary:
- partial_rollout=True: verl handles abort/resume at server level,
  Tinker hot-swaps weights in place
- partial_rollout=False: coordination happens at task dispatch level
  (coordinator pause/resume), not per-request

Remove close_gate/open_gate/wait_for_gate/wait_for_drain from
RolloutEngine, GatewayManager, and model-gateway proxy/server/client.
Remove needs_weight_sync_gate from BackendProtocol.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…, cleanup

- Move _in_flight_tasks tracking from UnifiedTrainer to SyncCoordinator
- Add epoch start/end hooks to async generation loop
- Remove dead _EPISODE_STRIP_KEYS constants from buffer
- Revert is_validation rename in engine/ (defer to future PR)
- Restore rllm-model-gateway/ to main

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Comment thread: rllm/agents/agent.py
self.metadata = value

def model_post_init(self, __context: Any) -> None:
self.chat_completions = deepcopy(self.chat_completions)
Collaborator Author

This deepcopy was incorrectly removed during the refactor from dataclasses to pydantic. Many old workflows operate with:

for turn in range(max_turns):
    output: ModelOutput = await self.rollout_engine.get_model_response(messages)
    messages.append({"role": "assistant", "content": output.content, ...})
    trajectory.steps.append(Step(chat_completions=messages, model_output=output))

If chat_completions is not deepcopied, appending a message on a future turn would mutate a previous turn's step.chat_completions.
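
The aliasing bug being described can be reproduced in a few lines. The Step class below is a hypothetical stand-in for rLLM's Step, reduced to just the deepcopy in question.

```python
from copy import deepcopy

# Without the deepcopy, every step would hold a reference to the *same*
# messages list, so a later append would retroactively mutate an earlier
# step's chat_completions. Step here is a simplified stand-in.
class Step:
    def __init__(self, chat_completions):
        # Snapshot the conversation as it exists at this turn.
        self.chat_completions = deepcopy(chat_completions)

messages = [{"role": "user", "content": "3 + 4?"}]
step1 = Step(messages)
messages.append({"role": "assistant", "content": "7"})
step2 = Step(messages)

print(len(step1.chat_completions), len(step2.chat_completions))  # 1 2
```

Dropping the deepcopy makes the first print value 2 as well, which is exactly the cross-turn mutation the multi-turn workflows would hit.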

Collaborator

I guess in the future we should really have a rLLM built-in messages format & class (similar to Tinker's Message), and ensure (1) it's as easy to work with as a plain dictionary, while (2) every step only holds a "view" of it (so no need to keep lots of copies, while earlier steps are not affected).
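
The "view" idea suggested here can be sketched as a (shared list, frozen length) pair: no copying, yet later appends to the live conversation cannot change what an earlier step sees. MessageView is hypothetical, not an rLLM class.

```python
# Sketch of a copy-free message "view": each step records only the
# prefix length of a shared backing list. MessageView is illustrative.
class MessageView:
    def __init__(self, messages):
        self._messages = messages        # shared backing list, never copied
        self._len = len(messages)        # frozen prefix length at this turn

    def __len__(self):
        return self._len

    def __getitem__(self, i):
        if not 0 <= i < self._len:
            raise IndexError(i)
        return self._messages[i]

messages = [{"role": "user", "content": "hi"}]
view = MessageView(messages)             # snapshot without copying
messages.append({"role": "assistant", "content": "hello"})
print(len(view), len(messages))          # 1 2
```

One caveat with this design: it freezes the length but not the message dicts themselves, so in-place edits to an existing message would still leak through; a real implementation would need to decide how deep the immutability goes.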

Collaborator Author

Agreed, I think we can spend some time this week rethinking messages/parsers.

@kylemontgomery1 kylemontgomery1 marked this pull request as ready for review April 4, 2026 22:34
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
from dataclasses import dataclass
from typing import TYPE_CHECKING

if TYPE_CHECKING:
Collaborator

Have been seeing this TYPE_CHECKING thing a lot in the codebase. Curious if it's actually needed?

Collaborator

These are mostly for static type checking purposes (code readability, LSP support, etc.) without actually importing the modules at runtime, avoiding that overhead (often used when we just need the type of a variable). I would say they are generally helpful, but there are cases where we can drop them (e.g. when the class type is obvious or can be directly inferred).
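
For reference, the pattern in question looks like this. The guarded import below uses OrderedDict as a stand-in for a heavy module; the point is only that the import never executes at runtime.

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only the type checker sees this import; at runtime TYPE_CHECKING
    # is False, so the (potentially heavy) module is never loaded.
    from collections import OrderedDict  # stand-in for an expensive import

def first_key(d: "OrderedDict") -> object:
    # The annotation is a string, so no runtime reference to the name
    # is needed even though the import was skipped.
    return next(iter(d))

print(TYPE_CHECKING)        # False at runtime
print(first_key({"a": 1}))  # works with any mapping
```

The trade-off is exactly as described above: cheaper imports and no circular-import headaches, at the cost of annotations that must be strings (or a module-level `from __future__ import annotations`).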

@listar2000
Collaborator

@kylemontgomery1 All those annoying pre-commit issues, failing tinker engine test, etc. have been resolved in the latest PRs to main: #487

@kylemontgomery1 kylemontgomery1 changed the title Unified fully async Integrate fully async training to UnifiedTrainer Apr 5, 2026
kylemontgomery1 and others added 2 commits April 5, 2026 13:34
Auto-fixed import sorting, unused imports, and formatting across 13 files.
Manual fixes: TYPE_CHECKING import for tqdm in buffer.py, isinstance union
syntax in metrics.py, moved logger below imports in unified_trainer.py,
split long log line.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@listar2000 listar2000 merged commit 6d97845 into main Apr 5, 2026
5 checks passed
@listar2000
Collaborator

@kylemontgomery1 Merged, and great effort! I think we should plan for a blog post and definitely doc updates once the Verl side is also ready.

JasonWei05 added a commit that referenced this pull request Apr 6, 2026
* fix(trainer): supplement dfed770 by adding missing update_weights in sdk trainer to fix vllm engine weight loss and Ascend PositionEmbedding OOB error

* Fix norm_adv_by_std_in_grpo read from algorithm not stepwise_advantage

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Add multi-server support to MCPEnvironment

* fix: update verl import paths for verl 0.7.1+ compatibility

verl 0.7.1 refactored fully_async_policy.ray_trainer into
separation.ray_trainer (PR verl-project/verl#5184). Update imports:

- FullyAsyncRayPPOTrainer → SeparateRayPPOTrainer
- FullyAsyncAgentLoopManager → AgentLoopManager
- fully_async_policy.fully_async_main → separation.utils

Fixes #470

Signed-off-by: Lidang-Jiang <lidangjiang@gmail.com>

* test: add import path verification tests for verl 0.7.1

Signed-off-by: Lidang-Jiang <lidangjiang@gmail.com>

* additional fixes of sdk trainer

* fix: migrate VerlBackend to new EngineWorker path (verl 0.7.1) (#483)

* fix: make VerlBackend work with new engine workers only

* fix code tool and reward import issues

* lazy import of autoprocessor

* fix: convert OmegaConf config to ActorConfig dataclass in CustomPPOLoss

The loss function wrapper was receiving a raw OmegaConf DictConfig
(with struct mode ON) from the VerlBackend, but ppo_loss expects a
Python ActorConfig dataclass with runtime-only fields like
global_batch_info. Use omega_conf_to_dataclass() to bridge the gap,
mirroring what Verl's own engine worker does at initialization.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* turn assertion into force conversion for non-disable legacy worker setting

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add hf_template tokenize_and_mask method + verl SFTTrainer compat

1. RLLMSFTDataset.__init__ now accepts processor and max_samples kwargs,
   matching verl's create_sft_dataset() call signature. Without this,
   using RLLMSFTDataset as custom_cls with verl's SFTTrainer(config)
   crashes with TypeError.

2. Add hf_template tokenization method that uses tokenizer.apply_chat_template()
   directly instead of rLLM's ChatTemplateParser. The existing cumulative/stepwise
   methods render tool calls as JSON-in-XML, which is wrong for models with native
   XML tool call format (e.g. Qwen3-Coder). The hf_template method produces the
   model's native format.

   Config: data.rllm.tokenize_and_mask_method: hf_template

* fix: handle signal.signal ValueError in non-main threads (#484)

Module-level `signal.signal(signal.SIGALRM, timeout_handler)` raises
`ValueError: signal only works in main thread` when taco.py is imported
in Ray worker threads (common during GRPO training with verl).

Wrap in try/except so the module can be safely imported from any thread.
The timeout handler is only functional in the main thread regardless.
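
The fix described in this commit can be demonstrated directly: signal.signal() raises ValueError off the main thread, so wrapping the registration in try/except keeps the module importable from Ray worker threads. timeout_handler and install_handler are illustrative names, not the taco.py code.

```python
import signal
import threading

# signal.signal() only works in the main thread of the main interpreter;
# elsewhere it raises ValueError. Wrapping it makes import safe anywhere.
def timeout_handler(signum, frame):
    raise TimeoutError("alarm fired")

def install_handler(results):
    try:
        signal.signal(signal.SIGALRM, timeout_handler)
        results.append("installed")
    except ValueError:
        # Not in the main thread: skip registration. The handler is
        # only functional in the main thread regardless.
        results.append("skipped")

results = []
install_handler(results)                 # main thread: succeeds
t = threading.Thread(target=install_handler, args=(results,))
t.start(); t.join()                      # worker thread: ValueError caught
print(results)                           # ['installed', 'skipped']
```

(SIGALRM is Unix-only, matching the environments where this code path runs under Ray.)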

* fix: resolve CI failures — E501 lint, tinker test deps, disable Claude actions (#486)

* fix: resolve CI failures — E501 lint errors, tinker test deps, disable Claude actions

- Fix all E501 (line > 200 chars) violations across ~45 Python files by
  wrapping long lines using standard Python continuation patterns
- Add per-file-ignores in pyproject.toml for 31 prompt/string-heavy files
  where long lines are intentional (agent prompts, system instructions)
- Add --extra dev to tinker CI workflow so pytest is available
- Disable Claude Code and Claude Code Review workflows due to credential issue

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* remove unused var

* keep fixing

* fix: run pre-commit on changed files only for PRs, show diffs on failure

- For pull requests: use --from-ref/--to-ref to only check files changed
  in the PR, matching local developer behavior
- For pushes to main: keep --all-files as a safety net
- Add --show-diff-on-fail so CI output shows exactly what needs fixing
- Add fetch-depth: 0 so git history is available for ref comparison

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: auto-format 21 files to fix ruff-format pre-commit failures (#487)

* style: auto-format 21 files with ruff-format to fix pre-commit failures

Apply ruff-format to pre-existing formatting issues across the codebase:
- Wrap long lines (dicts, function signatures, string literals)
- Collapse short multi-line forms that fit on one line
- Add missing trailing newline (conftest.py)
- Expand __all__ lists for readability

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: exclude notebooks from ruff-format pre-commit hook

The ruff-format hook was missing the .ipynb exclusion that the ruff lint
hook already had, causing pre-commit to fail on notebook formatting.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Integrate fully async training to UnifiedTrainer (#481)

* init new feature on unified fully async design

* add coordinator control and refactor queue

* cherrypick Kyle's async design refinements from kyle/deepresearch

Adopts core async architecture improvements: BufferedEpisodeGroup with
EpisodeGroupAccumulator, simplified SyncCoordinator with throttle and
pause/resume, fire-and-forget generation loop, streaming gradient
accumulation, and weight sync gate mechanism on RolloutEngine.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Refactor chat parser and migrate experimental rollout to engine (#435)

* start refactoring

* revert chat template parser and override tinker parser test

* revert and fix chat parser test

* refactor tinker engine to use tinker parser

* deprecate bypass renderer mentions

* move experimental rollout out

* dump changes to rollout_engine into main file

* refactor base rollout engine class to standardize gating behaviors

* make tinker backend fully compatible

* merge Kyle's fork

* bump vllm, deepcopy msgs in Step's post_init

* [wip] make fully-async unified trainer compatible with agent flow engines

* fix staleness throttling

* enforce concurrency across engines

* fix fully async, refactor metrics

* revert engine/rollout to main, restore experimental/rollout engines

Move enhanced rollout engines (tinker, verl, completer, types) back to
rllm/experimental/rollout/ and revert rllm/engine/rollout/ to match main.
Fix import paths in experimental code and tinker backend/transform.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* revert TinkerChatTemplateParser and parser changes for separate PR

Revert parser files to main (tinker_parser.py, conftest, tests, __init__,
chat_template_parser, utils). Revert tinker_engine to main's ChatTemplateParser
approach, keeping only super().__init__() and _get_model_response rename.
Also restore pyproject.toml to main.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* revert bypass_render_with_parser and tinker parser-related changes

Revert config, docs, examples, and rollout files that referenced
bypass_render_with_parser (now staying in tinker_engine since we
reverted to main's ChatTemplateParser approach). Clean up tinker_backend
to only retain async-related changes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* remove engine/gateway-level gate mechanism

The per-request gate on RolloutEngine is unnecessary:
- partial_rollout=True: verl handles abort/resume at server level,
  Tinker hot-swaps weights in place
- partial_rollout=False: coordination happens at task dispatch level
  (coordinator pause/resume), not per-request

Remove close_gate/open_gate/wait_for_gate/wait_for_drain from
RolloutEngine, GatewayManager, and model-gateway proxy/server/client.
Remove needs_weight_sync_gate from BackendProtocol.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: move task tracking to coordinator, revert validation rename, cleanup

- Move _in_flight_tasks tracking from UnifiedTrainer to SyncCoordinator
- Add epoch start/end hooks to async generation loop
- Remove dead _EPISODE_STRIP_KEYS constants from buffer
- Revert is_validation rename in engine/ (defer to future PR)
- Restore rllm-model-gateway/ to main

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* restore load_balancer assertion in verl_engine, revert tool_base to main

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add future annotations to rollout_engine for TYPE_CHECKING imports

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: fix ruff lint and format issues on unified-fully-async branch

Auto-fixed import sorting, unused imports, and formatting across 13 files.
Manual fixes: TYPE_CHECKING import for tqdm in buffer.py, isinstance union
syntax in metrics.py, moved logger below imports in unified_trainer.py,
split long log line.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: listar2000 <35262801+listar2000@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* fix(verl): disable vllm compile cache to work around corruption bug (#490)

---------

Signed-off-by: Lidang-Jiang <lidangjiang@gmail.com>
Co-authored-by: ZhihaoSun <bitszh3271@163.com>
Co-authored-by: Zakir Jiwani <108548454+JiwaniZakir@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: taivu1998 <46636857+taivu1998@users.noreply.github.com>
Co-authored-by: Lidang-Jiang <lidangjiang@gmail.com>
Co-authored-by: Kyle Montgomery <54512765+kylemontgomery1@users.noreply.github.com>
Co-authored-by: listar2000 <35262801+listar2000@users.noreply.github.com>
Co-authored-by: yifannnwu <yifannn.wu@gmail.com>
Co-authored-by: Yifan Wu <17992118+yifannnwu@users.noreply.github.com>
Co-authored-by: Bryan Lu <55512809+luyuzhe111@users.noreply.github.com>
@kylemontgomery1 kylemontgomery1 deleted the unified-fully-async branch April 8, 2026 04:38