docs: GRPO documentation and Configuration cleanup #7
Merged
Commits:
- a9032c5 GRPO doc update (SahilJain314)
- 941e004 config cleanup (SahilJain314)
- b8836a5 improved philosophy doc (SahilJain314)
- 6c1bf67 added diagram (SahilJain314)
- 6854497 added diagram (SahilJain314)
- bb7b255 Updating grpo printout (SahilJain314)
| # An in-depth walkthrough of GRPO in Reinforcer | ||
| ## Quickstart: Launch a GRPO Run | ||
|
|
||
| If you want to get running quickly, the script [examples/run_grpo_math.py](../../examples/run_grpo_math.py) has an example implementation of using GRPO to train a model on math problems. This script can either be launched locally or via Slurm. For details on how to set up Ray and launch a job using Slurm, refer to the [cluster documentation](../cluster.md). | ||
|
|
||
| We recommend launching the job using `uv`: | ||
| ```bash | ||
| uv run examples/run_grpo_math.py --config <PATH TO YAML CONFIG> {overrides} | ||
```
If not specified, `config` defaults to [examples/configs/grpo.yaml](../../examples/configs/grpo.yaml).
| ## Now, for the details: | ||
|
|
||
| In this guide, we'll walk through we handle | ||
| * Data | ||
| * Model training | ||
| * Fast generation | ||
| * Overall Resource Flow | ||
|
|
||
| ### Data | ||
| We support training with multiple RL "Environments" at the same time. | ||
|
|
||
| An [Environment](../../nemo_reinforcer/environments/interfaces.py) is an object that accepts a state/action history and returns an update state and rewards for the step. They run as Ray Remote Actors. Example [MathEnvironment](../../nemo_reinforcer/environments/math_environment.py). | ||
|
|
||
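To make the environment contract concrete, here is a toy, self-contained sketch. All names in it (`StepResult`, `ToyMathEnvironment`) are hypothetical illustrations invented for this doc, not the actual interface in `nemo_reinforcer/environments/interfaces.py`:

```python
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class StepResult:
    """Hypothetical container for one environment step's outputs."""
    observations: List[Dict[str, str]]  # one new message per sample
    rewards: List[float]                # one scalar reward per sample
    done: List[bool]                    # whether each episode has finished

class ToyMathEnvironment:
    """Toy stand-in: reward 1.0 when the last assistant message matches the answer."""

    def step(
        self,
        message_logs: List[List[Dict[str, str]]],
        env_infos: List[Dict[str, Any]],
    ) -> StepResult:
        observations, rewards, done = [], [], []
        for log, info in zip(message_logs, env_infos):
            correct = log[-1]["content"].strip() == info["answer"]
            rewards.append(1.0 if correct else 0.0)
            done.append(True)  # single-turn task, so every episode ends here
            observations.append({"role": "environment", "content": "episode complete"})
        return StepResult(observations=observations, rewards=rewards, done=done)

env = ToyMathEnvironment()
result = env.step(
    [[{"role": "user", "content": "1+1?"}, {"role": "assistant", "content": "2"}]],
    [{"answer": "2"}],
)
print(result.rewards)  # [1.0]
```

Because each environment is a Ray actor, many such step calls can be in flight concurrently across tasks.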
| To support this, we need to know: | ||
| * What environments you have | ||
| * Which data should go to which environments | ||
| * How to prepare the data from your dataset into a form we can use | ||
|
|
||
| #### Common Data Format | ||
| We define a [DatumSpec](../../nemo_reinforcer/data/interfaces.py) that holds all relevant information for each training example: | ||
SahilJain314 marked this conversation as resolved.
Show resolved
Hide resolved
|
||
| ```python | ||
| class DatumSpec(TypedDict): | ||
| message_log: LLMMessageLogType | ||
| length: int # total (concatenated) length of the message tensors | ||
| extra_env_info: Dict[str, Any] # anything your environment requires goes here, for example the 'answer' of a math problem | ||
| loss_multiplier: float # multiplier for the loss for this datum. 0 to mask out (say the sample is invalid) | ||
| idx: int | ||
| task_name: Optional[str] = "default" | ||
| __extra__: Any # This allows additional fields of any type | ||
| ``` | ||
|
|
||
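As a rough illustration, a filled-in `DatumSpec` for a single math example might look like the following. The concrete values, and the use of plain lists in place of token tensors, are invented for this sketch:

```python
from typing import Any, Dict

# A DatumSpec-shaped dict for one math example. The token ids are made up
# and stored as plain lists so this sketch runs without torch.
datum: Dict[str, Any] = {
    "message_log": [
        {"role": "user", "content": "What is 2+2?", "token_ids": [101, 2054, 2003]},
    ],
    "length": 3,                        # total token count across all messages
    "extra_env_info": {"answer": "4"},  # consumed later by the math environment
    "loss_multiplier": 1.0,             # 0.0 would mask this sample out of the loss
    "idx": 0,
    "task_name": "math",
}
print(datum["length"], datum["task_name"])  # 3 math
```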
| #### Data Processors | ||
| We name all distinct "environments your model wants to optimize against" "tasks". So you might define a "math" task or a "code" task. | ||
| For each task, you should provide a data processor that reads from your dataset and returns a [DatumSpec](../../nemo_reinforcer/data/interfaces.py) | ||
|
|
||
| ```python | ||
| def my_data_processor( | ||
| datum_dict: Dict[str, Any], # loaded directly from your dataset (i.e. single line of jsonl data) | ||
| task_data_spec: TaskDataSpec, | ||
| tokenizer, | ||
| max_seq_length: int, | ||
| idx: int, | ||
| ) -> DatumSpec: | ||
| ``` | ||
| We have an example of this as `math_data_processor` in [run_grpo_math.py](../../examples/run_grpo_math.py) | ||
|
|
||
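Here is a simplified, runnable sketch of such a processor. It omits `TaskDataSpec`, fakes the tokenizer, and invents field values, so treat it as an illustration of the shape rather than the real `math_data_processor`:

```python
from typing import Any, Dict

# Simplified sketch of a task data processor (the real math_data_processor
# lives in examples/run_grpo_math.py and returns a full DatumSpec).
def toy_math_data_processor(
    datum_dict: Dict[str, Any],  # one parsed jsonl record
    tokenizer,                   # anything with .encode(str) -> list of ids
    max_seq_length: int,
    idx: int,
) -> Dict[str, Any]:
    token_ids = tokenizer.encode(datum_dict["problem"])
    # Mask out over-long samples via loss_multiplier instead of crashing.
    loss_multiplier = 1.0 if len(token_ids) <= max_seq_length else 0.0
    return {
        "message_log": [{"role": "user", "token_ids": token_ids}],
        "length": len(token_ids),
        "extra_env_info": {"answer": datum_dict["answer"]},
        "loss_multiplier": loss_multiplier,
        "idx": idx,
        "task_name": "math",
    }

class WhitespaceTokenizer:
    """Fake tokenizer: one invented integer id per whitespace-separated token."""
    def encode(self, text: str):
        return [i for i, _ in enumerate(text.split())]

spec = toy_math_data_processor(
    {"problem": "What is 2 + 2 ?", "answer": "4"},
    WhitespaceTokenizer(), max_seq_length=128, idx=0,
)
print(spec["length"], spec["loss_multiplier"])  # 6 1.0
```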
| #### Putting it all together: | ||
| GRPO expects datasets to have the following form: | ||
| ```json | ||
| {"task_name": "math", <actual data>} | ||
| ``` | ||
| Then, you can set data up as such: | ||
| ```python | ||
| base_dataset = load_dataset("json", data_files=data_config["dataset_name"])["train"] | ||
| tokenizer = AutoTokenizer.from_pretrained(policy_config["model_name"]) | ||
|
|
||
| task_data_processors = defaultdict(lambda: (math_task_spec, math_data_processor)) | ||
| task_data_processors["math"] = (math_task_spec, math_data_processor) | ||
|
|
||
| math_env = MathEnvironment.remote(env_configs["math"]) # ray remote actor | ||
|
|
||
| dataset = AllTaskProcessedDataset( | ||
| base_dataset, | ||
| tokenizer, | ||
| math_task_spec, | ||
| task_data_processors, | ||
| max_seq_length=data_config["max_input_seq_length"], | ||
| ) | ||
| ``` | ||
Notice that you provide a mapping from tasks to their processors, so the dataset knows which processor to use for each sample.
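The `defaultdict` above is doing real work: any task name without an explicit entry silently falls back to the default processor pair. A tiny sketch of that lookup behavior, with placeholder strings standing in for the real spec/processor objects:

```python
from collections import defaultdict

# Placeholders standing in for the real (TaskDataSpec, processor) pair.
math_pair = ("math_task_spec", "math_data_processor")

# Unknown task names fall back to the default factory's pair.
task_data_processors = defaultdict(lambda: math_pair)
task_data_processors["math"] = math_pair

print(task_data_processors["code"] is math_pair)  # True: "code" falls back to the default
```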
| ### Policy Model | ||
| We define a [PolicyInterface]() that contains everything you need to train a Policy model. | ||
|
|
||
| This Policy object holds a [RayWorkerGroup](../../nemo_reinforcer/distributed/worker_groups.py) of SPMD (1 proc/gpu) processes that run HF/MCore, all coordinated by this object so it appears to you like 1 GPU! | ||
|
|
||
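The coordination pattern can be sketched without Ray. In this toy model (all names invented, not the actual `RayWorkerGroup` API), a group object fans one logical call out to every per-rank worker and collects the results:

```python
from typing import Any, List

class ToyWorker:
    """Stands in for one per-GPU SPMD process running HF/MCore."""
    def __init__(self, rank: int, world_size: int):
        self.rank, self.world_size = rank, world_size

    def train_step(self, global_batch: List[int]) -> float:
        # Each rank trains on its strided shard of the global batch.
        shard = global_batch[self.rank :: self.world_size]
        return float(sum(shard))  # pretend this is a per-rank loss

class ToyWorkerGroup:
    """Fans one logical call out to every worker, then gathers results."""
    def __init__(self, world_size: int):
        self.workers = [ToyWorker(r, world_size) for r in range(world_size)]

    def run_all(self, method: str, *args: Any) -> List[Any]:
        return [getattr(w, method)(*args) for w in self.workers]

group = ToyWorkerGroup(world_size=4)
losses = group.run_all("train_step", list(range(8)))
print(sum(losses) / len(losses))  # 7.0
```

The caller sees one `run_all` invocation, just as the Policy object presents the whole worker group as a single logical device.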
| ### Fast Generation | ||
| We support vLLM through the [VllmGeneration](../../nemo_reinforcer/models/generation/vllm.py) class right now. | ||
|
|
||
| The function [grpo_train](../../nemo_reinforcer/algorithms/grpo.py) contains the core GRPO training loop. | ||
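To summarize the algorithmic core, here is a schematic of one GRPO step: sample a group of responses per prompt, score them with the environment, and normalize rewards within the group to get critic-free advantages. This is an illustration of the idea, not the actual `grpo_train` implementation, and every name in it is invented:

```python
import statistics
from typing import Callable, List

def group_relative_advantages(rewards: List[float]) -> List[float]:
    """GRPO's critic-free advantage: normalize rewards within one group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + 1e-6) for r in rewards]

def grpo_step(
    prompts: List[str],
    generate: Callable[[str, int], str],
    reward_fn: Callable[[str, str], float],
    policy_update: Callable[[str, List[str], List[float]], None],
    group_size: int = 4,
) -> None:
    # For each prompt: sample a group of responses, score them with the
    # environment, convert rewards to group-relative advantages, then hand
    # everything to the policy update.
    for prompt in prompts:
        group = [generate(prompt, i) for i in range(group_size)]
        rewards = [reward_fn(prompt, resp) for resp in group]
        advantages = group_relative_advantages(rewards)
        policy_update(prompt, group, advantages)

# Toy plumbing so the sketch runs end to end.
seen = []
grpo_step(
    prompts=["2+2?"],
    generate=lambda p, i: str(i + 3),            # responses: "3", "4", "5", "6"
    reward_fn=lambda p, r: 1.0 if r == "4" else 0.0,
    policy_update=lambda p, g, a: seen.append(a),
)
print(seen[0][1] > 0 > seen[0][0])  # True: the correct response gets a positive advantage
```

In the real loop, `generate` is the fast vLLM path, `reward_fn` is an environment step, and `policy_update` is a distributed training step on the worker group.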