
Now, we run individual prompts through the queue. #796

Merged
finbarrtimbers merged 14 commits into main from insert-prompts on Jul 18, 2025

Conversation

@finbarrtimbers (Collaborator) commented on Jul 17, 2025

Changes the queue setup so that we run individual prompts through the queues rather than batches. This will avoid the need for load balancing and should enable us to get some nice speed ups once we enable the async engine (next PR!).
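
For context, a minimal sketch of the new flow. PromptRequest is the request class discussed below; the queue name param_prompt_Q and the exact fields are assumptions, not the real code. The idea is that every prompt becomes its own queue item, so whichever vLLM engine is idle pulls the next one and no explicit load balancing is needed.

from dataclasses import dataclass
from typing import List, Optional

from ray.util.queue import Queue


@dataclass
class PromptRequest:
    # One prompt per request (previously a whole batch travelled together).
    prompt: List[int]
    dataset_index: Optional[int] = None
    training_step: Optional[int] = None


def enqueue_prompts(param_prompt_Q: Queue, prompts: List[List[int]],
                    dataset_indices: List[int], training_step: int) -> None:
    # One queue item per prompt; any free engine can take the next item.
    for prompt, idx in zip(prompts, dataset_indices):
        param_prompt_Q.put(PromptRequest(prompt=prompt, dataset_index=idx,
                                         training_step=training_step))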

Benchmark results! From HEAD (spoiler, basically no change):

Results (excluding first batch):
Average tokens/second: 1014.84
Average MFU: 3.96%
Average generation time per batch: 2.02s
Average new tokens per sample: 2048.0 tokens
Wasted compute % (variable response length): 0.00%

From this PR:

Results (excluding first batch):
Average tokens/second: 1008.56
Average MFU: 3.94%
Average generation time per batch: 2.04s
Average new tokens per sample: 2048.0 tokens
Wasted compute % (variable response length): 0.00%

@mnoukhov (Contributor)

I'm confused why (1) the length of samples goes down and also (2) the time per batch goes down if the speed goes down too

@finbarrtimbers (Collaborator, Author)

> I'm confused why (1) the length of samples goes down and also (2) the time per batch goes down if the speed goes down too

@mnoukhov it's because I accidentally changed the batch size between runs! That's not the length, but the total number of tokens generated in the batch. Nice catch.

@finbarrtimbers finbarrtimbers marked this pull request as ready for review July 17, 2025 22:26
@mnoukhov (Contributor) left a comment

Did a quick pass and it seems good. Definitely a good precursor to async.

dataset_indices_batch = []
eval_prompts = None

# Pull prompts until we have a full batch or queue is empty
Contributor (@mnoukhov):

Seems like this can have more prompts than batch_size (unless .get only gets one at a time?).

I'm also confused: when would you have a batch that is not full, but you take what you get because of the timeout?

Collaborator Author (@finbarrtimbers):

Ok, I have fixed this! Now, PromptRequest only contains a single prompt, and I have removed the timeout, so it will block until it gets a full batch.
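
A rough sketch of the consumer side after this change (the queue name and field names are assumptions, not the exact code): the engine blocks on the queue until it has collected a full batch, so there is no timeout and no partial batch.

def accumulate_batch(param_prompt_Q, batch_size: int):
    # Block until a full batch of single-prompt requests has been collected.
    prompts, dataset_indices = [], []
    while len(prompts) < batch_size:
        request = param_prompt_Q.get()  # blocking get; no timeout
        prompts.append(request.prompt)
        dataset_indices.append(request.dataset_index)
    return prompts, dataset_indices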

Contributor (@mnoukhov):

Makes sense, you can remove the comment about non-blocking

@mnoukhov (Contributor) left a comment

Other than the little questions about specific batch sizes and divisibility, this seems good to me.


-    inference_results_Q, pending_queries_map, args.vllm_num_engines, training_step
+    inference_results_Q,
+    pending_queries_map,
+    args.num_unique_prompts_rollout * args.num_samples_per_prompt_rollout,
Contributor (@mnoukhov):

Is it possible that at the end of an epoch (or something like that) you will have fewer than this number of results? Or are we dropping the last batch?

Collaborator Author (@finbarrtimbers):

ShufflingIterator guarantees that they're all the same size:

class ShufflingIterator:
    def __init__(self, data: np.ndarray, batch_size: int, seed: Optional[int] = None):
        self.data = data.copy()
        self.batch_size = batch_size
        self.index = 0
        self.rng = np.random.default_rng(seed)
        self.rng.shuffle(self.data)

        # Ensure the effective dataset size is divisible by batch_size
        self.effective_size = len(self.data) - (len(self.data) % batch_size)

    def __iter__(self) -> Iterator[List[int]]:
        return self

    def __next__(self) -> List[int]:
        if self.index >= self.effective_size:
            self.index = 0
            self.rng.shuffle(self.data)

        end_index = self.index + self.batch_size
        batch = self.data[self.index : end_index].tolist()
        self.index = end_index

        return batch
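
A quick illustration (assumed usage, not from the PR) of why every batch comes out the same size: the trailing remainder of each shuffled epoch is simply never yielded.

import numpy as np

it = ShufflingIterator(np.arange(10), batch_size=4, seed=0)
first, second, third = next(it), next(it), next(it)
# effective_size is 8, so only two full batches are drawn per epoch; the
# third call reshuffles and starts a new epoch. Every batch has 4 indices.
assert len(first) == len(second) == len(third) == 4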

Comment on lines +2089 to +2091
batch_size_per_engine = (
args.num_unique_prompts_rollout * args.num_samples_per_prompt_rollout
) // args.vllm_num_engines
Contributor (@mnoukhov):

Do we know if this is evenly divisible? And is it all right if it's not?

Collaborator Author (@finbarrtimbers):

It should be divisible; I changed the code to raise a ValueError if it isn't, and we can handle that then.
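
Roughly, the guard looks like this (a sketch; the exact message and placement are assumptions):

total_prompts = args.num_unique_prompts_rollout * args.num_samples_per_prompt_rollout
if total_prompts % args.vllm_num_engines != 0:
    raise ValueError(
        f"Total prompts per rollout ({total_prompts}) must be divisible by "
        f"the number of vLLM engines ({args.vllm_num_engines})."
    )
batch_size_per_engine = total_prompts // args.vllm_num_engines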

@finbarrtimbers finbarrtimbers merged commit 541058c into main Jul 18, 2025
3 checks passed
@finbarrtimbers finbarrtimbers deleted the insert-prompts branch July 18, 2025 19:51
saurabh111233212 added a commit that referenced this pull request Jul 21, 2025
finbarrtimbers added a commit that referenced this pull request Jul 21, 2025
finbarrtimbers added a commit that referenced this pull request Jul 22, 2025
garrett361 added a commit to garrett361/open-instruct that referenced this pull request Jul 23, 2025
* Update oe-eval.sh to set a default timeout of 48h. (allenai#789)

* Updated configs to support changes. (allenai#790)

* Add benchmark scripts (allenai#786)

* Added scripts to run benchmarks.

* Removed install script.

* Added install script back.

* Add remap verifier (allenai#773)

* first pass remap verifier

* make judge json parsing a little more robust

* typoooooooo

* typoooooooo

* fix logic...

* clean logging naming up

* Ran the linter. (allenai#792)

* fix the URL for code api setup (allenai#791)

Co-authored-by: Michael Noukhovitch <[email protected]>

* Add nltk setup to uv dockerfile (allenai#785)

* add punk tokenizer

* fix up command

* Switches the actors to use the Ray queue. (allenai#784)

* Made changes.

* Switched to use ray.util.queue.Queue instead of a custom RayQueue class.

* Now, only handles new version.

* Updated benchmark_generators.py and test_grpo_fast.py.

* CLeaned up code from Claude.

* training_step defaults to None.

* Added an info dataclass to replace the tuple.

* Removes assumption that queries_prompt_Q and inference_results_Q are in sync by moving queries_prompt_Q to be a map.

* CLeaned up benchmark

* Added code to split batch sizes.

* Removed benchmark scripts, which are now in a separate PR.

* Now, we create all Ray queues in main, and pass them in as appropriate.

* Removed changes

* Test changes.

* Linter passes

* Added tests.

* Now, we index with the dataset indices.

* Checks and tests pass.

* Ran linter

* Added benchmark scripts back. Whoops.

* Set new default value for num_samples

* Updates the benchmark script  (allenai#795)

* Set new default value for num_samples

* Now run N batches at once

* different batch size

* Fix pack length

* Fix pack length

* Fix wasted compute % (was accidentally multiplying by 100), and fix num rollouts (was referencing the wrong variable).

* Now, we save benchmark results to CSV.

* Now show a percentage for time spent generating.

* Updated benchmark saving code.

* Fixed syntax error.

* Fixed benchmark

* Fixed timing code.

* Removed changes to vllm_utils3.py.

* Now, we actually write the data to disk>

* Bigger batch

* Modified benchmark

* Undid changes to benchmark script.

* Temp change

* Undid changes to benchmark script.

* install nginx in uv (allenai#793)

it was only being installed in regular Dockerfile

Co-authored-by: Michael Noukhovitch <[email protected]>
Co-authored-by: Saurabh Shah <[email protected]>

* allow passing local models, bubble up dataset cache errors (allenai#797)

Co-authored-by: Michael Noukhovitch <[email protected]>

* binary reward for code (allenai#798)

* binary reward for code

* style

* binary code reward flag -> pass rate reward threshold

* Now, we run individual prompts through the queue. (allenai#796)

* Now, we run individual prompts through the queue.

* Fixed issues.

* Ran linter

* Fixed linter errors.

* COde lints.

* Test passes.

* Ran linter.

* Ensures that we send single prompts as requests.

* Now, code lints.

* Cleaned up code.

* Fixes test.

* Linter passes.

* Cleaned test up.

* Removed redundant comments.

* Adds flashinfer dep. (allenai#800)

* Adds flashinfer dep.

* Now, open_instruct builds even on mac.

* Updated install instructions to add flash-infer.

* Now, we set flashinfer as the default attention backend.

* Added flashinfer to the base dockerfile.

* Ran linter.

* Removed extra changes to mason.py.

* Undid changes to uv.lock.

* Updated requirements.txt

* Updated flash-attn version.

---------

Co-authored-by: Hamish Ivison <[email protected]>

* new beaker names (allenai#803)

* Remove Unused DPO Function (allenai#794)

* delete function

Signed-off-by: Yu Chin Fabian Lim <[email protected]>

* Update open_instruct/dataset_transformation.py

---------

Signed-off-by: Yu Chin Fabian Lim <[email protected]>
Co-authored-by: Hamish Ivison <[email protected]>

* extra reporting (allenai#799)

prev-branch: padding-free-squashing-7

Co-authored-by: Hamish Ivison <[email protected]>

* Revert "Now, we run individual prompts through the queue. (allenai#796)" (allenai#804)

This reverts commit 541058c.

* Fix misnamed variables. (allenai#808)

* Fix misnamed variables.

* Ran linter.

* Fix broken syntax. (allenai#809)

Co-authored-by: Hamish Ivison <[email protected]>

* Add new olmo chat templates, and improve data mixing/tokenization (allenai#765)

Adds new olmo-core-compatible chat templates. Includes:
* New olmo template with support for function-calling. Includes a basic hard-coded system prompt, and appends "You do not have access to any functions" to any SFT examples that do not include functions.
* Thinker version of the above template, has <think> included in the generation prompt
* R1-style thinker template
These 3 templates mirror our current Tulu templates

Also includes some necessary changes to the --add_bos logic, to handle the new chat template which does not have a bos token.

Includes a few other QoL fixes:
* Fixes a bug in the olmocore tokenization script re: label mask
* Logs dataset-level statistics during data mixing and tokenization
* Supports easy upsampling during data mixing

* Fixes from last PR (allenai#810)

* fix up my (jacob's) slightly broken pr

---------

Co-authored-by: jacob-morrison <[email protected]>

* Delete run_repro.sh (allenai#813)

* Fix disk space error on image creation (allenai#814)

* remove moar things

* create on pr

* dont create on pr

* use upstream stats

---------

Signed-off-by: Yu Chin Fabian Lim <[email protected]>
Co-authored-by: Finbarr Timbers <[email protected]>
Co-authored-by: Hamish Ivison <[email protected]>
Co-authored-by: Michael <[email protected]>
Co-authored-by: Michael Noukhovitch <[email protected]>
Co-authored-by: Saurabh Shah <[email protected]>
Co-authored-by: Yu Chin Fabian Lim <[email protected]>
Co-authored-by: Jacob Morrison <[email protected]>
sang1583535 pushed a commit to sang1583535/open-instruct that referenced this pull request Feb 3, 2026
* Now, we run individual prompts through the queue.

* Fixed issues.

* Ran linter

* Fixed linter errors.

* COde lints.

* Test passes.

* Ran linter.

* Ensures that we send single prompts as requests.

* Now, code lints.

* Cleaned up code.

* Fixes test.

* Linter passes.

* Cleaned test up.

* Removed redundant comments.
sang1583535 pushed a commit to sang1583535/open-instruct that referenced this pull request Feb 3, 2026
