This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Conversation

@dsikka dsikka commented Jan 25, 2024

Summary

A series of improvements:

  • Split up the prep_for_generation operator so that tokenizer setup and token generation no longer happen in the same operator, outside of the generation loop
  • With this split, properly handle edge cases such as max_length=1, max_new_tokens=0, and max_new_tokens=1. Also added a condition requiring max_length to be greater than 0; otherwise a ValueError is raised
  • With this split, we no longer have to import the process_output operator inside the prep_for_generation operator, and the operator's custom streaming code can be removed. The non-kv cache and kv_cache pipelines now also share the same output_schema, simplifying generation logic
  • Added a condition that checks kv_cache capacity during prompt inference and exits if the capacity is reached before generation begins

Testing

  • Added new tests to validate the edge cases
  • Added a test to check if we throw an error when the cache capacity is filled during prefill
  • Updated the non-kv cache unit test to also compare logits
  • All other test cases pass
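The prefill capacity check being tested could look roughly like the following sketch. `KVCache`, `KVCacheFullError`, and `prefill` are stand-in names for illustration only; they are not the classes used in this PR:

```python
# Illustrative sketch of erroring out when the kv_cache fills during
# prompt inference. All names here are hypothetical stand-ins.

class KVCacheFullError(RuntimeError):
    """Raised when the kv_cache fills up before generation can start."""

class KVCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.used = 0

    def append(self, num_tokens: int) -> None:
        self.used += num_tokens

    @property
    def is_full(self) -> bool:
        return self.used >= self.capacity

def prefill(cache: KVCache, prompt_tokens: list) -> None:
    """Run prompt inference, erroring out if the cache fills first."""
    for _token in prompt_tokens:
        if cache.is_full:
            raise KVCacheFullError(
                "kv_cache capacity reached during prompt inference"
            )
        cache.append(1)  # stand-in for running the model on one token
```

Checking before each prompt token, rather than only before generation, is what lets the pipeline fail fast instead of silently truncating the prompt.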

@dsikka dsikka marked this pull request as ready for review January 25, 2024 05:05
dbogunowicz previously approved these changes Jan 26, 2024
@dsikka dsikka requested a review from dbogunowicz January 26, 2024 14:48
@dsikka dsikka merged commit 7b028d4 into main Jan 26, 2024
@dsikka dsikka deleted the split_prep_operator branch January 26, 2024 18:42
ProExpertProg pushed a commit that referenced this pull request Jan 26, 2024
… cases, handle kv_cache full during prefill (#1562)

* split prep_for_generation operator

* fix logits

* update non-kv cache pipeline and tests

* add tests to address edge cases

* add condition to check if kv_cache is full during prompt inference, add test to cover this case, revert debugging changes

* fix typing

* remove commented code

* remove irrelevant condition

* address PR comments
dsikka added a commit that referenced this pull request Jan 29, 2024
… cases, handle kv_cache full during prefill (#1562)

(cherry picked from commit 7b028d4)
@dsikka dsikka mentioned this pull request Jan 29, 2024
bfineran pushed a commit that referenced this pull request Jan 29, 2024
… cases, handle kv_cache full during prefill (#1562)

(cherry picked from commit 7b028d4)