[EAGLE-3698] - model upload handles multiple batch #227
Conversation
This PR is currently very large. Can it be broken down into more manageable chunks for review?
E.g.:
- Add multiple batches handling in one PR (maybe two if it can be logically split)
- Add new model type example text-embedder
- Add new model type example multimodal-embedder
- Add vllm example
Or the model type and vllm examples can be added first if you prefer
Broken down into #236 (update code for batching and update old examples) and #237 (added text-embedder and multimodal-embedder examples)
Why
Currently the model predicts inputs one by one, even when a batch of size > 1 is sent.
How
The get_predictions() method in inference.py will take a list of inputs instead of a single input.
Other updates:
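The change described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual repository code: the helper `run_model` and the return types are assumptions, and the real `get_predictions()` in `inference.py` may differ.

```python
from typing import List


def run_model(text: str) -> str:
    # Stand-in for the real per-input model call; here it just echoes the input.
    return f"prediction for {text!r}"


def get_predictions(inputs: List[str]) -> List[str]:
    """Handle a whole batch in one call instead of a single input.

    Before this change the method signature would have been
    get_predictions(input: str) -> str, forcing callers to loop.
    """
    return [run_model(x) for x in inputs]


batch = ["a", "b", "c"]
outputs = get_predictions(batch)
# One output per input, preserving order.
assert len(outputs) == len(batch)
```

Accepting a list keeps the interface backward-mappable: a caller with a single input can still pass `[x]` and take the first element of the result.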
Note:
Models generated by an earlier version will not work with this version.