[P1] Possible to do batch inference? #105
@thistleknot Yes, it supports batched inference calls. You can take a look at this function for batching. In a nutshell, you need to apply left padding to your tokenizer and calculate the batched intervention locations accordingly.
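For readers landing here, a minimal sketch of what "left padding plus batched intervention locations" could look like for a last-position intervention, assuming a pyreft setup along the lines of the README example (the model name and prompts are placeholders, and this is not the exact helper referenced above):

```python
from transformers import AutoTokenizer

# Placeholder model name; use the base model your ReFT intervention was trained on.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.padding_side = "left"                # right-align prompts by padding on the left
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # assumption: reuse EOS as pad if none is set

prompts = [
    "Tell me a joke.",
    "Summarize the plot of Hamlet in one sentence.",
]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)

batch_size, seq_len = inputs["input_ids"].shape
# With left padding, the final prompt token of every example sits at index seq_len - 1,
# so a "last position" intervention uses the same absolute index for every row.
last_positions = [[seq_len - 1] for _ in range(batch_size)]
```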
"Calculate the batched intervention locations accordingly" doesn't sound easy. I'm not sure whether I can keep using the same -1 position for each prompt as before, or whether it expects the location relative to where each prompt sits within the batch tensor.
You able to help a brother out?
That's what I've got at the moment, but it's not applying the control vector appropriately.
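For what it's worth, with left padding the "-1 position" concern largely disappears: every prompt's last token ends up at the same absolute index, seq_len - 1, so no per-prompt offsets are needed. Judging from the [[[position]]] nesting the README uses for a single prompt, the expected structure appears to be [num_interventions][batch_size][num_positions], which for a batch might look like this sketch (assuming inputs was built with a left-padded tokenizer as above):

```python
batch_size, seq_len = inputs["input_ids"].shape

# One intervention, one position per example, same absolute index for every row;
# the nesting is [num_interventions][batch_size][num_positions].
unit_locations = {"sources->base": (None, [[[seq_len - 1]] * batch_size])}
```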
I'm doing this at the moment (iterating over the prompts one at a time),
but I'd like to escape the iteration, and I'm not sure how to format unit_locations. Normally one would do something like model.generate(**inputs), but since this is pyreft I'm not sure whether that is supported, as it's a custom class (I haven't delved into the class for this specific feature).
Thought I'd ask first, and also for visibility for others who might be interested.
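A hedged sketch of how the per-prompt loop might collapse into a single batched call, combining the pieces above. It assumes reft_model.generate accepts the tokenized batch as its first argument the way the single-prompt README example does, and that it returns a (base_outputs, generated_ids) tuple; the generation kwargs are illustrative:

```python
# Assumes `tokenizer`, `inputs`, and a loaded `reft_model` as sketched above.
device = "cuda"  # assumption: wherever the base model actually lives
inputs = inputs.to(device)

batch_size, seq_len = inputs["input_ids"].shape
unit_locations = {"sources->base": (None, [[[seq_len - 1]] * batch_size])}

# One batched generate call instead of iterating over the prompts one at a time.
_, generated_ids = reft_model.generate(
    inputs,
    unit_locations=unit_locations,
    intervene_on_prompt=True,
    max_new_tokens=256,
    do_sample=False,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
)

for text in tokenizer.batch_decode(generated_ids, skip_special_tokens=True):
    print(text)
```

If the output looks wrong only for the shorter prompts, the usual suspects are right padding sneaking back in or the pad token not being set, since both shift where the last prompt token lands.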