The README mentions the ability to serve at scale with continuous batching.
Even if it's not vLLM or TGI, is there some work someone could point me to on this?
Is there any functioning packaging for serving with continuous batching via an endpoint? Thanks
frankaging changed the title to [P1] Location of code for "LM training and serving with ReFT" on Apr 29, 2024
@RonanKMcGovern Thanks for the note! Continuous batching is still in the planning stage right now; we might do it soon (after our conference submission deadline).
I am thinking of implementing it not as a generic API, but as a pyreft feature, since intervention locations on the KV cache require some special handling when inputs are concatenated together in that setting.
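To illustrate the kind of bookkeeping this involves, here is a minimal sketch (a hypothetical helper, not part of the pyreft API): when prompts are packed into one concatenated batch, each prompt's intervention positions have to be shifted by the cumulative length of the prompts packed before it so they index the correct rows of the shared KV cache.

```python
# Hypothetical sketch (not pyreft API): shifting per-prompt intervention
# positions into the flat coordinate space of a concatenated batch.
from typing import List


def flatten_intervention_positions(
    prompt_lengths: List[int],              # token count of each prompt in the batch
    positions_per_prompt: List[List[int]],  # intervention positions, local to each prompt
) -> List[int]:
    """Offset each prompt's intervention positions by the total length of
    all prompts packed before it, so they index the concatenated sequence."""
    flat_positions = []
    offset = 0
    for length, positions in zip(prompt_lengths, positions_per_prompt):
        flat_positions.extend(offset + p for p in positions)
        offset += length
    return flat_positions


# Example: two prompts of lengths 5 and 3, each intervened on its last token.
# The concatenated positions become [4, 7].
print(flatten_intervention_positions([5, 3], [[4], [2]]))
```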
Yes, I was thinking that there'll need to be a separate "inputs" field for the intervention.
This may be a bit of a wild idea, but I wonder if the interventions could be passed in the same way images are in vLLM and TGI. It's a case where there needs to be a second "pre-processor".
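Something like the following request shape is what I have in mind (endpoint URL and field names are purely illustrative, not an existing API): the intervention spec rides alongside the prompt the way image inputs do in multimodal serving, and a second pre-processor resolves it into concrete token positions before the batch is packed.

```python
# Hypothetical request sketch; the endpoint and all field names are assumptions.
import requests

request_body = {
    "prompt": "Summarize the following article ...",
    "reft_intervention": {
        "adapter": "my-reft-adapter",              # which trained ReFT module to apply
        "positions": {"first_n": 2, "last_n": 2},  # prompt-relative intervention locations
    },
    "max_tokens": 128,
}

# response = requests.post("http://localhost:8000/generate", json=request_body)
```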