[Feature Request]: Support loading a different model/key for RunInference #27628
Closed
Labels: done & done, new feature, P2, python
What would you like to happen?
Today, many users have pipelines that choose a single model for inference from hundreds or thousands of models based on properties of the data. RunInference does not currently support this pattern. We should extend RunInference so that a single keyed RunInference transform can serve a different model for each key.
See the design doc: https://docs.google.com/document/d/1kj3FyWRbJu1KhViX07Z0Gk0MU0842jhYRhI-DMhhcv4/edit?usp=sharing
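For illustration, here is a minimal sketch of what a keyed, multi-model RunInference call could look like. The `KeyModelMapping` type (associating a set of keys with the model handler that should serve them), the sklearn handler choice, and the `gs://` model paths are assumptions for this sketch based on the design direction, not a settled API.

```python
# Sketch only: KeyModelMapping semantics, model URIs, and routing behavior
# shown here are illustrative assumptions for this feature request.
import numpy as np

import apache_beam as beam
from apache_beam.ml.inference.base import (
    KeyedModelHandler,
    KeyModelMapping,
    RunInference,
)
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

# Hypothetical per-key configuration: keys 'en' and 'fr' share one
# saved model, while 'de' loads a different one.
keyed_handler = KeyedModelHandler([
    KeyModelMapping(
        ['en', 'fr'],
        SklearnModelHandlerNumpy(model_uri='gs://my-bucket/model_en_fr.pkl')),
    KeyModelMapping(
        ['de'],
        SklearnModelHandlerNumpy(model_uri='gs://my-bucket/model_de.pkl')),
])

with beam.Pipeline() as p:
    _ = (
        p
        | beam.Create([
            ('en', np.array([1.0, 2.0])),
            ('de', np.array([3.0, 4.0])),
        ])
        # Each keyed element is routed to the model registered for its key,
        # so one transform serves many models.
        | RunInference(keyed_handler)
        | beam.Map(print))
```

One design consequence worth noting: because a single transform may hold many models, memory management (how many models are resident per worker at once) becomes part of the design space covered in the doc above.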
Issue Priority
Priority: 2 (default / most feature requests should be filed as P2)