fix: add instruction to deploy model with inference gateway #2257
Conversation
Walkthrough

The documentation for the inference gateway deployment process was updated. The previously combined step for installing the model and the helm chart was split into two separate steps: one for deploying the model and another for installing the helm chart. Explicit instructions and sample commands for model deployment were added.

Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes
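In practice, the split described in the walkthrough might look like the following two commands. This is a hedged sketch: the manifest path and the chart location are assumptions inferred from file paths mentioned elsewhere in this review, not commands copied from the README.

```shell
# Sketch of the two-step flow after the split (paths and the release
# name are assumptions, not confirmed commands from the README).

# Step: deploy the model first
kubectl apply -f components/backends/vllm/deploy/agg.yaml

# Step: then install the inference-gateway helm chart
helm install dynamo-gaie ./deploy/inference-gateway/helm/dynamo-gaie
```

The point of the split is ordering: the model resources exist before the gateway chart that routes to them is installed.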
Actionable comments posted: 0
🧹 Nitpick comments (1)
deploy/inference-gateway/README.md (1)
77-81: Minor wording & formatting tweaks

- Capitalise the sentence starter and fix the code-block caption: `-sample commands to deploy model:` → `+Sample command to deploy the model:`
- Consider adding a reminder that `agg.yaml` must point to the correct image/weights bucket if users customised it.

Purely editorial; feel free to ignore.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
deploy/inference-gateway/README.md (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build and Test - vllm
🔇 Additional comments (1)
deploy/inference-gateway/README.md (1)
73-83: No duplicate InferenceModel – step 3's agg.yaml doesn't define that CRD

I inspected the vLLM aggregate manifest and the Helm chart:

- `components/backends/vllm/deploy/agg.yaml` contains no `kind: InferenceModel`.
- The only `InferenceModel` resource lives in `deploy/inference-gateway/helm/dynamo-gaie/templates/inference-model.yaml`.

Since step 3 does not create an `InferenceModel`, there's no conflict when you install the Helm chart in step 4. You can ignore the suggestion to pass `--set inferenceModel.enabled=false`.

Likely an incorrect or invalid review comment.
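The reviewer's inspection can be reproduced with a simple grep. A minimal sketch follows, using a hypothetical stand-in manifest written to a temp file (the real file is `components/backends/vllm/deploy/agg.yaml`; the stand-in's contents are illustrative, not copied from the repo):

```shell
# Write a stand-in manifest; its kind and fields are assumptions
# for illustration, not the real agg.yaml contents.
cat > /tmp/agg-sample.yaml <<'EOF'
apiVersion: nvidia.com/v1alpha1
kind: DynamoGraphDeployment
metadata:
  name: vllm-agg
EOF

# The manifest defines no InferenceModel, so the grep finds nothing
if grep -q 'kind: InferenceModel' /tmp/agg-sample.yaml; then
  echo "InferenceModel found"
else
  echo "no InferenceModel in manifest"
fi
```

Running this prints `no InferenceModel in manifest`, mirroring the reviewer's finding that the model manifest and the chart's `inference-model.yaml` template do not overlap.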
Force-pushed:
- 0f49c11 to 6b8f55f
- 735c9d4 to ca7db03
- e4de1ff to 34fc842
@ishandhanani @athreesh please take another look.
…2260) Signed-off-by: Biswa Panda <[email protected]>
Force-pushed:
- dfdafe5 to f3cf08f
- f3cf08f to e83529f
rebased on top of main (8f24c02)
Overview:
closes:
linear: DEP-297
nvbug
Cherry-pick into release branch: #2260