feat: add helm chart for predefined-models #223
base: main
Conversation
Summary of Changes
Hello @lizzy-0323, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a new Helm chart designed to streamline the deployment and management of predefined models and their associated serving runtimes for the Open Model Engine (OME). By shifting from manual Kustomize configurations to a Helm-based approach, users will gain enhanced flexibility and control over which models and runtimes are deployed, along with standardized configurations for production readiness.
Highlights
- New Helm Chart Introduction: I've introduced a new Helm chart named `ome-predefined-models` to simplify the deployment and management of predefined models and serving runtimes within the OME ecosystem.
- Templated Model Definitions: I've added templated Kubernetes `ClusterBaseModel` resources for a variety of popular models from vendors such as Meta, DeepSeek, Intfloat, Microsoft, Moonshot AI, and NVIDIA, allowing for easy expansion and management.
- Serving Runtime Support: I've included templated Kubernetes `ClusterServingRuntime` resources that support both vLLM and SGLang (SRT) runtimes, with specific configurations for different model types, including an RDMA-enabled setup for DeepSeek models.
- Configurable Deployment: I've provided a comprehensive `values.yaml` file that enables users to easily enable or disable specific models and runtimes, offering fine-grained control over their deployments (see the install sketch after this list).
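As a quick illustration of that control, here is a minimal values-override sketch. It assumes the `srt`/`vllm` key layout shown in the `values.yaml` excerpt quoted later in this review; whether those maps sit at the top level of the chart's values or under a parent key is not visible in the diff, so treat the exact paths as assumptions.

```yaml
# my-values.yaml -- illustrative override; key paths assume the layout from the
# values.yaml excerpt below (per-runtime maps with per-model `enabled` flags).
srt:
  enabled: true
  deepseek_rdma:
    enabled: false        # turn the RDMA DeepSeek runtime off
vllm:
  enabled: true
  llama_3_3_70b_instruct:
    enabled: true         # keep the Llama 3.3 70B vLLM runtime on
```

This could then be applied with something like `helm install ome-predefined-models <path-to-chart> -f my-values.yaml`, where the release name and chart path are placeholders.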
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request introduces a new Helm chart, `ome-predefined-models`, for deploying a collection of predefined models and serving runtimes. The chart is well-structured and provides a solid foundation for managing these resources.
My review focuses on improving production readiness, correctness, and maintainability. Key findings include the use of an internal container image, which will likely cause deployment failures for public users; the use of a `:dev` image tag, which is not ideal for production; an incorrect model URI; hardcoded storage paths that limit flexibility; and a number of unused values in `values.yaml` that could confuse users.
Addressing these points will significantly improve the quality and usability of the chart.
    routerConfig:
      runner:
        name: router
        image: ghcr.io/moirai-internal/sgl-router:0.1.4.30f2a44
    displayName: meta.llama-3.3-70b-instruct
    storage:
      storageUri: hf://meta-llama/Llama-3.3-70B-Instruct
      path: /raid/models/meta/llama-3-3-70b-instruct
The storage path /raid/models is hardcoded. This reduces the chart's flexibility, as users might have different storage layouts or permissions. It would be better to make the base path configurable via values.yaml.
    path: {{ .Values.global.storageBasePath | default "/raid/models" }}/meta/llama-3-3-70b-instruct
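For reference, a hedged sketch of the values entry this suggestion presupposes; `global.storageBasePath` does not exist in the chart yet, and the key name is only the reviewer's proposal:

```yaml
# Proposed values.yaml addition -- `global.storageBasePath` is hypothetical,
# named after the suggestion above; override it to match your node storage layout.
global:
  storageBasePath: /raid/models   # e.g. change to /mnt/models on clusters without /raid
```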
    disabled: false
    version: "1.0.0"
    storage:
      storageUri: hf://nvidia/Llama-3.1-Nemotron-70B-Instruct
    path: /dev/infiniband
    runner:
      name: ome-container
      image: docker.io/lmsysorg/sglang:dev
The container image docker.io/lmsysorg/sglang:dev uses the dev tag. Using mutable tags like dev or latest is not recommended for production environments as it can lead to unpredictable deployments when the underlying image is updated. Please use a specific, immutable image tag (e.g., a version number or git SHA) to ensure repeatable deployments.
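A hedged sketch of what pinning could look like here; the tag and digest below are placeholders, not verified sglang releases:

```yaml
# Pin the runner image to an immutable reference.
# Both values are placeholders -- substitute a real release tag or digest.
runner:
  name: ome-container
  image: docker.io/lmsysorg/sglang:<pinned-version>
  # or, for the strongest immutability guarantee, pin by digest:
  # image: docker.io/lmsysorg/sglang@sha256:<digest>
```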
    srt:
      enabled: true
      deepseek_rdma:
        enabled: true
      deepseek_rdma_pd:
        enabled: false
      llama_4_maverick_17b_128e_instruct_fp8:
        enabled: false
      llama_4_maverick_17b_128e_instruct_fp8_pd:
        enabled: false
      llama_4_scout_17b_16e_instruct:
        enabled: false
      llama_4_scout_17b_16e_instruct_pd:
        enabled: false
      e5_mistral_7b_instruct:
        enabled: true
      llama_3_3_70b_instruct:
        enabled: true
      llama_3_3_70b_instruct_pd:
        enabled: false
      mistral_7b_instruct:
        enabled: false
      mistral_7b_instruct_pd:
        enabled: false
      mixtral_8x7b_instruct:
        enabled: false
      mixtral_8x7b_instruct_pd:
        enabled: false
      llama_3_2_1b_instruct:
        enabled: false
      llama_3_2_3b_instruct:
        enabled: false
      llama_3_2_3b_instruct_pd:
        enabled: false
      llama_3_2_90b_vision_instruct:
        enabled: false
      llama_3_1_70b_instruct:
        enabled: false
      llama_3_1_70b_instruct_pd:
        enabled: false
      llama_3_2_11b_vision_instruct:
        enabled: false
      llama_3_2_1b_instruct_pd:
        enabled: false
      kimi_k2_pd:
        enabled: false

    # vLLM runtime configurations
    vllm:
      enabled: true
      mistral_7b_instruct:
        enabled: false
      mixtral_8x7b_instruct:
        enabled: false
      e5_mistral_7b_instruct:
        enabled: true
      llama_3_1_405b_instruct_fp8:
        enabled: false
      llama_3_1_nemotron_nano_8b_v1:
        enabled: false
      llama_3_1_nemotron_ultra_253b_v1:
        enabled: false
      llama_3_2_11b_vision_instruct:
        enabled: false
      llama_3_2_1b_instruct:
        enabled: false
      llama_3_2_3b_instruct:
        enabled: false
      llama_3_3_70b_instruct:
        enabled: true
      llama_3_3_nemotron_super_49b_v1:
        enabled: false
      llama_4_maverick_17b_128e_instruct_fp8:
        enabled: false
      llama_4_scout_17b_16e_instruct:
        enabled: false
This `values.yaml` file contains many entries for SRT and vLLM runtimes that do not have corresponding template logic in `templates/srt-runtimes.yaml` or `templates/vllm-runtimes.yaml`. For example, `deepseek_rdma_pd` is defined here but not used in the templates.
This can be confusing for users who might try to enable them and see no effect. To improve clarity and maintainability, please remove these unused values or add the corresponding templates.
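If the templates were extended instead of the values being removed, the guard pattern would look roughly like the sketch below. The file path, API group/version, resource name, and value paths are assumptions inferred from this diff, not confirmed contents of the chart:

```yaml
# templates/srt-runtimes.yaml (sketch) -- render the deepseek_rdma_pd runtime only
# when its flag is enabled; value paths assumed from the values.yaml excerpt above.
{{- if and .Values.srt.enabled .Values.srt.deepseek_rdma_pd.enabled }}
apiVersion: ome.io/v1beta1          # assumed API group/version for ClusterServingRuntime
kind: ClusterServingRuntime
metadata:
  name: srt-deepseek-rdma-pd       # hypothetical resource name
spec:
  # ... runtime spec for the prefill/decode-disaggregated RDMA DeepSeek setup ...
{{- end }}
```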
    --enable-metrics \
    --api-key sgl
    volumeMounts:
      - mountPath: /dev/shm
What type of PR is this?
add helm chart for predefined-models
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #153
Special notes for your reviewer:
Does this PR introduce a user-facing change?