Conversation

@lizzy-0323

What type of PR is this?

Add a Helm chart for predefined models.

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #153

Special notes for your reviewer:

Does this PR introduce a user-facing change?


@lizzy-0323 lizzy-0323 requested a review from slin1237 as a code owner July 31, 2025 16:36
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @lizzy-0323, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new Helm chart designed to streamline the deployment and management of predefined models and their associated serving runtimes for the Open Model Engine (OME). By shifting from manual Kustomize configurations to a Helm-based approach, users will gain enhanced flexibility and control over which models and runtimes are deployed, along with standardized configurations for production readiness.

Highlights

  • New Helm Chart Introduction: I've introduced a new Helm chart named ome-predefined-models to simplify the deployment and management of predefined models and serving runtimes within the OME ecosystem.
  • Templated Model Definitions: I've added templated Kubernetes ClusterBaseModel resources for a variety of popular models from vendors such as Meta, DeepSeek, Intfloat, Microsoft, Moonshot AI, and NVIDIA, allowing for easy expansion and management.
  • Serving Runtime Support: I've included templated Kubernetes ClusterServingRuntime resources that support both vLLM and SGLang (SRT) runtimes, with specific configurations for different model types, including an RDMA-enabled setup for DeepSeek models.
  • Configurable Deployment: I've provided a comprehensive values.yaml file that enables users to easily enable or disable specific models and runtimes, offering fine-grained control over their deployments.
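As a rough illustration of that values.yaml-driven control (the key names below are assumptions for illustration, not necessarily the chart's actual schema), enabling one model and its runtime might look like:

    # values.yaml (hypothetical excerpt)
    models:
      llama_3_3_70b_instruct:
        enabled: true          # render the ClusterBaseModel for this model
    runtimes:
      srt:
        enabled: true
        llama_3_3_70b_instruct:
          enabled: true        # render the matching SGLang ClusterServingRuntime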
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a new Helm chart, ome-predefined-models, for deploying a collection of predefined models and serving runtimes. The chart is well-structured and provides a solid foundation for managing these resources.

My review focuses on improving production readiness, correctness, and maintainability. Key findings include the use of an internal container image, which will likely cause deployment failures for public users; the use of a :dev image tag, which is not ideal for production; an incorrect model URI; hardcoded storage paths that limit flexibility; and a number of unused values in values.yaml that could confuse users.

Addressing these points will significantly improve the quality and usability of the chart.

routerConfig:
  runner:
    name: router
    image: ghcr.io/moirai-internal/sgl-router:0.1.4.30f2a44
Contributor

critical

The router image ghcr.io/moirai-internal/sgl-router:0.1.4.30f2a44 appears to be from an internal container registry. If this Helm chart is intended for public use, this image will not be accessible to external users, causing deployments to fail. Please use a publicly available image.
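One way to address this (a sketch with assumed value names; the chart may expose different keys) is to make the router image overridable so users can point it at a registry they can pull from:

    # values.yaml (assumed keys)
    router:
      image: ghcr.io/example-org/sgl-router:0.1.4   # placeholder; use a publicly pullable image

    # template
    routerConfig:
      runner:
        name: router
        image: {{ .Values.router.image }}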

displayName: meta.llama-3.3-70b-instruct
storage:
  storageUri: hf://meta-llama/Llama-3.3-70B-Instruct
  path: /raid/models/meta/llama-3-3-70b-instruct
Contributor

high

The storage path /raid/models is hardcoded. This reduces the chart's flexibility, as users might have different storage layouts or permissions. It would be better to make the base path configurable via values.yaml.

    path: {{ .Values.global.storageBasePath | default "/raid/models" }}/meta/llama-3-3-70b-instruct
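The corresponding default could then live in values.yaml (global.storageBasePath is an assumed key matching the suggestion above):

    # values.yaml
    global:
      storageBasePath: /raid/models   # override to match the cluster's storage layout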

disabled: false
version: "1.0.0"
storage:
  storageUri: hf://nvidia/Llama-3.1-Nemotron-70B-Instruct
Contributor

high

The storageUri appears to be incorrect for this model. The model is named llama-3-1-nemotron-ultra-253b-v1, which suggests a 253B parameter model, but the storageUri points to Llama-3.1-Nemotron-70B-Instruct. Please verify and correct the URI to ensure the correct model is downloaded.

    path: /dev/infiniband
runner:
  name: ome-container
  image: docker.io/lmsysorg/sglang:dev
Contributor

high

The container image docker.io/lmsysorg/sglang:dev uses the dev tag. Using mutable tags like dev or latest is not recommended for production environments as it can lead to unpredictable deployments when the underlying image is updated. Please use a specific, immutable image tag (e.g., a version number or git SHA) to ensure repeatable deployments.
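For example, the image reference could be pinned to a released tag or an image digest (the values below are placeholders, not verified SGLang releases):

    runner:
      name: ome-container
      image: docker.io/lmsysorg/sglang:<released-version>   # e.g. a specific vX.Y.Z tag
      # or pin by digest:
      # image: docker.io/lmsysorg/sglang@sha256:<digest>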

Comment on lines +59 to +134
srt:
  enabled: true
  deepseek_rdma:
    enabled: true
  deepseek_rdma_pd:
    enabled: false
  llama_4_maverick_17b_128e_instruct_fp8:
    enabled: false
  llama_4_maverick_17b_128e_instruct_fp8_pd:
    enabled: false
  llama_4_scout_17b_16e_instruct:
    enabled: false
  llama_4_scout_17b_16e_instruct_pd:
    enabled: false
  e5_mistral_7b_instruct:
    enabled: true
  llama_3_3_70b_instruct:
    enabled: true
  llama_3_3_70b_instruct_pd:
    enabled: false
  mistral_7b_instruct:
    enabled: false
  mistral_7b_instruct_pd:
    enabled: false
  mixtral_8x7b_instruct:
    enabled: false
  mixtral_8x7b_instruct_pd:
    enabled: false
  llama_3_2_1b_instruct:
    enabled: false
  llama_3_2_3b_instruct:
    enabled: false
  llama_3_2_3b_instruct_pd:
    enabled: false
  llama_3_2_90b_vision_instruct:
    enabled: false
  llama_3_1_70b_instruct:
    enabled: false
  llama_3_1_70b_instruct_pd:
    enabled: false
  llama_3_2_11b_vision_instruct:
    enabled: false
  llama_3_2_1b_instruct_pd:
    enabled: false
  kimi_k2_pd:
    enabled: false

# vLLM runtime configurations
vllm:
  enabled: true
  mistral_7b_instruct:
    enabled: false
  mixtral_8x7b_instruct:
    enabled: false
  e5_mistral_7b_instruct:
    enabled: true
  llama_3_1_405b_instruct_fp8:
    enabled: false
  llama_3_1_nemotron_nano_8b_v1:
    enabled: false
  llama_3_1_nemotron_ultra_253b_v1:
    enabled: false
  llama_3_2_11b_vision_instruct:
    enabled: false
  llama_3_2_1b_instruct:
    enabled: false
  llama_3_2_3b_instruct:
    enabled: false
  llama_3_3_70b_instruct:
    enabled: true
  llama_3_3_nemotron_super_49b_v1:
    enabled: false
  llama_4_maverick_17b_128e_instruct_fp8:
    enabled: false
  llama_4_scout_17b_16e_instruct:
    enabled: false
Contributor

high

This values.yaml file contains many entries for SRT and vLLM runtimes that do not have corresponding template logic in templates/srt-runtimes.yaml or templates/vllm-runtimes.yaml. For example, deepseek_rdma_pd is defined here but not used in the templates.

This can be confusing for users who might try to enable them and see no effect. To improve clarity and maintainability, please remove these unused values or add the corresponding templates.
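If the templates are extended instead, each value needs a matching guard; a minimal sketch (the .Values path and the CRD apiVersion are assumptions and should be checked against the chart and the OME CRDs):

    {{- if .Values.runtimes.srt.deepseek_rdma_pd.enabled }}
    apiVersion: ome.io/v1beta1   # assumed API group/version
    kind: ClusterServingRuntime
    metadata:
      name: deepseek-rdma-pd-srt
    spec:
      # prefill/decode-disaggregated DeepSeek RDMA runtime spec goes here
    {{- end }}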

        --enable-metrics \
        --api-key sgl
      volumeMounts:
        - mountPath: /dev/shm
Contributor

medium

There is trailing whitespace on this line. While YAML is often tolerant of this, it's best practice to remove it for cleanliness and to avoid potential parsing issues with stricter tools.

          - mountPath: /dev/shm

Development

Successfully merging this pull request may close these issues.

[ENHANCEMENT] helm chart for runtime, models, and services
