chore: sglang k8s health/live, update doc #2272
Conversation
Walkthrough
This update modifies the SGLang backend documentation and Kubernetes deployment YAMLs. Documentation now references
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Frontend as Frontend (K8s Pod)
    participant Worker as SGLang Worker (K8s Pod)

    User->>Frontend: Send request (e.g., via curl)
    Frontend->>Frontend: Readiness probe (HTTP GET /health)
    Frontend-->>User: Responds if healthy
    Frontend->>Worker: Forwards request
    Worker->>Worker: Liveness/readiness/startup probes (HTTP GET /live, /health)
    Worker-->>Frontend: Responds if healthy
```
Estimated code review effort
🎯 3 (Moderate) | ⏱️ ~18 minutes

Possibly related PRs
Actionable comments posted: 4
🔭 Outside diff range comments (1)
components/backends/sglang/deploy/agg_router.yaml (1)
20-28: `exec` probe now shells out to `curl | jq` – high coupling & requires `jq`
- The container image must contain `jq`, otherwise the probe will always fail.
- Mixing `curl | jq` inside an `exec` probe obscures intent; an HTTP-GET probe is simpler, faster, and avoids extra binaries.

```diff
-      readinessProbe:
-        exec:
-          command:
-            - /bin/sh
-            - -c
-            - 'curl -s http://localhost:8000/health | jq -e ".status == \"healthy\""'
+      readinessProbe:
+        httpGet:
+          path: /health
+          port: 8000
+        successThreshold: 1
```
♻️ Duplicate comments (3)
components/backends/sglang/README.md (1)
138-155: Curl example is still very verbose – same concern as prior review
This mirrors the earlier feedback from @ishandhanani about shortening the example payload. Consider trimming the prompt or pointing to a separate snippet instead of inlining 15 lines.
components/backends/sglang/deploy/agg.yaml (1)
20-28: Same `curl | jq` pattern – convert to native HTTP-GET
Apply the same refactor as suggested for agg_router.yaml.
components/backends/sglang/deploy/disagg.yaml (1)
20-28: Switch Frontend readiness probe to HTTP-GET & drop `jq` dependency
Same rationale as previous files.
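For reference, a minimal sketch of the converted Frontend readiness probe, assuming the frontend serves `/health` on port 8000 as in the diff above; the timing fields are illustrative assumptions, not values from the PR:

```yaml
readinessProbe:
  httpGet:
    path: /health      # frontend health endpoint, per the suggested refactor
    port: 8000
  periodSeconds: 10    # illustrative timing; tune per deployment
  failureThreshold: 3
  successThreshold: 1
```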
🧹 Nitpick comments (4)
components/backends/sglang/deploy/agg_router.yaml (1)
33-38: Huge gap between memory request (10 Gi) and limit (40 Gi)
A 4× delta can starve the node scheduler and hide real usage. Either raise the request or lower the limit to a realistic headroom (<2×).
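A hedged sketch of a tighter `resources` stanza; the 20 Gi / 32 Gi figures are illustrative assumptions and should be replaced with numbers derived from observed Frontend memory usage:

```yaml
resources:
  requests:
    memory: 20Gi   # illustrative: move the request toward observed usage
  limits:
    memory: 32Gi   # illustrative: keep the limit under ~2x the request
```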
components/backends/sglang/deploy/agg.yaml (1)
33-38: Frontend memory request vs limit mismatch (10 Gi vs 40 Gi)
Align requests with realistic usage to avoid over-commit surprises.
components/backends/sglang/deploy/disagg.yaml (2)
33-38: Memory request / limit skew (10 Gi → 40 Gi)
Consider tightening this gap for predictable bin-packing.
49-55: Worker liveness probe period 5 s with heavy model load
Decoding workers often spike >5 s under load; premature restarts are likely. A 15-20 s period with 3 failures is more forgiving while still responsive.
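A minimal sketch of the suggested relaxation, assuming the worker exposes `/live` over HTTP as described in the walkthrough; the port is a placeholder and must match the worker's actual serving port:

```yaml
livenessProbe:
  httpGet:
    path: /live        # worker liveness endpoint, per the walkthrough
    port: 9090         # placeholder port
  periodSeconds: 15    # 15-20 s, per the suggestion above
  failureThreshold: 3  # tolerate a few slow responses before restarting
  timeoutSeconds: 5
```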
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- components/backends/sglang/README.md (3 hunks)
- components/backends/sglang/deploy/agg.yaml (4 hunks)
- components/backends/sglang/deploy/agg_router.yaml (4 hunks)
- components/backends/sglang/deploy/disagg.yaml (5 hunks)
🧰 Additional context used
🧠 Learnings (9)
📓 Common learnings
Learnt from: julienmancuso
PR: ai-dynamo/dynamo#2012
File: deploy/cloud/helm/crds/templates/nvidia.com_dynamocomponentdeployments.yaml:1178-1180
Timestamp: 2025-07-18T16:05:05.534Z
Learning: The stopSignal field under lifecycle in DynamoComponentDeployment CRDs is autogenerated due to Kubernetes library upgrades (k8s.io/api and k8s.io/apimachinery from v0.32.3 to v0.33.1), not a manual design decision by the user.
Learnt from: julienmancuso
PR: ai-dynamo/dynamo#1365
File: deploy/cloud/operator/api/v1alpha1/dynamocomponentdeployment_types.go:171-178
Timestamp: 2025-06-04T13:09:53.416Z
Learning: The `DYN_DEPLOYMENT_CONFIG` environment variable (commonconsts.DynamoDeploymentConfigEnvVar) in the Dynamo operator will never be set via ValueFrom (secrets/config maps), only via direct Value assignment. The GetDynamoDeploymentConfig method correctly only checks env.Value for this specific environment variable.
Learnt from: julienmancuso
PR: ai-dynamo/dynamo#2012
File: deploy/cloud/helm/crds/templates/nvidia.com_dynamographdeployments.yaml:1233-1235
Timestamp: 2025-07-18T16:04:47.465Z
Learning: The `stopSignal` field in Kubernetes CRDs like DynamoGraphDeployment and DynamoComponentDeployment is autogenerated by controller-gen when upgrading Kubernetes library versions, and represents expected upstream API changes rather than manual code that needs custom validation.
📚 Learning: in components/backends/sglang/deploy/agg_router.yaml, the clear_namespace command is intentionally d...
Learnt from: biswapanda
PR: ai-dynamo/dynamo#2137
File: components/backends/sglang/deploy/agg_router.yaml:0-0
Timestamp: 2025-07-28T17:00:07.968Z
Learning: In components/backends/sglang/deploy/agg_router.yaml, the clear_namespace command is intentionally designed to block the router from starting if it fails (using &&). This is a deliberate design decision where namespace clearing is a critical prerequisite and the router should not start with an uncleared namespace.
Applied to files:
- components/backends/sglang/README.md
- components/backends/sglang/deploy/agg_router.yaml
📚 Learning: in examples/sglang/slurm_jobs/scripts/worker_setup.py, background processes (like nats-server, etcd)...
Learnt from: fsaady
PR: ai-dynamo/dynamo#1730
File: examples/sglang/slurm_jobs/scripts/worker_setup.py:230-244
Timestamp: 2025-07-03T10:14:30.570Z
Learning: In examples/sglang/slurm_jobs/scripts/worker_setup.py, background processes (like nats-server, etcd) are intentionally left running even if later processes fail. This design choice allows users to manually connect to nodes and debug issues without having to restart the entire SLURM job from scratch, providing operational flexibility for troubleshooting in cluster environments.
Applied to files:
- components/backends/sglang/README.md
- components/backends/sglang/deploy/agg_router.yaml
- components/backends/sglang/deploy/agg.yaml
- components/backends/sglang/deploy/disagg.yaml
📚 Learning: in the slurm job script template at examples/sglang/slurm_jobs/job_script_template.j2, the `--total_...
Learnt from: fsaady
PR: ai-dynamo/dynamo#1730
File: examples/sglang/slurm_jobs/job_script_template.j2:59-59
Timestamp: 2025-07-02T13:20:28.800Z
Learning: In the SLURM job script template at examples/sglang/slurm_jobs/job_script_template.j2, the `--total_nodes` parameter represents the total nodes per worker type (prefill or decode), not the total nodes in the entire cluster. Each worker type needs to know its own group size for distributed coordination.
Applied to files:
- components/backends/sglang/README.md
- components/backends/sglang/deploy/disagg.yaml
📚 Learning: in fault tolerance test configurations, the `resources` section under `serviceargs` specifies resour...
Learnt from: nnshah1
PR: ai-dynamo/dynamo#1444
File: tests/fault_tolerance/configs/agg_tp_1_dp_8.yaml:31-38
Timestamp: 2025-07-01T15:33:53.262Z
Learning: In fault tolerance test configurations, the `resources` section under `ServiceArgs` specifies resources per individual worker, not total resources for all workers. So `workers: 8` with `gpu: '1'` means 8 workers × 1 GPU each = 8 GPUs total.
Applied to files:
- components/backends/sglang/README.md
- components/backends/sglang/deploy/agg_router.yaml
- components/backends/sglang/deploy/agg.yaml
- components/backends/sglang/deploy/disagg.yaml
📚 Learning: in vllm worker deployments, startup probes (with longer periods and higher failure thresholds like p...
Learnt from: nnshah1
PR: ai-dynamo/dynamo#2124
File: components/backends/vllm/deploy/disagg.yaml:54-60
Timestamp: 2025-07-25T22:34:11.384Z
Learning: In vLLM worker deployments, startup probes (with longer periods and higher failure thresholds like periodSeconds: 10, failureThreshold: 60) are used to handle the slow model loading startup phase, while liveness probes are intentionally kept aggressive (periodSeconds: 5, failureThreshold: 1) for quick failure detection once the worker is operational. This pattern separates startup concerns from operational health monitoring in GPU-heavy workloads.
Applied to files:
- components/backends/sglang/deploy/agg_router.yaml
- components/backends/sglang/deploy/agg.yaml
- components/backends/sglang/deploy/disagg.yaml
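A hedged YAML sketch of the startup/liveness split described in the learning above; endpoints and port are placeholders rather than values taken from the vLLM manifests:

```yaml
# Generous startup probe: absorbs slow model loading (up to ~10 min here).
startupProbe:
  httpGet:
    path: /health     # placeholder endpoint
    port: 9090        # placeholder port
  periodSeconds: 10
  failureThreshold: 60
# Aggressive liveness probe once startup has succeeded: quick failure detection.
livenessProbe:
  httpGet:
    path: /live       # placeholder endpoint
    port: 9090
  periodSeconds: 5
  failureThreshold: 1
```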
📚 Learning: in vllm worker deployments, grep-based log checks for "vllmworker.*has been initialized" are appropr...
Learnt from: biswapanda
PR: ai-dynamo/dynamo#1890
File: examples/vllm/deploy/agg.yaml:63-70
Timestamp: 2025-07-14T23:01:16.218Z
Learning: In vLLM worker deployments, grep-based log checks for "VllmWorker.*has been initialized" are appropriate for readiness probes to verify worker startup, but should not be used for liveness probes which need to detect ongoing worker health.
Applied to files:
- components/backends/sglang/deploy/agg_router.yaml
- components/backends/sglang/deploy/agg.yaml
- components/backends/sglang/deploy/disagg.yaml
📚 Learning: in the dynamo operator, the project’s preferred security posture is to set a pod-level `podsecurityc...
Learnt from: julienmancuso
PR: ai-dynamo/dynamo#1474
File: deploy/cloud/operator/internal/controller/dynamocomponent_controller.go:1302-1306
Timestamp: 2025-06-11T21:18:00.425Z
Learning: In the Dynamo operator, the project’s preferred security posture is to set a Pod-level `PodSecurityContext` with `runAsUser`, `runAsGroup`, and `fsGroup` all set to `1000`, and then selectively override the user at the individual container level (e.g., `RunAsUser: 0` for Kaniko) when root is required.
Applied to files:
- components/backends/sglang/deploy/agg_router.yaml
- components/backends/sglang/deploy/agg.yaml
- components/backends/sglang/deploy/disagg.yaml
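A short YAML sketch of the security posture described in the learning above: pod-level defaults at 1000/1000/1000 with a selective root override for the Kaniko container (container and image names are placeholders):

```yaml
securityContext:               # pod-level defaults
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000
containers:
  - name: component            # placeholder container, inherits the pod defaults
    image: example/component
  - name: kaniko               # root required for image builds
    image: gcr.io/kaniko-project/executor:latest
    securityContext:
      runAsUser: 0             # container-level override
```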
📚 Learning: the stopsignal field under lifecycle in dynamocomponentdeployment crds is autogenerated due to kuber...
Learnt from: julienmancuso
PR: ai-dynamo/dynamo#2012
File: deploy/cloud/helm/crds/templates/nvidia.com_dynamocomponentdeployments.yaml:1178-1180
Timestamp: 2025-07-18T16:05:05.534Z
Learning: The stopSignal field under lifecycle in DynamoComponentDeployment CRDs is autogenerated due to Kubernetes library upgrades (k8s.io/api and k8s.io/apimachinery from v0.32.3 to v0.33.1), not a manual design decision by the user.
Applied to files:
- components/backends/sglang/deploy/agg.yaml
- components/backends/sglang/deploy/disagg.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: pre-merge-rust (lib/bindings/python)
- GitHub Check: pre-merge-rust (lib/runtime/examples)
- GitHub Check: pre-merge-rust (.)
- GitHub Check: Build and Test - vllm
🔇 Additional comments (1)
components/backends/sglang/deploy/disagg.yaml (1)
84-90: Prefill worker startupProbe checks /health, decode worker uses /live
Ensure both endpoints behave consistently; otherwise one worker type may never become Ready.
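If the intent is for both worker types to gate startup on the same signal, a sketch of an aligned startup probe could look like this; whether `/health` or `/live` is the right endpoint (and which port it serves on) is an assumption to verify against the SGLang worker:

```yaml
# Identical startup probe stanza for both prefill and decode workers,
# so neither type depends on an endpoint the other treats differently.
startupProbe:
  httpGet:
    path: /health      # assumed shared endpoint; confirm /health vs /live semantics
    port: 9090         # placeholder port
  periodSeconds: 10
  failureThreshold: 60
```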
Overview:
Details:
Where should the reviewer start?
Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)
Summary by CodeRabbit
Documentation
Chores