chore: updated health checks to use new probes #2124
Conversation
Walkthrough

The Kubernetes deployment YAML files for various vLLM worker components were updated to replace shell command-based health probes with HTTP GET requests to a `/health` endpoint.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant K8s as Kubernetes
    participant Worker as vLLM Worker (Decode/Prefill)
    participant HealthEP as /health Endpoint
    K8s->>Worker: Deploy container with env vars (DYN_SYSTEM_ENABLED, etc.)
    loop Health Monitoring
        K8s->>HealthEP: HTTP GET /health (liveness/readiness/startup)
        HealthEP-->>K8s: 200 OK / error
    end
```
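For orientation, here is a minimal sketch of what the HTTP-probe shape described above might look like inside a worker container spec. Only the `/health` path and the readiness `periodSeconds: 10` / `failureThreshold: 60` pairing are stated elsewhere in this review; the port, the liveness timings, and the env var value are illustrative assumptions.

```yaml
# Hedged sketch of the probe shape described in the walkthrough.
# Port 9090, the liveness timings, and the env var value are assumptions,
# not taken from the PR diff.
env:
  - name: DYN_SYSTEM_ENABLED
    value: "true"            # value assumed; only the variable name appears above
livenessProbe:
  httpGet:
    path: /health
    port: 9090               # illustrative system port
  periodSeconds: 5           # assumed; the review flags the real values as aggressive
  failureThreshold: 1
readinessProbe:
  httpGet:
    path: /health
    port: 9090
  periodSeconds: 10
  failureThreshold: 60       # ~10 min window discussed in the comments below
```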
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
Actionable comments posted: 1
♻️ Duplicate comments (28)
components/backends/vllm/deploy/disagg.yaml (4)
102-108: Duplicate of the liveness-probe comment above – same parameters, same concern.
109-115: Duplicate of the readiness-probe comment above – same parameters, same concern.
124-128: Duplicate of the quoting-nit comment above.
131-136: Duplicate of the startup-probe comment above.

components/backends/vllm/deploy/disagg_router.yaml (8)
54-60: Same liveness-probe aggressiveness as noted in disagg.yaml – recommend raising `failureThreshold` to ≥3.
61-67: Same readiness-probe 10 min window concern.
76-80: Same quoting nit.
83-88: Same startup-probe confirmation note.
102-108: Duplicate liveness-probe concern for Prefill worker.
109-115: Duplicate readiness-probe concern for Prefill worker.
124-128: Duplicate quoting nit.
131-136: Duplicate startup-probe confirmation note.

components/backends/vllm/deploy/agg.yaml (4)
51-57: Liveness probe overly aggressive – same recommendation: increase `failureThreshold` (≥3) or widen the period to avoid churn.
58-64: Readiness probe 10 min window – consider lowering threshold.
76-80: Quoting nit – see earlier comment.
83-88: Startup-probe confirmation – same as earlier.

components/backends/vllm/deploy/disagg_planner.yaml (8)
54-60: Liveness probe aggressiveness – same recommendation.
61-67: Readiness probe long window – same recommendation.
76-80: Quoting nit.
83-88: Startup-probe confirmation.
102-108: Duplicate liveness-probe concern for Prefill worker.
109-115: Duplicate readiness-probe concern for Prefill worker.
124-128: Duplicate quoting nit.
131-136: Duplicate startup-probe confirmation note.

components/backends/vllm/deploy/agg_router.yaml (4)
51-57: Same liveness-probe aggressiveness – raise `failureThreshold`.
58-64: Same readiness-probe long window – consider lowering threshold.
76-80: Quoting nit.
83-88: Startup-probe confirmation.
🧹 Nitpick comments (2)
components/backends/vllm/deploy/disagg.yaml (2)
61-67: Readiness probe may hide real issues for up to 10 min.
`periodSeconds: 10` combined with `failureThreshold: 60` means the worker can return unhealthy for ~10 minutes before being removed from service. That defeats fast load-balancer eviction during failures. Recommend cutting the threshold to something like 12–18 (2–3 min) unless you have a documented need for the longer window.
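A minimal sketch of the suggested tightening, assuming the probe keeps the same `/health` target (the port shown is an illustrative placeholder, not taken from the manifest):

```yaml
readinessProbe:
  httpGet:
    path: /health
    port: 9090            # illustrative; use whatever port the worker actually exposes
  periodSeconds: 10
  failureThreshold: 18    # ~3 min to eviction instead of the current ~10 min (60 × 10 s)
```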
76-80: YAML quoting nit – unnecessary escape noise.

The JSON string is currently double-quoted and the inner quotes are backslash-escaped:

```yaml
value: "[\"generate\"]"
```

Single-quoting is simpler and easier to read:

```diff
- value: "[\"generate\"]"
+ value: '["generate"]'
```

The resulting env var payload is identical.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- components/backends/vllm/deploy/agg.yaml (2 hunks)
- components/backends/vllm/deploy/agg_router.yaml (2 hunks)
- components/backends/vllm/deploy/disagg.yaml (4 hunks)
- components/backends/vllm/deploy/disagg_planner.yaml (4 hunks)
- components/backends/vllm/deploy/disagg_router.yaml (4 hunks)
🧰 Additional context used
🧠 Learnings (6)
📓 Common learnings
Learnt from: biswapanda
PR: ai-dynamo/dynamo#1890
File: examples/vllm/deploy/agg.yaml:63-70
Timestamp: 2025-07-14T23:01:16.218Z
Learning: In vLLM worker deployments, grep-based log checks for "VllmWorker.*has been initialized" are appropriate for readiness probes to verify worker startup, but should not be used for liveness probes which need to detect ongoing worker health.
Learnt from: julienmancuso
PR: ai-dynamo/dynamo#2012
File: deploy/cloud/helm/crds/templates/nvidia.com_dynamocomponentdeployments.yaml:1178-1180
Timestamp: 2025-07-18T16:05:05.534Z
Learning: The stopSignal field under lifecycle in DynamoComponentDeployment CRDs is autogenerated due to Kubernetes library upgrades (k8s.io/api and k8s.io/apimachinery from v0.32.3 to v0.33.1), not a manual design decision by the user.
components/backends/vllm/deploy/disagg_router.yaml (4)
Learnt from: biswapanda
PR: #1890
File: examples/vllm/deploy/agg.yaml:63-70
Timestamp: 2025-07-14T23:01:16.218Z
Learning: In vLLM worker deployments, grep-based log checks for "VllmWorker.*has been initialized" are appropriate for readiness probes to verify worker startup, but should not be used for liveness probes which need to detect ongoing worker health.
Learnt from: julienmancuso
PR: #2012
File: deploy/cloud/helm/crds/templates/nvidia.com_dynamocomponentdeployments.yaml:1178-1180
Timestamp: 2025-07-18T16:05:05.534Z
Learning: The stopSignal field under lifecycle in DynamoComponentDeployment CRDs is autogenerated due to Kubernetes library upgrades (k8s.io/api and k8s.io/apimachinery from v0.32.3 to v0.33.1), not a manual design decision by the user.
Learnt from: nnshah1
PR: #1444
File: tests/fault_tolerance/configs/agg_tp_1_dp_8.yaml:31-38
Timestamp: 2025-07-01T15:33:53.262Z
Learning: In fault tolerance test configurations, the resources section under ServiceArgs specifies resources per individual worker, not total resources for all workers. So workers: 8 with gpu: '1' means 8 workers × 1 GPU each = 8 GPUs total.
Learnt from: julienmancuso
PR: #1474
File: deploy/cloud/operator/internal/controller/dynamocomponent_controller.go:1302-1306
Timestamp: 2025-06-11T21:18:00.425Z
Learning: In the Dynamo operator, the project’s preferred security posture is to set a Pod-level PodSecurityContext with runAsUser, runAsGroup, and fsGroup all set to 1000, and then selectively override the user at the individual container level (e.g., RunAsUser: 0 for Kaniko) when root is required.
components/backends/vllm/deploy/agg.yaml (4)
Learnt from: biswapanda
PR: #1890
File: examples/vllm/deploy/agg.yaml:63-70
Timestamp: 2025-07-14T23:01:16.218Z
Learning: In vLLM worker deployments, grep-based log checks for "VllmWorker.*has been initialized" are appropriate for readiness probes to verify worker startup, but should not be used for liveness probes which need to detect ongoing worker health.
Learnt from: julienmancuso
PR: #2012
File: deploy/cloud/helm/crds/templates/nvidia.com_dynamocomponentdeployments.yaml:1178-1180
Timestamp: 2025-07-18T16:05:05.534Z
Learning: The stopSignal field under lifecycle in DynamoComponentDeployment CRDs is autogenerated due to Kubernetes library upgrades (k8s.io/api and k8s.io/apimachinery from v0.32.3 to v0.33.1), not a manual design decision by the user.
Learnt from: GuanLuo
PR: #1371
File: examples/llm/benchmarks/vllm_multinode_setup.sh:18-25
Timestamp: 2025-06-05T01:46:15.509Z
Learning: In multi-node setups with head/worker architecture, the head node typically doesn't need environment variables pointing to its own services (like NATS_SERVER, ETCD_ENDPOINTS) because local processes can access them via localhost. Only worker nodes need these environment variables to connect to the head node's external IP address.
Learnt from: julienmancuso
PR: #1474
File: deploy/cloud/operator/internal/controller/dynamocomponent_controller.go:1302-1306
Timestamp: 2025-06-11T21:18:00.425Z
Learning: In the Dynamo operator, the project’s preferred security posture is to set a Pod-level PodSecurityContext with runAsUser, runAsGroup, and fsGroup all set to 1000, and then selectively override the user at the individual container level (e.g., RunAsUser: 0 for Kaniko) when root is required.
components/backends/vllm/deploy/disagg_planner.yaml (4)
Learnt from: biswapanda
PR: #1890
File: examples/vllm/deploy/agg.yaml:63-70
Timestamp: 2025-07-14T23:01:16.218Z
Learning: In vLLM worker deployments, grep-based log checks for "VllmWorker.*has been initialized" are appropriate for readiness probes to verify worker startup, but should not be used for liveness probes which need to detect ongoing worker health.
Learnt from: julienmancuso
PR: #2012
File: deploy/cloud/helm/crds/templates/nvidia.com_dynamocomponentdeployments.yaml:1178-1180
Timestamp: 2025-07-18T16:05:05.534Z
Learning: The stopSignal field under lifecycle in DynamoComponentDeployment CRDs is autogenerated due to Kubernetes library upgrades (k8s.io/api and k8s.io/apimachinery from v0.32.3 to v0.33.1), not a manual design decision by the user.
Learnt from: nnshah1
PR: #1444
File: tests/fault_tolerance/configs/agg_tp_1_dp_8.yaml:31-38
Timestamp: 2025-07-01T15:33:53.262Z
Learning: In fault tolerance test configurations, the resources section under ServiceArgs specifies resources per individual worker, not total resources for all workers. So workers: 8 with gpu: '1' means 8 workers × 1 GPU each = 8 GPUs total.
Learnt from: julienmancuso
PR: #1474
File: deploy/cloud/operator/internal/controller/dynamocomponent_controller.go:1302-1306
Timestamp: 2025-06-11T21:18:00.425Z
Learning: In the Dynamo operator, the project’s preferred security posture is to set a Pod-level PodSecurityContext with runAsUser, runAsGroup, and fsGroup all set to 1000, and then selectively override the user at the individual container level (e.g., RunAsUser: 0 for Kaniko) when root is required.
components/backends/vllm/deploy/agg_router.yaml (3)
Learnt from: biswapanda
PR: #1890
File: examples/vllm/deploy/agg.yaml:63-70
Timestamp: 2025-07-14T23:01:16.218Z
Learning: In vLLM worker deployments, grep-based log checks for "VllmWorker.*has been initialized" are appropriate for readiness probes to verify worker startup, but should not be used for liveness probes which need to detect ongoing worker health.
Learnt from: nnshah1
PR: #1444
File: tests/fault_tolerance/configs/agg_tp_1_dp_8.yaml:31-38
Timestamp: 2025-07-01T15:33:53.262Z
Learning: In fault tolerance test configurations, the resources section under ServiceArgs specifies resources per individual worker, not total resources for all workers. So workers: 8 with gpu: '1' means 8 workers × 1 GPU each = 8 GPUs total.
Learnt from: julienmancuso
PR: #1474
File: deploy/cloud/operator/internal/controller/dynamocomponent_controller.go:1302-1306
Timestamp: 2025-06-11T21:18:00.425Z
Learning: In the Dynamo operator, the project’s preferred security posture is to set a Pod-level PodSecurityContext with runAsUser, runAsGroup, and fsGroup all set to 1000, and then selectively override the user at the individual container level (e.g., RunAsUser: 0 for Kaniko) when root is required.
components/backends/vllm/deploy/disagg.yaml (4)
Learnt from: biswapanda
PR: #1890
File: examples/vllm/deploy/agg.yaml:63-70
Timestamp: 2025-07-14T23:01:16.218Z
Learning: In vLLM worker deployments, grep-based log checks for "VllmWorker.*has been initialized" are appropriate for readiness probes to verify worker startup, but should not be used for liveness probes which need to detect ongoing worker health.
Learnt from: julienmancuso
PR: #2012
File: deploy/cloud/helm/crds/templates/nvidia.com_dynamocomponentdeployments.yaml:1178-1180
Timestamp: 2025-07-18T16:05:05.534Z
Learning: The stopSignal field under lifecycle in DynamoComponentDeployment CRDs is autogenerated due to Kubernetes library upgrades (k8s.io/api and k8s.io/apimachinery from v0.32.3 to v0.33.1), not a manual design decision by the user.
Learnt from: nnshah1
PR: #1444
File: tests/fault_tolerance/configs/agg_tp_1_dp_8.yaml:31-38
Timestamp: 2025-07-01T15:33:53.262Z
Learning: In fault tolerance test configurations, the resources section under ServiceArgs specifies resources per individual worker, not total resources for all workers. So workers: 8 with gpu: '1' means 8 workers × 1 GPU each = 8 GPUs total.
Learnt from: julienmancuso
PR: #1474
File: deploy/cloud/operator/internal/controller/dynamocomponent_controller.go:1302-1306
Timestamp: 2025-06-11T21:18:00.425Z
Learning: In the Dynamo operator, the project’s preferred security posture is to set a Pod-level PodSecurityContext with runAsUser, runAsGroup, and fsGroup all set to 1000, and then selectively override the user at the individual container level (e.g., RunAsUser: 0 for Kaniko) when root is required.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: pre-merge-rust (.)
- GitHub Check: pre-merge-rust (lib/runtime/examples)
- GitHub Check: pre-merge-rust (lib/bindings/python)
- GitHub Check: Build and Test - vllm
🔇 Additional comments (2)
components/backends/vllm/deploy/disagg.yaml (1)
83-88: Startup probe period/threshold duplicates readiness – confirm intent.
`startupProbe` uses the same `/health` endpoint with `periodSeconds: 10`, `failureThreshold: 60` (10 min). Once the probe succeeds it is disabled, so a 10-minute window is reasonable for model load.
Just ensure:
- The `/health` endpoint returns 5xx until the model is fully loaded, otherwise readiness may succeed prematurely.
- Memory/CPU limits can accommodate the long warm-up without OOM-killing the pod.
No action required if both are confirmed.
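For reference, a sketch of the startup-probe block as described in the comment above; only the path, `periodSeconds`, and `failureThreshold` are stated there, the port is an assumption:

```yaml
startupProbe:
  httpGet:
    path: /health
    port: 9090            # assumed port, not confirmed in this thread
  periodSeconds: 10
  failureThreshold: 60    # up to ~10 min for model load; probe is disabled once it succeeds
```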
components/backends/vllm/deploy/agg_router.yaml (1)
66-66: Replica reduction – validate load capacity.

Worker replicas dropped from 2 → 1. Ensure this capacity change is intentional and that the router still meets latency & throughput SLOs. If accidental, revert or adjust the HPA accordingly.
keivenchang
left a comment
I don't have a problem with the changes as long as you've tested it. I'm curious how the Runtime knows which port to bind to though.
keivenchang
left a comment
LGTM
Overview:
Now that health checks are natively enabled in the worker, this updates the deployments to use them instead of the placeholder checks.
Details:
Where should the reviewer start?
The deployment YAMLs under components/backends/vllm/deploy/.
@tedzhouhk - question for confirmation - do the prefill workers register a "generate" endpoint with the dynamo runtime? Want to make sure that is the flow.
Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)