[Bug][Router] ZombieIP: router routes to model pod that doesn't exist anymore #666
Conversation
Summary of Changes
Hello @jonoillar, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a 'Zombie IP' bug in the router's Kubernetes service discovery, where the router might attempt to route requests to model pods that no longer exist. The fix involves a significant refactoring of the service discovery mechanism to improve its robustness and responsiveness to Kubernetes events. By decoupling the event watcher from the processing logic, introducing an asynchronous event queue with deduplication, and utilizing Kubernetes resource versioning, the router can now more reliably track and update the status of available model engines, preventing routing to stale or terminated pods.
Highlights
- Asynchronous Event Processing: The Kubernetes watcher now only enqueues events, while a new asynchronous processor handles the actual updates to available engines. This prevents slow API calls (e.g., checking model status) from blocking the event stream.
- Event Queue with Deduplication: A key-based queue (using `OrderedDict`) ensures that only the latest event for a given pod is processed, effectively debouncing rapid updates and preventing redundant work.
- Kubernetes Resource Versioning: The watcher now leverages Kubernetes resource versions, allowing it to efficiently retrieve only new events since the last successful watch, reducing network traffic and processing load.
- Migration to aiohttp: All HTTP requests for checking engine status and retrieving model information have been migrated from synchronous `requests` to asynchronous `aiohttp`, aligning with the new async processing model.
- Comprehensive Test Coverage: A new test file has been added, providing detailed scenarios to validate the improved service discovery logic, including handling of pod additions and deletions, and resilience to slow external API responses.
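The enqueue/process split and the `OrderedDict`-based deduplication described above can be sketched roughly as follows. All names here are illustrative assumptions, not the actual vllm_router code:

```python
import asyncio
from collections import OrderedDict


class EventQueue:
    """Key-based queue: keeps only the latest event per pod (debouncing)."""

    def __init__(self):
        self._events = OrderedDict()  # pod_name -> latest event
        self._not_empty = asyncio.Event()

    def put(self, pod_name, event):
        # A newer event for the same pod replaces the stale one,
        # so a rapid ADDED->DELETED burst collapses to just DELETED.
        self._events.pop(pod_name, None)
        self._events[pod_name] = event
        self._not_empty.set()

    async def get(self):
        while not self._events:
            self._not_empty.clear()
            await self._not_empty.wait()
        return self._events.popitem(last=False)  # FIFO over pods


async def process_events(queue, engines):
    """Async processor: applies events without blocking the watcher.

    Slow API calls (e.g. model-status checks) happen here, so the
    Kubernetes watch stream is never held up by them.
    """
    while True:
        pod_name, event = await queue.get()
        if event["type"] == "DELETED":
            engines.pop(pod_name, None)  # drop the zombie IP
        else:  # ADDED / MODIFIED
            engines[pod_name] = event["pod_ip"]
```

The watcher's only job is to call `queue.put(pod_name, event)`; a single `process_events` task owns all writes to `engines`, which is what makes stale-pod cleanup deterministic.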
Code Review
This pull request refactors the Kubernetes service discovery mechanism to address a "Zombie IP" bug by decoupling event watching from event processing using an asynchronous worker. The overall architectural change is a significant improvement. However, the review has identified a few critical issues. A race condition has been introduced in the new event queuing logic that could lead to data corruption. Furthermore, the new test suite added to validate this fix is fundamentally flawed; it fails to test the new asynchronous code, uses incorrect mocks, and asserts the presence of the old bug rather than verifying the fix. There is also a medium-severity issue regarding inefficient creation of aiohttp sessions. These issues should be addressed to ensure the stability and correctness of the fix.
Signed-off-by: jonoillar <[email protected]>
Gemini Code Assist said this. Can it show me what it means by inefficient creation of aiohttp sessions? I'm interested in fixing it.

I would also like to contribute test cases for this service discovery (k8s pod service discovery), since I cannot see any at the moment and I have a hunch this might be one of the most used ones.
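What that review comment most likely refers to is a common aiohttp pitfall (sketched here with hypothetical names, not the router's actual code): opening a new `aiohttp.ClientSession` for every request creates and tears down a connection pool each time, instead of creating one session at startup and reusing it.

```python
import aiohttp


# Inefficient: a new ClientSession (and its connection pool)
# is created and torn down on every single health check.
async def check_engine_status_per_request(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            return resp.status


# Better: create one session at startup and reuse it, so TCP
# connections are pooled and keep-alive is honored across checks.
class EngineClient:
    def __init__(self):
        self._session = None

    async def start(self):
        self._session = aiohttp.ClientSession()

    async def check_status(self, url):
        async with self._session.get(url) as resp:
            return resp.status

    async def close(self):
        await self._session.close()
```

The aiohttp documentation recommends a single session per application; a shared session avoids the repeated connector setup/teardown the review flagged.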
Hi, this is a huge change, to be honest. If I understand correctly, the problem is that the delete event didn't get processed as expected. After reviewing this PR, I proposed a quick fix, #668, which makes the k8s watch stream a long-lasting connection and handles the delete event immediately. It seems to solve issue #656 on my side; let me know if this still has problems.
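A long-lived watch loop that applies DELETED events the moment they arrive can be sketched as below. The event-handling core is shown as a pure function over an iterable of watch-style events; in the real router that iterable would come from the official kubernetes client (e.g. `watch.Watch().stream(...)`). All names are illustrative assumptions, not the actual code from #668:

```python
def apply_watch_events(events, engines):
    """Apply Kubernetes watch events to the map of available engines.

    `events` is any iterable of dicts shaped like the kubernetes
    client's watch events: {"type": "ADDED" | "MODIFIED" | "DELETED",
    "object": pod}. DELETED removes the pod immediately, closing the
    zombie-IP window.
    """
    for event in events:
        pod = event["object"]
        name = pod["metadata"]["name"]
        if event["type"] == "DELETED":
            engines.pop(name, None)  # drop the stale IP right away
        elif pod.get("status", {}).get("podIP"):
            engines[name] = pod["status"]["podIP"]
    return engines
```

With a long-lasting watch connection, the stream yields the DELETED event as soon as the pod terminates, so the router stops routing to its old IP without waiting for the next poll.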
Hey @TheCodeWrangler, would that PR partially solve your issue #431?
Stress test results for the 2-pod deployment with the Zombie-router IP fix: latest commit, no big change.
Hi, can you fix the CI check? Also, there is a bug in service discovery concerning PD; we need to fix that first before making this change to service discovery.
@zerofishnoodles I'm checking it. Side question: why is the kvaware image so big? I was checking the kvaware routing logic, but I cannot see any valid reason why this image should be that big. If I have time I'll investigate further, but if you already have some insights, can you share them? Thanks 🥇
The kvaware Docker image includes the lmcache package, which makes it very big.
I was under the assumption that the … However, this image has not been updated in 3 months. For example, the PR #589 that replaces … So my question is: if I want to use the … Sorry if this is off topic; if there is no quick answer to it, I'm happy to open an issue and discuss it further.
@zerofishnoodles about the failing test: the problem is with the … My branch is branching off this commit: … I checked out this commit and ran this script: … The same error occurred. So the problem is that the error was present before my changes. I tested with the latest version and faced the same issue.

Full reproducible steps: create a virtual environment within the repository we are testing; activate the env; install the necessary files (as described in https://github.com/vllm-project/production-stack/actions/runs/17263806289/workflow#L112); create the Dockerfile; run the tests. I'm running the tests on a machine that has 4 …

EDIT: I just saw this MR: #663, so I believe you already know that the PD check fails.
I fixed CI and rebased the branch with the latest changes :)
Right now you may still need to use the kvaware image for kvaware functionality; it is supposed to be merged into latest in the future.
Yes, there are some bugs in PD that the current CI can't discover; they will be fixed through #663. We are waiting for the bug fix for PD and will then fix the CI for PD. Since the bug happened in the service_discovery too, and it blocked the PD functionality, we need to wait till that one gets fixed.
FIX #656
Fixed the Zombie IP problem by implementing this:
PRACTICAL BENEFITS: