
Conversation


@jonoillar jonoillar commented Aug 26, 2025


FIX #656



  • Make sure the code changes pass the pre-commit checks.
  • Sign off your commit by using -s when doing git commit.
  • Try to classify PRs for easy understanding of the type of changes, such as [Bugfix], [Feat], and [CI].
Detailed Checklist

Thank you for your contribution to production-stack! Before submitting the pull request, please ensure the PR meets the following criteria. This helps us maintain the code quality and improve the efficiency of the review process.

PR Title and Classification

Please classify the PR so the type of change is easy to understand. The PR title should be prefixed appropriately to indicate the type of change, using one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Feat] for new features in the cluster (e.g., autoscaling, disaggregated prefill, etc.).
  • [Router] for changes to the vllm_router (e.g., routing algorithm, router observability, etc.).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • Pass all linter checks. Please use pre-commit to format your code. See README.md for installation.
  • The code needs to be well documented to ensure future contributors can easily understand it.
  • Please include sufficient tests to ensure the change stays correct and robust. This includes both unit tests and integration tests.

DCO and Signed-off-by

When contributing changes to this project, you must agree to the DCO. Commits must include a Signed-off-by: header which certifies agreement with the terms of the DCO.

Using -s with git commit will automatically add this header.

What to Expect for the Reviews

We aim to address all PRs in a timely manner. If no one reviews your PR within 5 days, please @-mention one of YuhanLiu11, Shaoting-Feng, or ApostaC.

Fixed the Zombie IP problem by implementing the following:

  1. Decouple the watcher and the worker. The watcher's only job is to get events and add them to a queue. The worker does the rest: it reads events from the queue and processes them (adding available engines, deleting them). The worker handles these tasks asynchronously, so that if processing one event blocks, it can still process other events (see the sketch after this list).
     • Enqueue by key -> this way, if a new MODIFIED event for a pod arrives after an ADDED event has already been queued, only the MODIFIED event is kept and the ADDED event is removed from the queue. There is no need to process both events in that case.
  2. Add resource versioning to the watcher -> this way, it won't enqueue too many events, but only the ones that really changed.
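
A minimal sketch of the watcher/worker split and the key-based queue described above (illustrative only; EventQueue, worker, and process_pod_event are made-up names, not the actual production-stack code):

import asyncio
from collections import OrderedDict

class EventQueue:
    """Key-based queue: only the latest event per pod is kept."""

    def __init__(self):
        self._events = OrderedDict()  # pod name -> latest event
        self._not_empty = asyncio.Condition()

    async def put(self, pod_name, event):
        async with self._not_empty:
            # A newer MODIFIED/DELETED event replaces an older queued ADDED event.
            self._events.pop(pod_name, None)
            self._events[pod_name] = event
            self._not_empty.notify()

    async def get(self):
        async with self._not_empty:
            while not self._events:
                await self._not_empty.wait()
            return self._events.popitem(last=False)  # oldest key first

async def worker(queue, process_pod_event):
    """Drains the queue; a slow pod check does not block the watcher."""
    while True:
        pod_name, event = await queue.get()
        # Fire-and-forget so one blocking event cannot stall the others
        # (a real implementation would keep task references and handle errors).
        asyncio.create_task(process_pod_event(pod_name, event))

The watcher itself only calls queue.put(pod_name, event) for each Kubernetes event; anything slow happens in the worker.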

PRACTICAL BENEFITS:

  • No more Zombie IPs
  • Resource versioning of the Kubernetes watch -> the model pods are ADDED only once, avoiding cluttering the vllm router with useless logs (a watch-loop sketch follows below)
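
As an illustration of what a resource-versioned watch loop can look like (a sketch using the kubernetes Python client; the namespace, label selector, and function name watch_model_pods are assumptions, not the PR's actual code):

from kubernetes import client, config, watch

def watch_model_pods(namespace="default", label_selector="app=vllm"):
    config.load_incluster_config()
    v1 = client.CoreV1Api()

    # The initial list establishes the resourceVersion to resume from.
    pods = v1.list_namespaced_pod(namespace, label_selector=label_selector)
    resource_version = pods.metadata.resource_version

    w = watch.Watch()
    while True:
        try:
            for event in w.stream(v1.list_namespaced_pod,
                                  namespace=namespace,
                                  label_selector=label_selector,
                                  resource_version=resource_version,
                                  timeout_seconds=30):
                pod = event["object"]
                resource_version = pod.metadata.resource_version
                yield event["type"], pod  # ADDED / MODIFIED / DELETED
        except client.exceptions.ApiException as e:
            if e.status == 410:  # resourceVersion too old: re-list and resume
                pods = v1.list_namespaced_pod(namespace, label_selector=label_selector)
                resource_version = pods.metadata.resource_version
            else:
                raise

Tracking the resource version means a restarted watch resumes where it left off instead of replaying ADDED events for pods it already knows about.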

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @jonoillar, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a 'Zombie IP' bug in the router's Kubernetes service discovery, where the router might attempt to route requests to model pods that no longer exist. The fix involves a significant refactoring of the service discovery mechanism to improve its robustness and responsiveness to Kubernetes events. By decoupling the event watcher from the processing logic, introducing an asynchronous event queue with deduplication, and utilizing Kubernetes resource versioning, the router can now more reliably track and update the status of available model engines, preventing routing to stale or terminated pods.

Highlights

  • Asynchronous Event Processing: The Kubernetes watcher now only enqueues events, while a new asynchronous processor handles the actual updates to available engines. This prevents slow API calls (e.g., checking model status) from blocking the event stream.
  • Event Queue with Deduplication: A key-based queue (using OrderedDict) ensures that only the latest event for a given pod is processed, effectively debouncing rapid updates and preventing redundant work.
  • Kubernetes Resource Versioning: The watcher now leverages Kubernetes resource versions, allowing it to efficiently retrieve only new events since the last successful watch, reducing network traffic and processing load.
  • Migration to aiohttp: All HTTP requests for checking engine status and retrieving model information have been migrated from synchronous requests to asynchronous aiohttp, aligning with the new async processing model (a rough sketch follows this list).
  • Comprehensive Test Coverage: A new test file has been added, providing detailed scenarios to validate the improved service discovery logic, including handling of pod additions, deletions, and the resilience to slow external API responses.
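
To illustrate the kind of change the aiohttp migration implies (a rough sketch only; the /v1/models endpoint, port, and function name get_model_names are assumptions, not the PR's exact code):

import aiohttp

async def get_model_names(pod_ip: str, port: int = 8000, timeout_s: float = 5.0) -> list[str]:
    url = f"http://{pod_ip}:{port}/v1/models"
    timeout = aiohttp.ClientTimeout(total=timeout_s)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.get(url) as resp:
            resp.raise_for_status()
            data = await resp.json()
    # An OpenAI-compatible /v1/models response lists models under "data".
    return [m["id"] for m in data.get("data", [])]

Because this coroutine is awaited inside the event processor, a slow or unreachable pod no longer blocks the watch stream.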

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the Kubernetes service discovery mechanism to address a "Zombie IP" bug by decoupling event watching from event processing using an asynchronous worker. The overall architectural change is a significant improvement. However, the review has identified a few critical issues. A race condition has been introduced in the new event queuing logic that could lead to data corruption. Furthermore, the new test suite added to validate this fix is fundamentally flawed; it fails to test the new asynchronous code, uses incorrect mocks, and asserts the presence of the old bug rather than verifying the fix. There is also a medium-severity issue regarding inefficient creation of aiohttp sessions. These issues should be addressed to ensure the stability and correctness of the fix.

@jonoillar jonoillar force-pushed the router/zombie-ip-fix branch 2 times, most recently from 1db8c3b to a9c6887 on August 26, 2025 14:51
@jonoillar jonoillar marked this pull request as draft August 26, 2025 14:59
@jonoillar jonoillar marked this pull request as ready for review August 26, 2025 17:22
Jon OILLARBURU and others added 16 commits August 27, 2025 10:18
Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
@jonoillar jonoillar force-pushed the router/zombie-ip-fix branch from 6d8e51d to 862a268 on August 27, 2025 08:18
Signed-off-by: jonoillar <[email protected]>
@jonoillar
Author

There is also a medium-severity issue regarding inefficient creation of aiohttp sessions. These issues should be addressed to ensure the stability and correctness of the fix.

Gemini Code Assist said this. Can it show me what it means by inefficient creation of aiohttp sessions? I'm interested in fixing it.

@YuhanLiu11
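
For reference, "inefficient creation of aiohttp sessions" typically refers to constructing a new aiohttp.ClientSession for every request instead of reusing one. A reusable pattern might look like this (illustrative sketch only; EngineClient is a made-up name, not the router's actual code):

import aiohttp
from typing import Optional

class EngineClient:
    """Keeps one ClientSession for the component's lifetime instead of one per request."""

    def __init__(self, timeout_s: float = 5.0):
        self._timeout = aiohttp.ClientTimeout(total=timeout_s)
        self._session: Optional[aiohttp.ClientSession] = None

    async def _get_session(self) -> aiohttp.ClientSession:
        # Created lazily so the session is born inside the running event loop.
        if self._session is None or self._session.closed:
            self._session = aiohttp.ClientSession(timeout=self._timeout)
        return self._session

    async def get_json(self, url: str) -> dict:
        session = await self._get_session()
        async with session.get(url) as resp:
            resp.raise_for_status()
            return await resp.json()

    async def close(self) -> None:
        if self._session is not None and not self._session.closed:
            await self._session.close()

Reusing the session lets aiohttp pool connections, avoiding a TCP (and possibly TLS) handshake per health check.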

@jonoillar
Author

I would also like to contribute test cases for this service discovery (k8s pod service discovery), since I cannot see any at the moment and I have a hunch this might be one of the most used ones.
Could someone guide me through some ideas on how to create those tests? I tried unit tests, but ended up having to do so many patches that the tests would break at every change of the code.
Maybe in another PR?
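
One possible direction, as a sketch only: drive the queue and worker directly with synthetic events instead of patching the Kubernetes client. This assumes pytest with pytest-asyncio and reuses the hypothetical EventQueue sketched in the PR description above, not the real class:

import pytest

@pytest.mark.asyncio
async def test_latest_event_per_pod_wins():
    queue = EventQueue()  # the illustrative dedup queue sketched earlier
    await queue.put("pod-a", {"type": "ADDED"})
    await queue.put("pod-a", {"type": "DELETED"})  # replaces the queued ADDED

    key, event = await queue.get()
    assert key == "pod-a"
    assert event["type"] == "DELETED"

Feeding synthetic events like this keeps the tests independent of the Kubernetes client internals, which is what makes heavily patched tests brittle.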

Signed-off-by: jonoillar <[email protected]>
Collaborator

@zerofishnoodles zerofishnoodles left a comment


Hi, this is a huge change tbh. If I understand correctly, the problem is that the delete event didn't get processed as expected. After reviewing this PR, I proposed a quick fix, #668, which makes the k8s watch stream a long-lasting connection and handles delete events immediately. It seems to solve issue #656 on my side; let me know if this still has problems.

@jonoillar
Author

Hey @TheCodeWrangler, would that PR partially solve your issue #431?

@jonoillar
Author

jonoillar commented Aug 28, 2025

Stress test results for the 2-pod deployment

.github/values-01-2pods-minimal-example.yaml

Zombie IP fix (this branch):

# 200/1000 ->  3.183 seconds
# 2000/10000 -> 17.280 seconds
# 2000/20000 -> 37.830 seconds

Latest commit 8367d431e286a30beefd2f00d4311a0316a314db:

# 200/1000 -> 3.418 seconds
# 2000/10000 -> 16.864 seconds
# 2000/20000 -> 37.176 seconds

No big change
@zerofishnoodles @YuhanLiu11

@zerofishnoodles
Collaborator

Hi, can you fix the CI check? Also, there is a bug in service discovery related to PD; we need to fix that first before making this change to service discovery.

@jonoillar
Author

@zerofishnoodles I'm checking it

Side question: why is the Dockerfile.kvaware image so big? It's 24.2 GB on my computer.

I was checking the kvaware routing logic, but I cannot see any valid reason why this image should be that big. If I have time I'll investigate further, but if you already have some insights, can you share them? Thanks 🥇

@zerofishnoodles
Collaborator

@zerofishnoodles I'm checking it

Side question: why is the Dockerfile.kvaware image so big? It's 24.2 GB on my computer.

I was checking the kvaware routing logic, but I cannot see any valid reason why this image should be that big. If I have time I'll investigate further, but if you already have some insights, can you share them? Thanks 🥇

The kvaware Docker image includes the lmcache package, which makes it very big.

@jonoillar
Author

The kvaware Docker image includes the lmcache package, which makes it very big.

I was under the assumption that the Dockerfile.kvaware Dockerfile corresponds to the image lmcache/lmstack-router:kvaware.

However, this image has not been updated in 3 months. For example, PR #589, which replaces httpx with aiohttp, is not reflected in the image lmcache/lmstack-router:kvaware.

So my question is: if I want to use the kvaware functionalities, should I use the image lmcache/lmstack-router:kvaware, or can I use lmcache/lmstack-router:latest and have kvaware functionalities?

Sorry if this is off-topic. If there is no quick answer, I'm happy to open an issue and discuss it further.

@jonoillar
Author

jonoillar commented Sep 1, 2025

@zerofishnoodles about the failing test: the problem is with the disaggregated-prefill test

My branch branches off this commit: 3bec177e61259ab62bed7b369461e4b0671f3202

I checked out this commit and ran this script:

./tests/e2e/run-k8s-routing-test.sh disaggregated-prefill --model “facebook/opt-125m”  --num-requests 25 --chunk-size 128 --verbose --result-dir /tmp/k8s-discovery-routing-results-disaggregated-prefill --timeout 10

The same error occurred

[ERROR] ❌ Chat completions failed: 500 Server Error: Internal Server Error for url: http://localhost:30080/v1/chat/completions payload: {'model': 'facebook/opt-125m', 'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'Hello! How are you?'}], 'max_tokens': 10, 'temperature': 0.7}
[ERROR] ❌ disaggregated-prefill test failed

So the error was present before my changes.

I tested with the latest version: 6b0a04ab73f301b36a74b976ccc8500cddf5f81f

I faced the same issue:

[ERROR] ❌ Chat completions failed: 500 Server Error: Internal Server Error for url: http://localhost:30080/v1/chat/completions payload: {'model': '“facebook/opt-125m”', 'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'Hello! How are you?'}], 'max_tokens': 10, 'temperature': 0.7}
[ERROR] ❌ disaggregated-prefill test failed

Full reproduction steps:

Create a virtual environment within the repository under test:

python3.12 -m venv stress_testing_venv

Activate the environment:

source stress_testing_venv/bin/activate

Install the necessary packages (as described in https://github.com/vllm-project/production-stack/actions/runs/17263806289/workflow#L112):

python -m pip install --upgrade pip
pip install -r benchmarks/multi-round-qa/requirements.txt
pip install -e .

Build the Docker image:

eval "$(minikube docker-env)"
docker build --build-arg INSTALL_OPTIONAL_DEP=default -t git-act-router -f docker/Dockerfile.kvaware .

Run the tests:

./tests/e2e/run-k8s-routing-test.sh disaggregated-prefill --model “facebook/opt-125m”  --num-requests 25 --chunk-size 128 --verbose --result-dir /tmp/k8s-discovery-routing-results-disaggregated-prefill --timeout 10

I'm running the tests on a machine that has 4 NVIDIA A10G GPUs

EDIT: I just saw PR #663, so I believe you already know that the PD check fails.

Signed-off-by: jonoillar <[email protected]>
Signed-off-by: jonoillar <[email protected]>
@jonoillar
Author

@zerofishnoodles

I fixed the CI and rebased the branch onto the latest changes :)

@zerofishnoodles
Collaborator

zerofishnoodles commented Sep 2, 2025

So my question is: if I want to use the kvaware functionalities, should I use the image lmcache/lmstack-router:kvaware, or can I use lmcache/lmstack-router:latest and have kvaware functionalities?

Right now you may still need to use the kvaware image for kvaware functionality; it is supposed to be merged into latest in the future.

EDIT: I just saw PR #663, so I believe you already know that the PD check fails.

Yes, there are some bugs in PD that the current CI can't discover; they will be fixed through #663. We are waiting for the PD bug fix and will then fix the CI for PD. Since the bug is in the service_discovery too, and it blocks the PD functionality, we need to wait until that one gets fixed.


Development

Successfully merging this pull request may close these issues.

[Bug][Router] ZombieIP: router routes to model pod that doesn't exist anymore
