
Conversation

@lucas-tucker
Contributor

Addresses the Q2 roadmap #300

Adds a simple retry loop to the route_general_request function in request.py so that a number of retries take place if streaming from the selected URL fails.
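For reviewers, the change has roughly the following shape. This is a minimal sketch, not the PR's exact code: perform_service_discovery, process_request, and the attempted_reroutes parameter are the names discussed in this thread, but the signatures and bodies shown here are illustrative.

    import asyncio

    async def route_general_request(request, attempted_reroutes: int = 0):
        # One initial attempt plus up to `attempted_reroutes` retries.
        last_error = None
        for _ in range(attempted_reroutes + 1):
            # Service discovery is synchronous, so run it off the event loop.
            endpoint = await asyncio.to_thread(perform_service_discovery, request)
            try:
                # Stream the response from the selected vLLM endpoint.
                return await process_request(request, endpoint)
            except Exception as exc:
                # Streaming failed; loop around and select an endpoint again.
                last_error = exc
        raise last_error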

Contributor

@gemini-code-assist bot left a comment

Summary of Changes

Hello @lucas-tucker, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a robust retry mechanism within the router's request handling to improve resilience against vLLM endpoint failures. It specifically modifies the route_general_request function to attempt multiple reroutes if an initial streaming attempt to a vLLM instance fails, ensuring that requests are more likely to succeed by finding an alternative healthy endpoint. This change directly addresses a Q2 roadmap item.

Highlights

  • Enhanced Request Resilience: Implemented a retry loop in route_general_request to automatically re-attempt request routing and streaming if a vLLM endpoint fails during the process.
  • Refactored Service Discovery: Extracted the service discovery and endpoint selection logic into a new perform_service_discovery function, improving modularity and allowing it to be called synchronously within the retry loop using asyncio.to_thread.
  • Configurable Reroutes: Added an attempted_reroutes parameter to route_general_request, enabling control over the number of retry attempts for failed requests.
  • Improved Error Handling: Integrated a try-except block around the process_request call within the retry loop, allowing the system to gracefully handle streaming exceptions and attempt rerouting.

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a retry mechanism for handling failures when streaming from vLLM backends. The implementation involves refactoring the service discovery logic into a new function and wrapping the request processing in a retry loop. My review identifies a critical issue in the refactored perform_service_discovery function that will cause a NameError due to a variable being out of scope. I've provided a detailed comment and a code suggestion to fix this.
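For illustration, the scope pitfall flagged here looks like the following. The names discover_endpoints, select_endpoint, and request_stats are hypothetical stand-ins; see the inline comment in the PR for the actual variable.

    # Before the refactor, endpoint selection lived inside route_general_request,
    # where (say) request_stats was a local variable. After extracting the logic,
    # referencing that local without passing it in raises NameError at call time:

    def perform_service_discovery(request):
        return select_endpoint(discover_endpoints(), request_stats)  # NameError

    # Fix: thread the value through the new function's signature.

    def perform_service_discovery(request, request_stats):
        return select_endpoint(discover_endpoints(), request_stats)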

Signed-off-by: Lucas T <[email protected]>
Collaborator

@zerofishnoodles left a comment

Hi Lucas, I am wondering: what if the vLLM instance is up and can be discovered, but the request fails for some other reason? Won't it continuously route to the same endpoint? In that scenario the request will never succeed. Would it be better to send it to a different one?

@lucas-tucker
Contributor Author

Yes. Storing endpoints as a mapping from URL to object would be useful for removal. I can also go through the list if you'd like.

@zerofishnoodles
Collaborator

> Yes. Storing endpoints as a mapping from URL to object would be useful for removal. I can also go through the list if you'd like.

Can you route to a different one if the current one fails? There are some open issues about dead pod IPs (#656), in which case the request will keep failing if it is re-routed to the same endpoint.
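One way to guarantee a different target on each retry, sketched under the assumption that discovery can return the full candidate list (endpoint objects with a url attribute) rather than a single pick: remember which URLs have already failed and exclude them on the next pass. select_endpoint here is a hypothetical picker.

    import asyncio

    async def route_general_request(request, attempted_reroutes: int = 0):
        failed_urls: set[str] = set()
        last_error = None
        for _ in range(attempted_reroutes + 1):
            endpoints = await asyncio.to_thread(perform_service_discovery, request)
            candidates = [e for e in endpoints if e.url not in failed_urls]
            if not candidates:
                break  # every discovered endpoint has already failed
            endpoint = select_endpoint(candidates, request)
            try:
                return await process_request(request, endpoint)
            except Exception as exc:
                failed_urls.add(endpoint.url)  # never retry this instance
                last_error = exc
        if last_error is not None:
            raise last_error
        raise RuntimeError("no healthy endpoints available")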

Lucas Tucker and others added 4 commits August 22, 2025 12:05
Signed-off-by: Lucas Tucker <[email protected]>
Signed-off-by: Lucas Tucker <[email protected]>
Signed-off-by: Lucas Tucker <[email protected]>
@jonoillar

Could we make the number of attempted_reroutes configurable? Thanks :)

@lucas-tucker
Contributor Author

It is currently configurable as an argument passed to route_general_request, with a default value of 0.
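For example (a hypothetical call site; the surrounding handler code is not shown in this thread):

    # One initial attempt plus two reroutes on failure:
    response = await route_general_request(request, attempted_reroutes=2)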

@ikaadil
Contributor

ikaadil commented Aug 24, 2025

> It is currently configurable as an argument passed to route_general_request, with a default value of 0.

Could you please add it in src/vllm_router/parsers/parser.py? Thanks

The inline review comment below is on the new argument in src/vllm_router/parsers/parser.py. The hunk is truncated in the thread; the keyword arguments after the flag name are reconstructed here, with the default of 0 taken from the discussion above.

    )

    parser.add_argument(
        "--request-reroutes",
        type=int,
        default=0,
        help="Number of reroute attempts if streaming from the selected endpoint fails.",
    )
Contributor

You may consider renaming this argument to a more explicit name, such as --max-instance-failover-reroute-attempts

Contributor Author

Agreed

Collaborator

@zerofishnoodles left a comment

LGTM

@ikaadil
Contributor

ikaadil commented Aug 26, 2025

Could you clarify the best way to review this PR?
Here’s what I did when testing it:

  1. Installed the Helm chart:
     helm install vllm ./helm -f .github/values-01-2pods-minimal-example.yaml
  2. Ran the stress test script:
     tests/e2e/stress-test.sh
  3. After the script successfully processed about 1,000 requests, I manually deleted one of the pods.
  4. As soon as that happened, the stress test failed. The test output:

tests/e2e/stress-test.sh
[INFO] Checking prerequisites...
[INFO] Router stress test configuration:
[INFO]   Concurrent requests: 2000
[INFO]   Total requests: 10000
[INFO]   Router port: 30080
[INFO]   Backend ports: 8001, 8002
[INFO]   Model: facebook/opt-125m
[INFO] Starting router with round-robin routing (stress test mode)
[INFO] Router started with PID: 1100606
[INFO] Waiting for router to be ready...
[INFO] Router is ready
[INFO] Running stress test with Apache Bench
[INFO] Concurrent: 2000, Total: 10000
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
apr_pollset_poll: The timeout specified has expired (70007)
Total of 4529 requests completed

So it looks like deleting a pod during the test causes the router to fail.
What’s the recommended way to validate whether this behavior is expected, or if it indicates an issue with the PR?
