Conversation

@TheCodeWrangler

[Router] Add configurable Uvicorn workers support

This PR adds configurable Uvicorn worker support to the vLLM router to improve performance under high load and address readiness/liveness check failures.

Problem

The vLLM router tends to fail readiness/liveness checks under load when running with a single Uvicorn process. This limits its ability to handle concurrent requests and can cause service disruptions in production environments.

Solution

This PR implements configurable Uvicorn workers support following FastAPI best practices:

  • Command Line Configuration: Added --workers argument to specify the number of worker processes
  • Environment Variable Support: Added VLLM_ROUTER_WORKERS environment variable for Docker/containerized deployments
  • Backward Compatibility: Maintains single-worker behavior as default
  • Docker Integration: Updated Dockerfile to support workers configuration via environment variable

Changes Made

Core Implementation

  • src/vllm_router/parsers/parser.py: Added --workers argument with environment variable fallback
  • src/vllm_router/app.py: Modified uvicorn.run() to accept the workers parameter (both changes are sketched below)
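
A minimal sketch of both changes, for orientation only; the helper names and module path here are assumptions, not the actual diff:

import argparse
import os

import uvicorn


def add_worker_args(parser: argparse.ArgumentParser) -> None:  # hypothetical helper
    # Fall back to the VLLM_ROUTER_WORKERS environment variable; guard the
    # int() conversion so a malformed value cannot crash argument parsing
    # (one of the issues flagged in review).
    try:
        default_workers = int(os.environ.get("VLLM_ROUTER_WORKERS", "1"))
    except ValueError:
        default_workers = 1
    parser.add_argument(
        "--workers",
        type=int,
        default=default_workers,
        help="Number of Uvicorn worker processes (default: 1)",
    )


def run_server(args: argparse.Namespace) -> None:  # hypothetical helper
    # Uvicorn only honors workers > 1 when the app is passed as an import
    # string, so each worker process builds its own app instance.
    uvicorn.run(
        "vllm_router.app:app",  # assumed module path
        host=args.host,
        port=args.port,
        workers=args.workers,
    )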

Configuration Options

  1. Command Line: vllm-router --workers 4 --port 8001 ...
  2. Environment Variable: VLLM_ROUTER_WORKERS=4
  3. Docker: docker run -e VLLM_ROUTER_WORKERS=4 ...

Benefits

  • Improved Load Handling: Multiple workers can handle more concurrent requests
  • Better Health Checks: Reduces likelihood of readiness/liveness check failures under load
  • CPU Utilization: Takes advantage of multiple CPU cores
  • Production Ready: Follows FastAPI deployment best practices
  • Flexible Configuration: Supports both CLI and environment variable configuration

Usage Examples

# Command line
vllm-router --workers 4 --port 8001 --service-discovery static --static-backends "http://localhost:8000"

# Environment variable
export VLLM_ROUTER_WORKERS=4
vllm-router --port 8001 --service-discovery static --static-backends "http://localhost:8000"

# Docker
docker run -e VLLM_ROUTER_WORKERS=4 -p 8001:8001 vllm-router

Testing

  • Verified backward compatibility (default single worker)
  • Tested command line argument parsing
  • Tested environment variable fallback
  • Verified Docker environment variable support
  • Updated run script functionality

Documentation

  • Created comprehensive WORKERS_CONFIGURATION.md with usage examples and best practices
  • Updated inline documentation for new arguments
  • Added performance considerations and deployment guidance

Performance Considerations

  • CPU Cores: Generally set workers to match the number of CPU cores (e.g., 4 workers for 4 cores); see the heuristic sketched after this list
  • Memory: Each worker process consumes memory independently
  • Load Balancing: Workers share a single listening socket, so incoming connections are distributed across them by the OS
  • Health Checks: Multiple workers reduce the likelihood of check failures under load
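
One common sizing heuristic, sketched below as an assumption rather than part of this PR, is to fall back to the visible CPU core count when VLLM_ROUTER_WORKERS is unset:

import os

# Heuristic sketch, not taken from the PR: default the worker count to the
# CPU core count when VLLM_ROUTER_WORKERS is unset. os.cpu_count() can
# return None, hence the "or 1" guard.
workers = int(os.environ.get("VLLM_ROUTER_WORKERS", os.cpu_count() or 1))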

  • Make sure the code changes pass the pre-commit checks.
  • Sign off your commit by using -s when doing git commit.
  • Try to classify PRs for easy understanding of the type of changes, such as [Bugfix], [Feat], and [CI].
Detailed Checklist

Thank you for your contribution to production-stack! Before submitting the pull request, please ensure the PR meets the following criteria. This helps us maintain code quality and improve the efficiency of the review process.

PR Title and Classification

Please classify your PR so reviewers can easily understand the type of change: prefix the PR title appropriately using one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Feat] for new features in the cluster (e.g., autoscaling, disaggregated prefill, etc.).
  • [Router] for changes to the vllm_router (e.g., routing algorithm, router observability, etc.).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • Pass all linter checks. Please use pre-commit to format your code. See README.md for installation.
  • The code needs to be well-documented so that future contributors can easily understand it.
  • Please include sufficient tests to ensure the change stays correct and robust. This includes both unit tests and integration tests.

DCO and Signed-off-by

When contributing changes to this project, you must agree to the DCO. Commits must include a Signed-off-by: header which certifies agreement with the terms of the DCO.

Using -s with git commit will automatically add this header.

What to Expect for the Reviews

We aim to address all PRs in a timely manner. If no one reviews your PR within 5 days, please @-mention one of YuhanLiu11, Shaoting-Feng, or ApostaC.

Contributor

@gemini-code-assist bot left a comment


Summary of Changes

Hello @TheCodeWrangler, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant enhancement to the vLLM router by enabling the configuration of Uvicorn worker processes. This change aims to improve the router's ability to handle concurrent requests, prevent health check failures under load, and better utilize multi-core CPUs, thereby making the application more robust and production-ready.

Highlights

  • Configurable Uvicorn Workers: The vLLM router now supports configuring the number of Uvicorn worker processes, addressing readiness/liveness check failures under high load.
  • Flexible Configuration Options: Users can specify the number of workers via a new --workers command-line argument or by setting the VLLM_ROUTER_WORKERS environment variable. The default behavior remains a single worker for backward compatibility.
  • Docker Integration: The Dockerfile has been updated to set a default VLLM_ROUTER_WORKERS environment variable, making it easier to deploy containerized applications with multiple workers.
  • Dependency Updates: Several Python dependencies have been updated, including aiohttp, huggingface-hub, lmcache, setuptools, tokenizers, and transformers, along with additions like awscrt, caio, cufile-python, and hf-xet.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces support for configurable Uvicorn workers to enhance the performance and load-handling capabilities of the vLLM router. The changes are well-structured, adding a --workers command-line argument and a corresponding VLLM_ROUTER_WORKERS environment variable.

My review focuses on ensuring the multi-worker implementation is robust and correct. I've identified a critical issue in how the Uvicorn server is launched, which will prevent it from functioning correctly with multiple workers due to how application state is initialized. I've also pointed out a potential crash from unhandled errors when parsing environment variables. Addressing these points will be crucial for the stability and correctness of this new feature.
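
For context, the constraint the review points at is Uvicorn's own: uvicorn.run() only honors workers > 1 when the application is passed as an import string, because each worker process must import and construct its own app instance. A sketch, with an assumed module path:

import uvicorn

# Broken with multiple workers: given an app *object*, Uvicorn exits with
# "You must pass the application as an import string to enable 'reload'
# or 'workers'."
# uvicorn.run(app, workers=4)

# Works: each worker process imports the module (path assumed here) and
# builds its own app instance, so per-process state exists in every worker.
uvicorn.run("vllm_router.app:app", host="0.0.0.0", port=8001, workers=4)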

@TheCodeWrangler marked this pull request as draft September 9, 2025 12:35
@TheCodeWrangler marked this pull request as ready for review September 9, 2025 12:44
zerofishnoodles and others added 6 commits September 9, 2025 07:57
Signed-off-by: Nathan Price <[email protected]>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Nathan Price <[email protected]>
@TheCodeWrangler force-pushed the configure-uvicorn-workers branch from eedb4e9 to ee6fc9f on September 9, 2025 12:58
@TheCodeWrangler
Author

Not sure if I should request review:
@zerofishnoodles

Collaborator


I am wondering why there is such a big change to uv.lock, since I don't see any new imports in the commit.

Author


Not sure ... I went ahead and just merged uv.lock from main

01c91e1

max-wittig and others added 3 commits September 16, 2025 08:59
Signed-off-by: Braulio Dumba <[email protected]>
Co-authored-by: Yuhan Liu <[email protected]>
Signed-off-by: Nathan Price <[email protected]>
Signed-off-by: Nathan Price <[email protected]>
@TheCodeWrangler force-pushed the configure-uvicorn-workers branch from 01c91e1 to c559477 on September 16, 2025 14:01
TheCodeWrangler and others added 3 commits September 16, 2025 09:01
- Fix line length and formatting issues in initialize_all function
- Improve code readability with proper line breaks for long function calls
- Ensure compliance with pre-commit formatting requirements

This addresses the pre-commit check failures by properly formatting
the code according to the project's style guidelines.

Signed-off-by: Nathan Price <[email protected]>
@zerofishnoodles
Collaborator

Hi, the CI didn't pass. It seems the prefill and decode aiohttp clients are not correctly initialized in the lifespan function. Can you fix it?
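
For reference, a minimal sketch of per-worker client setup in a FastAPI lifespan; the attribute names prefill_client and decode_client are assumptions taken from this comment, not the router's actual code:

from contextlib import asynccontextmanager

import aiohttp
from fastapi import FastAPI


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Create the sessions inside each worker's running event loop, so every
    # Uvicorn worker process gets its own clients.
    app.state.prefill_client = aiohttp.ClientSession()  # assumed attribute name
    app.state.decode_client = aiohttp.ClientSession()   # assumed attribute name
    try:
        yield
    finally:
        await app.state.prefill_client.close()
        await app.state.decode_client.close()


app = FastAPI(lifespan=lifespan)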
