feat: allow for configuration of number of uvicorn workers #689
Conversation
Summary of Changes
Hello @TheCodeWrangler, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a significant enhancement to the vLLM router by enabling the configuration of Uvicorn worker processes. This change aims to improve the router's ability to handle concurrent requests, prevent health check failures under load, and better utilize multi-core CPUs, thereby making the application more robust and production-ready.
Highlights
- Configurable Uvicorn Workers: The vLLM router now supports configuring the number of Uvicorn worker processes, addressing readiness/liveness check failures under high load.
- Flexible Configuration Options: Users can specify the number of workers via a new `--workers` command-line argument or by setting the `VLLM_ROUTER_WORKERS` environment variable. The default behavior remains a single worker for backward compatibility.
- Docker Integration: The Dockerfile has been updated to set a default `VLLM_ROUTER_WORKERS` environment variable, making it easier to deploy containerized applications with multiple workers.
- Dependency Updates: Several Python dependencies have been updated, including `aiohttp`, `huggingface-hub`, `lmcache`, `setuptools`, `tokenizers`, and `transformers`, along with additions such as `awscrt`, `caio`, `cufile-python`, and `hf-xet`.
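The configuration flow described above (a `--workers` flag with an environment-variable fallback) could be sketched roughly as follows. The flag and variable names come from this PR; the parser structure itself is an illustrative assumption, not the actual implementation:

```python
import argparse
import os


def resolve_worker_count(argv=None) -> int:
    """Resolve the Uvicorn worker count: CLI flag first, then the
    VLLM_ROUTER_WORKERS environment variable, defaulting to 1."""
    parser = argparse.ArgumentParser()
    # The default is pulled from the environment so containerized
    # deployments can set VLLM_ROUTER_WORKERS without passing a flag.
    parser.add_argument(
        "--workers",
        type=int,
        default=int(os.environ.get("VLLM_ROUTER_WORKERS", "1")),
        help="Number of Uvicorn worker processes (default: 1).",
    )
    args, _ = parser.parse_known_args(argv)
    return args.workers
```

An explicit `--workers 4` overrides the environment variable, which in turn overrides the single-worker default, matching the precedence described in the PR.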
Code Review
This pull request introduces support for configurable Uvicorn workers to enhance the performance and load-handling capabilities of the vLLM router. The changes are well-structured, adding a --workers command-line argument and a corresponding VLLM_ROUTER_WORKERS environment variable.
My review focuses on ensuring the multi-worker implementation is robust and correct. I've identified a critical issue in how the Uvicorn server is launched, which will prevent it from functioning correctly with multiple workers due to how application state is initialized. I've also pointed out a potential crash from unhandled errors when parsing environment variables. Addressing these points will be crucial for the stability and correctness of this new feature.
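The review's note about unhandled errors when parsing the environment variable could be addressed with defensive parsing along these lines. This is a hedged sketch, not the PR's actual code, and falling back to the default is just one possible policy (logging an error and exiting is another):

```python
import os


def workers_from_env(var: str = "VLLM_ROUTER_WORKERS", default: int = 1) -> int:
    """Parse a worker count from the environment, falling back to a
    default instead of crashing on malformed or non-positive values."""
    raw = os.environ.get(var)
    if raw is None:
        return default
    try:
        value = int(raw)
    except ValueError:
        # e.g. VLLM_ROUTER_WORKERS="four" -- fall back rather than raise.
        return default
    # Worker counts below 1 are meaningless; treat them as misconfiguration.
    return value if value >= 1 else default
```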
Signed-off-by: Rui Zhang <[email protected]> Signed-off-by: Nathan Price <[email protected]>
Signed-off-by: Nathan Price <[email protected]>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Signed-off-by: Nathan Price <[email protected]>
Not sure if I should request review.
I am wondering why there is a big change to `uv.lock`, since I don't see any new imports in the commit.
Not sure ... I went ahead and just merged `uv.lock` from main.
Signed-off-by: Max Wittig <[email protected]> Signed-off-by: Nathan Price <[email protected]>
Signed-off-by: Braulio Dumba <[email protected]> Co-authored-by: Yuhan Liu <[email protected]> Signed-off-by: Nathan Price <[email protected]>
Signed-off-by: Nathan Price <[email protected]>
- Fix line length and formatting issues in the `initialize_all` function
- Improve code readability with proper line breaks for long function calls
- Ensure compliance with pre-commit formatting requirements

This addresses the pre-commit check failures by properly formatting the code according to the project's style guidelines.

Signed-off-by: Nathan Price <[email protected]>
Hi, the CI didn't pass; it seems like the prefill and decode aiohttp clients are not correctly initialized in the lifespan function. Can you fix it?
[Router] Add configurable Uvicorn workers support
This PR adds configurable Uvicorn workers support to the vLLM router to improve performance and handle higher loads, addressing readiness/liveness check failures under load.
Problem
The vLLM router application has a tendency to fail readiness/liveness checks under load when running with a single Uvicorn process. This limits the application's ability to handle concurrent requests effectively and can cause service disruptions in production environments.
Solution
This PR implements configurable Uvicorn workers support following FastAPI best practices:
- `--workers` argument to specify the number of worker processes
- `VLLM_ROUTER_WORKERS` environment variable for Docker/containerized deployments

Changes Made
Core Implementation
- `src/vllm_router/parsers/parser.py`: Added `--workers` argument with environment variable fallback
- `src/vllm_router/app.py`: Modified `uvicorn.run()` to accept a workers parameter

Configuration Options
- CLI: `vllm-router --workers 4 --port 8001 ...`
- Environment variable: `VLLM_ROUTER_WORKERS=4`
- Docker: `docker run -e VLLM_ROUTER_WORKERS=4 ...`

Benefits
Usage Examples
Testing
Documentation
- `WORKERS_CONFIGURATION.md` with usage examples and best practices

Performance Considerations
References
Sign off commits with `-s` when doing `git commit`; prefix PR titles with tags such as `[Bugfix]`, `[Feat]`, and `[CI]`.

Detailed Checklist (Click to Expand)
Thank you for your contribution to production-stack! Before submitting the pull request, please ensure the PR meets the following criteria. This helps us maintain the code quality and improve the efficiency of the review process.
PR Title and Classification
Please try to classify PRs for easy understanding of the type of changes. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:
- `[Bugfix]` for bug fixes.
- `[CI/Build]` for build or continuous integration improvements.
- `[Doc]` for documentation fixes and improvements.
- `[Feat]` for new features in the cluster (e.g., autoscaling, disaggregated prefill, etc.).
- `[Router]` for changes to the `vllm_router` (e.g., routing algorithm, router observability, etc.).
- `[Misc]` for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards:

- Use `pre-commit` to format your code. See `README.md` for installation.

DCO and Signed-off-by
When contributing changes to this project, you must agree to the DCO. Commits must include a `Signed-off-by:` header which certifies agreement with the terms of the DCO. Using `-s` with `git commit` will automatically add this header.

What to Expect for the Reviews
We aim to address all PRs in a timely manner. If no one reviews your PR within 5 days, please @-mention one of YuhanLiu11, Shaoting-Feng, or ApostaC.