
Conversation

@krishung5
Contributor

@krishung5 krishung5 commented Aug 19, 2025

Overview:

Follow-up PR for #2509.

Details:

Where should the reviewer start?

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

  • closes GitHub issue: #xxx

Summary by CodeRabbit

  • Chores
    • Updated vLLM dependency to a newer revision.
    • Switched vLLM installation to a pinned Git source for reproducible builds.
    • Refreshed container metadata and comments; no runtime behavior changes.

@krishung5
Contributor Author

/ok to test d0cff4c

@coderabbitai
Contributor

coderabbitai bot commented Aug 19, 2025

Walkthrough

Updates vLLM references: Dockerfile comment bumped from v0.10.0 to v0.10.1; install script removes a comment line; pyproject optional dependency switches from pinned PyPI vllm==0.10.0 to a specific Git revision (77a6bf0...). No functional logic changes in scripts.

Changes

  • vLLM Dockerfile metadata (container/Dockerfile.vllm): Updated the commented vLLM tag/URL from v0.10.0 to v0.10.1; removed prior notes about DeepGEMM pinning; no ARG changes.
  • vLLM installer script cleanup (container/deps/vllm/install_vllm.sh): Removed an empty line and an inline comment before the python setup install step; no behavioral changes.
  • Optional dependency source update (pyproject.toml): Changed the vllm optional dependency from the PyPI pin (vllm==0.10.0) to a Git source at rev 77a6bf07aedf132aad2b6719f6d87abc5d3311ab; other entries unchanged.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

A bunny taps the version tree,
From tag to commit, neat as can be.
A comment trimmed, a pin set free—
Git revs hop lightly, dependency.
Carrots aligned in TOML rows,
Ship the build—onward it goes! 🥕🚀



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
pyproject.toml (1)

59-59: Switch to a VCS pin looks fine; verify packaging/consumer impact and optionally use a tag

Moving vllm to a Git commit is acceptable, and you already have allow-direct-references = true configured for Hatch. Two follow-ups to avoid surprises:

  • Developer installs of ai-dynamo[vllm] now require Git and build deps; this may fail on minimal environments that previously consumed PyPI wheels. Consider calling this out in docs or using a tag for self-documentation.
  • If the intent is strictly “match v0.10.1,” you can pin by tag for clarity. Commit pinning is fine for reproducibility, but the tag makes intent obvious.

You can switch to the tag:

-    { git = "https://github.com/vllm-project/vllm.git", rev = "77a6bf07aedf132aad2b6719f6d87abc5d3311ab" },
+    { git = "https://github.com/vllm-project/vllm.git", rev = "v0.10.1" },

Or keep the commit and add a comment for clarity:

     "nixl<=0.4.1",
-    { git = "https://github.com/vllm-project/vllm.git", rev = "77a6bf07aedf132aad2b6719f6d87abc5d3311ab" },
+    # vLLM v0.10.1
+    { git = "https://github.com/vllm-project/vllm.git", rev = "77a6bf07aedf132aad2b6719f6d87abc5d3311ab" },
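For reference, the allow-direct-references setting mentioned above lives in Hatch's metadata table; a minimal sketch (the repository's pyproject.toml may group its sections differently):

    # Hatch setting that permits Git/URL direct references in dependencies.
    [tool.hatch.metadata]
    allow-direct-references = true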

To proactively check for packaging and consumer impact:

  • Verify that pip/uv can resolve and install the extra in a clean environment (requires Git); a sketch follows this list.
  • Confirm the commit matches v0.10.1 so the Dockerfile and pyproject remain in sync.
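A minimal sketch of that clean-environment check, assuming uv is available and that ai-dynamo[vllm] is the extra consumers install; the temp path and smoke test are illustrative, not taken from this repository's CI:

    # Hypothetical clean-environment check; tool choice and paths are assumptions.
    uv venv /tmp/vllm-extra-check
    source /tmp/vllm-extra-check/bin/activate
    # Resolving the Git-sourced vllm dependency requires git on PATH.
    uv pip install "ai-dynamo[vllm]"
    # Smoke-test that the pinned revision imports and reports a version.
    python -c "import vllm; print(vllm.__version__)"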

If you want, I can add a README note about the new requirement for Git and build tools when using the vllm extra.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 1ed877f and d0cff4c.

📒 Files selected for processing (3)
  • container/Dockerfile.vllm (1 hunks)
  • container/deps/vllm/install_vllm.sh (0 hunks)
  • pyproject.toml (1 hunks)
💤 Files with no reviewable changes (1)
  • container/deps/vllm/install_vllm.sh
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-07-22T10:22:28.972Z
Learnt from: ptarasiewiczNV
PR: ai-dynamo/dynamo#2027
File: container/deps/vllm/install_vllm.sh:0-0
Timestamp: 2025-07-22T10:22:28.972Z
Learning: The `--torch-backend=auto` flag works with vLLM installations via uv pip install, even though it's not a standard pip option. This flag is processed by vLLM's build system during installation to automatically match PyTorch distribution with container CUDA versions.

Applied to files:

  • container/Dockerfile.vllm
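For illustration, a hedged sketch of the kind of invocation that learning describes; the actual command, flags, and pinned ref in install_vllm.sh may differ:

    # Illustrative only; not the literal line from install_vllm.sh.
    # --torch-backend=auto matches the PyTorch distribution to the container's CUDA version.
    uv pip install vllm --torch-backend=auto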
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Mirror Repository to GitLab
  • GitHub Check: Build and Test - dynamo

Comment on lines +19 to +20
# Match 0.10.1 vLLM release
# https://github.com/vllm-project/vllm/releases/tag/v0.10.1

💡 Verification agent

🧩 Analysis chain

Confirm commit/tag alignment and annotate for traceability

Good to document the target release. Since VLLM_REF (Line 16) is pinned to a commit, please verify that 77a6bf07aedf132aad2b6719f6d87abc5d3311ab is exactly the commit for v0.10.1, and consider annotating the commit in the comment for quicker traceability.

If you want to make this explicit, you could add a small comment:

 # Match 0.10.1 vLLM release
 # https://github.com/vllm-project/vllm/releases/tag/v0.10.1
+# VLLM_REF pinned to commit for v0.10.1
+# 77a6bf07aedf132aad2b6719f6d87abc5d3311ab

Also, please sanity-check DEEPGEMM_REF and FLASHINF_REF compatibility with v0.10.1 (they are unchanged). If there are known pairings for this vLLM release, capturing them in a short comment helps future bumps.
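One way to run that tag-to-commit check without cloning, using plain git (the tag name is taken from the comment above):

    # List the v0.10.1 tag on the remote; for an annotated tag, the peeled
    # "^{}" entry shows the commit the tag resolves to.
    git ls-remote --tags https://github.com/vllm-project/vllm.git v0.10.1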


🌐 Web query:

What commit hash does the vLLM GitHub tag v0.10.1 point to?

💡 Result:

The v0.10.1 tag points to commit aab549870df50edf0512f0a59b574f692f546465 (short: aab5498). (github.com)



Align VLLM_REF with the v0.10.1 tag commit
The Dockerfile currently pins VLLM_REF to 77a6bf07aedf132aad2b6719f6d87abc5d3311ab, but the GitHub v0.10.1 tag actually points to commit aab549870df50edf0512f0a59b574f692f546465. Please update the ref and annotate it for traceability:

 # Match 0.10.1 vLLM release
 # https://github.com/vllm-project/vllm/releases/tag/v0.10.1
-# VLLM_REF pinned to commit for v0.10.1
-# 77a6bf07aedf132aad2b6719f6d87abc5d3311ab
+VLLM_REF=aab549870df50edf0512f0a59b574f692f546465
+# VLLM_REF pinned to commit for v0.10.1 (aab549870df50edf0512f0a59b574f692f546465)

• After updating, please also sanity-check that your DEEPGEMM_REF and FLASHINF_REF remain compatible with this v0.10.1 commit.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 # Match 0.10.1 vLLM release
 # https://github.com/vllm-project/vllm/releases/tag/v0.10.1
+VLLM_REF=aab549870df50edf0512f0a59b574f692f546465
+# VLLM_REF pinned to commit for v0.10.1 (aab549870df50edf0512f0a59b574f692f546465)
🤖 Prompt for AI Agents
In container/Dockerfile.vllm around lines 19 to 20, VLLM_REF is pinned to commit
77a6bf07aedf132aad2b6719f6d87abc5d3311ab but should point to the v0.10.1 tag
commit aab549870df50edf0512f0a59b574f692f546465; update the VLLM_REF value to
aab549870df50edf0512f0a59b574f692f546465 and add a brief comment explaining this
maps to GitHub v0.10.1 for traceability, then rebuild or re-run any local docker
steps to verify no build errors and sanity-check DEEPGEMM_REF and FLASHINF_REF
(update them if they are known incompatible) by ensuring their commits or tags
are compatible with v0.10.1 and adjusting pins or comments accordingly.

"uvloop",
"nixl<=0.4.1",
"vllm==0.10.0",
{ git = "https://github.com/vllm-project/vllm.git", rev = "77a6bf07aedf132aad2b6719f6d87abc5d3311ab" },
Contributor

We cannot use URLs in wheel dependencies; PyPI won't allow us to publish Python wheels that pull dependencies from a URL.

@krishung5
Contributor Author

Closing. #2528

@krishung5 krishung5 closed this Aug 19, 2025
@krishung5 krishung5 deleted the krish/update-ref branch August 27, 2025 00:39