
[Bugfix] Fix a bad import #29694

Closed
DarkLight1337 wants to merge 1 commit into vllm-project:main from DarkLight1337:fix-import

Conversation

@DarkLight1337 (Member) commented Nov 28, 2025

Purpose

False alarm, closing

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting a before/after comparison or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user-facing, please update the release notes draft in the Google Doc.

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
@DarkLight1337 DarkLight1337 deleted the fix-import branch November 28, 2025 17:41
@mergify mergify bot added the multi-modality Related to multi-modality (#4194) label Nov 28, 2025

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines 25 to +26

  from vllm.logger import init_logger
- from vllm.transformers_utils.processor import cached_processor_from_config
+ from vllm.transformers_utils.input_processor import cached_processor_from_config

P1: Fix broken cached_processor import

The new import points to vllm.transformers_utils.input_processor, but no such module exists in this repo and cached_processor_from_config is still defined in transformers_utils/processor.py. As a result, importing vllm.multimodal.processing (e.g., when enabling multimodal pipelines) will now raise ModuleNotFoundError before any processing logic runs.
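The resolution Codex is pointing at would simply keep the original import. A minimal sketch, assuming (as the comment above claims) that cached_processor_from_config is still defined in vllm/transformers_utils/processor.py and that no vllm.transformers_utils.input_processor module exists:

    # Sketch of the import Codex says should be kept. Assumes, per the
    # review comment above, that cached_processor_from_config still lives
    # in vllm/transformers_utils/processor.py.
    from vllm.logger import init_logger
    from vllm.transformers_utils.processor import cached_processor_from_config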

Useful? React with 👍 / 👎.

@gemini-code-assist gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request addresses a bug by correcting an import path in vllm/multimodal/processing.py. The cached_processor_from_config function is now imported from vllm.transformers_utils.input_processor instead of vllm.transformers_utils.processor. This change aligns with a likely refactoring where the function was moved to a new module. The fix is straightforward and appears correct, ensuring that the multimodal processing logic uses the intended utility function from its canonical source.


Labels

multi-modality Related to multi-modality (#4194)
