
(bugfix): Fixed encode in LLM entrypoint for IOProcessor plugin prompts #34618

Merged
vllm-bot merged 1 commit into vllm-project:main from christian-pinto:fix_prithvi_tests on Feb 16, 2026

Conversation

@christian-pinto (Contributor) commented Feb 16, 2026

Purpose

This PR fixes a bug in `encode` in the LLM entrypoint when using IO Processor plugins.

Previously, `encode` would verify that the request was meant for an IO Processor plugin and then pass the full prompt to the plugin's `parse_data` method. Instead, only the `prompt.data` field should be passed to the plugin, matching what is done in online serving mode.
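The shape of the fix can be sketched as follows. This is a minimal illustration, not vLLM's actual code: the `IOProcessorPrompt` wrapper, `DummyIOProcessor`, and the simplified `encode` are hypothetical stand-ins assumed from the PR description.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class IOProcessorPrompt:
    """Hypothetical wrapper for an IO Processor plugin request."""
    plugin_name: str
    data: Any  # the payload the plugin is meant to parse


class DummyIOProcessor:
    """Stand-in plugin: expects only the raw payload, not the full wrapper."""

    def parse_data(self, data: Any) -> dict:
        # A plugin parses the payload itself; handing it the whole
        # IOProcessorPrompt wrapper (the pre-fix behavior) would break here.
        assert not isinstance(data, IOProcessorPrompt), "got full prompt"
        return {"parsed": data}


def encode(prompt: IOProcessorPrompt, io_processor: DummyIOProcessor) -> dict:
    # Before the fix (sketch): io_processor.parse_data(prompt)
    # After the fix: pass only the data field, as online serving does.
    return io_processor.parse_data(prompt.data)


prompt = IOProcessorPrompt(plugin_name="prithvi", data={"image": [1, 2, 3]})
print(encode(prompt, DummyIOProcessor()))
```

The point is that offline `encode` and online serving now agree on the plugin contract: `parse_data` always receives the payload, never the request wrapper.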

This came out of a discussion in #34214

@DarkLight1337 @staugust

Test Plan

Modified the test to pass a properly formatted prompt to vLLM when performing inference in offline mode.

Test Result

Signed-off-by: Christian Pinto <christian.pinto@ibm.com>

mergify bot commented Feb 16, 2026

Documentation preview: https://vllm--34618.org.readthedocs.build/en/34618/

@mergify mergify bot added the documentation, frontend, and bug labels on Feb 16, 2026
@gemini-code-assist gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request aims to fix a bug in the LLM.encode method for IO Processor plugin prompts. The change in vllm/entrypoints/llm.py modifies the argument passed to the io_processor.parse_data method. While the change itself seems to align with the PR description, the corresponding changes in the test and example files appear to contradict the fix, potentially making it ineffective. I've added a critical comment detailing this concern.

@DarkLight1337 (Member) left a comment

Thanks for fixing!

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) February 16, 2026 13:28
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Feb 16, 2026
@vllm-bot vllm-bot merged commit 6930bec into vllm-project:main Feb 16, 2026
52 of 54 checks passed
wzhao18 pushed a commit to wzhao18/vllm that referenced this pull request on Feb 18, 2026 (vllm-project#34618)
  Signed-off-by: Christian Pinto <christian.pinto@ibm.com>
  Signed-off-by: wzhao18 <wzhao18.sz@gmail.com>
eldarkurtic pushed a commit to eldarkurtic/vllm that referenced this pull request on Feb 19, 2026 (vllm-project#34618)
  Signed-off-by: Christian Pinto <christian.pinto@ibm.com>
  Signed-off-by: Eldar Kurtic <research@neuralmagic.com>
ZJY0516 pushed a commit to ZJY0516/vllm that referenced this pull request on Feb 23, 2026 (vllm-project#34618)
  Signed-off-by: Christian Pinto <christian.pinto@ibm.com>
  Signed-off-by: zjy0516 <riverclouds.zhu@qq.com>
llsj14 pushed a commit to llsj14/vllm that referenced this pull request on Mar 1, 2026
tunglinwood pushed a commit to tunglinwood/vllm that referenced this pull request on Mar 4, 2026

Labels

- bug: Something isn't working
- documentation: Improvements or additions to documentation
- frontend
- ready: ONLY add when PR is ready to merge/full CI is needed


3 participants