
Fix per file ruff ignores related to line length #26262

Merged
DarkLight1337 merged 1 commit into vllm-project:main from hmellor:fic-ruff-line-len-ignores
Oct 6, 2025

Conversation

@hmellor
Member

@hmellor hmellor commented Oct 5, 2025

Forward fixes the last of the issues skipped by #26247
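The two techniques this PR applies can be sketched as follows. This is an illustrative example, not code from the diff; the strings and URL below are hypothetical.

```python
# Two ways to satisfy Ruff's E501 (line-too-long) rule.

# Preferred: reformat the long line, here via implicit string concatenation.
MESSAGE = (
    "This sentence was originally a single long source line; "
    "adjacent string literals keep each line under the limit."
)

# Fallback for lines that resist wrapping: a targeted line-level ignore.
DOCS_URL = "https://docs.vllm.ai/en/latest/a/very/long/path/that/resists/wrapping.html"  # noqa: E501

# The logical string is still long, but every source line is short.
print(len(MESSAGE) > 88)
```

The line-level `# noqa: E501` suppresses the rule only where it is needed, unlike the file-level ignores the PR removes.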


Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
@mergify mergify bot added the documentation (Improvements or additions to documentation), frontend, llama (Related to Llama models), multi-modality (Related to multi-modality, #4194), performance (Performance-related issues), and qwen (Related to Qwen models) labels Oct 5, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request removes the per-file ruff ignores for E501, either by reformatting long lines to comply with the line length limit or by replacing file-level ignores with line-level noqa comments. The changes are extensive, touching many files in the benchmarks, csrc, examples, tests, and vllm directories, and delete a large block of E501 ignores from pyproject.toml.

The approach is consistent and effectively resolves the widespread line length warnings: the reformatted lines are more readable, the linting configuration is more maintainable, and noqa comments are a pragmatic fallback for lines that are difficult to reformat. I reviewed the changes and found no issues of high or critical severity. The pull request is well executed and achieves its goal.
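The pyproject.toml change described above amounts to deleting a block of entries like the following. This is a sketch; the file paths are illustrative, not the actual list from the PR.

```toml
# Before: blanket per-file suppressions of Ruff's line-length rule.
[tool.ruff.lint.per-file-ignores]
"benchmarks/benchmark_serving.py" = ["E501"]
"vllm/some_module.py" = ["E501"]

# After: the block above is removed. Any line that genuinely cannot be
# wrapped instead carries an inline `# noqa: E501` at the offending line.
```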


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) October 6, 2025 02:47
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Oct 6, 2025
@DarkLight1337 DarkLight1337 merged commit 6c04638 into vllm-project:main Oct 6, 2025
89 checks passed
@hmellor hmellor deleted the fic-ruff-line-len-ignores branch October 6, 2025 05:31
karan pushed a commit to karan/vllm that referenced this pull request Oct 6, 2025
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Karan Goel <3261985+karan@users.noreply.github.com>
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request Oct 20, 2025
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
alhridoy pushed a commit to alhridoy/vllm that referenced this pull request Oct 24, 2025
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
rtourgeman pushed a commit to rtourgeman/vllm that referenced this pull request Nov 10, 2025
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
devpatelio pushed a commit to SumanthRH/vllm that referenced this pull request Nov 29, 2025
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

Labels

  • documentation: Improvements or additions to documentation
  • frontend
  • kv-connector
  • llama: Related to Llama models
  • multi-modality: Related to multi-modality (#4194)
  • performance: Performance-related issues
  • qwen: Related to Qwen models
  • ready: ONLY add when PR is ready to merge/full CI is needed
  • speculative-decoding
  • structured-output
  • tool-calling
  • tpu: Related to Google TPUs
  • v1

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

2 participants