
update RFC template#9

Merged
hsliuustc0106 merged 2 commits into main from hsliu-dev-C
Oct 18, 2025

Conversation

@hsliuustc0106 hsliuustc0106 commented Oct 18, 2025


Purpose

RFC Template (.github/ISSUE_TEMPLATE/750-RFC.yml)

- Adapted for vLLM-omni with updated URLs
- Includes sections for:
  - Motivation - why the RFC is needed
  - Proposed Change - what changes are being proposed
  - Feedback Period - how long to collect feedback (usually 1+ weeks)
  - CC List - people to notify about the RFC
  - Any Other Things - additional context
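An issue-form template with these sections might look like the following sketch. This is a hypothetical illustration of the GitHub issue-forms YAML syntax, not the actual contents of 750-RFC.yml; the field labels, descriptions, title prefix, and label name are assumptions.

```yaml
# Hypothetical sketch of an RFC issue-form template.
# The real .github/ISSUE_TEMPLATE/750-RFC.yml may differ in ids, wording, and URLs.
name: "RFC"
description: Propose a major architectural change or design choice
title: "[RFC]: "
labels: ["RFC"]

body:
  - type: textarea
    attributes:
      label: Motivation
      description: Why is this RFC needed?
    validations:
      required: true
  - type: textarea
    attributes:
      label: Proposed Change
      description: What changes are being proposed?
    validations:
      required: true
  - type: textarea
    attributes:
      label: Feedback Period
      description: How long feedback will be collected (usually 1+ weeks).
  - type: textarea
    attributes:
      label: CC List
      description: People who should be notified about this RFC.
  - type: textarea
    attributes:
      label: Any Other Things
      description: Additional context, e.g. links to previous RFCs.
```

Files placed under `.github/ISSUE_TEMPLATE/` with this structure are picked up automatically by GitHub's "New issue" chooser.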

Test Plan

none

Test Result

none


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft.

BEFORE SUBMITTING, PLEASE READ https://github.com/hsliuustc0106/vllm-omni/blob/main/CONTRIBUTING.md (anything written below this line will be removed by GitHub Actions)

- Add 750-RFC.yml template for architectural discussions
- Template adapted for vLLM-omni with updated URLs
- Enables structured feedback for major design changes
- Follows vLLM's RFC template structure
@gemini-code-assist

Summary of Changes

Hello @hsliuustc0106, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a structured process for Request for Comments (RFCs) within the vLLM-omni project by adding a new GitHub issue template. This template aims to streamline the proposal and feedback collection for significant architectural or design decisions, ensuring clarity and consistency. Additionally, the .gitignore file has been cleaned up by removing outdated Kubernetes-related exclusions.

Highlights

  • New RFC Issue Template: A new GitHub issue template for Request for Comments (RFCs) has been added, standardizing the process for proposing major architectural changes or design choices within the project.
  • Structured RFC Submission: The new template includes dedicated sections for 'Motivation', 'Proposed Change', 'Feedback Period', and a 'CC List', ensuring comprehensive and consistent RFC submissions.
  • vLLM-omni Specifics: The RFC template is tailored for the vLLM-omni project, incorporating relevant URLs for referencing previous RFCs and guiding users to the project's documentation and chatbot for common questions.
  • .gitignore Cleanup: Kubernetes-related entries, such as k8s/ and various .yml and .yaml exclusions, have been removed from the .gitignore file.

@hsliuustc0106 hsliuustc0106 merged commit 7a85034 into main Oct 18, 2025
1 check passed

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a new issue template for Requests for Comments (RFCs) and also modifies the .gitignore file. The RFC template is a welcome addition, though I've suggested a couple of improvements to make it more effective: clarifying the descriptions for the 'Motivation' and 'Proposed Change' fields, and removing a checkbox that seems more suited for bug reports. My main concern is with the .gitignore change, which is very broad and could lead to unintended files being committed. I've recommended a more targeted change to avoid this potential issue. In the future, it would be better to separate unrelated changes like these into different pull requests.

@Gaohan123 Gaohan123 deleted the hsliu-dev-C branch December 1, 2025 09:52
princepride pushed a commit to princepride/vllm-omni that referenced this pull request Jan 10, 2026
chickeyton pushed a commit to chickeyton/vllm-omni that referenced this pull request Mar 16, 2026
Celeste-jq pushed a commit to Celeste-jq/vllm-omni that referenced this pull request Mar 28, 2026
Sy0307 added a commit to Sy0307/vllm-omni that referenced this pull request Apr 10, 2026
P0 fixes:
  vllm-project#1: _free_scaffold_weights now shrinks storage to zero (actually
      releases VRAM). Only runs when SKIP_SCAFFOLD is also set.
      Called lazily after first prefill, not at load time.
  vllm-project#2: Sliding VAE default OFF (splice algorithm had alignment bug).
      _sliding_vae_decode now falls back to full decode until proper
      overlap-add is implemented.
  vllm-project#3: Complete per-request state reset in preprocess: now clears
      _curr_prefix_feat_cond, _last_audio_patch_gpu, _prev_audio,
      _prev_audio_len, _decode_step_count, _precomputed_stop_logits.
  vllm-project#4: compute_logits fallback forces stop (not continue) when
      _prefill_completed=True, preventing runaway generation.
  vllm-project#5: Scaffold VRAM: load_weights no longer frees immediately;
      _free_scaffold_weights called after first prefill completes,
      so scaffold is available for prefill then released.

P1 fixes:
  vllm-project#6: Log all active config flags at load time.
  vllm-project#7: Remove dead _STOP_CHECK_INTERVAL code.
  vllm-project#8: Remove broken audio_duration formula from postprocess.
  vllm-project#9/vllm-project#14: Move `from einops import rearrange` to module top level.
  vllm-project#11: Remove torch.no_grad() context from _forward_decode_graphable
       (incompatible with CUDA Graph capture).
Sy0307 added a commit to Sy0307/vllm-omni that referenced this pull request Apr 10, 2026
(same commit message as above)
