
Conversation

omertuc (Member) commented Aug 6, 2025

  • Update Containerfile.add_llama_to_lightspeed to use the latest lightspeed-stack version

  • Update Containerfile.assisted-chat to use the latest lightspeed-stack version

  • Remove the 7491bd3 litellm sed async patch

  • Bump llama-stack submodule to 9ed580e462eb7f9dca87ab98384b9b5b091c59c8 (v0.2.17)

  • Bump lightspeed-stack submodule to 739b786b77853f3ea5d8de7995c57e9a15785da2 (untagged)

  • Modify config/llama_stack_client_config.yaml to use ${env.GEMINI_API_KEY:=} instead of ${env.GEMINI_API_KEY:+} for compatibility with the latest lightspeed-stack version. The original value was incorrect but worked anyway because litellm read the key from the environment variable; the new llama-stack version also actively checks that the key is configured in its own config.

  • Update scripts/query.sh to improve model selection with fzf, showing model names and types more clearly. (Unrelated to bump)
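The ${env.VAR:...} substitution syntax in the llama-stack config mirrors shell parameter expansion, so the practical difference between :+ and := can be sketched with plain bash (this is an illustration of the operators' semantics, not the actual llama-stack implementation):

```shell
#!/usr/bin/env bash
# Sketch of the two expansion operators using plain shell variables;
# llama-stack's ${env.VAR:...} config syntax follows the same semantics.

unset GEMINI_API_KEY

# ":+" substitutes the alternative only when the variable is set and
# non-empty, so an unset key expands to nothing at all:
echo "unset, :+ -> '${GEMINI_API_KEY:+<key present>}'"     # -> ''

# ":=" assigns the default (here: empty) when the variable is unset,
# so the config field exists but holds an empty value:
echo "unset, := -> '${GEMINI_API_KEY:=}'"                  # -> ''

# After ":=" the variable is now set (to the empty default):
echo "variable is now set: ${GEMINI_API_KEY+yes}"          # -> 'yes'
```

With :+ an unset key makes the field disappear entirely, while := leaves an empty-but-present field, which is what the new llama-stack config check expects to find.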

Summary by CodeRabbit

  • New Features

    • Enhanced the interactive model selection interface in the query script, providing clearer model names, types, and provider information for easier selection.
  • Chores

    • Updated base images and subproject references to newer versions for improved stability and compatibility.
    • Adjusted configuration for environment variable handling in Gemini provider settings.
    • Removed temporary compatibility patches related to asynchronous calls in container setup files.

@openshift-ci openshift-ci bot requested review from carbonin and jhernand August 6, 2025 11:59
coderabbitai bot commented Aug 6, 2025

Walkthrough

This update removes temporary async patching commands from two Containerfiles, updates subproject references for lightspeed-stack and llama-stack, adjusts an environment variable expansion in a YAML config, and enhances the interactive model selection logic in a shell script for improved user experience.

Changes

  • Containerfile Async Patch Removal (Containerfile.add_llama_to_lightspeed, Containerfile.assisted-chat): removed the sed commands that patched litellm_openai_mixin.py for async compatibility; updated the base image in Containerfile.assisted-chat.
  • Subproject Reference Updates (lightspeed-stack, llama-stack): updated subproject commit references to newer versions for both submodules.
  • Config Environment Variable Syntax (config/llama_stack_client_config.yaml): changed the Gemini API key environment variable expansion from ${env.GEMINI_API_KEY:+} to ${env.GEMINI_API_KEY:=}.
  • Script Model Selection Enhancement (scripts/query.sh): improved the interactive model selection display, filtering, and variable naming for model/provider selection.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant QueryScript
    participant ModelRegistry

    User->>QueryScript: Run query.sh
    QueryScript->>ModelRegistry: Fetch model list (JSON)
    ModelRegistry-->>QueryScript: Return models
    QueryScript->>User: Display interactive, formatted model selection (fzf)
    User->>QueryScript: Select model
    QueryScript->>ModelRegistry: Send query with selected model/provider
    ModelRegistry-->>QueryScript: Return response
    QueryScript->>User: Display response

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested labels

lgtm

Suggested reviewers

  • omertuc
  • maorfr



📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 61e80bf and 4caa50c.

📒 Files selected for processing (6)
  • Containerfile.add_llama_to_lightspeed (0 hunks)
  • Containerfile.assisted-chat (1 hunks)
  • config/llama_stack_client_config.yaml (1 hunks)
  • lightspeed-stack (1 hunks)
  • llama-stack (1 hunks)
  • scripts/query.sh (2 hunks)
💤 Files with no reviewable changes (1)
  • Containerfile.add_llama_to_lightspeed
✅ Files skipped from review due to trivial changes (2)
  • config/llama_stack_client_config.yaml
  • Containerfile.assisted-chat
🚧 Files skipped from review as they are similar to previous changes (3)
  • lightspeed-stack
  • llama-stack
  • scripts/query.sh
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Red Hat Konflux / assisted-chat-saas-main-on-pull-request

coderabbitai bot left a comment
Actionable comments posted: 2

🧹 Nitpick comments (2)
Containerfile.assisted-chat (1)

2-3: Base image bumped to an un-tagged dev digest – confirm stability guarantees

Pinning by digest is good for reproducibility, but the human-readable comment references the moving tag dev-20250806-739b786.
If this digest is rebuilt in-place (common for dev images) you lose the guarantee that today’s build equals tomorrow’s.
Consider either:

# lightspeed-stack commit 739b786 (immutable)
FROM quay.io/lightspeed-core/lightspeed-stack@sha256:<digest>   # ← keep
LABEL org.opencontainers.image.revision="739b786"

or tracking an officially versioned tag once one is cut.

scripts/query.sh (1)

37-43: Padding expression can fail for identifiers > 40 chars

" " * (40 - length) yields a negative repeat count when length > 40, raising a jq error.
Guard with max(0; 40-length):

" " * ( (40 - (length)) | max(0) )
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ad89872 and 61e80bf.

📒 Files selected for processing (6)
  • Containerfile.add_llama_to_lightspeed (0 hunks)
  • Containerfile.assisted-chat (1 hunks)
  • config/llama_stack_client_config.yaml (1 hunks)
  • lightspeed-stack (1 hunks)
  • llama-stack (1 hunks)
  • scripts/query.sh (2 hunks)
💤 Files with no reviewable changes (1)
  • Containerfile.add_llama_to_lightspeed
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Red Hat Konflux / assisted-chat-saas-main-on-pull-request
🔇 Additional comments (3)
llama-stack (1)

1-1: Submodule bump requires integration tests & tag pinning
The pointer moves to an untagged commit. Please:

  1. Confirm that the new llama-stack SHA introduces no breaking API changes consumed by this repo.
  2. Prefer pinning to an official release tag rather than a floating SHA for reproducibility.
lightspeed-stack (1)

1-1: Prefer pinning the submodule to a tagged release for reproducibility

The new pointer (739b786…) targets an untagged commit on the lightspeed-stack repository. Relying on a floating development commit may cause non-deterministic builds if the commit is later amended or force-pushed.
Consider updating to the nearest annotated tag (or create one) so downstream consumers can reproduce images without referencing an arbitrary SHA.

Please confirm that this commit is stable and will not be rebased, or update the pointer to a tagged version.

config/llama_stack_client_config.yaml (1)

19-19: Check the ${env.GEMINI_API_KEY:=} expansion – it silently falls back to an empty key

Switching from ${env.GEMINI_API_KEY:+} to ${env.GEMINI_API_KEY:=} assigns an empty string when the variable is unset.
The new llama-stack code will “see” a key present (albeit empty) and may either:

  1. Fail with a confusing “invalid key” error instead of the clearer “missing key”, or
  2. Attempt a request with an empty key and receive 401s.

If the intention is “require that GEMINI_API_KEY is set”, ${env.GEMINI_API_KEY:?} (error) or simply omitting the field until the variable exists is safer.

Please verify how the provider validates this field and pick the variant that gives the best failure mode.
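If a hard failure on a missing key is the desired behavior, the :? variant behaves like its shell counterpart, aborting the expansion with a message instead of silently producing an empty key. A plain-shell sketch of the semantics (illustrative only; how llama-stack surfaces the error is up to its own config loader):

```shell
#!/usr/bin/env bash
# Sketch of the ":?" operator: expanding an unset (or empty) variable
# aborts with the given message rather than yielding an empty value.

unset GEMINI_API_KEY

if ( : "${GEMINI_API_KEY:?GEMINI_API_KEY must be set}" ) 2>/dev/null; then
  echo "key present"
else
  echo "expansion aborted: missing key"
fi
```

This gives the clearer "missing key" failure mode at startup instead of a confusing 401 at request time.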

omertuc (Member, Author) commented Aug 6, 2025

/retest

omertuc (Member, Author) commented Aug 6, 2025

/retest-required

maorfr (Collaborator) left a comment

/lgtm


openshift-ci bot commented Aug 6, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: maorfr, omertuc

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

