Bump lightspeed-core to dev-20250806-739b786 #91
Walkthrough

This update removes temporary async patching commands from two Containerfiles, updates the llama-stack and lightspeed-stack submodule pointers, adjusts the Gemini API key environment-variable expansion in the client config, and improves the interactive model selection in scripts/query.sh.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant QueryScript
    participant ModelRegistry
    User->>QueryScript: Run query.sh
    QueryScript->>ModelRegistry: Fetch model list (JSON)
    ModelRegistry-->>QueryScript: Return models
    QueryScript->>User: Display interactive, formatted model selection (fzf)
    User->>QueryScript: Select model
    QueryScript->>ModelRegistry: Send query with selected model/provider
    ModelRegistry-->>QueryScript: Return response
    QueryScript->>User: Display response
```
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
Actionable comments posted: 2
🧹 Nitpick comments (2)
Containerfile.assisted-chat (1)

2-3: Base image bumped to an un-tagged dev digest – confirm stability guarantees

Pinning by digest is good for reproducibility, but the human-readable comment references the moving tag dev-20250806-739b786. If that dev tag is rebuilt in-place (common for dev images), the comment no longer describes what the pinned digest actually contains, and you lose the guarantee that today's build equals tomorrow's. Consider either:

```dockerfile
# lightspeed-stack commit 739b786 (immutable)
FROM quay.io/lightspeed-core/lightspeed-stack@sha256:<digest>  # ← keep
LABEL org.opencontainers.image.revision="739b786"
```

or tracking an officially versioned tag once one is cut.
scripts/query.sh (1)

37-43: Padding expression can fail for identifiers > 40 chars

`" " * (40 - length)` yields a negative repeat count when `length > 40`; in jq, multiplying a string by zero or a negative number produces null, and concatenating that null into the output raises an error. Clamp the count so it never goes below zero, and fall back to the empty string for the null case:

```
" " * ([0, 40 - length] | max) // ""
```
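If jq-side clamping feels fragile, the padding can also be done in plain shell with printf, which degrades gracefully for over-long names. A minimal sketch (the helper name pad_name and the 40-column width are illustrative, not taken from the script):

```shell
# Illustrative alternative: pad a model identifier to a 40-column field in
# plain POSIX shell. printf's %-40s left-justifies and pads short strings,
# and simply prints longer strings unmodified instead of erroring.
pad_name() {
  printf '%-40s' "$1"
}

pad_name "gemini-2.0-flash"; echo "|"
pad_name "a-very-long-model-identifier-that-exceeds-forty-chars"; echo "|"
```

This keeps the fzf columns aligned for typical identifiers while never aborting the pipeline on an unusually long one.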
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (6)
- Containerfile.add_llama_to_lightspeed (0 hunks)
- Containerfile.assisted-chat (1 hunks)
- config/llama_stack_client_config.yaml (1 hunks)
- lightspeed-stack (1 hunks)
- llama-stack (1 hunks)
- scripts/query.sh (2 hunks)

💤 Files with no reviewable changes (1)
- Containerfile.add_llama_to_lightspeed

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Red Hat Konflux / assisted-chat-saas-main-on-pull-request
🔇 Additional comments (3)

llama-stack (1)

1-1: Submodule bump requires integration tests & tag pinning

The pointer moves to an untagged commit. Please:
- Confirm that the new llama-stack SHA introduces no breaking API changes consumed by this repo.
- Prefer pinning to an official release tag rather than a floating SHA for reproducibility.
lightspeed-stack (1)

1-1: Prefer pinning the submodule to a tagged release for reproducibility

The new pointer (739b786…) targets an untagged commit on the lightspeed-stack repository. Relying on a floating development commit may cause non-deterministic builds if the commit is later amended or force-pushed. Consider updating to the nearest annotated tag (or create one) so downstream consumers can reproduce images without referencing an arbitrary SHA. Please confirm that this commit is stable and will not be rebased, or update the pointer to a tagged version.
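The reproducibility argument can be seen with plain git: an annotated tag resolves to one fixed commit, which is exactly what pinning buys you. A self-contained sketch (the tag name v0.2.17 comes from this PR; the throwaway repository is purely illustrative):

```shell
# Self-contained illustration: an annotated tag gives a stable, human-readable
# name for an exact commit, so a submodule pinned to it is reproducible.
set -e
tmp=$(mktemp -d)
git -C "$tmp" init -q
git -C "$tmp" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial"
git -C "$tmp" tag -a v0.2.17 -m "release v0.2.17"
# The tag dereferences to a single immutable commit SHA:
git -C "$tmp" rev-parse "v0.2.17^{commit}"
```

A branch tip (or a dev tag) can be force-pushed to a different commit later; the annotated tag plus the SHA it resolves to cannot silently change underneath a consumer.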
config/llama_stack_client_config.yaml (1)

19-19: Check the `${env.GEMINI_API_KEY:=}` expansion – it silently falls back to an empty key

Switching from `${env.GEMINI_API_KEY:+}` to `${env.GEMINI_API_KEY:=}` assigns an empty string when the variable is unset. The new llama-stack code will "see" a key present (albeit empty) and may either:
- Fail with a confusing "invalid key" error instead of the clearer "missing key", or
- Attempt a request with an empty key and receive 401s.

If the intention is "require that GEMINI_API_KEY is set", `${env.GEMINI_API_KEY:?}` (error on unset) or simply omitting the field until the variable exists is safer. Please verify how the provider validates this field and pick the variant that gives the best failure mode.
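The three expansion forms mirror POSIX shell parameter expansion; assuming llama-stack's `${env.VAR:...}` syntax follows the same rules (an assumption worth verifying against its docs), their failure modes can be demonstrated in plain shell:

```shell
# Illustrative only: POSIX shell analogues of the three expansion forms.
unset DEMO_KEY

# :+  -> substitutes only when set and non-empty; an unset var yields nothing
echo "plus:   [${DEMO_KEY:+present}]"

# :=  -> silently assigns a default ("" here); downstream code sees an empty key
echo "assign: [${DEMO_KEY:=}]"

# :?  -> aborts with a diagnostic when unset: the loudest, clearest failure mode
( : "${DEMO_KEY_UNSET:?must be set}" ) 2>/dev/null || echo "question: errored as expected"
```

Only the `:?` form surfaces a missing key at startup rather than deferring the failure to the first API call.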
/retest

/retest-required
- Update Containerfile.add_llama_to_lightspeed to use the latest lightspeed-stack version
- Update Containerfile.assisted-chat to use the latest lightspeed-stack version
- Remove the 7491bd3 litellm sed async patch
- Bump llama-stack submodule to 9ed580e462eb7f9dca87ab98384b9b5b091c59c8 (v0.2.17)
- Bump lightspeed-stack submodule to 739b786b77853f3ea5d8de7995c57e9a15785da2 (untagged)
- Modify `config/llama_stack_client_config.yaml` to use `${env.GEMINI_API_KEY:=}` instead of `${env.GEMINI_API_KEY:+}` for compatibility with the latest lightspeed-stack version. The original value was wrong and worked because litellm picked it up from the env var anyway, but the new llama-stack version actively checks if it has the key configured in its own config as well.
- Update `scripts/query.sh` to improve model selection with fzf, showing model names and types more clearly. (Unrelated to bump)
maorfr left a comment:

/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: maorfr, omertuc