
MGMT-21336 Query fails when a model isn't provided in the request #75

Merged
openshift-merge-bot[bot] merged 1 commit into rh-ecosystem-edge:main from eranco74:MGMT-21336 on Jul 30, 2025

Conversation

eranco74 (Collaborator) commented Jul 30, 2025

Specify the default model and provider in the LSC config

This is required because, following llamastack/llama-stack#2886, llama-stack discovers the models dynamically. If we don't pass a provider and model, it falls back to the first model it finds, regardless of what we have in the llama-stack config.yaml.

Note that it also lists vertexAI models (which work with the vertexAI API)!
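
For reference, here is a minimal sketch of the added configuration as it likely appears in lightspeed-stack.template.yaml; the default_provider value is an assumption, since only the default_model string is visible in the review comments further down this page.

inference:
  default_provider: gemini  # assumption: the provider id is not shown on this page
  default_model: gemini/gemini/gemini-2.0-flash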

Summary by CodeRabbit

  • Chores
    • Updated configuration to include default model and provider settings for inference.

Signed-off-by: Eran Cohen <eranco@redhat.com>
openshift-ci bot requested review from carbonin and omertuc on July 30, 2025 09:37
openshift-ci bot commented Jul 30, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: eranco74

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

coderabbitai bot commented Jul 30, 2025

Walkthrough

A new inference section was added to the lightspeed-stack.template.yaml configuration file, introducing default_model and default_provider keys. No other changes or removals were made to the configuration.

Changes

Cohort / File(s): Inference Configuration Addition (lightspeed-stack.template.yaml)
Change Summary: Added a top-level inference section with default_model and default_provider keys.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes



coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
lightspeed-stack.template.yaml (1)

31-32: Consider quoting the model string to prevent future YAML parsing surprises

Although / is valid in unquoted scalars, quoting the value is more robust (e.g., if a future model id contains : or #).

-  default_model: gemini/gemini/gemini-2.0-flash
+  default_model: "gemini/gemini/gemini-2.0-flash"
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR, between commits 62aa01d and 94213ee.

📒 Files selected for processing (1)
  • lightspeed-stack.template.yaml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Red Hat Konflux / assisted-chat-saas-main-on-pull-request
🔇 Additional comments (1)
lightspeed-stack.template.yaml (1)

30-32: Verify key names conform to upstream Llama-Stack schema

The newly-added inference: block looks correct syntactically, but upstream has changed field names more than once (default_model, default_model_name, model, etc.). A mismatch will silently fall back to the first discovered model – exactly the bug this PR tries to avoid.

Please double-check against the current llama-stack schema (PR 2886) that the exact keys are:

inference:
  default_model: <string>
  default_provider: <string>

and not something like default_model_id or default_models.

omertuc (Member) commented Jul 30, 2025

In which situations a model is not provided in the request?

omertuc (Member) commented Jul 30, 2025

/lgtm

eranco74 (Collaborator, Author) commented:

In which situations a model is not provided in the request?

integration/stage/prod
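
To illustrate the fallback, here is a hypothetical query payload (assuming the service's query endpoint accepts optional model and provider fields alongside query, which is not shown on this page):

# No model/provider specified; with this change the service resolves them from
# inference.default_model and inference.default_provider in the LSC config,
# instead of whichever model llama-stack happens to discover first.
query: "How do I add a host to my cluster?"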

openshift-merge-bot merged commit 8adef6a into rh-ecosystem-edge:main on Jul 30, 2025
5 checks passed