
Conversation

@alex-solovyev alex-solovyev commented Feb 9, 2026

Summary

  • Adds resolve_model_tier() to shared-constants.sh as a shared utility for tier name -> model string resolution
  • Runner and cron helpers now accept tier names (haiku/sonnet/opus/flash/pro/grok) in the --model flag
  • New --provider flag allows overriding the provider while keeping the model ID
  • detect_ai_backends() discovers available CLI backends

Changes

shared-constants.sh

  • resolve_model_tier(): Resolves tier names to full provider/model strings. Tries fallback-chain-helper.sh first (availability-aware), falls back to a static mapping. Passes full provider/model strings through unchanged. A sketch follows below.
  • detect_ai_backends(): Returns the list of available AI CLI backends (opencode, claude)
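A minimal sketch of the two helpers, assuming a case-based static map; the fallback-chain-helper.sh subcommand and every model ID except the sonnet value shown in the examples below are placeholders, not the values actually used in this PR:

resolve_model_tier() {
  local tier
  tier="${1:-coding}"   # default taken from the snippet quoted in the review below

  # Full provider/model strings (anything containing '/') pass through unchanged.
  if [[ "$tier" == */* ]]; then
    printf '%s\n' "$tier"
    return 0
  fi

  # Prefer availability-aware resolution via fallback-chain-helper.sh when present.
  local chain_helper
  chain_helper="${BASH_SOURCE[0]%/*}/fallback-chain-helper.sh"
  if [[ -x "$chain_helper" ]]; then
    "$chain_helper" resolve "$tier" 2>/dev/null && return 0   # subcommand name is an assumption
  fi

  # Static fallback map; only the sonnet value comes from this PR's examples.
  case "$tier" in
    sonnet) printf '%s\n' "anthropic/claude-sonnet-4-20250514" ;;
    haiku|opus|flash|pro|grok) printf '%s\n' "placeholder-provider/placeholder-model-for-$tier" ;;
    *) printf 'Unknown model tier: %s\n' "$tier" >&2; return 1 ;;
  esac
}

detect_ai_backends() {
  # List whichever supported CLI backends are on PATH.
  local backend
  for backend in opencode claude; do
    command -v "$backend" >/dev/null 2>&1 && printf '%s\n' "$backend"
  done
  return 0
}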

runner-helper.sh

  • create and run commands: --model now accepts tier names
  • New --provider flag overrides the provider portion of the model string (see the sketch after this list)
  • Updated help text and usage comments
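A rough sketch of how the override could be wired in after argument parsing; the variable names are illustrative, not the ones actually used in runner-helper.sh:

# Resolve a tier name (full provider/model strings pass through unchanged).
model="$(resolve_model_tier "$model_arg")"

# If --provider was given, swap the provider portion but keep the model ID, so
# "--model sonnet --provider openrouter" becomes roughly openrouter/claude-sonnet-4-...
if [[ -n "$provider_override" ]]; then
  model="${provider_override}/${model#*/}"
fi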

cron-helper.sh

  • add command: --model now accepts tier names
  • New --provider flag for provider override (example below)
  • Updated help text
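For example, a hypothetical invocation combining a tier name with a provider override, mirroring the runner examples later in this description:

cron-helper.sh add --schedule "0 9 * * *" --task "daily review" --model sonnet --provider openrouter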

cron-dispatch.sh

  • Now sources shared-constants.sh for resolve_model_tier()
  • Model resolved from tier name before dispatch (sketched below)
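A sketch of the dispatch-side change; the sourcing path and variable names are assumptions:

# Pull in the shared helpers, then resolve whatever model was stored for the job.
source "${SCRIPT_DIR}/shared-constants.sh"
model="$(resolve_model_tier "$job_model")"
# ...dispatch the cron task with the fully resolved provider/model string.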

Examples

# Before (only full model strings)
runner-helper.sh create reviewer --model anthropic/claude-sonnet-4-20250514

# After (tier names work too)
runner-helper.sh create reviewer --model sonnet
runner-helper.sh run reviewer "review code" --model opus
runner-helper.sh run reviewer "review code" --model sonnet --provider openrouter

cron-helper.sh add --schedule "0 9 * * *" --task "daily review" --model haiku

Testing

  • bash -n: All 4 files pass syntax check
  • ShellCheck -S error: Zero violations

Fixes: t132.7 ref:GH#739

Summary by CodeRabbit

  • New Features

    • Added --provider option to allow users to override the AI provider when creating or managing tasks.
    • Model tier names now automatically resolve to full provider/model identifiers.
  • Chores

    • Enhanced internal utilities for AI backend detection and model resolution.
    • Exported shared constants for improved script interoperability.

…s (t132.7)

Add resolve_model_tier() to shared-constants.sh for tier name resolution
(haiku/sonnet/opus/flash/pro/grok -> full provider/model strings). Used by
runner-helper.sh, cron-helper.sh, and cron-dispatch.sh.

- resolve_model_tier(): tries fallback-chain-helper.sh first, then static map
- detect_ai_backends(): discovers available CLI backends (opencode, claude)
- --provider flag on runner create/run and cron add for provider override
- --model now accepts tier names in addition to full provider/model strings
- cron-dispatch.sh now sources shared-constants.sh for tier resolution

Fixes: t132.7 ref:GH#739
@alex-solovyev alex-solovyev merged commit 6f6a658 into main Feb 9, 2026
4 checks passed
@gemini-code-assist

Summary of Changes

Hello @alex-solovyev, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the flexibility and user experience for specifying AI models within the runner-helper.sh and cron-helper.sh scripts. It introduces the convenience of using concise model tier names instead of lengthy provider-specific strings and provides a dedicated mechanism to override the model provider independently. These changes streamline model selection and offer more granular control over AI backend usage, making the scripts more adaptable to various AI configurations.

Highlights

  • Model Tier Support: The --model flag in runner-helper.sh and cron-helper.sh now accepts simplified tier names (e.g., sonnet, opus) in addition to full provider/model strings, simplifying model selection.
  • Provider Override: A new --provider flag has been introduced for runner-helper.sh and cron-helper.sh to allow overriding the model's provider while retaining the model ID, offering greater flexibility.
  • Centralized Model Resolution: A new resolve_model_tier() function in shared-constants.sh centralizes the translation of tier names to full model strings, incorporating a fallback mechanism for robust resolution.
  • AI Backend Detection: A detect_ai_backends() function has been added to shared-constants.sh to identify and list available AI CLI tools like opencode and claude on the system.
Changelog
  • .agents/scripts/cron-dispatch.sh
    • Now sources shared-constants.sh to access shared utility functions.
    • Integrates resolve_model_tier() to automatically convert model tier names to full provider/model strings before dispatching cron jobs.
  • .agents/scripts/cron-helper.sh
    • The add command now supports model tier names for the --model flag.
    • Introduced a new --provider flag to allow overriding the model's provider.
    • Updated help text to reflect the new --model and --provider options.
  • .agents/scripts/runner-helper.sh
    • The create and run commands now accept model tier names for the --model flag.
    • Added a new --provider flag for both create and run commands to override the model's provider.
    • Updated usage comments and help text to document the new model tier and provider override capabilities.
  • .agents/scripts/shared-constants.sh
    • Added resolve_model_tier() function: This function translates model tier names (e.g., haiku, sonnet) into their corresponding full provider/model strings, first attempting an availability-aware resolution via fallback-chain-helper.sh and then falling back to a static mapping. It passes through already-full provider/model strings unchanged.
    • Added detect_ai_backends() function: This function identifies and lists available AI CLI backends (e.g., opencode, claude) installed on the system.
Activity
  • No specific activity (comments, reviews, or progress updates) has been recorded for this pull request yet.


coderabbitai bot commented Feb 9, 2026

Caution

Review failed

The pull request is closed.

Walkthrough

These changes extend the agent scripting framework to support model-tier name resolution and provider overrides. New functions in shared-constants.sh enable translation of tier names to full model strings, while cron-dispatch.sh, cron-helper.sh, and runner-helper.sh implement --provider options for customizing AI model providers.

Changes

  • Shared Utilities (shared-constants.sh): Added resolve_model_tier() to translate tier names into full provider/model strings and detect_ai_backends() to identify available AI tools. Exported an extensive set of constants (HTTP codes, error messages, timeouts, colors) for public use.
  • Cron Automation (cron-dispatch.sh, cron-helper.sh): Integrated model-tier resolution via resolve_model_tier() and added a --provider option to override the model provider. cron-dispatch.sh sources shared-constants.sh for utility access; cron-helper.sh normalizes model strings post-parsing when a provider is specified.
  • Runner Helper (runner-helper.sh): Extended the create and run commands with a --provider option and model-tier resolution. Implements provider override logic after resolving model tiers, maintaining backward compatibility for existing workflows.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes


Poem

🔄 Model tiers resolved, providers set free,
Shells now speak in harmony!
From shared constants to cron's delight,
Multi-cloud agents, configured just right.


alex-solovyev added a commit that referenced this pull request Feb 9, 2026

github-actions bot commented Feb 9, 2026

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 59 code smells

[INFO] Recent monitoring activity:
Mon Feb 9 17:26:43 UTC 2026: Code review monitoring started
Mon Feb 9 17:26:44 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 59

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 59
  • VULNERABILITIES: 0

Generated on: Mon Feb 9 17:26:46 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring


sonarqubecloud bot commented Feb 9, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a valuable feature for flexible AI model selection by adding support for model tiers and provider overrides. The implementation is well-structured across the affected scripts. My review focuses on a couple of areas for improvement:

  • Style Guide Adherence: There are a couple of minor violations of the repository's shell script style guide regarding variable declaration and assignment in shared-constants.sh.

Addressing these points will make the new code even more robust and easier to maintain. Overall, this is a great addition to the framework.

# Returns the resolved model string on stdout.
#######################################
resolve_model_tier() {
local tier="${1:-coding}"


medium

The repository style guide (line 11) requires declaring and assigning local variables on separate lines for safety under set -e. This line combines both operations.

Suggested change
-local tier="${1:-coding}"
+local tier
+tier="${1:-coding}"
References
  1. Style guide rule 11 states to use local var="$1" pattern in functions, declaring and assigning separately for exit code safety. (link)
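For context on why the guide wants the two statements separated: when local and a command substitution share a line, the failing command's exit status is masked by local's own successful return, so set -e never trips (this is what ShellCheck flags as SC2155). The flagged lines here only use parameter expansion, so the concern is mainly consistency, but the general hazard looks like this contrived example:

set -e
broken() {
  local out="$(false)"   # $? is 0 here: local succeeded, so set -e does not abort
  echo "still running"
}
fixed() {
  local out
  out="$(false)"         # $? is 1 here, so set -e aborts as intended
  echo "never reached"
}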

fi

# Try fallback-chain-helper.sh for availability-aware resolution
local chain_helper="${BASH_SOURCE[0]%/*}/fallback-chain-helper.sh"


medium

To adhere to the repository style guide (line 11), please separate the declaration and assignment of this local variable.

Suggested change
-local chain_helper="${BASH_SOURCE[0]%/*}/fallback-chain-helper.sh"
+local chain_helper
+chain_helper="${BASH_SOURCE[0]%/*}/fallback-chain-helper.sh"
References
  1. Style guide rule 11 states to use local var="$1" pattern in functions, declaring and assigning separately for exit code safety. (link)

alex-solovyev added a commit that referenced this pull request Feb 9, 2026
All 8 subtasks of t132 (Cross-Provider Model Routing) are now complete:
- t132.1: Model-specific subagents (PR #758)
- t132.2: Provider/model registry (PR #761)
- t132.3: Model availability checker (PR #770)
- t132.4: Fallback chain config (PR #781)
- t132.5: Supervisor model resolution (PR #787)
- t132.6: Quality gate with escalation (PR #788)
- t132.7: Multi-provider runner/cron support (PR #789)
- t132.8: Cross-model review workflow (PR #791)

Also fixed stale git conflict markers in TODO.md.