chore: update stale model references to current Anthropic lineup #2118

marcusquinn merged 1 commit into `main`
Conversation
Update all Claude model references to match current models:

- Opus 4.6 (claude-opus-4-6)
- Sonnet 4.6 (claude-sonnet-4-6)
- Haiku 4.5 (claude-haiku-4-5)

Replaces stale claude-3-5-sonnet, claude-3-5-haiku, and date-suffixed model IDs across 12 files (docs, scripts, configs). Backward-compatible normalization patterns retained in model-label-helper.sh.
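The backward-compatible normalization kept in model-label-helper.sh is not shown in this thread; as a hedged sketch, matching current 4.x IDs first while still resolving legacy 3.x and date-suffixed IDs might look like this (the function name, labels, and patterns here are illustrative, not the real helper's code):

```shell
#!/usr/bin/env bash
# Hypothetical label normalization: 4.x patterns matched first, 3.x patterns
# kept so historical data still resolves. Not the actual model-label-helper.sh.
normalize_model_label() {
    case "$1" in
        claude-opus-4-6*)   echo "Opus 4.6" ;;
        claude-sonnet-4-6*) echo "Sonnet 4.6" ;;
        claude-haiku-4-5*)  echo "Haiku 4.5" ;;
        # Legacy 3.x patterns retained for backward compatibility.
        claude-3-5-sonnet*) echo "Sonnet 3.5 (legacy)" ;;
        claude-3-5-haiku*)  echo "Haiku 3.5 (legacy)" ;;
        *)                  echo "unknown" ;;
    esac
}

normalize_model_label "claude-3-5-haiku-20241022"  # prints: Haiku 3.5 (legacy)
```

Date-suffixed IDs fall through to the same glob patterns, which is why a prefix match rather than an exact match keeps historical records resolvable.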
Walkthrough

This PR updates Anthropic Claude model identifiers across helper scripts, configuration files, and documentation. Changes primarily involve swapping legacy model IDs (claude-3-5-haiku, claude-3-5-sonnet, claude-opus-4) for their newer counterparts (claude-haiku-4-5, claude-sonnet-4-6, claude-opus-4-6), with one script receiving whitespace normalization.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~18 minutes
🚥 Pre-merge checks: ✅ 3 passed
Summary of Changes (Gemini Code Assist)

This pull request systematically updates all references to Anthropic's Claude models throughout the codebase and documentation. The primary goal is to align the system with the current Anthropic model lineup, specifically incorporating Opus 4.6, Sonnet 4.6, and Haiku 4.5. This ensures that examples, configurations, and internal logic reflect the most up-to-date and performant models available, while also maintaining compatibility with historical data and older model identifiers where necessary.
🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report
Generated on: Sun Feb 22 00:18:46 UTC 2026 by AI DevOps Framework Code Review Monitoring
Code Review
This pull request correctly updates stale Anthropic model references across various documentation and script files, aligning them with the current lineup. The changes are consistent and well-contained. I've identified one shell script style guide violation: a wrapper function should be refined to ensure proper exit-code propagation.
```sh
    echo "$url" | sed -E 's|^https?://||' | sed -E 's|/.*||' | sed -E 's|:.*||'
}
```
This function is missing an explicit return statement. All functions in shell scripts must have explicit return statements to ensure predictable behavior and exit-code propagation. Additionally, it's crucial for functions that wrap commands to propagate the command's exit code to the caller, rather than hardcoding `return 0`. This prevents masking potential errors.

References:

- Repository Style Guide: All functions must have explicit `return` statements.
- Rule: In shell scripts, functions that wrap a command should propagate its exit code to the caller. Avoid masking potential errors with a hardcoded `return 0`. Instead, use `return $?` or capture the exit code and return it (e.g., `local rc=$?; return $rc`).
```diff
     echo "$url" | sed -E 's|^https?://||' | sed -E 's|/.*||' | sed -E 's|:.*||'
+    return $?
 }
```
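The propagation rule above can be sketched in a small self-contained example (the function name and test URL are illustrative, not from the repository):

```shell
#!/usr/bin/env bash
# Illustrative helper: strip scheme, path, and port from a URL, and propagate
# the pipeline's exit code explicitly instead of falling through implicitly.
extract_host() {
    local url="$1"
    echo "$url" | sed -E 's|^https?://||' | sed -E 's|/.*||' | sed -E 's|:.*||'
    return $?  # propagate the wrapped pipeline's exit code to the caller
}

extract_host "https://example.com:8080/path"  # prints: example.com
```

The explicit `return $?` makes the function's contract visible: a caller checking `$?` sees the pipeline's status, not whatever an implicit fall-through happened to leave behind.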
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
.agents/scripts/eeat-score-helper.sh (1)
348-352: ⚠️ Potential issue | 🟠 Major

Per-request `timeout=<int>` requires `ClientTimeout` in aiohttp 3.13.2.

The session-level timeout on line 549 correctly uses `aiohttp.ClientTimeout(total=120)`, but per-request calls on lines 351, 406, and 443 pass raw integers (`timeout=30` and `timeout=60`). In aiohttp 3.x, the `timeout` parameter must be a `ClientTimeout` instance; newer code paths reject plain numbers with a type error.

Fix required:

```diff
-            async with self.session.get(url, timeout=30) as response:
+            async with self.session.get(url, timeout=aiohttp.ClientTimeout(total=30)) as response:
```

```diff
             async with self.session.post(
                 "https://api.openai.com/v1/chat/completions",
                 headers=headers,
                 json=payload,
-                timeout=60
+                timeout=aiohttp.ClientTimeout(total=60)
             ) as response:
```

```diff
             async with self.session.post(
                 "https://api.anthropic.com/v1/messages",
                 headers=headers,
                 json=payload,
-                timeout=60
+                timeout=aiohttp.ClientTimeout(total=60)
             ) as response:
```

Also applies to: 406, 443
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/eeat-score-helper.sh around lines 348-352: Per-request timeouts are being passed as raw integers (e.g., timeout=30 / timeout=60), which aiohttp 3.x requires to be aiohttp.ClientTimeout instances; update the calls (including fetch_page_content and the other methods that call self.session.get with timeout) to pass timeout=aiohttp.ClientTimeout(total=...) or reuse a shared ClientTimeout instance (e.g., aiohttp.ClientTimeout(total=30) for the 30s calls, ClientTimeout(total=60) for 60s calls), and ensure aiohttp.ClientTimeout is imported or referenced via aiohttp.ClientTimeout; keep the rest of the request code unchanged.

.agents/tools/context/model-routing.md (1)
30-34: ⚠️ Potential issue | 🟡 Minor

`sonnet` and `opus` primary models still reference the May 2025 releases across all three routing tables.

The same gap appears in the Model Tiers table (lines 32-34), the Model-Specific Subagents table (lines 113-115), and the Fallback Routing table (lines 153-155). Agents reading this file would select `claude-sonnet-4`/`claude-opus-4` rather than the current `claude-sonnet-4-6`/`claude-opus-4-6` targeted by the PR. This is also inconsistent with the `model-availability-helper.sh` non-OpenCode path already updated in this PR to use the correct IDs. The same fix applies to lines 113, 115, 153, and 155.

🔧 Proposed fix: Model Tiers table (apply identically to subagent + fallback tables)

```diff
-| `sonnet` | claude-sonnet-4 | Medium | Code implementation, review, most development tasks |
+| `sonnet` | claude-sonnet-4-6 | Medium | Code implementation, review, most development tasks |
 | `pro` | gemini-2.5-pro | Medium-High | Large codebase analysis, complex reasoning with big context |
-| `opus` | claude-opus-4 | Highest | Architecture decisions, complex multi-step reasoning, novel problems |
+| `opus` | claude-opus-4-6 | Highest | Architecture decisions, complex multi-step reasoning, novel problems |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/tools/context/model-routing.md around lines 30-34: Update the model IDs for the `sonnet` and `opus` entries: replace `claude-sonnet-4` with `claude-sonnet-4-6` and `claude-opus-4` with `claude-opus-4-6` wherever they appear in the document (specifically in the Model Tiers table rows for `sonnet`/`opus`, the Model-Specific Subagents table entries for `sonnet`/`opus`, and the Fallback Routing table entries for `sonnet`/`opus`) so the tables match the non-OpenCode path in model-availability-helper.sh and the PR target models.

.agents/tools/ai-assistants/models/README.md (1)
9-13: ⚠️ Potential issue | 🟡 Minor

Sonnet and Opus primary models not updated to the current lineup.

The haiku tier was correctly updated to `claude-haiku-4-5`, but the `sonnet` and `opus` rows still reference `claude-sonnet-4` and `claude-opus-4` (the May 2025 releases). The PR objective explicitly targets `claude-sonnet-4-6` and `claude-opus-4-6` as the current lineup. This creates an internal inconsistency: the `fallback-chain:` example at line 52 already uses `claude-sonnet-4-6`, and `model-availability-helper.sh` (also in this PR) correctly routes to `claude-sonnet-4-6`/`claude-opus-4-6` in the non-OpenCode path.

🔧 Proposed fix to align sonnet and opus rows

```diff
-| `sonnet` | `models/sonnet.md` | claude-sonnet-4 | gpt-4.1 |
+| `sonnet` | `models/sonnet.md` | claude-sonnet-4-6 | gpt-4.1 |
 | `pro` | `models/pro.md` | gemini-2.5-pro | claude-sonnet-4 |
-| `opus` | `models/opus.md` | claude-opus-4 | o3 |
+| `opus` | `models/opus.md` | claude-opus-4-6 | o3 |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/tools/ai-assistants/models/README.md around lines 9-13: Update the table entries for the `sonnet` and `opus` tiers in the README so their primary model columns use the current names `claude-sonnet-4-6` and `claude-opus-4-6` (replace `claude-sonnet-4` and `claude-opus-4`), ensuring consistency with the `fallback-chain` example and the logic in `model-availability-helper.sh`; modify the rows referencing `sonnet` (`models/sonnet.md`) and `opus` (`models/opus.md`) accordingly.

.agents/scripts/model-registry-helper.sh (1)
1248-1267: ⚠️ Potential issue | 🟡 Minor

`cmd_route` defaults for `sonnet` and `opus` not updated; inconsistent with `model-availability-helper.sh`.

The haiku default was correctly updated to `claude-haiku-4-5`, but `sonnet` and `opus` still fall back to the May 2025 `claude-sonnet-4` and `claude-opus-4` when the registry is empty. `model-availability-helper.sh` (updated in this same PR) correctly routes to `claude-sonnet-4-6`/`claude-opus-4-6` in the non-OpenCode path, making `cmd_route` output inconsistent with actual dispatch behavior on a fresh install.

🔧 Proposed fix to align cmd_route defaults

```diff
     sonnet)
-        primary_model="${primary_model:-claude-sonnet-4}"
+        primary_model="${primary_model:-claude-sonnet-4-6}"
         fallback_model="${fallback_model:-gpt-4.1}"
         ;;
     pro)
         primary_model="${primary_model:-gemini-2.5-pro}"
-        fallback_model="${fallback_model:-claude-sonnet-4}"
+        fallback_model="${fallback_model:-claude-sonnet-4-6}"
         ;;
     opus)
-        primary_model="${primary_model:-claude-opus-4}"
+        primary_model="${primary_model:-claude-opus-4-6}"
         fallback_model="${fallback_model:-o3}"
         ;;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/model-registry-helper.sh around lines 1248 - 1267, The cmd_route defaults for the "sonnet" and "opus" cases are out of date; update the primary_model assignments in those case blocks to match model-availability-helper.sh (set primary_model to "claude-sonnet-4-6" for the sonnet case and "claude-opus-4-6" for the opus case), leaving fallback_model values as-is so cmd_route output aligns with actual dispatch behavior on fresh installs; locate the "sonnet" and "opus" case labels and change the primary_model variable assignments accordingly.
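The `${var:-default}` tier-routing pattern discussed above can be sketched in isolation. This is a hedged, standalone illustration: the function name and the haiku fallback are assumptions; only the sonnet/pro/opus values are quoted from the review.

```shell
#!/usr/bin/env bash
# Standalone sketch of tier-to-model routing with ${var:-default} fallbacks.
# route_tier and the haiku fallback value are illustrative, not the real helper.
route_tier() {
    local tier="$1" primary_model="" fallback_model=""
    case "$tier" in
        haiku)
            primary_model="${primary_model:-claude-haiku-4-5}"
            fallback_model="${fallback_model:-gpt-4.1}"   # assumed fallback
            ;;
        sonnet)
            primary_model="${primary_model:-claude-sonnet-4-6}"
            fallback_model="${fallback_model:-gpt-4.1}"
            ;;
        pro)
            primary_model="${primary_model:-gemini-2.5-pro}"
            fallback_model="${fallback_model:-claude-sonnet-4-6}"
            ;;
        opus)
            primary_model="${primary_model:-claude-opus-4-6}"
            fallback_model="${fallback_model:-o3}"
            ;;
        *)
            return 1
            ;;
    esac
    echo "$primary_model -> $fallback_model"
}

route_tier opus  # prints: claude-opus-4-6 -> o3
```

Because `${var:-default}` only fills in when the variable is empty, registry-provided values always win; the literals are purely the fresh-install defaults, which is exactly why stale literals here diverge from actual dispatch behavior.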
🧹 Nitpick comments (2)
.agents/scripts/eeat-score-helper.sh (2)
438-450: `_call_anthropic` silently discards non-200 responses; add the same error logging as `_call_openai`.

`_call_openai` logs the response body on non-200 status. `_call_anthropic` has no `else` branch, so 401 (bad key), 429 (rate limit), and 529 (overload) all return `""` with no trace, making Anthropic API failures nearly impossible to diagnose in production.

♻️ Proposed fix

```diff
                 if response.status == 200:
                     data = await response.json()
                     return data["content"][0]["text"].strip()
+                else:
+                    error = await response.text()
+                    print(f"Anthropic API error {response.status}: {error}")
         except Exception as e:
             print(f"Anthropic API call failed: {e}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/eeat-score-helper.sh around lines 438 - 450, The _call_anthropic function currently swallows non-200 responses; update it to mirror _call_openai by adding an else branch after checking response.status that reads and logs the response body and status (and any useful headers) when status != 200, so 401/429/529 responses are recorded; also include the response text in the exception log inside the except block for better diagnostics. Target the async function named _call_anthropic and use the same logging pattern and fields used by _call_openai so failures are visible in production logs.
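The "log non-200 instead of silently returning empty" pattern can be checked independently of aiohttp. In this sketch, `FakeResponse` is a synchronous stand-in for an aiohttp response; the real helper code is async and talks to the Anthropic API.

```python
# Sketch of the error-surfacing pattern the nitpick asks for.
# FakeResponse and call_model are illustrative stand-ins, not the real code.
class FakeResponse:
    def __init__(self, status: int, body: str):
        self.status = status
        self._body = body

    def json(self):
        # Mimics the Anthropic messages response shape used in the script.
        return {"content": [{"text": self._body}]}

    def text(self):
        return self._body


def call_model(response: FakeResponse) -> str:
    if response.status == 200:
        return response.json()["content"][0]["text"].strip()
    # The missing else branch: surface the failure instead of returning ""
    # with no trace, so 401/429/529 responses leave evidence in the logs.
    print(f"Anthropic API error {response.status}: {response.text()}")
    return ""


print(call_model(FakeResponse(200, " fine ")))  # prints: fine
call_model(FakeResponse(429, "rate limited"))   # logs the status and body
```

The key property is that the failure path still returns `""` (callers are unchanged) but now writes the status and body somewhere a human can find them.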
852-869: Temp script is not cleaned up on Python failure; add a `trap`.

With `set -e` active, if the `python3` invocation exits non-zero the script aborts immediately and the `rm -f "$analyzer_script"` on line 869 is skipped. The same pattern repeats in `do_score` (lines 915-925) and `do_batch` (lines 987-997).

♻️ Proposed fix (same pattern for do_score / do_batch)

```diff
 local analyzer_script="/tmp/eeat_analyzer_$$.py"
 generate_analyzer_script >"$analyzer_script"
+trap 'rm -f "$analyzer_script"' RETURN
 python3 "$analyzer_script" "${urls[@]}" \
     ...
-rm -f "$analyzer_script"
```

Using `trap ... RETURN` scopes cleanup to the function and fires on both normal return and error exit, keeping the rest of the logic untouched. As per coding guidelines, automation scripts should include error recovery mechanisms.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/eeat-score-helper.sh around lines 852 - 869, After generating the temp analyzer script (`analyzer_script="/tmp/eeat_analyzer_$$.py"`), add a trap to ensure the temp file is removed on both normal return and errors (e.g. trap 'rm -f "$analyzer_script"' RETURN) before invoking python3, and remove the trap after execution (trap - RETURN); apply the same pattern to the equivalent temp-file flows in do_score and do_batch so the temp scripts are always cleaned up even when python exits non-zero.
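The trap-RETURN pattern is easy to verify in isolation. This sketch uses an illustrative function and a deliberately failing command (`false` stands in for a `python3` invocation that exits non-zero):

```shell
#!/usr/bin/env bash
# Demonstrates trap ... RETURN cleanup: the temp file is removed even when the
# wrapped command fails. run_step is illustrative, not the real helper.
run_step() {
    local tmp
    tmp="$(mktemp)"
    trap 'rm -f "$tmp"' RETURN   # fires on normal return and on error paths
    echo "$tmp"                  # report the path so the caller can check it
    false                        # simulate python3 exiting non-zero
}

tmp_path="$(run_step)" || true
if [ -e "$tmp_path" ]; then echo "leaked"; else echo "cleaned"; fi  # prints: cleaned
```

Because the trap is attached to the function's return rather than to a specific line, reordering or adding commands inside the function cannot reintroduce the leak.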
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.agents/scripts/eeat-score-helper.sh:
- Around line 813-822: Remove the fallback "|| echo \"\"" from the jq pipeline
so mapfile -t receives no input on jq failure (letting urls remain empty), or
alternatively post-process the urls array returned by mapfile (in the same block
around the jq/mapfile usage) to remove empty-string entries before the
empty-array check; update the block handling JSON input where mapfile and jq are
used (reference: the mapfile -t urls < <(jq -r '...') line and the subsequent if
[[ ${#urls[@]} -eq 0 ]] guard) so malformed jq output no longer yields urls=("")
and the "No valid URLs found" error triggers correctly.
---
Outside diff comments:
In @.agents/scripts/eeat-score-helper.sh:
- Around line 348-352: Per-request timeouts are being passed as raw integers
(e.g., timeout=30 / timeout=60) which aiohttp 3.x requires to be
aiohttp.ClientTimeout instances; update the calls (including fetch_page_content
and the other methods that call self.session.get with timeout) to pass
timeout=aiohttp.ClientTimeout(total=...) or reuse a shared ClientTimeout
instance (e.g., aiohttp.ClientTimeout(total=30) for the 30s calls,
ClientTimeout(total=60) for 60s calls), and ensure aiohttp.ClientTimeout is
imported or referenced via aiohttp.ClientTimeout; keep the rest of the request
code unchanged.
In @.agents/scripts/model-registry-helper.sh:
- Around line 1248-1267: The cmd_route defaults for the "sonnet" and "opus"
cases are out of date; update the primary_model assignments in those case blocks
to match model-availability-helper.sh (set primary_model to "claude-sonnet-4-6"
for the sonnet case and "claude-opus-4-6" for the opus case), leaving
fallback_model values as-is so cmd_route output aligns with actual dispatch
behavior on fresh installs; locate the "sonnet" and "opus" case labels and
change the primary_model variable assignments accordingly.
In @.agents/tools/ai-assistants/models/README.md:
- Around line 9-13: Update the table entries for the `sonnet` and `opus` tiers
in the README so their primary model columns use the current names
`claude-sonnet-4-6` and `claude-opus-4-6` (replace `claude-sonnet-4` and
`claude-opus-4`), ensuring consistency with the `fallback-chain` example and the
logic in `model-availability-helper.sh`; modify the rows referencing `sonnet`
(`models/sonnet.md`) and `opus` (`models/opus.md`) accordingly.
In @.agents/tools/context/model-routing.md:
- Around line 30-34: Update the model IDs for the `sonnet` and `opus` entries:
replace `claude-sonnet-4` with `claude-sonnet-4-6` and `claude-opus-4` with
`claude-opus-4-6` wherever they appear in the document (specifically in the
Model Tiers table rows for `sonnet`/`opus`, the Model-Specific Subagents table
entries for `sonnet`/`opus`, and the Fallback Routing table entries for
`sonnet`/`opus`) so the tables match the non-OpenCode path in
model-availability-helper.sh and the PR target models.
---
Nitpick comments:
In @.agents/scripts/eeat-score-helper.sh:
- Around line 438-450: The _call_anthropic function currently swallows non-200
responses; update it to mirror _call_openai by adding an else branch after
checking response.status that reads and logs the response body and status (and
any useful headers) when status != 200, so 401/429/529 responses are recorded;
also include the response text in the exception log inside the except block for
better diagnostics. Target the async function named _call_anthropic and use the
same logging pattern and fields used by _call_openai so failures are visible in
production logs.
- Around line 852-869: After generating the temp analyzer script
(`analyzer_script="/tmp/eeat_analyzer_$$.py"`), add a trap to ensure the temp
file is removed on both normal return and errors (e.g. trap 'rm -f
"$analyzer_script"' RETURN) before invoking python3, and remove the trap after
execution (trap - RETURN); apply the same pattern to the equivalent temp-file
flows in do_score and do_batch so the temp scripts are always cleaned up even
when python exits non-zero.
```sh
        mapfile -t urls < <(jq -r '.[] | select(.status_code == 200) | .url' "$input_file" 2>/dev/null || echo "")
    elif [[ "$input_file" == *.csv ]]; then
        # CSV format - extract URLs from first column where status is 200
        mapfile -t urls < <(tail -n +2 "$input_file" | awk -F',' '$2 == "200" || $2 == 200 {gsub(/"/, "", $1); print $1}')
    fi

    if [[ ${#urls[@]} -eq 0 ]]; then
        print_error "No valid URLs found in input file"
        return 1
    fi
```
`|| echo ""` defeats the empty-URL guard and produces silent garbage output.

When jq fails (malformed input, wrong schema, etc.), `echo ""` emits a newline that `mapfile -t` captures as a single element: `urls=("")`. This makes `${#urls[@]}` equal to 1, not 0, so the guard on line 819 never fires. The Python script is then invoked with an empty-string URL, aiohttp throws an exception (caught silently), all seven E-E-A-T scores default to 5, and a "perfect C-grade" report is written to disk with no indication that anything went wrong.
🐛 Proposed fix

```diff
-        mapfile -t urls < <(jq -r '.[] | select(.status_code == 200) | .url' "$input_file" 2>/dev/null || echo "")
+        if ! mapfile -t urls < <(jq -r '.[] | select(.status_code == 200) | .url' "$input_file" 2>/dev/null); then
+            print_error "Failed to parse JSON input: $input_file"
+            return 1
+        fi
```

Alternatively, keep the one-liner but strip empty elements after the fact:

```diff
-        mapfile -t urls < <(jq -r '.[] | select(.status_code == 200) | .url' "$input_file" 2>/dev/null || echo "")
+        mapfile -t urls < <(jq -r '.[] | select(.status_code == 200) | .url' "$input_file" 2>/dev/null)
```

Removing `|| echo ""` lets a jq failure produce an empty array, which the existing guard on line 819 already handles correctly. As per coding guidelines, automation scripts must prioritise reliability and clear error feedback.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```sh
        mapfile -t urls < <(jq -r '.[] | select(.status_code == 200) | .url' "$input_file" 2>/dev/null)
    elif [[ "$input_file" == *.csv ]]; then
        # CSV format - extract URLs from first column where status is 200
        mapfile -t urls < <(tail -n +2 "$input_file" | awk -F',' '$2 == "200" || $2 == 200 {gsub(/"/, "", $1); print $1}')
    fi

    if [[ ${#urls[@]} -eq 0 ]]; then
        print_error "No valid URLs found in input file"
        return 1
    fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.agents/scripts/eeat-score-helper.sh around lines 813 - 822, Remove the
fallback "|| echo \"\"" from the jq pipeline so mapfile -t receives no input on
jq failure (letting urls remain empty), or alternatively post-process the urls
array returned by mapfile (in the same block around the jq/mapfile usage) to
remove empty-string entries before the empty-array check; update the block
handling JSON input where mapfile and jq are used (reference: the mapfile -t
urls < <(jq -r '...') line and the subsequent if [[ ${#urls[@]} -eq 0 ]] guard)
so malformed jq output no longer yields urls=("") and the "No valid URLs found"
error triggers correctly.
…per.sh Addresses high-severity quality-debt from PR #2118 review: function at line 122 was missing an explicit return statement. ShellCheck clean.



Summary
- Replaces `claude-3-5-sonnet`, `claude-3-5-haiku`, and date-suffixed model IDs across 12 files
- Backward-compatible normalization retained in `model-label-helper.sh` for historical data

Changes
Documentation (7 files)
- `subagent-index.toon` — haiku tier + opus model ID
- `tools/context/model-routing.md` — haiku tier references (3x)
- `tools/ai-assistants/models/README.md` — haiku tier mapping
- `services/hosting/cloudflare-platform/.../ai-gateway/README.md` — Anthropic example
- `tools/api/vercel-ai-sdk.md` — Anthropic SDK example
- `tools/browser/stagehand.md` — model name comment
- `tools/context/dspy.md` — Anthropic model config
- `tools/code-review/skill-scanner.md` — env var example

Scripts (4 files)
- `scripts/eeat-score-helper.sh` — fallback model ID
- `scripts/model-availability-helper.sh` — haiku tier in OpenCode path
- `scripts/model-label-helper.sh` — added 4.x patterns as primary (kept 3.x for backward compat)
- `scripts/model-registry-helper.sh` — comment + default haiku model

Verification
- `rg` sweep confirms zero stale references outside backward-compat normalization
- `fallback-chain-config.json.txt`, `dispatch.sh`, and `ai-reason.sh` were already on 4.6 — no changes needed

Summary by CodeRabbit
Chores
Documentation