Claude 3.7 is out. we had some hardcoded stuff #6197
Conversation
```rust
let re_version = Regex::new(r"-(\d+)-(\d+)-").unwrap();
if re_version.is_match(&result) {
    result = re_version.replace(&result, "-$1.$2-").to_string();
let re_version_separator_pattern = Regex::new(r"-(\d+)-(\d+)-").unwrap();
```
map_to_canonical_model should be able to do this for you (although maybe not if the model ends up getting removed from openrouter)
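For readers without the full file, here is a minimal standalone sketch of what the regex in this hunk does: it rewrites dash-separated version numbers like `claude-3-7-sonnet` into dotted form (`claude-3.7-sonnet`). The helper name below is made up for illustration; the reviewer's point is that `map_to_canonical_model` could take over this normalization.

```rust
use regex::Regex;

// Illustrative only: the same "-3-7-" -> "-3.7-" normalization the diff performs,
// which map_to_canonical_model could potentially replace.
fn normalize_version_separator(model: &str) -> String {
    let re = Regex::new(r"-(\d+)-(\d+)-").unwrap();
    re.replace(model, "-$1.$2-").to_string()
}

fn main() {
    assert_eq!(
        normalize_version_separator("anthropic/claude-3-7-sonnet-20250219"),
        "anthropic/claude-3.7-sonnet-20250219"
    );
}
```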
```diff
- let max_tokens = model_config.max_tokens.unwrap_or(8192);
+ // https://platform.claude.com/docs/en/about-claude/models/overview
+ // 64k output tokens works for most claude models:
+ let max_tokens = model_config.max_tokens.unwrap_or(64 * 1024);
```
We should use max_completion_tokens from canonical model (and more generally across all providers). I can do a followup pass with that.
Seems like the data there correctly has 64k.
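A rough sketch of that follow-up idea, resolving the output-token limit from the canonical model data before falling back to a hard-coded default. The `CanonicalModel` struct and field name here are hypothetical stand-ins, not goose's actual types:

```rust
// Hypothetical canonical-model record; goose's real type will differ.
struct CanonicalModel {
    max_completion_tokens: Option<u32>,
}

// Prefer an explicit config value, then the canonical model's limit, then a fallback.
fn resolve_max_tokens(configured: Option<u32>, canonical: Option<&CanonicalModel>) -> u32 {
    configured
        .or_else(|| canonical.and_then(|m| m.max_completion_tokens))
        .unwrap_or(64 * 1024)
}

fn main() {
    let canon = CanonicalModel { max_completion_tokens: Some(64 * 1024) };
    assert_eq!(resolve_max_tokens(None, Some(&canon)), 65536);
    assert_eq!(resolve_max_tokens(Some(8192), Some(&canon)), 8192);
    assert_eq!(resolve_max_tokens(None, None), 65536);
}
```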
Hi @DOsinga - I noticed our PRs both touch the same code, and your refactoring to simplify the model enums looks great! When you rebase to resolve conflicts, the key pieces from #6187 that need to stay are the ones that handle Gemini 3 models. Happy to help coordinate if needed!
Pull request overview
This PR removes hardcoded references to Claude 3.7 models and simplifies the codebase to be more flexible with model versions. The changes include updating model lists, removing version-specific enums in GCP Vertex AI support, generalizing thinking/extended output logic, and updating tests.
Key Changes
- Removed Claude 3.7 model references from known model lists across providers (Anthropic, OpenRouter)
- Simplified GCP Vertex AI model handling by replacing version-specific enums with string-based model names
- Generalized extended thinking support to work with all Claude models instead of being Claude 3.7-specific
- Updated max output tokens default from 8192 to 64k for newer Claude models
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| ui/desktop/src/hooks/useCostTracking.ts | Minor formatting change consolidating function call to single line |
| ui/desktop/src/components/settings/app/AppSettingsSection.tsx | Removed extra blank line |
| crates/goose/src/providers/openrouter.rs | Updated known models list: removed claude-3.7-sonnet, added grok-code-fast-1, fixed gemini model name |
| crates/goose/src/providers/formats/gcpvertexai.rs | Major refactor: replaced version-specific enums with generic string-based model variants |
| crates/goose/src/providers/formats/databricks.rs | Removed outdated Claude 3.7-specific comments |
| crates/goose/src/providers/formats/anthropic.rs | Generalized thinking logic, removed temperature restriction, increased max_tokens default, removed Claude 3.7-specific test |
| crates/goose/src/providers/canonical/name_builder.rs | Simplified test cases while maintaining coverage |
| crates/goose/src/providers/anthropic.rs | Removed legacy Claude 3.7 models from known models list |
```diff
  #[test]
- fn test_generic_model_parsing() -> Result<()> {
-     // Test generic Claude models
+ fn test_model_parsing() -> Result<()> {
```
Copilot AI (Dec 19, 2025):
There are now two test functions with the same name test_model_parsing (lines 304 and 360). This will cause a compilation error. One of these tests should be removed or renamed.
```diff
- fn test_model_parsing() -> Result<()> {
+ fn test_model_parsing_families() -> Result<()> {
```
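For context, two `#[test]` functions with the same name in one module fail to compile with error E0428 ("the name ... is defined multiple times"), which is why the rename suggested above (or removing one of the tests) is required; a tiny illustration:

```rust
#[cfg(test)]
mod tests {
    // A second `fn test_model_parsing()` here would fail with:
    // error[E0428]: the name `test_model_parsing` is defined multiple times
    #[test]
    fn test_model_parsing() {}

    // Renaming the second test avoids the clash.
    #[test]
    fn test_model_parsing_families() {}
}
```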
```rust
        .insert("temperature".to_string(), json!(temp));
}

// Add thinking parameters for claude-3-7-sonnet model
```
Copilot AI (Dec 19, 2025):
The comment on line 430 still references "claude-3-7-sonnet model" but the check on line 432 now applies thinking parameters to ALL models when CLAUDE_THINKING_ENABLED is set, not just Claude 3.7. The comment should be updated to reflect this broader applicability.
```diff
- // Add thinking parameters for claude-3-7-sonnet model
+ // Add thinking parameters for all models when CLAUDE_THINKING_ENABLED is set
```
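A hedged sketch of the behavior the suggested comment describes: the thinking block is added whenever the `CLAUDE_THINKING_ENABLED` environment variable is set, independent of the model name. The payload shape follows Anthropic's extended-thinking request format, but the budget value and function name here are illustrative, not goose's actual code:

```rust
use serde_json::{json, Value};

// Illustrative only: env-gated thinking parameters applied to every model.
fn maybe_add_thinking(payload: &mut Value) {
    if std::env::var("CLAUDE_THINKING_ENABLED").is_ok() {
        payload.as_object_mut().unwrap().insert(
            "thinking".to_string(),
            json!({ "type": "enabled", "budget_tokens": 16000 }),
        );
    }
}

fn main() {
    let mut payload = json!({ "model": "claude-sonnet-4-5", "max_tokens": 65536 });
    maybe_add_thinking(&mut payload);
    println!("{payload}");
}
```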
```diff
- let max_tokens = model_config.max_tokens.unwrap_or(8192);
+ // https://platform.claude.com/docs/en/about-claude/models/overview
+ // 64k output tokens works for most claude models:
+ let max_tokens = model_config.max_tokens.unwrap_or(64 * 1024);
```
Copilot AI (Dec 19, 2025):
The default max_tokens was increased from 8192 to 65536 (64 * 1024). However, in the databricks.rs file at line 585, the default remains 8192. This inconsistency could lead to different behavior across providers for the same model when thinking is enabled.
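One way to keep the two providers from drifting, sketched under the assumption that both format modules could share a single crate-level default (the constant name is made up, not an existing goose identifier):

```rust
// Hypothetical shared constant so anthropic.rs and databricks.rs use the same default.
pub const DEFAULT_MAX_OUTPUT_TOKENS: u32 = 64 * 1024;

fn resolve_max_tokens(configured: Option<u32>) -> u32 {
    configured.unwrap_or(DEFAULT_MAX_OUTPUT_TOKENS)
}

fn main() {
    assert_eq!(resolve_max_tokens(None), 65536);
    assert_eq!(resolve_max_tokens(Some(8192)), 8192);
}
```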
Pull request overview
Copilot reviewed 7 out of 7 changed files in this pull request and generated 2 comments.
```diff
  if let Some(temp) = model_config.temperature {
-     // Claude 3.7 models with thinking enabled don't support temperature
-     if !model_config.model_name.starts_with("claude-3-7-sonnet-") {
-         payload
-             .as_object_mut()
-             .unwrap()
-             .insert("temperature".to_string(), json!(temp));
-     }
+     payload
+         .as_object_mut()
+         .unwrap()
+         .insert("temperature".to_string(), json!(temp));
```
Copilot AI (Jan 5, 2026):
Temperature is now always added regardless of model when specified in model_config. The previous code excluded temperature for claude-3-7-sonnet models with thinking enabled. If users can still specify claude-3-7-sonnet models (even though they're removed from the known models list), this could cause API errors. Consider either: (1) explicitly blocking claude-3-7 model names, or (2) keeping the temperature exclusion for backward compatibility.
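A sketch of option (2), keeping the old exclusion for backward compatibility. The prefix check mirrors the code removed in this diff; the function name and the explicit `thinking_enabled` flag are illustrative, not goose's actual API:

```rust
use serde_json::{json, Value};

// Illustrative only: skip temperature for claude-3-7-sonnet models when thinking
// is enabled, matching the behavior the diff removed; set it for everything else.
fn maybe_set_temperature(
    payload: &mut Value,
    model_name: &str,
    temperature: Option<f64>,
    thinking_enabled: bool,
) {
    if let Some(temp) = temperature {
        let is_claude_37_thinking =
            thinking_enabled && model_name.starts_with("claude-3-7-sonnet-");
        if !is_claude_37_thinking {
            payload
                .as_object_mut()
                .unwrap()
                .insert("temperature".to_string(), json!(temp));
        }
    }
}

fn main() {
    let mut payload = json!({});
    maybe_set_temperature(&mut payload, "claude-3-7-sonnet-20250219", Some(0.7), true);
    assert!(payload.get("temperature").is_none());

    maybe_set_temperature(&mut payload, "claude-sonnet-4-5", Some(0.7), true);
    assert_eq!(payload["temperature"], json!(0.7));
}
```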
```rust
GcpVertexAIModel::Gemini(GeminiVersion::Pro25Preview),
GcpVertexAIModel::Gemini(GeminiVersion::Flash25),
GcpVertexAIModel::Gemini(GeminiVersion::Pro25),
GcpVertexAIModel::Claude("claude-3-7-sonnet@20250219".to_string()),
```
Copilot AI (Jan 5, 2026):
Claude 3.7 model is still included in the known models list here, but the PR is removing Claude 3.7 references from other providers. This should be removed for consistency with the PR's goal.
```diff
- GcpVertexAIModel::Claude("claude-3-7-sonnet@20250219".to_string()),
```
Pull request overview
Copilot reviewed 6 out of 6 changed files in this pull request and generated 1 comment.
```rust
        .insert("temperature".to_string(), json!(temp));
}

// Add thinking parameters for claude-3-7-sonnet model
```
Copilot AI (Jan 6, 2026):
This comment still mentions "claude-3-7-sonnet model" but the thinking parameters are now applied to all models when CLAUDE_THINKING_ENABLED is set. Update the comment to reflect that thinking is no longer model-specific.
```diff
- // Add thinking parameters for claude-3-7-sonnet model
+ // Add thinking parameters when CLAUDE_THINKING_ENABLED is set (applies to all models)
```
* 'main' of github.com:block/goose:
  - Claude 3.7 is out. we had some harcoded stuff (#6197)
  - Release 1.19.0
  - chore: upgrade to node v24 as engine (#6361)
  - chore(deps): bump rsa from 0.9.9 to 0.9.10 (#6358)
  - Bump rust toolchain to 1.92 (current stable) (#6356)
  - Hide advanced recipe options under expandable content (#6021)
  - fix: use .config/agents (plural) for skills directory (#6357)
  - fix: prevent KaTeX from treating underscores as subscripts in plain text (#6242)
  - fix: make goose review PRs more like goose contributors do (#6240)
  - fix: preserve provider engine type when editing custom providers (#6106)
  - feat(providers): add retry for model fetching (#6347)
  - allow goose issue solver to react to activation comments (#6239)
* main: (31 commits)
  - added validation and debug for invalid call tool result (#6368)
  - Update MCP apps tutorial: fix _meta structure and version prereq (#6404)
  - Fixed fonts (#6389)
  - Update confidence levels prompt injection detection to reduce false positive rates (#6390)
  - Add ML-based prompt injection detection (#5623)
  - docs: update custom extensions tutorial (#6388)
  - fix ResultsFormat error when loading old sessions (#6385)
  - docs: add MCP Apps tutorial and documentation updates (#6384)
  - changed z-index to make sure the search highlighter does not appear on modal overlay (#6386)
  - Handling special claude model response in github copilot provider (#6369)
  - fix: prevent duplicate rendering when tool returns both mcp-ui and mcp-apps resources (#6378)
  - fix: update MCP Apps _meta.ui.resourceUri to use nested format (SEP-1865) (#6372)
  - feat(providers): add streaming support for Google Gemini provider (#6191)
  - Blog: edit links in mcp apps post (#6371)
  - fix: prevent infinite loop of tool-input notifications in MCP Apps (#6374)
  - fix: Show platform-specific keyboard shortcuts in UI (#6323)
  - fix: we load extensions when agent starts so don't do it up front (#6350)
  - docs: credit HumanLayer in RPI tutorial (#6365)
  - Blog: Goose Lands MCP Apps (#6172)
  - Claude 3.7 is out. we had some harcoded stuff (#6197)
  - ...
Summary
Removed references to Claude 3.7 and simplified some code.