Conversation

@baxen (Collaborator) commented Dec 24, 2025

No description provided.

Copilot AI review requested due to automatic review settings December 24, 2025 04:49
Copilot AI (Contributor) left a comment


Pull request overview

This PR adds support for configuring max tokens via the GOOSE_MAX_TOKENS environment variable, allowing users to control the maximum number of tokens in model responses.

  • Implements parse_max_tokens() method that reads and validates the GOOSE_MAX_TOKENS environment variable
  • Integrates max tokens parsing into ModelConfig::new_with_context_env()
  • Adds comprehensive test coverage for valid values, invalid inputs, and edge cases
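The behavior described in the overview can be sketched roughly as follows (a minimal standalone sketch inferred from the description above, not the actual goose implementation; the function name and the `String` error type are simplifications):

```rust
// Minimal sketch of GOOSE_MAX_TOKENS parsing (simplified from the PR
// description; the real code returns a ConfigError, not a String).
fn parse_max_tokens_from(raw: Option<&str>) -> Result<Option<i32>, String> {
    match raw {
        // Variable unset: no override, fall back to the model default.
        None => Ok(None),
        Some(s) => match s.trim().parse::<i32>() {
            // Valid positive integer: use it as the max-tokens cap.
            Ok(n) if n > 0 => Ok(Some(n)),
            // Zero or negative values are rejected.
            Ok(_) => Err("GOOSE_MAX_TOKENS must be greater than 0".to_string()),
            // Non-numeric values are rejected.
            Err(_) => Err("GOOSE_MAX_TOKENS must be a valid integer".to_string()),
        },
    }
}
```

Taking the raw value as a parameter (rather than reading the environment inside the function) keeps the edge cases trivially testable.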

Comment on lines 198 to 201:

    return Err(ConfigError::InvalidRange(
        "GOOSE_MAX_TOKENS".to_string(),
        "must be greater than 0".to_string(),
    ));

Copilot AI commented Dec 24, 2025


The second parameter to InvalidRange is inconsistent with parse_temperature. In parse_temperature (line 179), the actual value is passed (val), but here a descriptive message is passed. For consistency, pass val here to match the existing pattern.
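The suggested change can be sketched like this (the `ConfigError` shape is assumed from the snippet above; only the second argument to `InvalidRange` differs from the original):

```rust
// Sketch of the suggested fix: pass the offending value (as
// parse_temperature does) rather than a descriptive message.
#[derive(Debug)]
enum ConfigError {
    // Assumed shape: (variable name, offending value).
    InvalidRange(String, String),
}

fn validate_max_tokens(val: i32) -> Result<i32, ConfigError> {
    if val <= 0 {
        return Err(ConfigError::InvalidRange(
            "GOOSE_MAX_TOKENS".to_string(),
            val.to_string(), // the value itself, matching the existing pattern
        ));
    }
    Ok(val)
}
```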

    }

    fn parse_max_tokens() -> Result<Option<i32>, ConfigError> {
        if let Ok(val) = std::env::var("GOOSE_MAX_TOKENS") {
Collaborator


Rather than reading the environment variable directly, should we make this go through the config module, which checks environment variables anyway and has some light templating for types going on?

Collaborator Author


Yeah, sounds good. I changed just this one field for now rather than the full model config. Can come back to doing the other ones in this file that also don't use config.
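For context, a config-module lookup of the kind discussed might look roughly like this (entirely hypothetical: goose's actual config API is not shown in this thread; the `Config` struct and `get_param` name are illustrative only):

```rust
use std::collections::HashMap;
use std::str::FromStr;

// Hypothetical config store: only the behavior described in the
// comment above is sketched (env vars checked first, then stored
// config, with light typed parsing).
struct Config {
    values: HashMap<String, String>,
}

impl Config {
    fn get_param<T: FromStr>(&self, key: &str) -> Option<T> {
        // Environment variable takes precedence over stored config.
        std::env::var(key)
            .ok()
            .or_else(|| self.values.get(key).cloned())
            .and_then(|s| s.parse::<T>().ok())
    }
}
```

With a shape like this, the field-specific parsing reduces to a typed lookup plus the positivity check, instead of each field reading `std::env::var` on its own.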

@baxen baxen merged commit edcb634 into main Jan 6, 2026
19 of 20 checks passed
@baxen baxen deleted the baxen/max_tokens_env branch January 6, 2026 02:43
zanesq added a commit that referenced this pull request Jan 6, 2026
* 'main' of github.com:block/goose:
  refactor: when changing provider/model, load existing provider/model (#6334)
  chore: refactor configure_extensions_dialog to reduce line count (#6277)
  chore: refactor handle_configure to reduce line count (#6276)
  chore: refactor interactive session to reduce line count (#6274)
  chore: refactor docx_tool to reduce function size (#6273)
  chore: refactor cli() function to reduce line count (#6272)
  make sure the models are using streaming properly (#6331)
  feat: add a max tokens env var (#6264)
  docs: slash commands topic (#6333)
  fix(ci): prevent gh-pages branch bloat (#6340)
  chore(deps): bump qs and body-parser in /documentation (#6338)
  Skip the smoke tests for dependabot PRs (#6337)