Conversation

@tito tito commented Jul 15, 2025

Implement automatic retry with exponential backoff when Anthropic returns {"type": "overloaded_error"} responses, detected by checking whether the error message contains overloaded_error.

Changes

  • Add ServiceOverloadedError type to properly categorize these errors
  • Implement configurable retry logic with exponential backoff and jitter (sketched below)
  • Add retry configuration schema with sensible defaults (20 retries, 30s max delay)
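
Roughly how the retry behaves, as a minimal TypeScript sketch (not the actual diff; the ServiceOverloadedError, RetryConfig, and withRetry names here are illustrative):

interface RetryConfig {
  maxRetries: number    // e.g. 20
  initialDelay: number  // milliseconds, e.g. 1000
  maxDelay: number      // milliseconds, e.g. 30000
}

class ServiceOverloadedError extends Error {}

// Anthropic reports overload as {"type": "overloaded_error", ...}; the check
// below mirrors the PR's approach of matching the marker in the error message.
function isOverloaded(err: unknown): boolean {
  return err instanceof Error && err.message.includes("overloaded_error")
}

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms))

async function withRetry<T>(fn: () => Promise<T>, cfg: RetryConfig): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (!isOverloaded(err)) throw err
      if (attempt >= cfg.maxRetries) {
        throw new ServiceOverloadedError("provider still overloaded after retries")
      }
      // Exponential backoff capped at maxDelay, with full jitter.
      const backoff = Math.min(cfg.initialDelay * 2 ** attempt, cfg.maxDelay)
      await sleep(Math.random() * backoff)
    }
  }
}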

Configuration

We can now configure retry behavior in opencode.json:

{
  "retry": {
    "maxRetries": 20,
    "initialDelay": 1000,
    "maxDelay": 30000
  }
}
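
Reusing the RetryConfig type from the sketch above, the JSON maps onto the defaults roughly like this (loadRetryConfig and the shape of the user config object are assumptions, not the actual schema code):

const DEFAULT_RETRY: RetryConfig = { maxRetries: 20, initialDelay: 1000, maxDelay: 30000 }

// Hypothetical helper: any field omitted from opencode.json falls back to its default.
function loadRetryConfig(userConfig: { retry?: Partial<RetryConfig> }): RetryConfig {
  return { ...DEFAULT_RETRY, ...userConfig.retry }
}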

Closes #833


thdxr commented Jul 15, 2025

I ended up submitting a PR upstream: vercel/ai#7317


tito commented Jul 15, 2025

@thdxr So should I close this one, then?


tito commented Jul 15, 2025

Or could we patch the library on the opencode side so we can parse and handle that error? I'm not sure how long we should wait for the upstream PR to be merged.

@WilliamAGH

I believe this is still necessary to fix this issue: #1712

Development

Successfully merging this pull request may close these issues.

handle LLM overload errors (e.g. {"type": "overloaded_error", "message": "Overloaded"})
