
Conversation

spoons-and-mirrors
Contributor

Summary

This PR adds a new tool to opencode: batch

Explainer

Lately I've been trying to make parallel tool calling work reliably across different models using prompt engineering, but that has proven to be the wrong approach. By simply exposing a batch tool, with some light prompting (see the AGENTS.md in the 2nd commit), every model I use (gh/gpt-5, gh/sonnet-4.5, zai/glm-4.6) has shown a dramatic increase in the likelihood of executing parallel tool calls.
I still find myself having to say "use batch" from time to time, but overall it's been great.
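
For readers unfamiliar with the idea, here's a minimal sketch of what a batch tool can look like: the model emits one tool call containing several sub-calls, and the runtime fans them out concurrently. The names here (`toolRegistry`, `BatchItem`, the example tools) are hypothetical, not opencode's actual internals:

```ts
// Hypothetical tool handler signature; opencode's real internals differ.
type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

// A single invocation requested by the model inside one batch call.
interface BatchItem {
  tool: string;
  args: Record<string, unknown>;
}

// Illustrative registry of independent tools.
const toolRegistry: Record<string, ToolHandler> = {
  echo: async (args) => `echo: ${JSON.stringify(args)}`,
  time: async () => new Date().toISOString(),
};

// The batch tool: run every requested sub-call concurrently and
// collect per-item results so one failure doesn't sink the rest.
async function batch(items: BatchItem[]): Promise<string[]> {
  const settled = await Promise.allSettled(
    items.map(async ({ tool, args }) => {
      const handler = toolRegistry[tool];
      if (!handler) throw new Error(`unknown tool: ${tool}`);
      return handler(args);
    }),
  );
  return settled.map((r, i) =>
    r.status === "fulfilled"
      ? r.value
      : `error in item ${i}: ${(r.reason as Error).message}`,
  );
}

// Usage: one model turn, one provider request, two tool calls in parallel.
batch([
  { tool: "echo", args: { msg: "hello" } },
  { tool: "time", args: {} },
]).then((results) => console.log(results));
```

The key design point is that the sub-calls run under a single provider round trip, which is what drives both the latency and the rate-limit wins described below.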

Notes

There are some edits to be made to the descriptions of some other tools (to align the whole tool set with batch); that will be done in another PR.

Numbers

I haven't run proper benchmarks to measure the real efficiency gain from this PR, but after using it for a bit, I can see a substantial decrease in the average completion time of user requests. This PR is also likely beneficial for provider rate limiting (one request instead of three or more really adds up over the course of a working day...).

@heavenly

heavenly commented Oct 5, 2025

super cool tool!

