Fix mcp large response race condition #5065
Conversation
Force-pushed from 6feb213 to 839250f
When multiple subagents run in parallel and each calls an MCP tool that returns a large response, those responses may all be returned within the exact same second, which can result in subagents receiving paths containing the wrong information. We fix this by adding microseconds to the filename so that collisions become very unlikely. A better solution might be to check whether the file already exists, but I think this is sufficient for now. Signed-off-by: alexyao2015 <[email protected]>
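For context, a minimal sketch of the microsecond-timestamp approach described above, with a hypothetical helper name and directory argument (the actual goose code may differ):

```rust
// Sketch only, not the goose implementation: build the large-response file
// name from a microsecond-resolution timestamp so that two subagents
// receiving large MCP responses within the same second no longer collide.
use std::path::{Path, PathBuf};
use std::time::{SystemTime, UNIX_EPOCH};

fn large_response_path(dir: &Path, tool_name: &str) -> PathBuf {
    // as_micros() gives microsecond resolution; formatting only whole
    // seconds is what allowed parallel calls to map to the same path.
    let micros = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before UNIX epoch")
        .as_micros();
    dir.join(format!("{tool_name}_{micros}.json"))
}
```

Two calls in the same second now only collide if they also land in the same microsecond, which is what makes the race "very unlikely" rather than impossible.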
Force-pushed from 839250f to 539977d
DOsinga
left a comment
this is for debugging only so I think the current behavior is fine. if anything we should just delete the file
This is not just for debugging. If an MCP server provides a large response to the agent, it's stored in a file which can be erroneously overwritten with incorrect content in the current implementation.
Well, that is for debugging, yes: to be able to see what the large response was. I think this is something we used to have trouble with and can probably go /cc @michaelneale
If by debugging you mean the alternative is using an in-memory temporary file, I suppose so? This file is currently read by the agent using shell commands if the developer tools extension is enabled, so it's definitely not used only for debugging as you might think. With the race condition as it exists today, subagents get confused because they are fed the wrong data when they receive a response from the MCP server.
ok, fine. let's give it a real unique name though instead of still a timestamp
yes, this makes sense, and wow, this is a wild edge case. Yeah, some MCPs will spew a lot, so this directs to a file; if they happen to have the same name... ouch. It would rarely happen, as responses now have to be pretty large (if we make the threshold too small, it slows down common MCP calls). Belatedly approving this, @alexyao2015, that is a cool fix.
Summary
When multiple subagents are running in parallel and they call an MCP
tool that returns a large response, those responses may all be returned
within the exact same second, which can result in subagents receiving
paths containing the wrong information.
We fix this by adding microseconds to the filename so that collisions are
very unlikely to occur. A better solution might be to check whether the
file already exists, but I think this is sufficient for now.
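The "check if the file exists" alternative mentioned above could be sketched as follows, using create_new so a collision surfaces as an error instead of a silent overwrite (hypothetical helper, not the PR's actual code):

```rust
// Sketch: create_new(true) makes the open fail atomically with
// ErrorKind::AlreadyExists if another subagent already wrote that path,
// rather than overwriting its contents.
use std::fs::OpenOptions;
use std::io::{self, Write};
use std::path::Path;

fn write_large_response(path: &Path, body: &str) -> io::Result<()> {
    let mut file = OpenOptions::new()
        .write(true)
        .create_new(true) // error on collision instead of clobbering
        .open(path)?;
    file.write_all(body.as_bytes())
}
```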
Type of Change
Testing
Tested that this race condition no longer occurs when the main agent calls subagents to run in parallel.
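A unit test along these lines might look like the sketch below, spawning parallel callers and asserting the generated paths never collide (it reuses the hypothetical unique_response_path helper from the earlier sketch; the PR's actual testing was done by running subagents in parallel):

```rust
// Sketch of a collision check: generate a response path from several
// threads at once and assert that every path is distinct.
#[test]
fn parallel_large_responses_get_distinct_paths() {
    use std::collections::HashSet;
    use std::path::Path;
    use std::thread;

    let handles: Vec<_> = (0..8)
        .map(|_| thread::spawn(|| unique_response_path(Path::new("/tmp"), "some_tool")))
        .collect();

    // Collect into a set; duplicates would shrink it below 8.
    let paths: HashSet<_> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert_eq!(paths.len(), 8, "colliding large-response paths");
}
```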