
Conversation

@tlongwell-block (Collaborator)

Resolves #5824

@DOsinga DOsinga (Collaborator) left a comment

I like the route, but fewer agents is not a good idea.

```rust
.get_or_try_init(|| async {
    let manager = Self::new(Some(DEFAULT_MAX_SESSION)).await?;
    let max_sessions = Config::global()
        .get_goose_max_active_agents()
```
Collaborator

this is a singleton, so making it depend on a configuration variable only somewhat works.

more to the point (and I think we discussed this previously), I don't think the agent manager should be a cache, and definitely not one with max=10 by default. we still have a rumbling subagent problem, and that would mean that if you start 10 subagents, the main agent dies.

so adding an agent/stop path is the right way, but then the clients should either always handle it or the agents should be properly resumed (right now resume loads the default provider, no extensions, etc.)
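A minimal sketch of that singleton/config interaction, with stand-in stubs for the real `Config` and `AgentManager` types (the `OnceCell` wiring is an assumption extrapolated from the `get_or_try_init` call in the hunk above): the init closure runs at most once per process, so the configured maximum is captured on first access and later config changes never reach the manager.

```rust
use tokio::sync::OnceCell;

// Stand-ins for the real goose types, just to make the sketch compile.
struct Config;
impl Config {
    fn global() -> Config {
        Config
    }
    fn get_goose_max_active_agents(&self) -> usize {
        100
    }
}

struct AgentManager {
    max_sessions: usize,
}
impl AgentManager {
    async fn new(max: Option<usize>) -> Result<Self, std::io::Error> {
        Ok(AgentManager { max_sessions: max.unwrap_or(100) })
    }
}

static MANAGER: OnceCell<AgentManager> = OnceCell::const_new();

async fn manager() -> Result<&'static AgentManager, std::io::Error> {
    MANAGER
        .get_or_try_init(|| async {
            // Runs at most once: the config value is frozen at first
            // access, which is why it "only somewhat works".
            let max = Config::global().get_goose_max_active_agents();
            AgentManager::new(Some(max)).await
        })
        .await
}
```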

Collaborator Author

> so adding an agent/stop path is the right way, but then the clients should either always handle it or the agents should be properly resumed (right now resume loads the default provider, no extensions, etc.)

I think #5419 resolves some of this
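For context, the endpoint under discussion could look roughly like the axum sketch below. Only the route path comes from the PR title; the `Agents` state type, the `Agent` stub, and the request shape are hypothetical stand-ins, not goose's actual server code.

```rust
use std::collections::HashMap;
use std::sync::Arc;

use axum::{extract::State, http::StatusCode, routing::post, Json, Router};
use serde::Deserialize;
use tokio::sync::Mutex;

// Stand-in for a live agent and the MCP child processes it owns.
struct Agent;

type Agents = Arc<Mutex<HashMap<String, Agent>>>;

#[derive(Deserialize)]
struct StopRequest {
    session_id: String,
}

async fn stop_agent(
    State(agents): State<Agents>,
    Json(req): Json<StopRequest>,
) -> StatusCode {
    // Removing the entry drops the agent; in a real implementation this is
    // where its MCP processes would be shut down (the accumulation problem
    // from issue #5824).
    match agents.lock().await.remove(&req.session_id) {
        Some(_agent) => StatusCode::NO_CONTENT,
        None => StatusCode::NOT_FOUND,
    }
}

fn router(agents: Agents) -> Router {
    Router::new()
        .route("/agent/stop", post(stop_agent))
        .with_state(agents)
}
```

The review point above still applies either way: an explicit stop only helps if every client calls it; otherwise resume has to rebuild the agent faithfully.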

Collaborator Author

restored default to 100

Collaborator

> I think #5419 resolves some of this

it's definitely on the way there. the next step would be to store the active extensions as well; then we can move the code from resume_agent into the agent manager, make sure everybody uses the agent manager, and then we can kill agents and also have agents restore themselves after a system restart.
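As a sketch of that next step, assuming a session record along these lines (the field names are hypothetical; per the thread, #5419 already covers the provider and model, and extensions would be the addition):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct SessionRecord {
    session_id: String,
    // Persisted since #5419:
    provider_name: String,
    model: String,
    // Proposed addition: remember the active extensions so a resumed agent
    // doesn't come back with the default provider and no extensions.
    active_extensions: Vec<String>,
}
```

With all three pieces stored, resume could rebuild an agent faithfully instead of falling back to defaults.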

@tlongwell-block merged commit 7b787f9 into main on Nov 21, 2025, with 38 of 43 checks passed.

@tlongwell-block deleted the agent_stop branch on November 21, 2025 at 14:24.
michaelneale added a commit that referenced this pull request Nov 24, 2025
* main: (48 commits)
  [fix] generic check for gemini compat (#5842)
  Add scheduler to diagnostics (#5849)
  Cors and token (#5850)
  fix sessions coming back with empty messages (#5841)
  markdown export from URL (#5830)
  Next camp refactor live (#5706)
  Add out of context compaction test via error proxy (#5805)
  fix: Add backward compatibility for conversationCompacted message type (#5819)
  Add /agent/stop endpoint, make max active agents configurable (#5826)
  Handle 404s (#5791)
  Persist provider name and model config in the session (#5419)
  Comment out the flaky mcp callers (#5827)
  Slash commands (#5718)
  fix: remove setx calls to not permanently edit the windows shell PATH (#5821)
  fix: Parse maas models for gcp vertex provider (#5816)
  fix: support Gemini 3's thought signatures (#5806)
  chore: Add Adrian Cole to Maintainers (#5815)
  [MCP-UI] Proxy and Better Message Handling (#5487)
  Release 1.15.0
  Document New Window menu in macOS dock (#5811)
  ...
kskarthik pushed a commit to kskarthik/goose that referenced this pull request Nov 25, 2025
kskarthik pushed a commit to kskarthik/goose that referenced this pull request Nov 26, 2025
BlairAllan pushed a commit to BlairAllan/goose that referenced this pull request Nov 29, 2025


Development

Successfully merging this pull request may close these issues:

Feature Request: Add Session Cleanup API to Prevent MCP Process Accumulation (#5824)
