
admin: Track reverse_proxy in-flight requests #7517

Merged
mholt merged 1 commit into caddyserver:master from pauloappbr:issue-7277 on Mar 3, 2026

Conversation

@pauloappbr
Contributor

Assistance Disclosure

I consulted an AI assistant (Gemini) to help refactor the UsagePool logic into the sync.RWMutex map approach suggested by @mholt. I fully reviewed, authored the final structure, and verified the code's correctness.


This PR supersedes #7281 and implements the feature requested in #7277. Huge thanks to @u5surf for laying the groundwork!

As discussed in the previous PR and suggested by @mholt, this replaces the UsagePool implementation with a lightweight, package-level map[string]uint protected by a sync.RWMutex.

Changes:

  • Removed the inflightHosts UsagePool logic.
  • Implemented an inFlightRequests map to safely track requests during the proxy loop.
  • Updated admin.go to expose these draining hosts alongside the static hosts in /reverse_proxy/upstreams.
  • Fixed the double-counting issue pointed out by @huang041.
  • No memory leaks: hosts are deleted from the map as soon as their in-flight count reaches 0.
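The mutex-guarded map described above can be sketched roughly as follows. This is an illustrative approximation, not the actual Caddy code; the function names and the `inFlightRequests` variable are hypothetical stand-ins for the identifiers in the PR.

```go
package main

import (
	"fmt"
	"sync"
)

// Package-level map of host address -> in-flight request count,
// guarded by a RWMutex (the approach suggested by @mholt).
var (
	inFlightMu       sync.RWMutex
	inFlightRequests = make(map[string]uint)
)

// incrementInFlight records one in-flight request for the given host.
func incrementInFlight(host string) {
	inFlightMu.Lock()
	inFlightRequests[host]++
	inFlightMu.Unlock()
}

// decrementInFlight decrements the count and deletes the entry when it
// reaches zero, so drained hosts don't linger in the map.
func decrementInFlight(host string) {
	inFlightMu.Lock()
	if n := inFlightRequests[host]; n <= 1 {
		delete(inFlightRequests, host)
	} else {
		inFlightRequests[host] = n - 1
	}
	inFlightMu.Unlock()
}

// snapshotInFlight returns a copy for read-side consumers,
// e.g. an admin endpoint reporting draining hosts.
func snapshotInFlight() map[string]uint {
	inFlightMu.RLock()
	defer inFlightMu.RUnlock()
	out := make(map[string]uint, len(inFlightRequests))
	for h, n := range inFlightRequests {
		out[h] = n
	}
	return out
}

func main() {
	incrementInFlight("10.0.0.1:8080")
	incrementInFlight("10.0.0.1:8080")
	decrementInFlight("10.0.0.1:8080")
	fmt.Println(snapshotInFlight()["10.0.0.1:8080"]) // 1
	decrementInFlight("10.0.0.1:8080")
	fmt.Println(len(snapshotInFlight())) // 0
}
```

Deleting the entry as soon as the count hits zero is what gives the "no memory leaks" property claimed above.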

@francislavoie francislavoie changed the title from "Issue 7277" to "admin: Track reverse_proxy in-flight requests" on Feb 24, 2026
@pauloappbr
Contributor Author

@huang041 Brilliant catches! Thank you for the detailed review.

You are totally right about the global lock contention under high RPS. I've updated the implementation to use sync.Map with atomic.Int64, which makes the hot path lock-free.

Spot on regarding the $O(n^2)$ loop in admin.go. I've introduced a simple map[string]struct{} to track known hosts during the first iteration, reducing each subsequent lookup to $O(1)$.

The changes have been pushed. Let me know if everything looks good now!

@francislavoie francislavoie added the feature ⚙️ New feature or request label Feb 25, 2026
@francislavoie francislavoie added this to the v2.11.2 milestone Feb 25, 2026
…server#7277)

This refactors the initial approach in PR caddyserver#7281, replacing the UsagePool
with a dedicated package-level sync.Map and atomic.Int64 to track
in-flight requests without global lock contention.

It also introduces a lookup map in the admin API to fix a potential
O(n^2) iteration over upstreams, ensuring that draining upstreams
are correctly exposed across config reloads without leaking memory.

Co-authored-by: Y.Horie <u5.horie@gmail.com>

reverseproxy: optimize in-flight tracking and admin API

- Replaced sync.RWMutex with sync.Map and atomic.Int64 to avoid lock contention under high RPS.
- Introduced a lookup map in the admin API to fix a potential O(n^2) iteration over upstreams.
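The $O(n^2)$-to-$O(1)$ fix mentioned in the commit message can be illustrated with a small merge routine. This is a hypothetical sketch, not the actual admin.go code: instead of scanning the static upstream slice once per draining host, it builds a set in the first pass and checks membership in constant time.

```go
package main

import "fmt"

// mergeUpstreams returns the static hosts plus any draining hosts not
// already present. A map[string]struct{} built during the first pass
// replaces a nested slice scan, avoiding quadratic behavior.
func mergeUpstreams(static, draining []string) []string {
	seen := make(map[string]struct{}, len(static))
	merged := make([]string, 0, len(static)+len(draining))
	for _, h := range static {
		seen[h] = struct{}{}
		merged = append(merged, h)
	}
	for _, h := range draining {
		if _, ok := seen[h]; !ok { // O(1) lookup instead of an inner loop
			merged = append(merged, h)
		}
	}
	return merged
}

func main() {
	got := mergeUpstreams(
		[]string{"10.0.0.1:80", "10.0.0.2:80"},
		[]string{"10.0.0.2:80", "10.0.0.9:80"}, // 10.0.0.9 is draining only
	)
	fmt.Println(got) // [10.0.0.1:80 10.0.0.2:80 10.0.0.9:80]
}
```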
@mholt
Member

mholt commented Mar 1, 2026

I'll review this soon... thanks!

@francislavoie
Member

While reviewing this, I got to thinking about dynamic upstreams a bit more, and came up with #7539, which should pair nicely with the improved in-flight count tracking.

@pauloappbr
Contributor Author

Amazing work, @francislavoie! Breaking out the dynamic hosts into their own registry makes total sense.

The background cleanup goroutine is a very elegant approach. It acts like a TTL cache, preventing memory leaks while ensuring the health state persists long enough for the passive health checks to actually trip and do their job.

Really glad my refactor could help kick off this improvement! I'll be following this implementation closely to learn from it.

Member

@mholt mholt left a comment


Neato, thank you for the revisions!

@mholt mholt merged commit 88616e8 into caddyserver:master Mar 3, 2026
29 checks passed
@github-actions github-actions bot mentioned this pull request Mar 6, 2026
Labels: feature ⚙️ New feature or request

4 participants