api: Add all in-flight requests /reverse_proxy/upstreams (#7277) #7281

u5surf wants to merge 2 commits into caddyserver:master from
Conversation
```go
di.Upstream.Host.countRequest(-1)
inflightHost := inflightHosts.Load(di.Address)
if inflightHost != nil {
	host, _ := inflightHost.(*Host)
	host.countRequest(-1)
```
di.Upstream.Host and inflightHost.(*Host) could point to the same Host, so countRequest(-1) might be applied twice.
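The double-decrement hazard flagged above can be sketched as follows. This is a minimal stand-alone illustration, not Caddy's actual code: `Host`, `countRequest`, and the `cleanup` helper are simplified stand-ins, and the pointer-equality guard shown is one hedged way to avoid applying `countRequest(-1)` twice to the same `*Host`.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Host is a simplified stand-in for the reverse proxy's upstream host type.
type Host struct{ numRequests atomic.Int64 }

func (h *Host) countRequest(delta int64) { h.numRequests.Add(delta) }

// cleanup decrements the upstream's counter, and only decrements the
// in-flight copy when it is a *different* Host value; without the
// pointer-equality guard, a shared *Host would be decremented twice.
func cleanup(upstreamHost, inflightHost *Host) {
	upstreamHost.countRequest(-1)
	if inflightHost != nil && inflightHost != upstreamHost {
		inflightHost.countRequest(-1)
	}
}

func main() {
	h := new(Host)
	h.countRequest(1) // one request starts

	// di.Upstream.Host and the value loaded from inflightHosts
	// may both point at the same Host.
	cleanup(h, h)
	fmt.Println(h.numRequests.Load()) // 0 with the guard; -1 without it
}
```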
```go
host := new(Host)
existingHost, loaded := inflightHosts.LoadOrStore(u.String(), host)
if loaded {
	host = existingHost.(*Host)
}
_ = host.countRequest(numRemaiRequests)
u.Host = host
```
Just a thought - would it be simpler to just add the old host to inflightHosts rather than creating a new one?
Wondering if a UsagePool is really the right abstraction for this. UsagePools are useful specifically for managing an allocated value based on reference counting. If the goal is just to count active requests per upstream, maybe a map of counters protected by a mutex would be simpler.
…server#7277) This refactors the initial approach in PR caddyserver#7281, replacing the UsagePool with a dedicated package-level map protected by a sync.RWMutex. This addresses maintainer feedback, avoids double-counting bugs, and ensures that draining upstreams are correctly exposed via the admin API across config reloads without leaking memory. Co-authored-by: Y.Horie <u5.horie@gmail.com>
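The shape described in this commit message, a package-level map guarded by a `sync.RWMutex`, could look roughly like the sketch below. Names such as `inflightReqs`, `countRequest`, and `numRequests` are illustrative assumptions, not Caddy's actual API; deleting drained entries is one hedged way to avoid leaking memory across config reloads.

```go
package main

import (
	"fmt"
	"sync"
)

// Package-level in-flight counters keyed by upstream address,
// guarded by a RWMutex (hypothetical names, not Caddy's code).
var (
	inflightMu   sync.RWMutex
	inflightReqs = make(map[string]int64)
)

// countRequest adjusts the in-flight count for addr; fully drained
// upstreams are removed so the map does not grow across reloads.
func countRequest(addr string, delta int64) {
	inflightMu.Lock()
	defer inflightMu.Unlock()
	inflightReqs[addr] += delta
	if inflightReqs[addr] <= 0 {
		delete(inflightReqs, addr)
	}
}

// numRequests reads the current in-flight count for addr.
func numRequests(addr string) int64 {
	inflightMu.RLock()
	defer inflightMu.RUnlock()
	return inflightReqs[addr]
}

func main() {
	countRequest("10.0.0.1:8080", 1)
	countRequest("10.0.0.1:8080", 1)
	countRequest("10.0.0.1:8080", -1)
	fmt.Println(numRequests("10.0.0.1:8080")) // 1
}
```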
Hey @u5surf, I noticed this PR was stalled, so I went ahead and implemented the map + sync.RWMutex refactor suggested by @mholt to replace the UsagePool. I've opened a PR directly to your fork's branch here: u5surf#1 I made sure to add you as a co-author to keep the history intact. If you review and merge that into your branch, it will automatically update this PR so we can get it merged into Caddy!
Would you submit your PR to this project directly? I don't care if your PR gets merged instead of mine; I don't think my commit needs to be kept. I'd like to leave it to the project authors (e.g. @mholt) to decide how to handle contributions made with an AI agent.
Thanks for the green light, @u5surf! I really appreciate you laying the groundwork and test cases for this. I've opened a new PR directly to master here: #7517 I also closed the PR on your fork so we can keep the discussion centralized on the new one. @mholt, looking forward to your thoughts on the sync.RWMutex approach!
Alright, I'll close this then. Thanks everyone!
…server#7277) This refactors the initial approach in PR caddyserver#7281, replacing the UsagePool with a dedicated package-level sync.Map and atomic.Int64 to track in-flight requests without global lock contention. It also introduces a lookup map in the admin API to fix a potential O(n^2) iteration over upstreams, ensuring that draining upstreams are correctly exposed across config reloads without leaking memory. Co-authored-by: Y.Horie <u5.horie@gmail.com>

reverseproxy: optimize in-flight tracking and admin API
- Replaced sync.RWMutex with sync.Map and atomic.Int64 to avoid lock contention under high RPS.
- Introduced a lookup map in the admin API to fix a potential O(n^2) iteration over upstreams.
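The sync.Map + atomic.Int64 approach described in the commit message could be sketched as below. This is an illustrative sketch under assumed names (`inflight`, `counterFor`), not the PR's actual code: `sync.Map` handles concurrent key creation lock-free, and each counter is a `*atomic.Int64` so increments never contend on a global mutex.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// inflight maps upstream address -> *atomic.Int64 in-flight counter
// (hypothetical names, not Caddy's actual code).
var inflight sync.Map

// counterFor returns the counter for addr, creating it on first use.
// LoadOrStore makes concurrent first-time access safe without a lock.
func counterFor(addr string) *atomic.Int64 {
	c, _ := inflight.LoadOrStore(addr, new(atomic.Int64))
	return c.(*atomic.Int64)
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counterFor("10.0.0.1:8080").Add(1) // request starts
		}()
	}
	wg.Wait()
	fmt.Println(counterFor("10.0.0.1:8080").Load()) // 100
}
```

Unlike the RWMutex variant, reads and writes on different upstreams never block each other, which is the lock-contention point the commit message raises; the trade-off is that draining empty entries from a sync.Map takes extra care.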
Assistance Disclosure
No AI was used.
Fixes: #7277
Introducing a new UsagePool named inflightHosts, which stores hosts that still have in-flight requests during the reverse proxy's Cleanup phase. When proxyLoopIteration returns done, the in-flight host is drained from the UsagePool. There is scope for improvement here because this is still in draft status.