api: Add all in-flight requests /reverse_proxy/upstreams (#7277) #7281

Closed
u5surf wants to merge 2 commits into caddyserver:master from u5surf:issue-7277

Conversation

@u5surf
Contributor

@u5surf u5surf commented Sep 30, 2025

Assistance Disclosure

No AI was used.

Fixes: #7277

This introduces a new usagePool named inflightHosts, which stores hosts that still have in-flight requests during the reverse proxy's Cleanup phase. When proxyLoopIteration returns done, the in-flight host is drained from the usagePool.
There is still room for improvement, since this is currently in draft status.

@u5surf u5surf marked this pull request as ready for review October 4, 2025 14:25
Comment on lines +844 to +848
di.Upstream.Host.countRequest(-1)
inflightHost := inflightHosts.Load(di.Address)
if inflightHost != nil {
host, _ := inflightHost.(*Host)
host.countRequest(-1)


di.Upstream.Host and inflightHost.(*Host) could point to the same Host, so countRequest(-1) might be applied twice.
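The reviewer's concern can be reproduced with a minimal sketch. The Host type and countRequest method below are simplified stand-ins for the ones in the diff; the sync.Map, the address key, and the helper function are hypothetical illustrations, not Caddy's actual code.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Host is a simplified stand-in for reverseproxy.Host: it only
// tracks a request counter, like the real type's countRequest.
type Host struct{ numRequests int64 }

func (h *Host) countRequest(delta int) int64 {
	return atomic.AddInt64(&h.numRequests, int64(delta))
}

// demoDoubleDecrement models the cleanup path in the diff when
// di.Upstream.Host and the inflightHosts entry alias the same *Host.
func demoDoubleDecrement() int64 {
	var inflightHosts sync.Map
	upstreamHost := &Host{}
	upstreamHost.countRequest(1) // one request in flight
	// The same *Host is stored under its address.
	inflightHosts.Store("10.0.0.1:8080", upstreamHost)

	// Cleanup as written: decrement the upstream's host, then
	// decrement whatever inflightHosts holds for that address.
	upstreamHost.countRequest(-1)
	if v, ok := inflightHosts.Load("10.0.0.1:8080"); ok {
		v.(*Host).countRequest(-1)
	}
	return atomic.LoadInt64(&upstreamHost.numRequests)
}

func main() {
	fmt.Println(demoDoubleDecrement()) // prints -1: one request, decremented twice
}
```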

Comment on lines +136 to +142
host := new(Host)
existingHost, loaded := inflightHosts.LoadOrStore(u.String(), host)
if loaded {
host = existingHost.(*Host)
}
_ = host.countRequest(numRemaiRequests)
u.Host = host


Just a thought - would it be simpler to just add the old host to inflightHosts rather than creating a new one?

@mholt
Member

mholt commented Oct 27, 2025

Wondering if a UsagePool is really the right abstraction for this. UsagePools are useful specifically for managing an allocated value based on reference counting.

If the goal is just to count active requests per upstream, maybe a map[string]uint is the way to do it? Synchronize access to the map, of course. But it might be much simpler...
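A minimal sketch of the suggested approach: a package-level map of in-flight counts keyed by upstream address, guarded by a mutex. The names (inflightMu, inflightCounts, incInflight, and so on) are illustrative, not Caddy's identifiers.

```go
package main

import (
	"fmt"
	"sync"
)

var (
	inflightMu     sync.Mutex
	inflightCounts = make(map[string]uint)
)

// incInflight records the start of a request to an upstream.
func incInflight(addr string) {
	inflightMu.Lock()
	defer inflightMu.Unlock()
	inflightCounts[addr]++
}

// decInflight records completion, deleting the entry at zero so
// the map does not grow unboundedly across config reloads.
func decInflight(addr string) {
	inflightMu.Lock()
	defer inflightMu.Unlock()
	if n := inflightCounts[addr]; n > 1 {
		inflightCounts[addr] = n - 1
	} else {
		delete(inflightCounts, addr)
	}
}

// inflight reports the current count for an upstream.
func inflight(addr string) uint {
	inflightMu.Lock()
	defer inflightMu.Unlock()
	return inflightCounts[addr]
}

func main() {
	incInflight("10.0.0.1:8080")
	incInflight("10.0.0.1:8080")
	decInflight("10.0.0.1:8080")
	fmt.Println(inflight("10.0.0.1:8080")) // prints 1
}
```

Deleting zeroed entries also sidesteps the memory-leak concern raised later in this thread, at the cost of taking a global lock on every request.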

pauloappbr added a commit to pauloappbr/caddy that referenced this pull request Feb 21, 2026
…server#7277)

This refactors the initial approach in PR caddyserver#7281, replacing the UsagePool
with a dedicated package-level map protected by a sync.RWMutex. This
addresses maintainer feedback, avoids double-counting bugs, and ensures
that draining upstreams are correctly exposed via the admin API across
config reloads without leaking memory.

Co-authored-by: Y.Horie <u5.horie@gmail.com>
@pauloappbr
Contributor

Hey @u5surf, I noticed this PR was stalled, so I went ahead and implemented the map + sync.RWMutex refactor suggested by @mholt to replace the UsagePool.

I've opened a PR directly to your fork's branch here: u5surf#1

I made sure to add you as a co-author to keep the history intact. If you review and merge that into your branch, it will automatically update this PR so we can get it merged into Caddy!

@u5surf
Contributor Author

u5surf commented Feb 24, 2026

@pauloappbr

I made sure to add you as a co-author to keep the history intact.

Would you submit your PR to this project directly? I don't mind if your PR is merged instead of mine; there's no need to keep my commit. I'd like to leave it to the project authors (e.g. @mholt) to decide how contributions made with an AI agent should be handled.

@pauloappbr
Contributor

Thanks for the green light, @u5surf! I really appreciate you laying the groundwork and test cases for this.

I've opened a new PR directly to master here: #7517

I also closed the PR on your fork so we can keep the discussion centralized on the new one. @mholt, looking forward to your thoughts on the sync.RWMutex approach!

@francislavoie
Member

Alright, I'll close this then. Thanks everyone!

pauloappbr added a commit to pauloappbr/caddy that referenced this pull request Feb 24, 2026
…server#7277)

This refactors the initial approach in PR caddyserver#7281, replacing the UsagePool
with a dedicated package-level sync.Map and atomic.Int64 to track
in-flight requests without global lock contention.

It also introduces a lookup map in the admin API to fix a potential
O(n^2) iteration over upstreams, ensuring that draining upstreams
are correctly exposed across config reloads without leaking memory.

Co-authored-by: Y.Horie <u5.horie@gmail.com>

reverseproxy: optimize in-flight tracking and admin API

- Replaced sync.RWMutex with sync.Map and atomic.Int64 to avoid lock contention under high RPS.
- Introduced a lookup map in the admin API to fix a potential O(n^2) iteration over upstreams.
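The sync.Map plus atomic.Int64 combination the commit message describes can be sketched as follows; the names inflightRequests and addInflight are illustrative assumptions, not the identifiers used in the actual PR.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// inflightRequests maps an upstream address to its in-flight
// request counter (map[string]*atomic.Int64). Each counter is
// updated atomically, so there is no global lock to contend on.
var inflightRequests sync.Map

// addInflight adjusts the counter for addr by delta and returns
// the new value; LoadOrStore creates the counter on first use.
func addInflight(addr string, delta int64) int64 {
	counter, _ := inflightRequests.LoadOrStore(addr, new(atomic.Int64))
	return counter.(*atomic.Int64).Add(delta)
}

func main() {
	// Concurrent increments contend only on the per-upstream
	// counter, not on a process-wide RWMutex.
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			addInflight("10.0.0.1:8080", 1)
		}()
	}
	wg.Wait()
	fmt.Println(addInflight("10.0.0.1:8080", 0)) // prints 100
}
```

Note that atomic.Int64 as a type requires Go 1.19 or newer; the admin-API lookup map mentioned in the bullet above is a separate structure not shown here.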
mholt pushed a commit that referenced this pull request Mar 3, 2026
#7517)



Successfully merging this pull request may close these issues.

/reverse_proxy/upstreams - all in-flight requests

5 participants