Conversation
```go
pp.enforceLimits()
if c.tempCapacity > 0 {
	commit = true
	c.bias = 0
```
This is not an identical change, and I am not sure about the potential effects. Until now, the bias was only ever applied to one node that had been inserted or whose capacity was increased. It is meant to protect against quick oscillations, so we always use it against the node that is trying to push out others from the pool. Once we decide to grant it the space it needs, it is no longer biased. Now tryActivate can activate multiple clients, and in your version the previously activated ones stay biased until we finish the entire operation. This means that further inactive nodes can very easily deactivate recently activated ones if their priorities are very close to each other, because both are biased and the already activated one has no advantage against the latest activation attempt. I'd prefer to keep the old and thoroughly tested behavior here.
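To make the oscillation concern concrete, here is a minimal sketch, with hypothetical names and numbers (not the real les/vflux/server types): the bias penalizes a candidate trying to displace an active node, and clearing it immediately on activation gives the freshly activated node its full priority when defending against the next candidate, while keeping it biased until the whole batch finishes lets a close-priority candidate narrowly push it back out.

```go
package main

import "fmt"

// node is a hypothetical stand-in for a pool client.
type node struct {
	priority int64
	bias     int64 // penalty applied while the node is pushing others out
}

// effective is the priority used in comparisons: biased nodes are handicapped.
func (n *node) effective() int64 { return n.priority - n.bias }

// activate reports whether candidate displaces active. clearOnSuccess selects
// the old per-node behavior (clear bias as soon as the node is activated)
// versus leaving the bias in place until the batch ends.
func activate(active, candidate *node, clearOnSuccess bool) bool {
	candidate.bias = 10
	if candidate.effective() > active.effective() {
		if clearOnSuccess {
			candidate.bias = 0
		}
		return true
	}
	candidate.bias = 0
	return false
}

// run activates b against a, then tries c against the freshly activated b.
func run(clearOnSuccess bool) (bActivated, cDisplacesB bool) {
	a := &node{priority: 100}
	b := &node{priority: 115}
	c := &node{priority: 116} // very close to b's priority
	bActivated = activate(a, b, clearOnSuccess)
	cDisplacesB = activate(b, c, clearOnSuccess)
	return
}

func main() {
	fmt.Println(run(true))  // true false: unbiased b (115) holds off biased c (106)
	fmt.Println(run(false)) // true true: still-biased b (105) loses to c (106)
}
```

With the old behavior the pool settles after b's activation; with batch-wide bias, c flips it straight back, which is exactly the quick oscillation the bias was meant to prevent.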
Gotcha. It's my mistake, will fix it.
```go
	return p2p.DiscRequested
}
activeCount, _ := h.server.clientPool.Active()
clientConnectionGauge.Update(int64(activeCount))
```
Why did you remove these updates? Do you think this metric is useless? Btw, the metric itself is still there in metrics.go, so this looks wrong.
Because they are already counted internally in the clientpool. https://github.com/ethereum/go-ethereum/blob/master/les/vflux/server/clientpool.go#L154
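The design choice being argued for can be sketched as follows (a minimal illustration with hypothetical types, not the real clientpool or metrics code): the gauge is updated at the single point inside the pool where the active count actually changes, so callers in package les no longer need to mirror the count at every connect/disconnect site.

```go
package main

import "fmt"

// gauge is a hypothetical minimal stand-in for a metrics gauge.
type gauge struct{ v int64 }

func (g *gauge) Update(v int64) { g.v = v }

// pool owns its connection gauge and updates it itself whenever the
// active set changes, instead of relying on every caller to do so.
type pool struct {
	active int
	g      *gauge
}

func (p *pool) connect() {
	p.active++
	p.g.Update(int64(p.active)) // single update point, inside the pool
}

func (p *pool) disconnect() {
	p.active--
	p.g.Update(int64(p.active))
}

func main() {
	p := &pool{g: &gauge{}}
	p.connect()
	p.connect()
	p.disconnect()
	fmt.Println(p.g.v) // 1
}
```

Centralizing the update removes the risk of a caller forgetting to refresh the gauge on one of the disconnect paths.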
@zsfelfoldi I think all the issues are resolved, please take another look!
zsfelfoldi
left a comment
If clientConnectionGauge is not updated in package les any more, then we should remove it from there. Otherwise LGTM.
* les: polish code
* les/vflux/server: fixes
* les: fix lint
This PR polishes the les internals a bit. Nothing important changed.