Conversation
	return true, old
}

func (l *txList) add(tx *types.Transaction) {
enqueueTx() and Add() handle queueing a tx which might replace an existing transaction. Most callers were cases where the tx was being moved from pending, so we know it's not a replacement, and we can just call this simplified add() method directly instead.
- use a wrapped map w/ `sync.RWMutex` for `TxPool.all` to remove contention in `TxPool.Get`
- refactor `TxPool.enqueueTx()` callers
	if queue.Empty() {
		delete(pool.queue, addr)
	}
}()
Right now this conditional in the defer only executes if pool.queue[addr] didn't exist and the queue is empty at the end of the function. Shouldn't it also execute if pool.queue[addr] did exist but the queue is emptied by the end of the function?
e.g.

queue := pool.queue[addr]
if queue == nil {
	queue = newTxList(false)
	pool.queue[addr] = queue
}
defer func() {
	if queue.Empty() {
		delete(pool.queue, addr)
	}
}()
Inside this pending block we only add to the queue and never remove, so it should only be empty if we created it and then didn't end up adding anything (but pre-emptive creation allows a cleaner queue.add usage down below). That being said, it's still possible to exit this pending conditional and enter the queue one below, which already handles this clean-up case, so there is likely a simpler solution here. Let me take another look and at least document it better, if not refactor it.
Simplified and documented
type txLookup struct {
	all  map[common.Hash]*types.Transaction
	lock sync.RWMutex
}
Should `lock` be `mu`, for consistency with the other mutexes in gochain?
Yeah, that's my usual preference, but I had copy-pasted this type initially.
Looks good overall. Just a couple questions.

lgtm |
- use a wrapped map w/ `sync.RWMutex` for `TxPool.all` to remove contention in `TxPool.Get` (from upstream core: "use a wrapped map w/ `sync.RWMutex` for `TxPool.all` to remove contention in `TxPool.Get`", ethereum/go-ethereum#16670)
- refactor `TxPool.enqueueTx()` callers