bucket_transactions is now fee_to_weight aware #2758
Conversation
I guess the complication here is that we end up building buckets with dependencies between them (one bucket depends on a prior bucket). Maybe we're fine with the assumption that the buckets will be sorted consistently. Crazy idea: buckets are not simply single aggregated txs but a vec of aggregated txs - each bucket is a vec of txs ordered by dependency (later ones dependent on earlier ones). So we aggregate txs in a bucket as long as this does not reduce the bucket's fee/weight ratio. Then we are free to reorder buckets as we see fit (no dependency between buckets).
Does that work? Alternatively: maybe the simplest solution is for
The first idea bothers me because making buckets possibly dependent on each other can be tricky for a lot of things.
Let's say you must take 2 transactions to build a block.
For this I think we need to do the same as above (flatten the buckets and take the last one), for example.
What do you mean?
@quentinlesceller Take another look. Reworked this so buckets of txs can be cut-through. If you want to evict a single tx you can use the last one in the vec. If you want to evict
Looks good to me. This will drastically simplify the eviction of transactions while keeping a simple data structure.
I would be even more comfortable if @ignopeverell could review as well so we are 99% sure that we did not miss anything.
// Otherwise put it in its own bucket at the end.
// Note we increment the depth here to track the dependency.
tx_buckets
    .push(Bucket::new_with_depth(entry.tx.clone(), bucket.depth + 1));
}
} else {
// Aggregation failed so discard this new tx.
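For context, here is a minimal sketch of what a bucket carrying a "depth" might look like. The `Transaction` placeholder, the `Bucket` fields, and the `fee_to_weight` helper are illustrative assumptions, not grin's actual pool types.

```rust
// Illustrative sketch only: the Transaction placeholder and the Bucket fields
// below are assumptions, not grin's actual types.
#[derive(Clone, Debug)]
struct Transaction {
    fee: u64,
    weight: u64,
}

#[derive(Clone, Debug)]
struct Bucket {
    txs: Vec<Transaction>,
    depth: usize,
}

impl Bucket {
    // A fresh bucket at depth 0 (no dependency on an earlier bucket).
    fn new(tx: Transaction) -> Bucket {
        Bucket { txs: vec![tx], depth: 0 }
    }

    // A bucket that depends on an earlier bucket, created at the given depth
    // (the caller passes parent_depth + 1, as in the diff above).
    fn new_with_depth(tx: Transaction, depth: usize) -> Bucket {
        Bucket { txs: vec![tx], depth }
    }

    // Aggregate fee/weight ratio across all txs currently in the bucket.
    fn fee_to_weight(&self) -> f64 {
        let fee: u64 = self.txs.iter().map(|tx| tx.fee).sum();
        let weight: u64 = self.txs.iter().map(|tx| tx.weight).sum();
        fee as f64 / weight as f64
    }
}
```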
Out of curiosity, how frequently does this happen, and how?
The aggregation failure?
Basically the only way this would fail is if the aggregated tx ended up being too big to fit in a block. Say we attempted to aggregate two huge transactions that together were larger than our block weight limit. This should happen only rarely.
The only other way this could happen is if the pool got itself into a bad state somehow, with a double spend or something similar in there.
This should never happen in practice as we check for consistency at various stages of the tx pool lifecycle.
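A minimal sketch of the kind of weight check that would make aggregation fail. The function and parameter names here are assumptions for illustration, not grin's actual API.

```rust
// Illustrative sketch: aggregation is rejected when the combined weight of the
// candidate txs would exceed the block weight limit. Names are assumptions.
fn can_aggregate(bucket_weight: u64, tx_weight: u64, max_block_weight: u64) -> bool {
    // If the aggregated tx would be too big to fit in a block, aggregation
    // fails and the new tx falls through to the "discard" branch above.
    bucket_weight + tx_weight <= max_block_weight
}
```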
Sorry for being so late to the party. So here we'd prioritize a transaction with no dependent ("child") transaction over a potentially much higher paying one with one dependent that lowers its parent's fee/weight ratio?
Not quite, no. We would bucket these up into three buckets.
i.e. B and C would not be bucketed together because C would lower the bucket fee/weight ratio. But they would both still be included in (separate) buckets. We would then sort the buckets by fee/weight ratio (while maintaining the ordering constraint between B and C). So in your example: B, then A, then C. Does this make sense?
tl;dr Bucket dependent txs together if there is no negative effect on fee/weight ratios.
Another example, say we have the following -
In this example we would bucket them as -
We would then sort them as -
The separate buckets only affect the ordering and the filtering, not necessarily the final cut-through for the block.
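To make the ordering concrete, here is a small sketch using made-up (fee, weight) values; A, B, C and their numbers are purely illustrative assumptions. C depends on B and would drag B's fee/weight ratio down, so it lands in its own bucket, and the buckets then sort as B, A, C.

```rust
fn main() {
    // Hypothetical (name, fee, weight) values for illustration only.
    // B is a parent tx, C spends an output of B, A is independent.
    let a = ("A", 9u64, 10u64); // fee/weight = 0.9
    let b = ("B", 8u64, 5u64);  // fee/weight = 1.6
    let c = ("C", 1u64, 10u64); // fee/weight = 0.1, depends on B

    // C would lower B's bucket ratio (9/15 = 0.6 < 1.6), so it goes in its
    // own bucket rather than being aggregated with B.
    let mut buckets = vec![vec![b], vec![a], vec![c]];

    // Sort buckets by aggregate fee/weight, descending. The dependency
    // constraint (B before C) is preserved because C's bucket has a strictly
    // lower ratio than B's bucket by construction.
    buckets.sort_by(|x, y| {
        let ratio = |bucket: &Vec<(&str, u64, u64)>| {
            let fee: u64 = bucket.iter().map(|t| t.1).sum();
            let weight: u64 = bucket.iter().map(|t| t.2).sum();
            fee as f64 / weight as f64
        };
        ratio(y).partial_cmp(&ratio(x)).unwrap()
    });

    let order: Vec<&str> = buckets.iter().flat_map(|b| b.iter().map(|t| t.0)).collect();
    println!("{:?}", order); // ["B", "A", "C"]
}
```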
Said another way - it is possible to bump a tx up the priority list by sending a high-fee dependent tx.
Edit: So by definition -
i.e. buckets never depend on buckets with lower fee/weight ratios. So we don't need the additional ordering constraint. Need to think this through a bit more but I'm pretty sure we can simplify the sorting logic here.
bucket_transactions now returns the underlying txs
Thanks a lot for the detailed explanation, all makes sense now. Very nice!
Reworked bucket_transactions to make it fee_to_weight aware.
We also return the vec of underlying txs from bucket_transactions. We used to return the aggregated bucket txs.
bucket_transactions now performs the following logic -
- Txs are bucketed by dependency, with each bucket tracking its aggregated txs and a "depth". Bucket depth is incremented if a bucket depends on an earlier bucket.
- A tx is aggregated into an existing bucket unless doing so would reduce the aggregate fee_to_weight, in which case we start a new bucket (and increment the depth).
- Buckets are sorted by depth (ascending) and fee_to_weight (descending) to preserve dependency ordering and maximize both cut-through and overall fees (a minimal sketch of this sort follows the list).
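A sketch of the two-key sort described above, assuming a Bucket that carries a depth and its aggregate fee and weight; the field names and helper are assumptions, not the actual pool code.

```rust
// Assumed shape for illustration only: each bucket knows its dependency depth
// and its aggregate fee and weight (not the actual pool types).
struct Bucket {
    depth: usize,
    fee: u64,
    weight: u64,
}

fn sort_buckets(buckets: &mut Vec<Bucket>) {
    // Depth ascending first (so a bucket always comes after any bucket it
    // depends on), then fee_to_weight descending within the same depth.
    // Cross-multiplying avoids floats (ignoring overflow for brevity).
    buckets.sort_by(|a, b| {
        a.depth
            .cmp(&b.depth)
            .then_with(|| (b.fee * a.weight).cmp(&(a.fee * b.weight)))
    });
}
```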
The vec of txs returned by bucket_transactions satisfies the following -
This prevents low fee 0-conf txs from "piggy-backing" off of higher fee txs in the pool, while enabling low fee 0-conf txs stuck in the pool to be spent via high fee 0-conf txs (CPFP style).
The pool logic will now attempt to maximize cut-through as long as overall fees are not adversely impacted.
A nice side-effect of all this is related to tx eviction (see #2706). The txs at the end of the vec of txs returned by bucket_transactions are good candidates for eviction. They are likely to have low fee_to_weight and can be safely evicted without affecting dependent txs and without impacting cut-through.
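A sketch of what eviction based on this ordering might look like; `eviction_candidate` is a hypothetical helper, not the pool's actual API.

```rust
// Hypothetical sketch: given txs already ordered by bucket_transactions
// (dependencies first, best fee_to_weight first), the cheapest eviction
// candidate is simply the last tx in the vec.
fn eviction_candidate<T>(bucketed_txs: &[T]) -> Option<&T> {
    // The tail of the list has the lowest priority: it is likely to have a low
    // fee_to_weight and nothing later in the ordering depends on it.
    bucketed_txs.last()
}
```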