When preparing mineable transactions via the txpool we add transactions to buckets and then aggregate each bucket. The buckets allow us to group dependent transactions (0-conf txs) together to maximize aggregation in the candidate block.
However, we do not take max block weight into account when building these buckets.
If we take a transaction at the limit of max block weight and bucket it along with a 0-conf transaction that spends one of its outputs, we end up producing an aggregate transaction that exceeds the max block weight.
Proposal:
Every time we add a transaction to an existing bucket (because it spends an output that exists in that bucket), we need to construct the aggregate tx and determine its weight. If this exceeds the limit then we cannot proceed with the bucketing, and we should start a new bucket instead.
This ensures all buckets remain below the max block weight.
Then, in the final step (as we do today), we combine multiple buckets into the final set of aggregate transactions, ensuring we remain below the max block weight.