
Evict transactions from the pool by order of fee/weight #1417

Closed
ignopeverell opened this issue Aug 24, 2018 · 9 comments · Fixed by #2797
Labels
enhancement must-have Required for the associated milestone
Milestone

Comments

@ignopeverell
Contributor

We should evict transactions when the pool gets full (more than max_pool_size) instead of outright rejecting them; otherwise one could relatively cheaply send a lot of transactions just above the minimum fee and block every other transaction from even entering the pool. The ordering should be very similar to what's used by prepare_mineable_transactions, and we could just eliminate the bottom 10%.
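A minimal sketch of the eviction described above. Everything here is an assumption for illustration: `PoolEntry`, its fields, and the 10% fraction are stand-ins, not grin's actual pool types or the real ordering code in prepare_mineable_transactions.

```rust
// Hypothetical sketch: evict the bottom 10% of pool entries by
// fee-per-weight, mirroring the ordering prepare_mineable_transactions
// would apply. PoolEntry and its fields are illustrative, not grin's types.

#[derive(Debug, Clone)]
struct PoolEntry {
    fee: u64,
    weight: u64,
}

/// Sort descending by fee/weight and drop the cheapest 10%.
fn evict_bottom_tenth(mut entries: Vec<PoolEntry>) -> Vec<PoolEntry> {
    // Cross-multiply to compare fee/weight ratios without floating point:
    // a.fee/a.weight > b.fee/b.weight  <=>  a.fee * b.weight > b.fee * a.weight
    entries.sort_by(|a, b| (b.fee * a.weight).cmp(&(a.fee * b.weight)));
    let keep = entries.len() - entries.len() / 10;
    entries.truncate(keep);
    entries
}

fn main() {
    let pool = vec![
        PoolEntry { fee: 40, weight: 10 }, // 4.0 fee per unit of weight
        PoolEntry { fee: 20, weight: 10 }, // 2.0
        PoolEntry { fee: 1, weight: 10 },  // 0.1 -- first to go when full
    ];
    // With only 3 entries, 3 / 10 == 0, so nothing is evicted yet.
    println!("{:?}", evict_bottom_tenth(pool));
}
```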

@ignopeverell ignopeverell added this to the Mainnet milestone Aug 24, 2018
@ignopeverell ignopeverell added the must-have Required for the associated milestone label Aug 24, 2018
@yeastplume
Member

Looking at this now.. makes sense, but it sounds like a potentially very frustrating user experience when blocks are full, if transactions are just dropped with no real notification to anyone... anyone affected will just have to wait around. Perhaps an 'is this tx in the pool' API function might be in order as well?

If we ever start having this problem we're doing very well

@yeastplume
Member

yeastplume commented Aug 27, 2018

Been staring at this for a while now, mostly to familiarise myself with code I haven't thought too hard about.. so please bear with me here. I'm a bit confused and am most probably missing something about how this should work.

I'm assuming we're only referring to the size of the 'fluffed' tx pool and not the stem pool. max_pool_size on its own probably isn't a very good indicator of how full the pool actually is: by the time transactions are placed in the txpool (i.e. the fluffed pool), they're big franken-transactions that have been cut-through and aggregated with the contents of the current tx pool as they went through each hop in the stem phase.

Assuming we're looking for the original transactions with fees most suitable to the weight placed on the chain, once something is fluffed we'd need each individual transaction available in its unmodified form to decide which should be kept and which dropped. This seems to exist: when add_to_txpool is called, all of the transactions related to the passed-in transaction are de-aggregated (using the original transactions, which are kept all along the way). At the moment, this puts back any outputs/inputs that were cut and returns another franken-transaction with the cut-through undone.

So now, do we need to go and de-aggregate every transaction that's in the txpool as well as the current one (which essentially means finding all of the original transactions), apply the weighting criteria, then drop inputs/outputs and kernels from the franken-transactions accordingly?

Also, do we have any documentation anywhere that describes the full stem/aggregate/fluff/aggregate again process that goes on in the pool?

@antiochp
Member

antiochp commented Aug 27, 2018

max_pool_size should be based on tx weight I think (if it's not already).
The tx weight is based on the number of inputs, outputs and kernels, so whether it's lots of small txs or one big aggregated tx, the overall weight should be the same.
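For illustration, a weight function of this shape makes the point concrete: because the input/output/kernel counts simply add under aggregation, the aggregate's weight equals the sum of its parts. The coefficients below are placeholder assumptions, not grin's actual consensus constants.

```rust
// Illustrative tx weight as a linear function of inputs, outputs and
// kernels. The coefficients are assumed placeholders, not the values
// actually used in grin's consensus code.
const INPUT_WEIGHT: u64 = 1;
const OUTPUT_WEIGHT: u64 = 21;
const KERNEL_WEIGHT: u64 = 3;

fn tx_weight(inputs: u64, outputs: u64, kernels: u64) -> u64 {
    inputs * INPUT_WEIGHT + outputs * OUTPUT_WEIGHT + kernels * KERNEL_WEIGHT
}

fn main() {
    // Aggregating 100 small txs (1 input, 1 output, 1 kernel each) into
    // one big tx leaves the total weight unchanged, since the counts add.
    // (Cut-through during aggregation would reduce it further.)
    let one_small = tx_weight(1, 1, 1);
    let aggregated = tx_weight(100, 100, 100);
    assert_eq!(aggregated, 100 * one_small);
}
```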

As far as the tx_pool is concerned there are no original txs - all it sees is the aggregated fluffed txs.

they're big franken-transactions that have been cut-through and aggregated with the contents of the current tx pool as they go through each hop in the stem phase.

Note: we only aggregate with txs in the stempool (stem phase, pre-fluffing), not the txpool (post-fluffing).

We have to either accept a whole "franken-tx" to the txpool or evict it as a whole unit, there is no info available to de-aggregate it back into constituent txs.

If we could de-aggregate at this point then the whole tx aggregation thing is broken as we have not gained any privacy here...

@yeastplume
Member

Okay, thanks.. I was a bit confused by the deaggregate function: I assumed it was trying to de-aggregate a single transaction by repopulating it against a list of original transactions, but it's actually operating on already-aggregated transactions.

max_pool_size is definitely just based on the number of items in the pool, unless I'm missing something.

It doesn't sound like this would work from an end-user perspective: I could have my generously fee'd transaction aggregated with a whole load of minimum-fee spam transactions, and have it rejected as a result.

@antiochp
Member

Yeah the issue around aggregating txs with widely differing fees has been brought up by @tromp before.
Sounds like this may be worth tackling first.

And max_pool_size should probably be reworked to be consistent with block and tx weight calcs.
1,000 aggregated txs each with 1,000 kernels is very different to 1,000 txs each with a single kernel.
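A small sketch of that discrepancy: a count-based cap sees both pools below as the same size, while a weight-based cap does not. The `KERNEL_WEIGHT` constant here is an illustrative assumption (and only kernels are counted, for brevity), not grin's real weight calculation.

```rust
// Sketch: count-based vs weight-based pool limits. KERNEL_WEIGHT is an
// assumed placeholder; real tx weight would also include inputs/outputs.
const KERNEL_WEIGHT: u64 = 3;

/// Total pool weight given the kernel count of each tx in the pool.
fn pool_weight(kernel_counts: &[u64]) -> u64 {
    kernel_counts.iter().map(|k| k * KERNEL_WEIGHT).sum()
}

fn main() {
    let heavy: Vec<u64> = vec![1_000; 1_000]; // 1,000 txs, 1,000 kernels each
    let light: Vec<u64> = vec![1; 1_000];     // 1,000 txs, 1 kernel each
    assert_eq!(heavy.len(), light.len());     // same tx count...
    // ...but 1,000x the weight, so a count cap treats them identically
    // while a weight cap would not.
    assert_eq!(pool_weight(&heavy), 1_000 * pool_weight(&light));
}
```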

@ignopeverell
Contributor Author

The idea of the limit is simply to not have all nodes crash with an out-of-memory. Someone with a lot of money, in the presence of low fees, could try to get cheeky. This has happened to more than one blockchain. At this point, we're trading a bad user experience for a very bad network failure.

It'd be best if we could trigger this strictly on transaction size in bytes (and not weight or count), simply because that's what would crash the node. And then evict some cheapo transactions.
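A hedged sketch of that byte-based trigger: track the serialized size of the pool and, when an incoming tx would push it past the cap, evict the lowest fee-per-byte entries until it fits. All names and the cap are assumptions for illustration; in the real code the size would come from actually serializing each transaction.

```rust
// Hypothetical sketch of a byte-based pool cap with cheapest-first
// eviction. PoolTx, make_room, and the cap value are illustrative
// assumptions, not grin's pool API.
#[derive(Debug, Clone)]
struct PoolTx {
    fee: u64,
    size_bytes: u64,
}

/// Evict lowest fee-per-byte txs until `incoming` fits under the cap.
fn make_room(pool: &mut Vec<PoolTx>, incoming: &PoolTx, max_pool_bytes: u64) {
    // Sort descending by fee-per-byte so the cheapest tx sits at the end
    // (cross-multiplied comparison avoids floating point).
    pool.sort_by(|a, b| (b.fee * a.size_bytes).cmp(&(a.fee * b.size_bytes)));
    let mut total: u64 = pool.iter().map(|t| t.size_bytes).sum();
    while total + incoming.size_bytes > max_pool_bytes {
        match pool.pop() {
            Some(evicted) => total -= evicted.size_bytes,
            None => break, // incoming alone exceeds the cap
        }
    }
}

fn main() {
    let mut pool = vec![
        PoolTx { fee: 5, size_bytes: 40 },
        PoolTx { fee: 1, size_bytes: 40 },
    ];
    // 80 bytes in the pool, cap of 80: the fee=1 tx is evicted to make
    // room for the 30-byte incoming tx.
    make_room(&mut pool, &PoolTx { fee: 3, size_bytes: 30 }, 80);
    assert_eq!(pool.len(), 1);
    assert_eq!(pool[0].fee, 5);
}
```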

@antiochp
Member

I see what you're saying - weight is an approximation of size, but we don't care about weight here in the Grin sense, we just care about the bytes themselves.

@sesam
Contributor

sesam commented Jan 1, 2019

Some of this is probably already done. Is this closeable / what's still missing?

@quentinlesceller
Member

Not fixed yet @sesam. Will try to do a final push this week.
