
fatxpool: add pending_extrinsics_len RPC call #7138

Closed
Tracked by #5472
michalkucharczyk opened this issue Jan 13, 2025 · 9 comments
@michalkucharczyk
Contributor

Add a pending_extrinsics_len RPC call to retrieve the number of transactions pending in the transaction pool.

Rationale: to send transactions efficiently (e.g. for spamming tests, where a certain level of transactions must be maintained to ensure blocks get filled), the sender needs to know how many transactions are in the pool. The current approach, calling pending_extrinsics and getting back all the extrinsics, is not efficient. A call returning the number of ready/future transactions would be nice.
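
A minimal sketch of what the node-side API could look like, as a jsonrpsee-style trait. The method name, the trait name, and the split into ready/future counts are assumptions for illustration, not a settled design:

    use jsonrpsee::{core::RpcResult, proc_macros::rpc};

    // Hypothetical RPC extension; all names are illustrative only.
    #[rpc(client, server)]
    pub trait TxPoolCountApi {
        /// Returns the number of transactions currently in the pool, split
        /// into ready (includable right now) and future (waiting on a
        /// dependency, e.g. a nonce gap).
        #[method(name = "author_pendingExtrinsicsLen")]
        fn pending_extrinsics_len(&self) -> RpcResult<(u64, u64)>;
    }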

@michalkucharczyk michalkucharczyk self-assigned this Jan 13, 2025
@michalkucharczyk michalkucharczyk added the T0-node This PR/Issue is related to the topic “node”. label Jan 13, 2025
@bkchr
Member

bkchr commented Jan 14, 2025

Why not just track the transactions in the pool? Then you know when they are part of a block etc.

@michalkucharczyk
Contributor Author

Sure, tx-events could be used for tracking the count. But that only works for txs sent with submit_and_watch; it won't work for submit. You also won't be able to account for transactions gossiped from other nodes.

An extra RPC call seems like a super easy and obvious nice-to-have.

@bkchr
Member

bkchr commented Jan 14, 2025

You also won't be able to account for transactions gossiped from other nodes.

Yeah, but this argument also works the other way around. Because you don't know the state of the tx pool on the other nodes, you don't know if pushing more txs to the local node makes any sense at all.

@michalkucharczyk
Contributor Author

If the pool of the RPC node is full, I won't send transactions there.

If other nodes are full, it is the transaction protocol's responsibility to make sure txs make their way to the nodes (and are not silently dropped), not the RPC API's.

I believe this call would also be useful in (production) scenarios where the backend has a huge number of transactions to send into the chain (e.g. in gaming). Instead of sending blindly and watching whether anything is dropped, we could just read a number. This could also be an RPC subscription, to make it more reliable (so the client does not need to poll).

@bkchr
Member

bkchr commented Jan 14, 2025

I believe this call would also be useful in (production) scenarios where the backend has a huge number of transactions to send into the chain (e.g. in gaming). Instead of sending blindly and watching whether anything is dropped, we could just read a number. This could also be an RPC subscription, to make it more reliable (so the client does not need to poll).

The tx pool is not limited only by the pure number of transactions, but also by the total size of all these transactions. This means that only observing the number of transactions would not be enough.

IMO if you are interested in what happens to your transaction, send it with submit_and_watch and track the status.

@michalkucharczyk
Contributor Author

This means that only observing the number of transactions would not be enough.

Yeah, that is true. We could add more fields to the call. It could even be a health-checker endpoint (e.g. providing the txpool status).
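
The pool already tracks these numbers internally: sc_transaction_pool_api::PoolStatus carries the ready/future counts together with their total byte sizes. A status endpoint could expose roughly that shape; the struct below mirrors those fields, but the wire format is an assumption:

    use serde::Serialize;

    // Sketch of a possible response, mirroring the pool's internal
    // PoolStatus; the serialized form is an assumption.
    #[derive(Serialize)]
    pub struct TxPoolStatus {
        /// Transactions that could go into the very next block.
        pub ready: usize,
        /// Total encoded size of the ready transactions, in bytes.
        pub ready_bytes: usize,
        /// Transactions waiting on a dependency (e.g. a nonce gap).
        pub future: usize,
        /// Total encoded size of the future transactions, in bytes.
        pub future_bytes: usize,
    }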

Also, the main point here is that we don't necessarily want to operate at the pool's limit boundary. If you have a sender and you want to ensure smooth production of blocks fully filled with transactions (N per block), it is enough to maintain 2N or 3N transactions in the pool. And to do that you need to know what is inside. This would be useful for a number of long-running test scenarios.
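
For illustration, the kind of sender loop such a call would enable, assuming the hypothetical author_pendingExtrinsicsLen method sketched above; the 2N target, the polling interval, and the next_tx closure are all assumptions:

    use jsonrpsee::{core::client::ClientT, http_client::HttpClient, rpc_params};
    use std::time::Duration;

    // Hypothetical maintenance loop: keep ~2N ready transactions in the
    // pool so blocks holding N transactions stay full. `next_tx` stands
    // in for whatever produces signed, encoded extrinsics.
    async fn maintain_pool(
        client: &HttpClient,
        n_per_block: u64,
        mut next_tx: impl FnMut() -> sp_core::Bytes,
    ) -> anyhow::Result<()> {
        loop {
            let (ready, _future): (u64, u64) = client
                .request("author_pendingExtrinsicsLen", rpc_params![])
                .await?;
            // Top the pool back up so the block builder never runs dry.
            for _ in ready..(2 * n_per_block) {
                let _hash: sp_core::H256 = client
                    .request("author_submitExtrinsic", rpc_params![next_tx()])
                    .await?;
            }
            tokio::time::sleep(Duration::from_millis(500)).await;
        }
    }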

When it comes to submit_and_watch and submit, we should have long-term tests for both of them. Also note that submit_and_watch is not reliable: once the channel is broken, there is no way to reconnect to it and check what happened to the transaction. Maybe we need some improvements in this area.

I totally agree that we can implement fancy event tracking and the block monitor on the client side. Having an RPC call is just simpler.

@bkchr
Member

bkchr commented Jan 15, 2025

I totally agree that we can implement fancy event tracking and the block monitor on the client side. Having an RPC call is just simpler.

Is it that fancy?

for tx in endless_x {
    let mut watcher = submit_and_watch(tx);

    // Track the tx until it either errors out or lands in a block.
    transaction_in_flight.push(async move {
        loop {
            if watcher.next().await.is_error_or_in_block() { return; }
        }
    }).await;
}

push internally waits for a free slot if we have too many transactions in flight.
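
One way that bounded push could look, as a rough sketch on top of FuturesUnordered (the type and its cap handling are assumptions, not an existing API):

    use futures::stream::{FuturesUnordered, StreamExt};
    use std::future::Future;

    // Bounded set of in-flight futures: `push` waits for one tracked
    // future to finish once the cap is reached.
    struct InFlight<F> {
        futures: FuturesUnordered<F>,
        cap: usize,
    }

    impl<F: Future> InFlight<F> {
        async fn push(&mut self, fut: F) {
            if self.futures.len() >= self.cap {
                // At capacity: drive the set until one future completes.
                self.futures.next().await;
            }
            self.futures.push(fut);
        }
    }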

Also note that submit_and_watch is not reliable

Why? What does that mean? Why does it end? Because there is some shitty reverse proxy in front of the node? Because the node has a bug? A connection should not be dropped out of nowhere.

If you have a sender and you want to ensure smooth production of blocks fully filled with transactions (N per block), it is enough to maintain 2N or 3N transactions in the pool. And to do that you need to know what is inside. This would be useful for a number of long-running test scenarios.

You are totally arguing here under the assumption that you are connected to the block builder. If you are not connected to the block builder, you have no idea what their tx pool looks like.

@michalkucharczyk
Contributor Author

Is it that fancy?

Fancier than just reading a number. It also won't support the submit operation (which is part of the API and should be tested too).

Why? What does that mean? Why does it end? Because there is some shitty reverse proxy in front of the node? Because the node has a bug? A connection should not be dropped out of nowhere.

I don't mean a bug or a reverse proxy, but rather the API design, which does not provide any means to resume listening if anything goes wrong (e.g. a node crash). So you need to implement a block monitor. But that is not the main point here.

You are totally arguing here under the assumption that you are connected to the block builder. If you are not connected to the block builder, you have no idea what their tx pool looks like.

Sorry, I did not get this part.

For long-term testing we need to be sure we can build full blocks. So yes, I know how the network behaves, how blocks are being built, and what I expect.

Not sure why we are arguing here. This would help us with testing, without any extra effort, just simplifying flows.

If this is not acceptable to you, I can live without it, using the pending_extrinsics call as I do today, with some extra RPC params to work around response-too-big problems :).
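
Roughly, the workaround in use today: fetch the full list of pending extrinsics and only look at its length. author_pendingExtrinsics is the existing RPC; the client setup is illustrative, and the extra size-limit params mentioned above are omitted:

    use jsonrpsee::{core::client::ClientT, http_client::HttpClientBuilder, rpc_params};

    // Count pending transactions by downloading all of them; the
    // response grows with the pool size and can hit size limits.
    async fn pending_len(url: &str) -> anyhow::Result<usize> {
        let client = HttpClientBuilder::default().build(url)?;
        let pending: Vec<sp_core::Bytes> = client
            .request("author_pendingExtrinsics", rpc_params![])
            .await?;
        Ok(pending.len())
    }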

Should we close it?

@michalkucharczyk michalkucharczyk closed this as not planned Jan 15, 2025
@bkchr
Member

bkchr commented Jan 15, 2025

If this is not acceptable to you, I can live without it, using the pending_extrinsics call as I do today, with some extra RPC params to work around response-too-big problems :).

These things for sure should be fixed.

One thing we should also not forget is that these APIs were all built without light clients in mind. The new JSON-RPC API, for example, doesn't even have the pending_extrinsics RPC right now. Maybe it is still worth adding there in some way, and if we put it there, we could bring in a better version, as you explained here.
