Support receiving verified transactions from the vortexor#5321
lijunwangs merged 9 commits into anza-xyz:master from
Conversation
Codecov Report (coverage diff):

@@            Coverage Diff            @@
##           master    #5321     +/-   ##
=========================================
- Coverage    83.0%    82.9%    -0.1%
=========================================
  Files         828      830       +2
  Lines      375858   376142     +284
=========================================
+ Hits       312113   312182      +69
- Misses      63745    63960     +215
let (forward_stage_sender, forward_stage_receiver) = bounded(1024);
let sigverify_stage = {
    let sig_verifier = if let Some(vortexor_receivers) = vortexor_receivers {
I think we should optionally spawn vortexor receiver, but we should still spawn the local sigverify stage.
AFAICT the quic and fetch stages still exist, and even if the ports are not advertised on gossip it's possible to still send to them.
This also makes upgrades tied together. I can't restart my vortexor instance because my validator is relying on it.
We should let the operator switch their advertised tpu port(s) at runtime, so they can switch back to local mode to upgrade vortexor without screwing up the validator.
Or switch to another vortexor instance.
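A rough illustration of the runtime switch suggested here (entirely hypothetical; `AdvertisedTpu` and its methods are not the PR's code): keep the advertised TPU address in a slot that the admin RPC can swap while the streamers keep running.

```rust
use std::net::SocketAddr;
use std::sync::RwLock;

// Hypothetical: advertised TPU address that admin RPC can swap at runtime,
// so the operator can fall back to local mode without restarting the node.
struct AdvertisedTpu {
    addr: RwLock<SocketAddr>,
}

impl AdvertisedTpu {
    fn set(&self, new_addr: SocketAddr) {
        *self.addr.write().unwrap() = new_addr;
    }
    fn get(&self) -> SocketAddr {
        *self.addr.read().unwrap()
    }
}

fn main() {
    let local: SocketAddr = "127.0.0.1:8003".parse().unwrap();
    let vortexor: SocketAddr = "10.138.0.136:9194".parse().unwrap();
    let tpu = AdvertisedTpu { addr: RwLock::new(vortexor) };
    // Vortexor needs an upgrade: switch advertising back to the local TPU port.
    tpu.set(local);
    assert_eq!(tpu.get(), local);
    println!("advertised TPU now {}", tpu.get());
}
```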
Great point. We plan to support the dynamic management of subscriptions via Admin RPC on the validator. See https://github.com/anza-xyz/agave/blob/master/vortexor/Readme.md
Right - but we shouldn't just let the quic & fetch stages send to a SV that doesn't exist. It's likely we will cause a panic in that case soon, and then it's a vulnerability if I can guess the ports your tpu is on.
Even if we can't switch off of it in this PR, we shouldn't remove SV imo
Disable TPU streamers when vortexor receiver is configured.
Sorry for the delay, but this seems the wrong direction to me.
If the vortexor goes down, the operator is forced to restart their node to go back to normal TPU. This seems like a necessary feature to me, not something that we should do as follow-up.
I'm happy to hear others' opinions, as maybe I'm being overly cautious about this.
This feature is only enabled when --tpu-vortexor-receiver-address is set. Auto-fallback and heartbeat features will be delivered in follow-on PRs.
($sender:expr, $batch:expr, $count:expr) => {
    match $sender.send($batch) {
        Ok(_) => {
            trace!("Sent batch: {} received from vortexor successfully", $count);
will we ever turn this on? this seems excessive.
It was only used for debugging. It will be enhanced to report metrics instead.
Self::recv_send(
    batch_receiver,
    recv_timeout,
    8,
extract this to a constant. can you comment where it came from?
Will do. It is a rough estimate to limit memory usage to at most PACKETS_PER_BATCH * 8 = 512 packets across the buffered batches.
    ))
};
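Based on the discussion above, a simplified, hypothetical sketch of what a `recv_send` with a batch cap of 8 could look like (the names, signature, and `Vec<u8>` batch type are illustrative, not the actual solana-core implementation):

```rust
use std::sync::mpsc::{channel, Receiver, RecvTimeoutError, Sender};
use std::time::Duration;

// Illustrative constant: bounds buffered batches per call, limiting memory
// to roughly PACKETS_PER_BATCH * 8 packets in flight.
const MAX_BATCHES_PER_RECV: usize = 8;

fn recv_send(
    from_vortexor: &Receiver<Vec<u8>>,
    to_next_stage: &Sender<Vec<u8>>,
    timeout: Duration,
) -> Result<usize, RecvTimeoutError> {
    // Block for the first batch, up to `timeout`.
    let first = from_vortexor.recv_timeout(timeout)?;
    to_next_stage.send(first).ok();
    let mut sent = 1;
    // Opportunistically drain up to the cap without blocking.
    while sent < MAX_BATCHES_PER_RECV {
        match from_vortexor.try_recv() {
            Ok(batch) => {
                to_next_stage.send(batch).ok();
                sent += 1;
            }
            Err(_) => break,
        }
    }
    Ok(sent)
}

fn main() {
    let (vtx, vrx) = channel();
    let (btx, brx) = channel();
    for i in 0..10u8 {
        vtx.send(vec![i]).unwrap();
    }
    let sent = recv_send(&vrx, &btx, Duration::from_millis(10)).unwrap();
    assert_eq!(sent, 8); // capped at 8 batches per call
    assert_eq!(brx.try_iter().count(), 8);
    println!("forwarded {} batches", sent);
}
```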
let vote_sigverify_stage = {
Votes do not go through the vortexor? Will they in the future? If I want to offload the sigverify task, it seems it should be fully offloaded.
Agreed. I think we can offload votes to vortexor as well. To be done in future PRs.
apfitzge left a comment:
Approach seems fine as an intermediate step. We can spawn the quic & fetch & sigverify threads on detecting shutdown or an admin RPC change in future PRs.
LGTM - @lijunwangs please get approval from a second relevant person. These are larger arch changes.
@sakridge @bw-solana can you help review these as well? Thanks!
Revert "Support receiving verified transactions from the vortexor (#5321)" (#6525): The reverted commit introduced the solana-vortexor-receiver crate, which was used by solana-core. solana-vortexor-receiver was not set to publish, which means solana-core cannot be published either. So, back out solana-vortexor-receiver in order to unblock crate publishing. This reverts commit e44c17d.
Backport of #6525 (#6529), cherry-picked from commit 63cf093. Co-authored-by: steviez <steven@anza.xyz>
Problem
This PR adds support for receiving verified transactions from the vortexor.
Summary of Changes
A new validator argument starts the service that receives the verified transactions, e.g.:

--tpu-vortexor-receiver-address 10.138.0.136:8100
And the validator can use the existing TPU address overrides to point its advertised TPU addresses at the vortexor, e.g.:

--public-tpu-address 10.138.0.136:9194 --public-tpu-forwards-address 10.138.0.136:9195
Fixes #