prioritization fee cache: remove lru crate #30
Conversation
Looks like this needs a

Updated

Can you benchmark before/after the change for comparison? This would run the existing benches; if you find these benches are outdated, please feel free to update them 😄
master: 151675b, PR: 3030611. But I do not think this displays real improvements, because we only parse transactions and send the results to the background update thread.

master → PR diff:

diff --git a/runtime/benches/prioritization_fee_cache.rs b/runtime/benches/prioritization_fee_cache.rs
index 8c6bf1fe0a..f97691625d 100644
--- a/runtime/benches/prioritization_fee_cache.rs
+++ b/runtime/benches/prioritization_fee_cache.rs
@@ -44,22 +44,29 @@ fn build_sanitized_transaction(
fn bench_process_transactions_single_slot(bencher: &mut Bencher) {
let prioritization_fee_cache = PrioritizationFeeCache::default();
- let bank = Arc::new(Bank::default_for_tests());
+ let GenesisConfigInfo { genesis_config, .. } = create_genesis_config(10_000);
+ let bank0 = Bank::new_for_benches(&genesis_config);
+ let bank_forks = BankForks::new_rw_arc(bank0);
+ let bank = bank_forks.read().unwrap().working_bank();
+ let collector = solana_sdk::pubkey::new_rand();
+ let mut n = 0;
+ bencher.iter(move || {
+ n += 1;
+ let bank = Bank::new_from_parent(bank.clone(), &collector, n);
- // build test transactions
- let transactions: Vec<_> = (0..5000)
- .map(|n| {
- let compute_unit_price = n % 7;
- build_sanitized_transaction(
- compute_unit_price,
- &Pubkey::new_unique(),
- &Pubkey::new_unique(),
- )
- })
- .collect();
+ let transactions = (0..500)
+ .map(|n| {
+ let compute_unit_price = n % 7;
+ build_sanitized_transaction(
+ compute_unit_price,
+ &Pubkey::new_unique(),
+ &Pubkey::new_unique(),
+ )
+ })
+ .collect::<Vec<_>>();
- bencher.iter(|| {
prioritization_fee_cache.update(&bank, transactions.iter());
+ prioritization_fee_cache.finalize_priority_fee(bank.slot(), bank.bank_id());
});
}
diff --git a/runtime/src/prioritization_fee_cache.rs b/runtime/src/prioritization_fee_cache.rs
index 0490f59445..176a7ba405 100644
--- a/runtime/src/prioritization_fee_cache.rs
+++ b/runtime/src/prioritization_fee_cache.rs
@@ -44,6 +44,9 @@ struct PrioritizationFeeCacheMetrics {
// Accumulated time spent on finalizing block prioritization fees.
total_block_finalize_elapsed_us: AtomicU64,
+
+ finalized: AtomicU64,
+ cache: AtomicU64,
}
impl PrioritizationFeeCacheMetrics {
@@ -65,6 +68,7 @@ impl PrioritizationFeeCacheMetrics {
fn accumulate_total_cache_lock_elapsed_us(&self, val: u64) {
self.total_cache_lock_elapsed_us
.fetch_add(val, Ordering::Relaxed);
+ self.cache.fetch_add(val, Ordering::Relaxed);
}
fn accumulate_total_entry_update_elapsed_us(&self, val: u64) {
@@ -75,6 +79,7 @@ impl PrioritizationFeeCacheMetrics {
fn accumulate_total_block_finalize_elapsed_us(&self, val: u64) {
self.total_block_finalize_elapsed_us
.fetch_add(val, Ordering::Relaxed);
+ self.finalized.fetch_add(val, Ordering::Relaxed);
}
fn report(&self, slot: Slot) {
@@ -161,6 +166,7 @@ impl Drop for PrioritizationFeeCache {
.unwrap()
.join()
.expect("Prioritization fee cache servicing thread failed to join");
+ println!("slot_finalize_time {}us, cache_lock_time {}us", self.metrics.finalized.load(Ordering::Relaxed), self.metrics.cache.load(Ordering::Relaxed));
}
}
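The temporary counters in the diff above follow the usual relaxed-atomic accumulation pattern: each timing is added with `fetch_add(Ordering::Relaxed)`, since only the final sums are read, not any cross-thread ordering. A trimmed-down, self-contained sketch (struct and method names abbreviated from the diff, not the actual cache code):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Sketch of the metrics pattern from the diff: totals are AtomicU64
// counters bumped with fetch_add under Ordering::Relaxed.
#[derive(Default)]
struct Metrics {
    finalized: AtomicU64,
    cache: AtomicU64,
}

impl Metrics {
    fn accumulate_cache_lock_us(&self, val: u64) {
        self.cache.fetch_add(val, Ordering::Relaxed);
    }
    fn accumulate_finalize_us(&self, val: u64) {
        self.finalized.fetch_add(val, Ordering::Relaxed);
    }
}

fn main() {
    let metrics = Metrics::default();
    metrics.accumulate_cache_lock_us(120);
    metrics.accumulate_cache_lock_us(80);
    metrics.accumulate_finalize_us(42);
    // Same shape as the println! added in the Drop impl above.
    println!(
        "slot_finalize_time {}us, cache_lock_time {}us",
        metrics.finalized.load(Ordering::Relaxed),
        metrics.cache.load(Ordering::Relaxed),
    );
}
```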
Great to see a significant decrease in cache_lock_time! Do you mind adding
|
master: , PR: , PR after bf3cae6 (update per tx instead of per bank). But honestly I'm not sure whether it would be better to send an update per tx, or to collect and send one update per bank.
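The per-tx vs per-bank question above is essentially a channel-traffic trade-off: one message per transaction versus one batched message per bank. A hypothetical sketch with `std::sync::mpsc` (the names and types are illustrative, not the cache's actual API):

```rust
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel::<Vec<u64>>();

    // "Collect + send per bank": accumulate all priority fees seen for the
    // bank, then send a single message to the background update thread.
    let fees: Vec<u64> = (0..500u64).map(|n| n % 7).collect();
    tx.send(fees).unwrap();
    drop(tx);

    // One channel message per bank, instead of 500 per-tx messages.
    assert_eq!(rx.iter().count(), 1);
}
```

The per-tx variant would call `send` inside the per-transaction loop instead, trading larger batching latency for smaller per-message payloads.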
t-nelson
left a comment
this seems to be making several independent changes simultaneously. we generally reject these in favor of one pr per change. i'd highly recommend breaking it up
Yea, agree it'd be better (at least easier to review) if we can break it into separate PRs, perhaps:
The batch update was removed after the benchmark.
Did you read the entirety of my comment?

@tao-stones updated
Can you please rebase on master and clean up the commit history? We are generally opposed to merge commits. Also, it looks like the description needs updating. |
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:

@@ Coverage Diff @@
## master #30 +/- ##
=======================================
  Coverage    81.8%    81.9%
=======================================
  Files         841      841
  Lines      228242   228246    +4
=======================================
+ Hits       186923   186940   +17
+ Misses      41319    41306   -13
Moved from solana-labs#35228

Problem

`lru::LruCache` requires a write lock for any action.

Summary of Changes

Use `BTreeMap` instead of `LruCache`: a write lock is required only when finalizing a slot, and `BTreeMap` makes it easy to remove the oldest slot.
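The eviction described above works because `BTreeMap` iterates in key order, so the oldest slot is always the first key. A minimal standalone sketch; `evict_oldest` and the value type are hypothetical stand-ins, not the actual cache code:

```rust
use std::collections::BTreeMap;

/// Hypothetical helper: keep at most `max_slots` newest slots in the cache.
fn evict_oldest(cache: &mut BTreeMap<u64, Vec<u64>>, max_slots: usize) {
    while cache.len() > max_slots {
        // BTreeMap keys are ordered, so the first key is the oldest slot;
        // no per-access LRU bookkeeping (and no write lock on reads) needed.
        let oldest = *cache.keys().next().unwrap();
        cache.remove(&oldest);
    }
}

fn main() {
    let mut cache: BTreeMap<u64, Vec<u64>> = BTreeMap::new();
    for slot in [100u64, 101, 102, 103] {
        cache.insert(slot, vec![slot % 7]);
    }
    evict_oldest(&mut cache, 3);
    // Only the three newest slots survive.
    assert_eq!(cache.keys().copied().collect::<Vec<_>>(), vec![101, 102, 103]);
}
```

This is the structural reason the write lock can be confined to slot finalization: lookups never need to reorder entries, unlike an LRU.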