Replace mmap with file io in merkle tree hash calculation #3547
base: master
Conversation
rebase to pick up #3589
@@ -1160,16 +1255,15 @@ impl AccountsHasher<'_> {
        let binner = PubkeyBinCalculator24::new(bins);

        // working_set hold the lowest items for each slot_group sorted by pubkey descending (min_key is the last)
-       let (mut working_set, max_inclusive_num_pubkeys) = Self::initialize_dedup_working_set(
+       let (mut working_set, _max_inclusive_num_pubkeys) = Self::initialize_dedup_working_set(
max_inclusive_num_pubkeys is an estimate of the upper bound for the hash file size. It is only required when we use mmap - creating an mmap requires specifying the initial size, which could be over-allocated. After switching to the file writer, we don't need this any more.
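For context, here is a rough sketch of why the up-front size estimate matters for mmap but not for a file writer. This is not the PR's actual code; the memmap2 usage shown in the comment and the 32-byte hash size are assumptions for illustration.

use std::{
    fs::OpenOptions,
    io::{BufWriter, Write},
    path::Path,
};

// Hypothetical sketch: with an mmap-backed buffer, the file must be sized up
// front, so an over-estimated upper bound (e.g. max_inclusive_num_pubkeys * 32)
// has to be chosen before any hash is written:
//
//   file.set_len(max_inclusive_num_pubkeys as u64 * 32)?;
//   let mmap = unsafe { memmap2::MmapMut::map_mut(&file)? };
//
// With a buffered file writer, hashes are simply appended and the file grows on
// demand, so no size estimate is needed.
fn write_hashes_with_file_io(path: &Path, hashes: &[[u8; 32]]) -> std::io::Result<()> {
    let file = OpenOptions::new()
        .create(true)
        .write(true)
        .truncate(true)
        .open(path)?;
    let mut writer = BufWriter::new(file);
    for hash in hashes {
        writer.write_all(hash)?;
    }
    writer.flush()
}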
accounts-db/src/accounts_hash.rs
Outdated
@@ -619,6 +540,180 @@ impl AccountsHasher<'_> {
        (num_hashes_per_chunk, levels_hashed, three_level)
    }

    // This function is called at the top lover level to compute the merkle. It
this fn is copied from fn compute_merkle_root_from_slices<'b, F, T>. Do we need the other fn still?
Yes, we do.
compute_merkle_root_from_slices is still used when we compute the merkle tree at level 2 and above, where we have all the data in memory already.
The comments here may be helpful to understand:
// This function is called at the top level to compute the merkle hash. It
// takes a closure that returns an owned vec of hash data at the leaf level
// of the merkle tree. The input data for this bottom level are read from a
// file. For non-leaf nodes, where the input data is already in memory, we
// will use `compute_merkle_root_from_slices`, which is a version that takes
// a borrowed slice of hash data instead.
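To make the split concrete, here is a minimal, hypothetical sketch of the two access patterns. These are not the real accounts_hash.rs signatures; the names, the generic combine closure, and the single-level reduction are made up purely to contrast owned chunks (file-backed leaf level) with borrowed slices (in-memory upper levels).

// Leaf level: hash data lives in files, so a fetch closure hands back an owned
// chunk starting at a given index; we keep pulling chunks until a group of
// `fanout` children is filled and can be combined into a parent.
// (Assumes `get_chunk` returns at least one element whenever data remains.)
fn reduce_leaf_level<T: Clone, F, C>(total: usize, fanout: usize, get_chunk: F, combine: C) -> Vec<T>
where
    F: Fn(usize) -> Box<[T]>, // returns owned data read from a file
    C: Fn(&[T]) -> T,         // combines up to `fanout` children into one parent
{
    let mut parents = Vec::new();
    let mut buf: Vec<T> = Vec::new();
    let mut next = 0;
    while next < total || !buf.is_empty() {
        while buf.len() < fanout && next < total {
            let chunk = get_chunk(next);
            next += chunk.len();
            buf.extend_from_slice(&chunk);
        }
        let take = buf.len().min(fanout);
        parents.push(combine(&buf[..take]));
        buf.drain(..take);
    }
    parents
}

// Levels 2 and above: everything is already in memory, so a borrowed slice is enough.
fn reduce_in_memory_level<T: Clone, C: Fn(&[T]) -> T>(hashes: &[T], fanout: usize, combine: C) -> Vec<T> {
    hashes.chunks(fanout).map(combine).collect()
}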
accounts-db/src/accounts_hash.rs
Outdated
-    fn get_slice(&self, start: usize) -> &[Hash] {
+    // return the biggest hash data possible that starts at the overall index 'start'
+    // start is the index of hashes
+    fn get_data(&self, start: usize) -> Arc<Vec<u8>> {
can you help me understand the reason for returning Vec<u8> here and not Vec<Hash>? I know it will require some casting somewhere.
The cast is happening inside the caller - fn compute_merkle_root_from_start<F, T>, where T is the type Hash.
My thinking is that this function is a low-level function that just returns the raw bytes (similar to a file io read), and lets the caller cast them to the type it wants. In a hypothetical case, the caller may want to use a different hash type that has more or fewer bytes, depending on the type T passed in.
Since fn compute_merkle_root_from_start<F, T> is already designed to be generic on T, if we limit ourselves to Hash here, we are limiting compute_merkle_root_from_start.
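For illustration, a small sketch of that caller-side cast from raw bytes to a generic T via bytemuck. The StubHash stand-in and the first_hash helper are hypothetical; this only shows the casting pattern, not the actual compute_merkle_root_from_start code.

use bytemuck::Pod;

// Stand-in for the real Hash type: 32 plain bytes.
type StubHash = [u8; 32];

// Generic consumer: reinterprets the raw bytes handed back by the low-level
// reader as a slice of whatever hash type T the caller chose.
fn first_hash<T: Pod>(raw: &[u8]) -> Option<T> {
    // cast_slice panics if raw.len() is not a multiple of size_of::<T>()
    let hashes: &[T] = bytemuck::cast_slice(raw);
    hashes.first().copied()
}

fn main() {
    let raw = vec![0u8; 64]; // two 32-byte hashes worth of raw data
    let h: Option<StubHash> = first_hash(&raw);
    assert!(h.is_some());
}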
I made an experiment to change the return value to Vec<Hash>. This is what I came up with. It requires introducing unsafe code, which is awkward. And we may also have the issue of capacity not being divisible by 32 (the size of a hash).
Therefore, I prefer to keep it as Vec<u8> and let the caller cast it. wdyt?
@@ -337,7 +338,7 @@ impl CumulativeHashesFromFiles {
// return the biggest hash data possible that starts at the overall index 'start'
// start is the index of hashes
- fn get_data(&self, start: usize) -> Arc<Vec<u8>> {
+ fn get_data(&self, start: usize) -> Arc<Vec<Hash>> {
let (start, offset) = self.cumulative.find(start);
let data_source_index = offset.index[0];
let mut data = self.readers[data_source_index].lock().unwrap();
@@ -349,7 +350,16 @@ impl CumulativeHashesFromFiles {
let mut result_bytes: Vec<u8> = vec![];
data.read_to_end(&mut result_bytes).unwrap();
- Arc::new(result_bytes)
+
+ let hashes = unsafe {
+ let len = result_bytes.len() / std::mem::size_of::<Hash>();
+ let capacity = result_bytes.capacity() / std::mem::size_of::<Hash>();
+ let ptr = result_bytes.as_mut_ptr() as *mut Hash;
+ std::mem::forget(result_bytes);
+ Vec::from_raw_parts(ptr, len, capacity)
+ };
+
+ Arc::new(hashes)
}
}
I also think we should return Hashes instead of bytes here. And a Box<[Hash]> should make life easier as well. And no Arc in the return type.
let mut hashes = vec![Hash::default(); num_hashes].into_boxed_slice();
// todo: ensure there are `num_hashes` left to read in `data`. This ensures `.unwrap()` below is safe.
data.read_exact(bytemuck::must_cast_slice_mut(hashes.as_mut())).unwrap();
hashes
I prefer this method because it ensures we always have the correct alignment/size for each element. (I know in this case it is Hash, which has an alignment of 1. By using Hash and casting to bytes, the reader doesn't need to go and check every caller/use, as they would if we cast in the opposite direction.)
And:
- Box vs Vec: we don't want to allow the caller to grow the data after this function returns.
- Arc vs no-Arc: this is something the caller can decide, but nothing requires this function to return an Arc.
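A self-contained sketch of this suggestion, for reference. The StubHash stand-in, the read_hashes name, and the in-memory reader are assumptions for illustration; the point is reading directly into an owned Box of hashes via bytemuck, with no unsafe and no intermediate Vec<u8>.

use std::io::Read;

// Stand-in for the real Hash type: 32 plain bytes, alignment 1.
type StubHash = [u8; 32];

// Read exactly `num_hashes` hashes from `data` into an owned, fixed-size buffer.
// Returning Box<[StubHash]> (not Vec, not Arc) means the caller cannot grow it
// and can still wrap it in an Arc itself if it needs to.
fn read_hashes(mut data: impl Read, num_hashes: usize) -> std::io::Result<Box<[StubHash]>> {
    let mut hashes = vec![[0u8; 32]; num_hashes].into_boxed_slice();
    // must_cast_slice_mut views the hash buffer as raw bytes so read_exact can
    // fill it in place.
    data.read_exact(bytemuck::must_cast_slice_mut(hashes.as_mut()))?;
    Ok(hashes)
}

fn main() -> std::io::Result<()> {
    let bytes = vec![7u8; 96]; // three hashes worth of data
    let hashes = read_hashes(bytes.as_slice(), 3)?;
    assert_eq!(hashes.len(), 3);
    Ok(())
}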
pushed a commit to share the hash calculation from slice and from owned vec.
yes. done - changed get_data() to return Box<[Hash]>. 9cc3366
accounts-db/src/accounts_hash.rs
Outdated
            .unwrap();

        let mut result_bytes: Vec<u8> = vec![];
        data.read_to_end(&mut result_bytes).unwrap();
note that this allocates what could be a large Vec. It depends on hashing tuning parameters (num bins, etc.).
we have found in pop testing that large mem allocations can cause us to oom.
Good point.
We could add a cap on how many bytes we load here. The downside is that we may need to call this function multiple times.
I committed a change to cap the file read buffer size to 64MB.
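A minimal sketch of one way to apply such a cap, using std's Read::take. The constant value mirrors the 64MB mentioned above; the read_capped helper name is made up for illustration.

use std::io::{Read, Result};

const MAX_BUFFER_SIZE: u64 = 64 * 1024 * 1024; // 64MB cap per read

// Read at most MAX_BUFFER_SIZE bytes from the reader's current position,
// instead of read_to_end on everything remaining. The reader keeps its
// position, so callers must be prepared to call this repeatedly until the
// file is exhausted.
fn read_capped<R: Read>(reader: &mut R) -> Result<Vec<u8>> {
    let mut buf = Vec::new();
    reader.by_ref().take(MAX_BUFFER_SIZE).read_to_end(&mut buf)?;
    Ok(buf)
}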
I'm certainly happy with and supportive of the idea of the change here. What is the estimate for the lattice hash creation? Even then, we may need to use this code to create a full lattice hash from scratch initially or for full accountsdb verification.
accounts-db/src/accounts_hash.rs
Outdated
        // initial fetch - could return entire slice
        let data_bytes = get_hash_slice_starting_at_index(0);
        let data: &[T] = bytemuck::cast_slice(&data_bytes);
@jeffwashington The caller casts the data to the generic type T here.
accounts-db/src/accounts_hash.rs
Outdated
        for _k in 0..end {
            if data_index >= data_len {
                // we exhausted our data, fetch next slice starting at i
                data_bytes = get_hash_slice_starting_at_index(i);
@jeffwashington With a cap on how many bytes we load each time, we may find that we need to load data more times.
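Roughly, the consuming loop looks like the sketch below; this is a simplified, hypothetical version, not the exact PR code. The point is that a smaller per-fetch cap only means get_hash_slice_starting_at_index is invoked more often.

// Walk `total_hashes` hashes, refetching an owned buffer whenever the current
// one is exhausted. (Assumes the closure returns at least one hash while data
// remains.)
fn visit_all_hashes<F>(total_hashes: usize, get_hash_slice_starting_at_index: F, mut visit: impl FnMut(&[u8; 32]))
where
    F: Fn(usize) -> Box<[[u8; 32]]>,
{
    let mut i = 0;
    while i < total_hashes {
        let data = get_hash_slice_starting_at_index(i);
        for hash in data.iter().take(total_hashes - i) {
            visit(hash);
            i += 1;
        }
    }
}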
I still need to go through the merkle tree code.
Heh, trying to fit the file io impl into the existing mmap api is really clunky... That's probably the right choice though, as this code will go away after the accounts lt hash is activated.
accounts-db/src/accounts_hash.rs
Outdated
    #[cfg(test)]
    const MAX_BUFFER_SIZE: usize = 128; // for testing
    #[cfg(not(test))]
    const MAX_BUFFER_SIZE: usize = 64 * 1024 * 1024; // 64MB
I'd prefer if the function took a start/offset and a max number of hashes to fetch. This way we avoid magic constants within the implementation here.
yeah. done.
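For reference, a toy sketch of the kind of signature suggested above. The HashSource struct, in-memory backing, and names are hypothetical; the point is just that the caller supplies the start index and the maximum number of hashes, so no buffer-size constant lives inside the reader.

type StubHash = [u8; 32]; // stand-in for the real Hash type

// Toy stand-in for CumulativeHashesFromFiles, backed by memory instead of files.
struct HashSource {
    hashes: Vec<StubHash>,
}

impl HashSource {
    // The caller decides where to start and how many hashes it wants at most.
    fn get_data(&self, start: usize, max_num_hashes: usize) -> Box<[StubHash]> {
        let end = self.hashes.len().min(start.saturating_add(max_num_hashes));
        self.hashes[start..end].to_vec().into_boxed_slice()
    }
}

fn main() {
    let source = HashSource { hashes: vec![[0u8; 32]; 10] };
    // e.g. a caller that wants at most 4 hashes per fetch
    assert_eq!(source.get_data(8, 4).len(), 2);
}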
Problem
We have noticed that during hash calculation, the performance of the block production process degrades. Part of this is due to the stress that the accounts hash threads put on memory and disk io.
When we compute the merkle tree hash, we use mmap to store the extracted accounts' hashes. Mmap is heavy on resource usage, such as memory and disk io, and puts stress on the whole system.
In this PR, we propose switching to file io, which is less resource intensive, for merkle tree hash computation.
Studies on mainnet with this PR show that file io uses less memory and puts less stress on disk io. The "pure" hash computation time with file io is a little longer than with mmap, but we also save the mmap drop time with file io, and that saving more than offsets the extra time spent on hash calculation. Thus, the overall time for computing the hash is smaller.
Note that there is an upcoming lattice hash feature, which will be the ultimate solution for hashing, i.e. it removes all merkle tree hash calculation. However, before that feature is activated, we can still use this PR as an interim enhancement for merkle tree hash computation.
Summary of Changes
Fixes #