Sanitize AppendVec's file_size #7373
Conversation
```rust
};
}

const MAXIMUM_APPEND_VEC_FILE_SIZE: usize = 16 * 1024 * 1024 * 1024; // 16 GiB
```
Could be much lower considering it seems AccountsDB never increases its .file_size from DEFAULT_FILE_SIZE.
It doesn't increase; it may allocate one larger than the default if the stored account is larger than the default file size.
Thanks for confirming; then maybe we could lower it, in relation to #7320.
For now, I think this is fine as it is, because we can assume the running system has at least 16 GB of RAM per the docs, and at such a high limit we can be confident we'll never trip it, given AccountsDB's current behavior and its possible future changes.
Or could this be lower, say, half of that: 8 GB?
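A minimal, standalone sketch of the kind of upper-bound check being discussed — `check_upper_bound` is a hypothetical name, and the exact limit (16 GiB here, possibly 8 GiB) is the cluster-wide tuning decision under debate:

```rust
use std::io;

// Hedged stand-in for the PR's constant; the real value is a cluster-wide limit.
const MAXIMUM_APPEND_VEC_FILE_SIZE: usize = 16 * 1024 * 1024 * 1024; // 16 GiB

// Reject mmap sizes above the hard limit with an io::Error.
fn check_upper_bound(size: usize) -> io::Result<()> {
    if size > MAXIMUM_APPEND_VEC_FILE_SIZE {
        return Err(io::Error::new(
            io::ErrorKind::Other,
            format!("too large file size {} for AppendVec", size),
        ));
    }
    Ok(())
}

fn main() {
    assert!(check_upper_bound(4 * 1024 * 1024).is_ok());
    assert!(check_upper_bound(MAXIMUM_APPEND_VEC_FILE_SIZE + 1).is_err());
    println!("ok");
}
```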
```rust
let mut rd = Cursor::new(&data[..]);
let current_len: usize = deserialize_from(&mut rd).map_err(Error::custom)?;
let map = MmapMut::map_anon(1).map_err(|e| Error::custom(e.to_string()))?;
Ok(AppendVec {
```
Moved this logic to the newly created fn new_empty_map. I can't live with this code. ;)
```rust
map,
append_offset: Mutex::new(current_len),
current_len: AtomicUsize::new(current_len),
file_size: current_len as u64,
```
NOTE: I changed this behavior because it looked just plain wrong. The new code initializes this correctly with the actual file size.
Or is this intentional, to make AppendVecs read-only?
```rust
}

#[test]
#[should_panic(expected = "too small file size 0 for AppendVec")]
```
I know those should_panics are rather ugly. I'll brush them up once this PR's direction is affirmed.
Other uses are fixed by now. Only this one will remain, as it's testing a panic from ::new().
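To illustrate the pattern being kept: `#[should_panic(expected = ...)]` passes only when the panic message contains the expected substring. The function below is a hypothetical stand-in for `AppendVec::new`'s panicking path, not the real implementation:

```rust
// Hypothetical stand-in for the panicking path of AppendVec::new().
fn new_append_vec(file_size: usize) {
    if file_size == 0 {
        panic!("too small file size 0 for AppendVec");
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // Passes only if the panic message contains the expected substring.
    #[test]
    #[should_panic(expected = "too small file size 0 for AppendVec")]
    fn test_zero_size_panics() {
        new_append_vec(0);
    }
}

fn main() {
    // Outside the test harness, the same panic can be observed via catch_unwind.
    assert!(std::panic::catch_unwind(|| new_append_vec(0)).is_err());
    assert!(std::panic::catch_unwind(|| new_append_vec(1)).is_ok());
    println!("ok");
}
```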
```rust
fn sanitize_mmap_size(size: usize) -> io::Result<()> {
    if size == 0 {
        Err(std::io::Error::new(
            std::io::ErrorKind::Other,
```
I know these Errors are not welcome, as explicitly mentioned in #6446.
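A hedged sketch of how the partial excerpt above might read once completed — rejecting both 0-sized and oversized mmaps with `io::Error` (acknowledging that #6446 would prefer typed errors over `ErrorKind::Other`); the error messages are illustrative, not the PR's exact strings:

```rust
use std::io;

const MAXIMUM_APPEND_VEC_FILE_SIZE: usize = 16 * 1024 * 1024 * 1024; // 16 GiB

// Both bounds checked in one place: zero-length mmaps are invalid per the
// mmap spec, and very large sparse files can panic the mapping later.
fn sanitize_mmap_size(size: usize) -> io::Result<()> {
    if size == 0 {
        Err(io::Error::new(
            io::ErrorKind::Other,
            "too small file size 0 for AppendVec",
        ))
    } else if size > MAXIMUM_APPEND_VEC_FILE_SIZE {
        Err(io::Error::new(
            io::ErrorKind::Other,
            format!("too large file size {} for AppendVec", size),
        ))
    } else {
        Ok(())
    }
}

fn main() {
    assert!(sanitize_mmap_size(0).is_err());
    assert!(sanitize_mmap_size(1024).is_ok());
    assert!(sanitize_mmap_size(MAXIMUM_APPEND_VEC_FILE_SIZE + 1).is_err());
    println!("ok");
}
```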
```rust
#[allow(clippy::mutex_atomic)]
pub fn set_file<P: AsRef<Path>>(&mut self, path: P) -> io::Result<()> {
    self.path = path.as_ref().to_path_buf();
```
Only update self.path, and only if everything here succeeded, so the update is effectively atomic.
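The review point above is an ordering rule: do all the fallible work first, and assign `self.path` last, so a failed call leaves the struct unchanged. A simplified stand-in (`FileHolder` is hypothetical, not the real AppendVec):

```rust
use std::fs::OpenOptions;
use std::io;
use std::path::{Path, PathBuf};

// Simplified stand-in struct to illustrate the commit-last pattern.
struct FileHolder {
    path: PathBuf,
}

impl FileHolder {
    fn set_file<P: AsRef<Path>>(&mut self, path: P) -> io::Result<()> {
        let new_path = path.as_ref().to_path_buf();
        // Fallible steps (opening, mapping, sanitizing) happen before any
        // field of self is mutated; `?` bails out without side effects.
        OpenOptions::new().read(true).open(&new_path)?;
        // Only now "commit" the new path.
        self.path = new_path;
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let tmp = std::env::temp_dir().join("set_file_demo.bin");
    std::fs::write(&tmp, b"data")?;
    let mut holder = FileHolder { path: PathBuf::new() };
    // A failing call must leave self.path untouched.
    assert!(holder.set_file("/nonexistent/dir/file").is_err());
    assert_eq!(holder.path, PathBuf::new());
    // A successful call commits the new path.
    holder.set_file(&tmp)?;
    assert_eq!(holder.path, tmp);
    println!("ok");
    Ok(())
}
```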
```rust
let map = unsafe { MmapMut::map_mut(&data)? };
```

```rust
self.file_size = file_size;
```
Update file_size correctly; later PRs will use it to check that the total size of the contained accounts doesn't exceed it and, moreover, exactly matches current_len.
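A hedged sketch of the consistency check a later PR might add once `file_size` is tracked correctly — `sanitize_len` is a hypothetical helper name, not code from this PR:

```rust
// The bytes occupied by the contained accounts must exactly match
// current_len, and current_len must fit inside the mapped file.
fn sanitize_len(account_bytes: usize, current_len: usize, file_size: u64) -> bool {
    account_bytes == current_len && (current_len as u64) <= file_size
}

fn main() {
    assert!(sanitize_len(4096, 4096, 16 * 1024));
    assert!(!sanitize_len(4096, 8192, 16 * 1024)); // mismatch with current_len
    assert!(!sanitize_len(32 * 1024, 32 * 1024, 16 * 1024)); // exceeds file_size
    println!("ok");
}
```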
Codecov Report
```
@@           Coverage Diff            @@
##           master   #7373     +/-  ##
========================================
- Coverage    68.7%   58.9%    -9.9%
========================================
  Files         244     244
  Lines       56028   64972   +8944
========================================
- Hits        38530   38309    -221
- Misses      17498   26663   +9165
```
```rust
}
}

AppendVec::sanitize_mmap_size(size).unwrap();
```
Yes, I'm unwrap()ing here just because this is under the supertubes milestone and there isn't enough time for a large refactoring (which would mean Bank.store() could start to fail). There are many other security issues to fix in the snapshot code for the milestone.
I'm pretty sure this code (= AppendVec::new) isn't touched by the snapshot deserialization code path at all.
Move this above `if create {`.
```rust
#[allow(clippy::mutex_atomic)]
pub fn new_empty_map(current_len: usize) -> Self {
    let map = MmapMut::map_anon(1).expect("failed to map the data file");
```
This error handling is, rather undesirably, made more severe (from returning an Err to panicking) just by moving the code here. But combined with this consideration, it can be tolerated to avoid a large refactoring. :) And this mmap will almost never fail.
```rust
let file_size = std::fs::metadata(&path)?.len();
assert!(current_len <= file_size as usize);

AppendVec::sanitize_mmap_size(file_size as usize)?;
```
Notice the ?; now we're correctly propagating this error up to the snapshot-deserializing code.
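Why the `?` matters, in a minimal standalone sketch: the `io::Error` from the sanitize check flows up to the caller as a normal `Result` instead of panicking, so the deserialization path can surface it. The function names here are illustrative, not the PR's exact signatures:

```rust
use std::io;

fn sanitize_mmap_size(size: usize) -> io::Result<()> {
    if size == 0 {
        return Err(io::Error::new(
            io::ErrorKind::Other,
            "too small file size 0 for AppendVec",
        ));
    }
    Ok(())
}

// Hypothetical caller shaped like a snapshot-restore path: `?` propagates
// the sanitize error instead of unwrapping it into a panic.
fn new_from_file(file_size: u64, current_len: usize) -> io::Result<usize> {
    assert!(current_len <= file_size as usize);
    sanitize_mmap_size(file_size as usize)?;
    Ok(current_len)
}

fn main() {
    assert!(new_from_file(0, 0).is_err()); // bad snapshot -> Err, not panic
    assert!(new_from_file(1024, 512).is_ok());
    println!("ok");
}
```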
Overall it looks pretty good to me.
@mvines pinging for your first impression of this PR. Especially since this introduces another fixed/constant cluster-wide hard limit in the snapshot area :)
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
Problem

Creation and restoration of AppendVec is not secure with regard to the size of its mmap. Currently, validators can easily be panic!-ed just by supplying an empty (0-sized) appendvec file or a very large sparse appendvec file. The former is forbidden by the specs of mmap, and the latter will cause panics depending on system configuration/load.

Summary of Changes

Part of #7167