Implement Bulk Loading/Writing for Chunks #535
base: master
Conversation
…ment and use exception to know if file exist
…improved usability
… and adjust file truncation logic
… in LinearFile and tmp filenames on write
… management logic
Are entries ever removed from the file lock when the file is no longer being used?
@kralverde I refactored the "file lock", and yes, I created a clean_cache fn that executes when the read or the write finishes and checks how many references to the lock are still being held. Maybe I should add a trace log for that, but I checked that those entries actually are removed.
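A minimal sketch of that kind of reference-counted lock cache, assuming a DashMap keyed by file path and Arc-wrapped tokio locks; the type and function names here (`FileLockCache`, `lock_for`) are illustrative rather than the PR's exact code:

```rust
use std::{path::PathBuf, sync::Arc};

use dashmap::DashMap;
use tokio::sync::RwLock;

/// Hypothetical per-file lock cache; names are illustrative, not this PR's exact types.
pub struct FileLockCache {
    locks: DashMap<PathBuf, Arc<RwLock<()>>>,
}

impl FileLockCache {
    pub fn new() -> Self {
        Self { locks: DashMap::new() }
    }

    /// Get (or insert) the lock guarding one file path. Each read/write clones
    /// the Arc and holds it for the duration of the operation.
    pub fn lock_for(&self, path: &PathBuf) -> Arc<RwLock<()>> {
        self.locks
            .entry(path.clone())
            .or_insert_with(|| Arc::new(RwLock::new(())))
            .value()
            .clone()
    }

    /// Run after a read/write finishes: drop every entry whose Arc is only
    /// referenced by the cache itself, i.e. nobody is using that file anymore.
    pub fn clean_cache(&self) {
        self.locks.retain(|_path, lock| Arc::strong_count(lock) > 1);
    }
}
```

The idea is that an operation keeps its clone of the Arc only while it touches the file, so once it finishes (or its drop guard runs, per the commit below) only the cache's own reference remains and the entry can be evicted.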
Add drop guard for locks and rework some logic
use async methods where avaliable and stream chunks instead of collec…
…nd optimizing cache cleanup
Create io folder, some refactors
What do you think about using the Minecraft chunk batch packets? They may improve network bandwidth and probably also client-side performance.
This PR implements a Bulk API for chunk loading/writing, together with some function changes that take advantage of the bulk operations (such as reducing allocations, locks and more).
It also includes some Anvil refactoring and the creation of a trait/API that can be re-used by the entity storage system and to implement different Writers/Loaders (such as a DB-cached one, one that uses cloud storage, or further optimizations).
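As a rough sketch of what such a reusable bulk loader/writer trait can look like, assuming synchronous signatures and a tuple chunk position (the real `ChunkIO` trait in this PR is likely async and uses the project's own types):

```rust
use std::path::Path;

/// Illustrative chunk position; the real project has its own type.
pub type ChunkPos = (i32, i32);

/// Hypothetical shape of a bulk chunk loader/writer trait. One call handles a
/// whole batch, so a backend (Anvil, Linear, a DB cache, cloud storage, ...)
/// can open and lock each region file once per batch instead of once per chunk.
pub trait ChunkIO {
    type Data;
    type Error;

    /// Load many chunks at once. Each position gets its own result:
    /// Ok(Some(data)) = loaded, Ok(None) = never generated, Err(e) = failed.
    fn load_chunks(
        &self,
        folder: &Path,
        positions: &[ChunkPos],
    ) -> Vec<(ChunkPos, Result<Option<Self::Data>, Self::Error>)>;

    /// Write many chunks at once, so the backing file is rewritten only once.
    fn save_chunks(
        &self,
        folder: &Path,
        chunks: Vec<(ChunkPos, Self::Data)>,
    ) -> Result<(), Self::Error>;
}
```

Because each call covers a whole batch, a backend such as Linear, which has to rewrite the full region file on every save, pays that cost once per batch instead of once per chunk.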
Current Improvements
Don't write a chunk back to disk when it was just loaded from disk.
Lock the chunk cache entry while generating a new chunk (if other threads want that same chunk, they can wait without doing any work until the chunk is generated).
Every time a batch of chunks (from the same player) is deallocated, the file is written only once.
Files that have ongoing read/write operations are kept in memory (to avoid disk reads and file locking).
A DashMap is used as the file cache to avoid write collisions and corrupted files (usually not an Anvil issue, but it was a problem with the Linear format, which enforces a full-file write).
Chunk write operations are now made on a `path.tmp`, so the write can no longer corrupt files: when the write finishes, the file is renamed, so the real file is never swapped in before the full write completes. The file lock/cache now implements a `clean_cache` fn that ensures no memory leaks while the server runs.
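A minimal sketch of the temp-file write described above, assuming tokio for async file I/O; the helper name and signature are hypothetical (the PR's real implementation may differ, e.g. it might also fsync before renaming):

```rust
use std::{io, path::{Path, PathBuf}};

use tokio::fs;

/// Hypothetical helper showing the "write to <path>.tmp, then rename" pattern.
pub async fn write_file_atomically(path: &Path, bytes: &[u8]) -> io::Result<()> {
    // Build "<path>.tmp" next to the destination file.
    let mut tmp_name = path.as_os_str().to_owned();
    tmp_name.push(".tmp");
    let tmp = PathBuf::from(tmp_name);

    // Write the full contents to the temporary file first.
    fs::write(&tmp, bytes).await?;

    // Only after the whole write succeeded, swap the temp file into place;
    // a crash before this point leaves the original file untouched.
    fs::rename(&tmp, path).await?;
    Ok(())
}
```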
Current Changes
`LoadedData::Error((position, Err))`: handles errors per chunk loaded in the bulk.
`LoadedData::Missing(position)`: handles missing chunks (not previously generated), since the order of the returned chunks is not enforced.
`LoadedData::Loaded(Data)`: handles normally returned chunks (see the sketch after this list).
`ChunkFileManager`: the current `ChunkIO` implementation for Anvil and Linear.
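Put together, the variants above suggest an enum roughly like this sketch (the generic parameters and the `ChunkPosition` type are assumptions, not the PR's exact definition):

```rust
/// Illustrative position type for this sketch; the project has its own.
pub struct ChunkPosition {
    pub x: i32,
    pub z: i32,
}

/// Rough reconstruction of the LoadedData variants described above.
pub enum LoadedData<D, E> {
    /// A chunk that was read and deserialized normally.
    Loaded(D),
    /// A chunk that was never generated; carries its position because the
    /// order of the chunks returned by a bulk load is not enforced.
    Missing(ChunkPosition),
    /// A per-chunk error, reported with the position it belongs to.
    Error((ChunkPosition, E)),
}
```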