feat: add Persistence skeleton for db writes #8818

Merged

Rjected merged 13 commits into matt/engine2 from dan/database-thread on Jun 24, 2024

Conversation

Rjected (Member) commented Jun 14, 2024

This will eventually be the component that writes to disk during live sync: it receives requests to persist things and eventually persists them all at once. The idea is that some higher-level chain-handling component should be the one that knows where the tip is, i.e., how much recent history will be kept in memory. That component will then send a request to flush the in-memory state to disk.
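As a rough illustration of that flow (all names here, such as `PersistenceAction`, `ExecutedBlock`, and `flush_to_disk`, are placeholders rather than the API added in this PR), a higher-level component could hand the persistence task a batch of in-memory blocks over a channel and wait for the write to be confirmed:

```rust
use tokio::sync::{mpsc, oneshot};

/// Placeholder for whatever block representation the engine keeps in memory.
struct ExecutedBlock;

/// Illustrative requests the chain-handling component might send.
enum PersistenceAction {
    /// Flush these in-memory blocks to disk; reply with the highest block number persisted.
    SaveBlocks(Vec<ExecutedBlock>, oneshot::Sender<u64>),
}

/// The higher-level component decides how much recent history stays in memory,
/// then asks the persistence task to flush everything below that point.
async fn flush_to_disk(
    to_persistence: &mpsc::UnboundedSender<PersistenceAction>,
    blocks: Vec<ExecutedBlock>,
) -> u64 {
    let (tx, rx) = oneshot::channel();
    to_persistence
        .send(PersistenceAction::SaveBlocks(blocks, tx))
        .expect("persistence task is alive");
    // Wait until the task confirms what it wrote.
    rx.await.expect("persistence task replied")
}
```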

Rjected added the A-engine (Related to the engine implementation) label on Jun 14, 2024
Rjected force-pushed the dan/database-thread branch from ef337a1 to eb51dad on June 18, 2024 15:36
Rjected marked this pull request as ready for review on June 18, 2024 15:50
Cargo.toml Outdated
Comment on lines 466 to 471
Collaborator:

can we roll this back
I'd like to send this directly to main

Member Author:

sg

Collaborator:

I think we need to support unwind as well

Comment on lines 31 to 37
Collaborator:

we expect that this will be spawned, right?

and internally this spawns a regular std::thread to execute the actions?

Collaborator:

this should probably happen on a std thread

Comment on lines 18 to 17
Collaborator:

do we expect this to be a spawned service that dispatches the actions?

if not, then this perhaps doesn't need the receiver and instead only controls the spawned threads that do the IO

Member Author:

yeah, this would be spawned, are you suggesting just passing the receiver to the spawned threads then?

Member Author:

I guess that doesn't make a ton of sense, not sure what you mean then about not needing a receiver of some sort

mattsse (Collaborator) commented Jun 18, 2024:

> yeah, this would be spawned,

then this makes sense, but we also need to spawn the actual IO work to a dedicated thread

I wonder if we then even need to spawn this type at all, because it doesn't make too much sense to send a notification to another task just so this one can spawn the IO stuff

Rjected force-pushed the dan/database-thread branch from eb51dad to 2f15aca on June 18, 2024 17:02
Rjected requested a review from mattsse on June 20, 2024 15:41
Collaborator:

we expect that this will only do IO, so it should be spawned on a regular endless thread, like we do with the mdbx db

see also: https://ryhl.io/blog/async-what-is-blocking/
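
A minimal sketch of that suggestion, assuming illustrative names (`PersistenceService`, `PersistenceAction`) rather than the merged code: a dedicated OS thread runs an endless blocking receive loop, similar to how the mdbx database environment is driven.

```rust
use std::sync::mpsc;
use std::thread;

/// Placeholder request type; the real enum carries blocks and reply channels.
enum PersistenceAction {
    Flush,
}

struct PersistenceService {
    incoming: mpsc::Receiver<PersistenceAction>,
}

impl PersistenceService {
    /// Endless blocking loop: this thread does nothing but IO, so blocking here is fine.
    fn run(self) {
        while let Ok(action) = self.incoming.recv() {
            match action {
                PersistenceAction::Flush => {
                    // blocking database writes would happen here
                }
            }
        }
        // all senders dropped: the loop ends and the thread exits
    }
}

/// Spawn on a plain std thread rather than the async runtime, since the loop only blocks on IO
/// (see the linked "Async: What is blocking?" post).
fn spawn_persistence(service: PersistenceService) -> thread::JoinHandle<()> {
    thread::Builder::new()
        .name("persistence".to_string())
        .spawn(move || service.run())
        .expect("failed to spawn persistence thread")
}
```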

Member Author:

makes sense, switched to sync loop

Collaborator:

I'd also like a PersistenceHandle type that wraps the corresponding sender

Member Author:

added one

Collaborator:

if we do std::thread then we need the std::mpsc variant
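
A sketch of what that could look like on the handle side, with illustrative names only: `std::sync::mpsc::Sender::send` never blocks, so the handle stays cheap to call from async code while the persistence thread blocks on `recv`.

```rust
use std::sync::mpsc;

/// Placeholder request type.
enum PersistenceAction {
    Flush,
}

/// Cloneable handle owning the sending half; the persistence thread owns the receiving half.
#[derive(Clone)]
struct PersistenceHandle {
    sender: mpsc::Sender<PersistenceAction>,
}

impl PersistenceHandle {
    /// Build the channel pair, returning the handle and the receiver to hand to the spawned thread.
    fn channel() -> (Self, mpsc::Receiver<PersistenceAction>) {
        let (sender, receiver) = mpsc::channel();
        (Self { sender }, receiver)
    }
}
```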

Rjected force-pushed the dan/database-thread branch from a5251e9 to 22563f0 on June 21, 2024 16:44
Rjected requested a review from mattsse on June 21, 2024 17:31
Comment on lines +85 to +100
```rust
pub async fn save_blocks(&self, blocks: Vec<ExecutedBlock>) -> B256 {
    let (tx, rx) = oneshot::channel();
    self.sender
        .send(PersistenceAction::SaveFinalizedBlocks((blocks, tx)))
        .expect("should be able to send");
    rx.await.expect("todo: err handling")
}

/// Tells the persistence task to remove blocks above a certain block number. The removed blocks
/// are returned by the task.
pub async fn remove_blocks_above(&self, block_num: u64) -> Vec<ExecutedBlock> {
    let (tx, rx) = oneshot::channel();
    self.sender
        .send(PersistenceAction::RemoveBlocksAbove((block_num, tx)))
        .expect("should be able to send");
    rx.await.expect("todo: err handling")
}
```

Collaborator:

Thinking about how we will integrate this (poll), we likely want a function that accepts the action directly, but these should be useful for testing etc.
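
A self-contained sketch of that idea; `send_action` and the other names are hypothetical, not code from this PR. The direct method suits a polling integration, while the async helpers above remain convenient for tests:

```rust
use std::sync::mpsc;
use tokio::sync::oneshot;

/// Placeholder block type; the PR uses the engine's ExecutedBlock and B256 return type.
struct ExecutedBlock;

enum PersistenceAction {
    SaveFinalizedBlocks((Vec<ExecutedBlock>, oneshot::Sender<u64>)),
}

struct PersistenceHandle {
    sender: mpsc::Sender<PersistenceAction>,
}

impl PersistenceHandle {
    /// Send a pre-built action; a polling caller keeps the oneshot receiver and checks it itself.
    fn send_action(&self, action: PersistenceAction) {
        self.sender.send(action).expect("should be able to send");
    }

    /// Convenience wrapper in the style of the methods above, still handy in tests.
    async fn save_blocks(&self, blocks: Vec<ExecutedBlock>) -> u64 {
        let (tx, rx) = oneshot::channel();
        self.send_action(PersistenceAction::SaveFinalizedBlocks((blocks, tx)));
        rx.await.expect("persistence task should reply")
    }
}
```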

Rjected merged commit 5b17ac3 into matt/engine2 on Jun 24, 2024
Rjected deleted the dan/database-thread branch on June 24, 2024 14:54