
Conversation

@tnull (Collaborator) commented Oct 31, 2025

Now based on #691

We bump the LDK dependencies to the just-released 2.0 release candidate and account for minor last-minute API changes.

Furthermore, we implement lazy deletion for `VssStore` by tracking pending lazy deletes and supplying them as `delete_items` on the next `put` operation.
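For readers less familiar with the pattern, here is a minimal, self-contained sketch of the idea using simplified stand-in types rather than the actual `VssStore`/VSS client code: lazy removes are parked in a `pending_lazy_deletes` list and drained into the next write.

```rust
use std::sync::Mutex;

// Simplified stand-in for the VSS key/value type; illustration only.
struct KeyValue {
	key: String,
	version: i64,
	value: Vec<u8>,
}

struct LazyDeleteStore {
	// Deletes requested with the `lazy` flag are parked here until the next write.
	pending_lazy_deletes: Mutex<Vec<KeyValue>>,
}

impl LazyDeleteStore {
	fn remove_lazy(&self, obfuscated_key: String) {
		// Don't hit the backend now; remember the key as a tombstone entry.
		let kv = KeyValue { key: obfuscated_key, version: -1, value: Vec::new() };
		self.pending_lazy_deletes.lock().unwrap().push(kv);
	}

	fn write(&self, key: String, value: Vec<u8>) {
		// Drain whatever lazy deletes accumulated and attach them to this put, so
		// they are executed together with the next write.
		let delete_items = std::mem::take(&mut *self.pending_lazy_deletes.lock().unwrap());
		let put_item = KeyValue { key, version: -1, value };
		// A real implementation would send `put_item` and `delete_items` in a single
		// VSS put request; that call is omitted here.
		let _ = (put_item, delete_items);
	}
}
```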

tnull added 2 commits October 31, 2025 11:33
We bump the LDK dependencies to the just-released 2.0 release candidate
and account for minor last-minute API changes.
We implement `lazy` deletion for `VssStore` by tracking pending lazy deletes
and supplying them as `delete_items` on the next `put` operation.
@ldk-reviews-bot commented Oct 31, 2025

👋 Thanks for assigning @joostjager as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

@tnull requested a review from joostjager on October 31, 2025 10:52
We add a testcase that ensures we only delete a lazily-deleted key
after the next write operation succeeds.

Co-authored by Claude AI
@tnull force-pushed the 2025-10-bump-to-ldk-2.0rc1 branch from a64be1d to 0a26053 on October 31, 2025 11:05
.pending_lazy_deletes
.try_lock()
.ok()
.and_then(|mut guard| guard.take())
@tnull (Collaborator, Author)

Hmm, we might lose some lazy deletes if the write below fails. I do wonder if we should go out of our way to restore the pending items in such a case, or if we're fine just leaning into the 'may or may not succeed' API contract here.

@joostjager Any opinion?

Similarly, I do wonder if we should spawn-and-forget some tasks on Drop to attempt cleaning up the pending deletes on shutdown?
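To make the second question concrete, here is a very rough sketch of what a best-effort cleanup on `Drop` might look like, reusing the simplified `LazyDeleteStore` from the sketch in the description. The runtime handle and the backend delete call are hypothetical, not part of the current code.

```rust
impl Drop for LazyDeleteStore {
	fn drop(&mut self) {
		// Best-effort only: grab whatever tombstones are still pending at shutdown.
		let pending = std::mem::take(&mut *self.pending_lazy_deletes.lock().unwrap());
		if pending.is_empty() {
			return;
		}
		// A spawn-and-forget variant would hand `pending` to a background task on some
		// runtime handle and not block shutdown on the result, e.g.:
		//   runtime.spawn(async move { let _ = delete_on_backend(pending).await; });
		// Both `runtime` and `delete_on_backend` are hypothetical names here.
		drop(pending);
	}
}
```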

Contributor

Good question how loose we can get away with. Those keys are then never deleted anymore, which isn't great? I am not sure.

Contributor

What do you both think about re-adding the delete_items back to pending_lazy_deletes if write fails? We get to retry deleting them on a subsequent write attempt.
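For illustration, the retry idea could look roughly like this, again using the simplified `Mutex<Vec<KeyValue>>` shape from the sketch in the description rather than the actual `VssStore` internals:

```rust
// Hypothetical helper: if the write that carried `taken` as its delete_items
// failed, push them back so a later write can retry them.
fn restore_pending_deletes(
	pending_lazy_deletes: &std::sync::Mutex<Vec<KeyValue>>, taken: Vec<KeyValue>,
	write_failed: bool,
) {
	if write_failed {
		pending_lazy_deletes.lock().unwrap().extend(taken);
	}
}
```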

@tnull added this to the 0.7 milestone Oct 31, 2025
@joostjager (Contributor) left a comment

I wonder if lazy delete is needed for VSS? Maybe we can just ignore the flag and delete sync always.

@tnull (Collaborator, Author) commented Oct 31, 2025

> I wonder if lazy delete is needed for VSS? Maybe we can just ignore the flag and delete sync always.

Well, especially now that we're on `MonitorUpdatingPersister`, it should make a difference in cleanup performance.

@joostjager (Contributor) commented:

I just don't know if it is worth it for end-user nodes to worry about this. Even persisting full monitors always apparently was good enough performance?

In this PR the delete semantics are again different, because a write is needed before deletes are executed. It doesn't make it easier to understand.

@tnull (Collaborator, Author) commented Oct 31, 2025

> I just don't know if it is worth it for end-user nodes to worry about this. Even persisting full monitors always apparently was good enough performance?
>
> In this PR the delete semantics are again different, because a write is needed before deletes are executed. It doesn't make it easier to understand.

I'm not sure the 'end user' would ever need to worry about this, as there are very few lazy deletes to begin with?

As you know I was open to dropping the lazy flag, but now that we have it (again), we should also implement it.

@joostjager (Contributor) left a comment

Code looks good. Just not sure about the implications of lingering data if something goes wrong / a write never happens.

In particular for the incremental channel updates, it would be good if those were always followed by a write without much delay. I don't know if that's indeed the case?


let delete_items = self
.pending_lazy_deletes
.try_lock()
Contributor

I briefly thought "oh why can't we do this inside the lock that we already obtain", but that doesn't work because here we are also processing deletes on other keys.

let obfuscated_key =
	self.build_obfuscated_key(&primary_namespace, &secondary_namespace, &key);

let key_value = KeyValue { key: obfuscated_key, version: -1, value: vec![] };
Contributor

Can't we store just keys here?
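A hedged sketch of that alternative: keep only the obfuscated keys pending and build the `KeyValue` tombstones when the next write drains them. Simplified types again (reusing the stand-in `KeyValue` from the sketch in the description), not the real VSS API.

```rust
use std::sync::Mutex;

struct KeyOnlyPending {
	// Only the obfuscated keys are remembered.
	pending_lazy_deletes: Mutex<Vec<String>>,
}

impl KeyOnlyPending {
	fn queue_lazy_delete(&self, obfuscated_key: String) {
		self.pending_lazy_deletes.lock().unwrap().push(obfuscated_key);
	}

	fn drain_as_delete_items(&self) -> Vec<KeyValue> {
		// Build the tombstone entries only at write time.
		let keys = std::mem::take(&mut *self.pending_lazy_deletes.lock().unwrap());
		keys.into_iter().map(|key| KeyValue { key, version: -1, value: Vec::new() }).collect()
	}
}
```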

@tnull changed the title from "Bump to LDK 2.0.0-rc1 and implement lazy deletes for VssStore" to "Implement lazy deletes for VssStore" Oct 31, 2025