Persist large gappy state blocks as a single snapshot #211
Currently:
This only works because we process Initialise prior to Accumulate, and we assume that we won't process new timeline events until we've Initialise'd at least once. These assumptions break because you can have someone whose poller lags far behind, or expires and reconnects much later with a gappy sync. If we do not process the state blocks, we'll end up with bad room state. We partly fixed this by prepending unknown state events to the timeline. This complicates matters because of the following scenario:
We want the slow poller to use snapshot S when it processes T+1. This means we need to do Initialise/Accumulate in a single atomic operation, otherwise you could imagine a scenario where we process the fast poller's events in between the slow poller's Initialise and Accumulate. Separately, we need some way to know what the previous (last) state snapshot was for each room for each poller, so we can ensure we base our next snapshot on the right one.
When the slow poller catches up to T+200, it will see that we have event T+200 in the DB already, and do no further work.
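To make the bookkeeping above concrete, here is a minimal sketch; the type, table, and field names are assumptions for illustration, not the proxy's actual schema:

```go
package state

import (
	"database/sql"
	"sync"
)

// pollerSnapshots remembers, per (deviceID, roomID), the last state snapshot a
// poller's events were applied against, so a slow poller bases its work on
// snapshot S rather than whatever a faster poller created later.
type pollerSnapshots struct {
	mu   sync.Mutex
	last map[[2]string]int64 // (deviceID, roomID) -> snapshot NID
}

func (p *pollerSnapshots) record(deviceID, roomID string, snapshotNID int64) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.last == nil {
		p.last = make(map[[2]string]int64)
	}
	p.last[[2]string{deviceID, roomID}] = snapshotNID
}

// alreadyPersisted reports whether some poller has already stored this event;
// if so, the slow poller catching up to T+200 does no further work for it.
// The table name is assumed for the sketch.
func alreadyPersisted(db *sql.DB, eventID string) (bool, error) {
	var exists bool
	err := db.QueryRow(
		`SELECT EXISTS(SELECT 1 FROM syncv3_events WHERE event_id = $1)`, eventID,
	).Scan(&exists)
	return exists, err
}
```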
Problem: the timeline ordering is messed up with this solution because the slow poller will be inserting unknown old events which have a higher event NID, which will be caught by the nid range calcs. We could maybe say "if you're an old state snapshot then ignore your timeline events" BUT how do you detect who is old and who is not? We cannot store max(event_nid) for the state snapshot to tell new from old because slower pollers could get old state which will have a higher event nid as it is an unknown event.
dmr sez we need an authoritative ordering from the HS to fix this for sure. Others want this too, e.g. https://github.com/matrix-org/matrix-spec-proposals/blob/andybalaam/event-thread-and-order/proposals/4033-event-thread-and-order.md
Conclusion: use origin server ts to drop old events, but how?
Otherwise we're just going to be guessing and using heuristics.
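As a rough illustration of that heuristic, assuming we track the newest origin_server_ts already folded into the room's current snapshot (the struct and field names below are illustrative, and this remains guesswork without an authoritative ordering):

```go
// gappyEvent is a minimal stand-in for a parsed Matrix event; field names are
// illustrative, not the proxy's real types.
type gappyEvent struct {
	EventID        string
	OriginServerTS int64 // milliseconds since the epoch
}

// filterOldEvents drops state events whose origin_server_ts predates the newest
// event already reflected in the room's current snapshot. This is a heuristic:
// origin_server_ts is set by the sending server and is not a reliable ordering,
// which is exactly the problem described above.
func filterOldEvents(events []gappyEvent, snapshotMaxTS int64) []gappyEvent {
	kept := make([]gappyEvent, 0, len(events))
	for _, ev := range events {
		if ev.OriginServerTS >= snapshotMaxTS {
			kept = append(kept, ev)
		}
	}
	return kept
}
```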
Assuming we had matrix-org/matrix-spec-proposals#4033 - what would this look like?
Something like:
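A rough, hypothetical sketch, assuming MSC4033 lands and the HS exposes an authoritative per-event order value (all names below are invented for illustration):

```go
package state

import "sort"

// orderedEvent assumes an MSC4033-style authoritative, room-scoped "order"
// value supplied by the homeserver; the field is hypothetical.
type orderedEvent struct {
	EventID string
	Order   int64
}

// applyAuthoritativeOrder drops anything the current snapshot already covers and
// re-sorts the rest, answering both "which events are old?" and the NID-based
// timeline ordering problem by direct comparison rather than heuristics.
func applyAuthoritativeOrder(events []orderedEvent, snapshotMaxOrder int64) []orderedEvent {
	kept := make([]orderedEvent, 0, len(events))
	for _, ev := range events {
		if ev.Order > snapshotMaxOrder {
			kept = append(kept, ev) // anything at or below the snapshot is already known
		}
	}
	sort.Slice(kept, func(i, j int) bool { return kept[i].Order < kept[j].Order })
	return kept
}
```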
We're going to mark this as a known problem, and wait for MSC4033 to land to fix it properly, as we then have an authoritative ordering. Because of this, we HAVE to handle the fact that the proxy can diverge from the HS wrt state (this has always been true due to sync v2 not being able to convey state resets correctly). To provide an escape hatch, I propose we do #232 so a leave/re-join would self-heal the room. The problem with that is that we then have two concurrent snapshots: whatever was there before, and the new join event. We want to make sure we drop the old snapshot and use the "latest" snapshot (read: most recent). To do this without racing, we need to handle:
Conclusion: make 2 PRs:
This fixes the problem with prependStateEvents for all four scenarios:
#235 for point 1.
As per #211, this combines Initialise and Accumulate into a single ProcessRoomEvents function which can do snapshots / timelines. It implements the "brand new snapshot on create event" logic, which currently does not correctly invalidate caches. E2E tests pass, but integ tests are broken.
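For illustration, the rough shape this suggests; the signature and the buildSnapshot/appendTimeline helpers below are guesses for the sketch, not the actual code in #248:

```go
package state

import (
	"encoding/json"

	"github.com/jmoiron/sqlx"
)

// Sketch only: this Accumulator and its buildSnapshot/appendTimeline helpers are
// hypothetical stand-ins, not the real implementations.
type Accumulator struct {
	db *sqlx.DB
}

func (a *Accumulator) buildSnapshot(txn *sqlx.Tx, roomID string, state []json.RawMessage) error {
	// would insert/merge the snapshot rows here
	return nil
}

func (a *Accumulator) appendTimeline(txn *sqlx.Tx, roomID string, timeline []json.RawMessage) error {
	// would insert timeline events with fresh NIDs here
	return nil
}

// ProcessRoomEvents does, inside one transaction, what Initialise and Accumulate
// did separately, so a poller can never observe a timeline without the snapshot
// it was accumulated against (or vice versa).
func (a *Accumulator) ProcessRoomEvents(txn *sqlx.Tx, roomID string, stateBlock, timeline []json.RawMessage) error {
	if len(stateBlock) > 0 {
		// fold the entire gappy state block into a single new snapshot instead
		// of prepending thousands of state events to the timeline
		if err := a.buildSnapshot(txn, roomID, stateBlock); err != nil {
			return err
		}
	}
	// then append only the genuinely new timeline events against that snapshot
	return a.appendTimeline(txn, roomID, timeline)
}
```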
We think the approach in #248 is the correct way to fix this. But it touches a lot of core code; it is a high-risk change. Taking a step back, we know about two ways to hit this:
@kegsay says that we could detect (1) pretty easily and use a different means to avoid the large mass of prepended state events. That is a much lower risk change and might be preferable in the short term. That wouldn't handle (2) though.
Current thinking: only create a brand-new snapshot in Accumulate if the timeline is the start of the room. That avoids the pain of (2) and (3) of #211 (comment). (4) isn't changed: it'll work but will still be slow. There's also the race described in #211 (comment) which would need rethinking to solve.
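A minimal sketch of that check, assuming a parsed event struct (field names illustrative): the m.room.create event is the only event that can begin a room, so a timeline starting with it cannot be missing earlier state.

```go
// parsedEvent is an illustrative stand-in for a decoded Matrix event.
type parsedEvent struct {
	Type     string
	StateKey *string
}

// timelineStartsRoom reports whether this timeline chunk is the very start of
// the room, i.e. its first event is the m.room.create state event. Only in that
// case would Accumulate mint a brand-new snapshot.
func timelineStartsRoom(timeline []parsedEvent) bool {
	if len(timeline) == 0 {
		return false
	}
	first := timeline[0]
	return first.Type == "m.room.create" && first.StateKey != nil && *first.StateKey == ""
}
```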
Described in #211 (comment). This is essentially cherry-picked from #248, in particular the commit 8c7046e. This should prevent creating new snapshots that don't reflect the state of the room. We'll need a follow-up task to clean up bad snapshots.
#255 should fix this for the case where we get stray events from upstream. It still means that if a poller is expired and reconnects months later and has 30k+ state events in their gappy sync it will be slow, but this is an improvement on the status quo.
Closing this, as this issue is far too large and consists of many moving parts, most of which are now fixed. The remaining issue is:
Which can be fixed by #270 in the case where the poller is deactivated due to inactivity. This is trickier for OIDC-refreshed tokens, though, as we don't want an expired token to cause rooms to be deleted. Assuming we delete the device and tokens when they expire after 30 days, and only that triggers room deletions, then we may get away with just #270, because then expired tokens won't delete rooms, and we allow a grace period of 30 days for the client to reappear before nuking the rooms. This assumes that the state block for a room isn't larger after 30 days.
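A small sketch of that policy, using the 30-day figure from this comment; everything else (names, where the check lives) is assumed:

```go
package sync3

import "time"

// deviceGracePeriod is the 30-day window mentioned above; the name is assumed.
const deviceGracePeriod = 30 * 24 * time.Hour

// shouldDeleteRooms: an expired or refreshed token alone never triggers room
// deletion; only a device that has been gone for the whole grace period does.
func shouldDeleteRooms(tokenExpired bool, lastSeen, now time.Time) bool {
	return tokenExpired && now.Sub(lastSeen) > deviceGracePeriod
}
```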
Ideally, we would instead make the events in the `state` block part of the state snapshot and then add new timeline events in `Accumulate` only. This implies a third function in the accumulator between `Initialise` and `Accumulate` which can both snapshot and roll forward the timeline in a single txn.

Originally posted by @kegsay in #196 (comment)
Original gappy state impl in #71.