DO NOT MERGE - Rollback Discussion #440
base: main

Conversation
```gdscript
var full_state_snapshot := _states.get_snapshot(tick).as_dictionary()
var full_state_data := _full_state_encoder.encode(tick, _get_owned_state_props())
_submit_full_state.rpc(full_state_data, tick)

NetworkPerformance.push_full_state_broadcast(full_state_snapshot)
NetworkPerformance.push_sent_state_broadcast(full_state_snapshot)
```
This section repeats 5 times in varying forms and can be broken down into 2 categories: sending to all peers, and sending to a specific peer.

Proposal: create a single function, `_broadcast_full_state(tick: int, peer: int = -1)`. This allows `_broadcast_full_state(tick)` to be used to submit an RPC to all clients, with `_broadcast_full_state(tick, peer)` used to send to a specific peer. With a single `if peer == -1` we have the branch for a full RPC versus a per-peer RPC, with their relevant `NetworkPerformance` calls handled in one place.
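As a sketch, the proposed helper could be built from the calls in the snippet above. Note the per-peer `NetworkPerformance` calls (`push_full_state`, `push_sent_state`) are assumed from the other variants this section mentions, not verified against the actual API:

```gdscript
# Hypothetical helper combining the repeated broadcast snippets.
# peer == -1 means "send to all peers".
func _broadcast_full_state(tick: int, peer: int = -1):
	var full_state_snapshot := _states.get_snapshot(tick).as_dictionary()
	var full_state_data := _full_state_encoder.encode(tick, _get_owned_state_props())

	if peer == -1:
		# Broadcast to all peers
		_submit_full_state.rpc(full_state_data, tick)
		NetworkPerformance.push_full_state_broadcast(full_state_snapshot)
		NetworkPerformance.push_sent_state_broadcast(full_state_snapshot)
	else:
		# Send to a single peer (per-peer metric calls are assumptions)
		_submit_full_state.rpc_id(peer, full_state_data, tick)
		NetworkPerformance.push_full_state(full_state_snapshot)
		NetworkPerformance.push_sent_state(full_state_snapshot)
```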
Approved, this part has been bothering me as well
```gdscript
var is_mutated := _get_recorded_state_props().any(func(pe):
	return NetworkRollback.is_mutated(pe.node, tick - 1))

# Record if there's any properties to record and we're past the latest known state OR something
# was mutated
if not _get_recorded_state_props().is_empty() and (tick > _latest_state_tick or is_mutated):
	if _skipset.is_empty():
		_states.set_snapshot(tick, _PropertySnapshot.extract(_get_recorded_state_props()))
	else:
		var record_properties = _get_recorded_state_props()\
			.filter(func(pe): return \
				not _skipset.has(pe.node) or \
				NetworkRollback.is_mutated(pe.node, tick - 1))

		var merge_state = _states.get_history(tick - 1)
		var record_state = _PropertySnapshot.extract(record_properties)

		_states.set_snapshot(tick, merge_state.merge(record_state))

# Push metrics
NetworkPerformance.push_rollback_nodes_simulated(_simset.size())
```
This can be broken off as its own function, `_record_state(tick: int)`.
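A sketch of that extraction, restructured from the snippet above with early returns (intended to be behavior-identical, though not verified against the full file):

```gdscript
# Hypothetical extraction of the recording logic from _record_tick().
func _record_state(tick: int):
	var is_mutated := _get_recorded_state_props().any(func(pe):
		return NetworkRollback.is_mutated(pe.node, tick - 1))

	# Nothing to do unless we have properties to record, and we're past the
	# latest known state or something was mutated
	if _get_recorded_state_props().is_empty() or \
			not (tick > _latest_state_tick or is_mutated):
		return

	if _skipset.is_empty():
		_states.set_snapshot(tick, _PropertySnapshot.extract(_get_recorded_state_props()))
		return

	# Only record properties that aren't skipped, or were mutated
	var record_properties = _get_recorded_state_props()\
		.filter(func(pe): return \
			not _skipset.has(pe.node) or \
			NetworkRollback.is_mutated(pe.node, tick - 1))

	var merge_state = _states.get_history(tick - 1)
	var record_state = _PropertySnapshot.extract(record_properties)

	_states.set_snapshot(tick, merge_state.merge(record_state))
```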
```gdscript
	NetworkPerformance.push_sent_state_broadcast(full_state_snapshot)
elif full_state_interval > 0 and tick > _next_full_state_tick:
	# Send full state so we can send deltas from there
	_logger.trace("Broadcasting full state for tick %d", [tick])
```
Why does this option have a `_logger.trace()` but the others don't?
```gdscript
	_freshness_store.notify_processed(node, tick)
	_simset.add(node)

func _record_tick(tick: int):
```
`_record_tick()` currently does 3 things:

- Logic to prepare for the other 2 responsibilities
- Broadcasting the state to other peers (including full state vs. diff logic)
- Recording the state

These can be broken down further, so `_record_tick()` continues to handle the preparation, and calls separate functions `_broadcast_tick()` and `_record_state()`.
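A minimal skeleton of that split might look like the following (bodies elided; `_broadcast_tick()` and `_record_state()` are the proposed, not-yet-existing names):

```gdscript
# Hypothetical restructuring of _record_tick() into three steps.
func _record_tick(tick: int):
	# 1. Preparation logic stays here
	# ...

	# 2. Broadcast state to other peers (full state vs. diff decision)
	_broadcast_tick(tick)

	# 3. Record the state
	_record_state(tick)

func _broadcast_tick(tick: int):
	pass # broadcasting logic extracted from _record_tick()

func _record_state(tick: int):
	pass # recording logic extracted from _record_tick()
```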
Additional possibility of renaming `_record_tick()` to `_finalize_tick()` or similar, and using `_record_tick()` instead of `_record_state()` to better reflect responsibilities.
I'd prefer for `_record_tick()` to keep the role of recording ticks, since that's the whole point of the method and the signal. We could also look at the current behavior as such:

1. If there's any state we own:
   i. Grab state for things that we own and were simulated
   ii. Broadcast that state
2. Record state

I think extracting 1.i and 1.ii would get this method much cleaner, without having to find multiple ways of saying "record stuff".
> I'd prefer for `_record_tick()` to keep the role of recording ticks, since that's the whole point of the method and the signal.

Currently it does more than just recording the tick though, so would it not make more sense to have `NetworkRollback.on_record_tick` link to a `finalize_tick()` function, which performs (or calls out for) 1.i and 1.ii before calling `record_tick()` (step 2)?

To leave the entry point as `record_tick()` would mean that `record_tick()` is also responsible for initiating the broadcasting of said tick.

(For the record, I also disagree with the naming of `NetworkRollback.on_record_tick`, as more gets done during that step, but there's nothing we can do about that so it's moot.)
```gdscript
var _states := _PropertyHistoryBuffer.new()
var _inputs := _PropertyHistoryBuffer.new()
```
To facilitate reuse of code for `StateSynchronizer` and other future synchronizers, what are your thoughts on a proposed new class that handles the logic for each of the state portion and input portion of this class?

Attached to these lines to put this comment somewhere, but it's not limited to just these lines.
What would these classes do? For `StateSynchronizer`, we could reuse the diff state logic, but most of that is covered by `HistoryBuffer` and `DiffHistoryEncoder`, leaving only the orchestration code, which could potentially be refactored into an object that decides when to send full states and when to use diff states.

Would be interested to hear more on this.
> What would these classes do?

They would take responsibility for the `HistoryBuffer`s and `HistoryEncoder`s. The primary goal is to allow all code that can be reused to be packaged in such a way as to facilitate that reuse:

- As you mentioned, handling diff state logic (accounting for ~7% of `RollbackSynchronizer` by line count)
- Managing RPC calls in conjunction with some form of a `NetworkManager` (~18%)
- Processing settings/authority (~9%)

All such functionality is needed for other rollback-enabled nodes and accounts for over 200 lines of code that can be cleaned from `RollbackSynchronizer` and reused in other nodes.
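As a purely illustrative sketch of the orchestration part of such a class (every name below except `_PropertyHistoryBuffer` is invented for this example, not part of the current codebase):

```gdscript
# Hypothetical reusable wrapper around a property history and its encoders,
# so RollbackSynchronizer, StateSynchronizer, and future synchronizers can
# share the "full state vs. diff" orchestration.
class PropertyHistorySync:
	var history := _PropertyHistoryBuffer.new()
	var full_state_interval: int = 0
	var _next_full_state_tick: int = 0

	# Orchestration: decide whether the next send should be a full state
	func should_send_full_state(tick: int) -> bool:
		return full_state_interval > 0 and tick > _next_full_state_tick

	# Book-keeping after a full state was sent, so deltas resume from it
	func mark_full_state_sent(tick: int) -> void:
		_next_full_state_tick = tick + full_state_interval
```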
--- DO NOT MERGE ---

This is a discussion regarding `rollback-synchronizer.gd` as it relates to the #358 refactor, uploaded as a PR to allow per-line comment chains.