refactor(side-dag): general improvements #1080
Conversation
force-pushed cc30326 to c1d33fa
force-pushed 02792c0 to e965a13
force-pushed 5da8d8c to ef14b24
force-pushed 10949f1 to 7b24072
force-pushed 7b24072 to ad34b3f
force-pushed ee73bbf to e140836
force-pushed ad34b3f to 713b3e2
Codecov Report — Attention: Patch coverage is below target on this PR.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master    #1080      +/-   ##
==========================================
- Coverage   85.08%   85.00%   -0.08%
==========================================
  Files         314      314
  Lines       23937    23957      +20
  Branches     3616     3621       +5
==========================================
- Hits        20367    20365       -2
- Misses       2867     2879      +12
- Partials      703      713      +10
```

☔ View full report in Codecov by Sentry.
force-pushed e140836 to 81fd0ce
force-pushed 713b3e2 to 576b4ac
force-pushed 81fd0ce to 5e46ebd
force-pushed 576b4ac to c4e2dea
I had to make some changes because I noticed the behavior was not working as expected with regard to cancellation of block production. Before these changes,
All these issues have been addressed in a separate commit, 37ce560.
The base branch was changed.
force-pushed 452b00e to 56f9987
force-pushed 0abdc8d to e652a8b
force-pushed e652a8b to 101c8dd
```python
"""Schedule propagation of a new block."""
if not self.manager.can_start_mining():
    # We're syncing, so we'll try again later
    self._log.info('cannot produce new block, node not synced')
```
I feel that the full node would not recover from getting out of sync after it was already synced. By "not recovering", I mean that this full node would stop producing blocks.
The full node recovers as long as it receives a block from another peer. I added an assertion in an existing test to cover this, in 0d4fea6
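The recovery path described above can be sketched in plain Python (all names here are illustrative stand-ins, not the actual `PoaBlockProducer` API): because every new-best-block event re-triggers scheduling, which re-checks `can_start_mining()`, a node that fell out of sync starts producing again as soon as a block arrives from a peer.

```python
class FakeManager:
    """Hypothetical stand-in for the node manager's sync state."""

    def __init__(self) -> None:
        self.synced = False

    def can_start_mining(self) -> bool:
        return self.synced


class ProducerSketch:
    """Hypothetical sketch of the pubsub-driven scheduling behavior."""

    def __init__(self, manager: FakeManager) -> None:
        self.manager = manager
        self.scheduled = False

    def on_new_best_block(self, block: object) -> None:
        # pubsub callback: every new best block re-triggers scheduling
        self._schedule_block()

    def _schedule_block(self) -> None:
        if not self.manager.can_start_mining():
            # we're syncing, so nothing is scheduled for now
            self.scheduled = False
            return
        self.scheduled = True


manager = FakeManager()
producer = ProducerSketch(manager)

producer.on_new_best_block(object())  # still syncing: nothing scheduled
assert not producer.scheduled

manager.synced = True
producer.on_new_best_block(object())  # a peer's block arrives after syncing
assert producer.scheduled             # the producer recovered
```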
```python
self._schedule_block_lc = LoopingCall(self._schedule_block)
self._schedule_block_lc.clock = self._reactor
self._delayed_call: IDelayedCall | None = None
self._start_producing_lc: LoopingCall = LoopingCall(self._safe_start_producing)
```
I still don't understand why we need this LoopingCall. Anyway, I'm ok to get this PR merged and get this changed later because it has no impact on mainnet and testnet.
Using either a LoopingCall or a recursive callLater, kept separate from the main callLater logic (which is triggered reactively via pubsub), were the only good ways I found to implement this while covering the following requirements:
- A single full node should be able to produce blocks by itself.
- If block production throws during the "start producing" phase, production should be tried again. However, if it fails after a call via pubsub, it should not be retried.
- We have to consider that the call to `can_start_mining()` may throw an exception.
I created an issue to review this later: #1097
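The retry requirements above can be sketched without Twisted as a plain-Python simulation of one looping "tick" (names are illustrative, not the real `PoaBlockProducer` code): the "start producing" phase swallows exceptions and retries on the next tick, which is exactly what a LoopingCall separate from the pubsub-triggered path provides.

```python
class StartProducingLoop:
    """Hypothetical sketch of the retryable 'start producing' phase."""

    def __init__(self, can_start_mining) -> None:
        self.can_start_mining = can_start_mining  # may raise an exception
        self.started = False

    def tick(self) -> None:
        # one LoopingCall iteration; exceptions are swallowed so the
        # loop simply tries again on the next tick
        try:
            if self.can_start_mining():
                self.started = True
        except Exception:
            pass


# simulate three consecutive outcomes: an RPC error, not ready, then ready
outcomes = iter([RuntimeError("rpc error"), False, True])

def flaky_can_start_mining() -> bool:
    outcome = next(outcomes)
    if isinstance(outcome, Exception):
        raise outcome
    return outcome


loop = StartProducingLoop(flaky_can_start_mining)
for _ in range(3):
    if loop.started:
        break
    loop.tick()

assert loop.started  # production started despite the earlier failures
```

A reactive call via pubsub, by contrast, would invoke the scheduling logic once without this try/retry wrapper, matching the requirement that pubsub-triggered failures are not retried.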
force-pushed 0d4fea6 to c0203e5
force-pushed c0203e5 to c605607
Motivation
Implement some of the changes requested in previous PRs.
Acceptance Criteria
- `poa.in_turn_signer_index()` and add `poa.get_signer_index_distance()`.
- `poa.calculate_weight()` to decrease the block weight in inverse proportion to the index distance.
- `PoaBlockProducer`:

Checklist
- [ ] master, confirm this code is production-ready and can be included in future releases as soon as it gets merged
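The weight criterion above can be illustrated with a minimal sketch (the constants, function name, and linear formula are assumptions for illustration, not the real `poa.calculate_weight()`): the block weight is highest for the in-turn signer (index distance 0) and decreases as the signer's index distance grows.

```python
# Hypothetical constants, not taken from the actual PoA settings.
BLOCK_WEIGHT_IN_TURN = 70.0
WEIGHT_PENALTY_PER_DISTANCE = 1.0


def calculate_weight_sketch(signer_index_distance: int) -> float:
    """Sketch: weight decreases linearly with the signer index distance."""
    assert signer_index_distance >= 0
    return BLOCK_WEIGHT_IN_TURN - WEIGHT_PENALTY_PER_DISTANCE * signer_index_distance


# the in-turn signer produces the heaviest block
assert calculate_weight_sketch(0) == 70.0
# farther out-of-turn signers produce progressively lighter blocks
assert calculate_weight_sketch(2) < calculate_weight_sketch(1) < calculate_weight_sketch(0)
```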