Validator improvements. #569
Conversation
- Add retry count to error results.
- Add early exit to retry loop when the rounds are equal.
Codecov Report
```
@@           Coverage Diff            @@
##           develop     #569   +/-   ##
========================================
  Coverage    46.39%   46.39%
========================================
  Files           24       24
  Lines         3899     3899
========================================
  Hits          1809     1809
  Misses        1818     1818
  Partials       272      272
```
Continue to review the full report at Codecov.
```diff
@@ -235,7 +236,7 @@ func callProcessor(processor Processor, addrInput string, config Params, results
 	result, err := processor.ProcessAddress(algodData, indexerData)

-	if err == nil && (result.Equal || i >= config.retries) {
+	if err == nil && (result.Equal || result.SameRound || i >= config.retries) {
```
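For context, here is a minimal sketch of the retry loop this condition sits in. The type stubs, the `fetchAlgod`/`fetchIndexer` helpers, and the `retryDelay`/`Retries` fields are assumptions for illustration, not the repository's actual code; only `ProcessAddress`, `Equal`, `SameRound`, and `config.retries` come from the diff above.

```go
package validator

import "time"

// Minimal stand-ins for the validator's types; everything except
// ProcessAddress, Equal, SameRound, and retries is an assumption.
type Result struct {
	Equal     bool
	SameRound bool
	Error     error
	Retries   int
}

type Processor interface {
	ProcessAddress(algodData, indexerData []byte) (Result, error)
}

type Params struct {
	retries    int
	retryDelay time.Duration
}

// Assumed data-fetching helpers; the real validator queries algod and indexer here.
func fetchAlgod(addr string, config Params) []byte   { return nil }
func fetchIndexer(addr string, config Params) []byte { return nil }

// callProcessor sketches the retry loop this PR touches: compare one account,
// exit early when the data matches or both sources report the same round,
// and give up (recording the retry count) once the budget is exhausted.
func callProcessor(processor Processor, addrInput string, config Params, results chan<- Result) {
	for i := 0; ; i++ {
		algodData := fetchAlgod(addrInput, config)
		indexerData := fetchIndexer(addrInput, config)

		result, err := processor.ProcessAddress(algodData, indexerData)

		// The changed condition: success, same round, or out of retries.
		if err == nil && (result.Equal || result.SameRound || i >= config.retries) {
			results <- result
			return
		}
		if err != nil && i >= config.retries {
			// "Add retry count to error results."
			results <- Result{Error: err, Retries: i}
			return
		}

		time.Sleep(config.retryDelay) // the delay the discussion proposes lowering to 100ms
	}
}
```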
I think `result.Equal` is unnecessary.
Since the reward base only changes every 30 rounds or so, most of the time it doesn't matter if we're on the same round. In that case `result.Equal` can be true and `result.SameRound` false.
I would like to stay on the safe side and only return results with the same round. Efficiency shouldn't suffer since most of the time algod and indexer are synchronized.
There isn't any risk because the comparison has to succeed for the test to pass. Why bother waiting and trying again if the test has already passed?
Because two results having the same data for different rounds doesn't mean they will have the same data for the same round.
You are right. But can we make the retry delay 100ms? We could also make the number of retries very large, because with the early exit we will not retry many times unless there is a discrepancy.
That doesn't account for the fact that every time a goroutine is waiting we aren't checking 160/<num-threads> other accounts.
Also, seeing only dots in the output doesn't mean that algod and indexer are synchronized; there's no indicator for matches that ignore a round mismatch.
I thought that Indexer normally lags behind algod by several seconds. We would effectively be pausing the checks by that delta for every round. Hypothetically, let's say Indexer is lagging behind algod by only 100ms and we're able to hit that time perfectly.
That would put our idle time at 100ms / 4.4s ≈ 2.3% per round.
The run from above took 17.25 hours, so 17.25 * 1.023 ≈ 17.65 hours, an extra ~24 minutes of idle time.
I think those numbers are probably pretty generous and the impact could be much larger.
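A quick sanity check of that arithmetic (the 100ms lag and 4.4s round time are the figures assumed in the comment, not measurements):

```go
package main

import "fmt"

func main() {
	lag := 0.1        // assumed indexer lag per round, in seconds (100ms)
	blockTime := 4.4  // approximate round time, in seconds
	runHours := 17.25 // length of the validation run quoted above

	overhead := lag / blockTime              // ≈ 0.023, i.e. ~2.3% idle time
	extraMinutes := runHours * overhead * 60 // ≈ 24 extra minutes
	fmt.Printf("overhead: %.1f%%, extra time: ~%.0f minutes\n", overhead*100, extraMinutes)
}
```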
I got 79,000 retries in my validation run, so I guess it's pretty considerable. Can we still decrease the retry timeout to 100ms and increase the number of retries?
Yeah, you can set those to whatever you want; there are CLI arguments for that.
Summary
Validator improvements:
Test Plan