multi: Implement DCP0011 PoW hash consensus vote. #3115
Conversation
I updated this to add an additional commit.
First pass at reviewing, still looking at the two most important commits.
```go
idealTimeDelta := heightDelta * targetSecsPerBlock
exponentBig := big.NewInt(timeDelta - idealTimeDelta)
exponentBig.Lsh(exponentBig, 16)
exponentBig.Quo(exponentBig, big.NewInt(halfLife))
```
Assuming the overall difficulty subsystem is indeed maintaining `timeDelta` close to `idealTimeDelta` as intended, the magnitude here will be small (say 24 bits at most?). So multiplying by a 2^16 factor bounds this to around 40 bits in the common case (with another 23 bits to spare to put it in the int64 at the next stage), so this checks out.
Yes, given the stated assumption, and for the main network, that's correct: about 24 bits.
In fact, one of the reference test data sets I created involves generating a series of blocks starting from the hardest difficulty that are each spaced at the half life so that it halves the difficulty each block until it gets to the easiest difficulty. The final time delta is 9744000 which results in a difference from the ideal schedule of 9676800 which is indeed 24 bits.
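As a quick sanity check on those figures, the arithmetic can be reproduced directly (the 224-block series and 300-second mainnet block interval are taken from the test data described above):

```python
# Verify the magnitude claim: the deviation from the ideal schedule in the
# reference test data fits in 24 bits, and the subsequent left shift by 16
# keeps it within 40 bits, well inside an int64.
height_delta = 224             # blocks in the halving series
target_secs_per_block = 300    # mainnet target block time
time_delta = 9744000           # final time delta from the reference test data
ideal_time_delta = height_delta * target_secs_per_block
deviation = time_delta - ideal_time_delta
print(deviation)                       # 9676800
print(deviation.bit_length())          # 24
print((deviation << 16).bit_length())  # 40
```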
```go
//
// Per DCP0011, the goal equation is:
//
//	nextDiff = min(max(startDiff * 2^((Δt - Δh*Ib)/halfLife), 1), powLimit)
```
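For intuition, the goal equation can be sketched in floating point. This is only a hypothetical illustration (the consensus code uses an integer 16.48 fixed-point approximation, and the parameter values in the example are made up, not network parameters):

```python
def asert_target(start_diff, time_delta, height_delta, secs_per_block,
                 half_life, pow_limit):
    """Floating-point sketch of the DCP0011 ASERT goal equation."""
    exponent = (time_delta - height_delta * secs_per_block) / half_life
    next_diff = start_diff * 2 ** exponent
    # Clamp the result to the valid range [1, powLimit].
    return min(max(next_diff, 1), pow_limit)

# Blocks arriving exactly on schedule leave the target unchanged...
print(asert_target(1000, 300, 1, 300, 43200, 1 << 224))    # 1000.0
# ...while falling one half life behind doubles the target
# (i.e. halves the difficulty).
print(asert_target(1000, 43500, 1, 300, 43200, 1 << 224))  # 2000.0
```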
One thing that I notice is that this changes the, let's say, "locality" of the diff algo. While the previous algo, based on the work diff intervals, means the cumulative block time delta can drift from the lifetime target (and in fact, that is the case today, because the difference between the expected and actual block time is ~17 hours), the ASERT algo ensures the globally produced lifetime should approach the expected one.
I'm still poking to see if I find any undesired consequences, especially in the far future as the magnitude of delta_t and delta_h increase, but this seems like a fair goal equation.
Also, something to surface here (that might not be obvious to anyone just glancing at the PRs) is that this is changing diff retargeting to being performed every block, as opposed to only every 144 blocks (on mainnet).
Yes, the change to per-block difficulty is very intentional. I'm glad to see you noticed that. The DCP (which isn't up yet) calls this out more explicitly (though it still needs a bit more refinement) along with the motivation behind it.
In short though, your observation is correct. It improves the responsiveness and allows the algorithm to better maintain the target block time (aka ideal block schedule). Another important aspect is that it means the network can more easily adjust to large swings in the hash power, notably large drops, versus the current EMA approach with a larger retarget interval since it only needs a single block at the no-longer-ideal difficulty versus potentially an entire interval.
It also happens to be more efficient to calculate and doesn't require looping through a bunch of previous blocks and intervals.
```go
// Note that a full unsigned 64-bit type is required to avoid overflow in
// the internal 16.48 fixed point calculation. Also, the overall result is
// guaranteed to be positive and a maximum of 17 bits, so it is safe to cast
// to a uint32.
```
Additionally, the sign of the overall exponent is being tracked in `shifts` (and `frac` was already cast into a uint64 in the previous step), so this checks out.
```python
coef1 = 195766423245049
coef2 = 971821376
coef3 = 5127
frac64 = 0xffff
inner = (coef1*frac64) + (coef2*frac64*frac64) + (coef3*frac64*frac64*frac64) + (1 << 47)
inner.bit_length()
# answer: 64
((1 << 16) + (inner >> 48)).bit_length()
# answer: 17
```
This only works because the coefficients (in particular, `polyCoeff3`) are kept small. A slightly higher `polyCoeff3` could cause the inner addition to overflow the 64 bits.
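A quick way to see how little headroom there is: simply doubling the third coefficient (a hypothetical value, only to demonstrate the overflow) pushes the worst-case inner sum past 64 bits.

```python
coef1, coef2, coef3 = 195766423245049, 971821376, 5127
f = 0xffff  # worst-case fractional input

def inner(c3):
    # Inner sum of the polynomial approximation at the worst-case input.
    return coef1*f + coef2*f*f + c3*f*f*f + (1 << 47)

print(inner(coef3).bit_length())      # 64 -> still fits in a uint64
print(inner(2 * coef3).bit_length())  # 65 -> would overflow a uint64
```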
Was all of this obvious enough from the comment, or do you think it would be beneficial to expand a bit?
Comment was good enough. I did breakout into a python repl to verify the numbers (and it took a couple of tries because I made a typo in the coefficients, which was producing 65 bits instead of 64), but I was able to follow it.
```go
// Limit the target difficulty to the valid hardest and easiest values.
// The valid range is [1, powLimit].
if nextDiff.Sign() == 0 {
```
Given that the last block of operations is a mul by the fractional (previously enforced to be positive) 2^f part, followed by a left or right shift (which doesn't change the sign in the case of big int), this is assured to be >= 0 (never negative), so checking for equality here is correct.
Commit "chaingen: Add ASERT difficulty algorithm support." has
Fixed.
blockchain/standalone/pow_test.go
```go
{222, 9657000, 0x1c400000},
{223, 9700500, 0x1d008000},
{224, 9744000, 0x1d00ffff},
{225, 9787500, 0x1d00ffff},
```
So, in the very worst case scenario possible, it takes ~113 days for the difficulty to reach the minimum.
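That figure follows directly from the final time delta in the vectors above:

```python
# The worst-case series takes 9744000 seconds to relax from the hardest
# to the easiest difficulty.
print(9744000 / 86400)  # ~112.8 days
```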
This converts the functions the chain generator test code provides for determining whether blocks are solved and for solving blocks into methods on the generator type itself so they are able to make decisions based on the generator state in the future.
```json
      ]
    }
  ]
}
```
Double-checked that the vectors match the previous hard-coded tests.
```go
//
// This function MUST only be called with the blake3 proof of work agenda active
// after having gone through a vote. That is to say it MUST NOT be called when
// the agenda is forced to always be active.
```
Rechecked this condition in call sites after squash.
```go
//
// This function MUST only be called with the blake3 proof of work agenda active
// and, unless the agenda is always forced to be active, with the chain state
// lock held (for writes).
```
Rechecked this condition in call sites after squash.
I made a second review after the changes to the networks where the blake3 agenda is force-activated.
Looks like only another couple of nits left to fix.
This separates the notion of the block hash from the proof of work hash so the proof of work hashing function can independently be changed without affecting anything else that relies on the block hash as generated by blake256r14.
This adds methods for computing the version 2 proof of work hash using blake3 as defined by DCP0011 to the block and block header types. Note that this commit only makes the new methods available and does not implement any logic based on them.
This adds support for generating blocks solved by blake3 to the test chain generator. It accomplishes this by introducing a new type and method, named PowHashAlgorithm and UsePowHashAlgo, respectively, to the chain generator along with logic to change the hash function accordingly. The method takes a parameter of the new type to allow the caller to selectively choose which hash function to use. This will allow future code to test all edge conditions around switching the hash function over to blake3 as defined in DCP0011.
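The shape of that API can be sketched as follows. This is a hypothetical Python rendering of the selection pattern described above (the real generator is Go, and the hash functions here are stdlib stand-ins rather than blake256r14 and BLAKE3, since the sketch is only about the selection mechanism):

```python
import hashlib
from enum import Enum

class PowHashAlgorithm(Enum):
    # Mirrors the two algorithm choices described above.
    PHA_BLAKE256 = 1
    PHA_BLAKE3 = 2

class Generator:
    def __init__(self):
        # Default to the original algorithm until a caller switches it.
        self.pow_hash_algo = PowHashAlgorithm.PHA_BLAKE256

    def use_pow_hash_algo(self, algo: PowHashAlgorithm) -> None:
        """Selects the hash function used for subsequently generated blocks."""
        self.pow_hash_algo = algo

    def pow_hash(self, header: bytes) -> bytes:
        # Stand-in digests: the real generator uses blake256r14 and BLAKE3.
        if self.pow_hash_algo is PowHashAlgorithm.PHA_BLAKE256:
            return hashlib.blake2b(header, digest_size=32).digest()
        return hashlib.blake2s(header).digest()

g = Generator()
h1 = g.pow_hash(b"header")
g.use_pow_hash_algo(PowHashAlgorithm.PHA_BLAKE3)
h2 = g.pow_hash(b"header")
assert h1 != h2  # switching the algorithm changes the proof of work hash
```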
This consolidates the logic that serializes the data that represents work to be solved in the getwork RPC and notifywork WebSocket notification. The work data consists of the serialized block header along with any internal padding that makes the data ready for callers to make use of only the final chunk along with the midstate for the rest when solving the block. Having it serialized in one place is both more convenient and less error prone since changing it will now automatically apply to all methods of obtaining work. It also updates tests to use the new function.
This adds a new function to the blockchain/standalone module named CheckProofOfWorkHash which can be used to independently verify that a provided hash is less than a provided compact target difficulty without also checking that the compact difficulty is within a valid range. Since there is already CheckProofOfWorkRange to perform the latter, this provides callers the additional flexibility of performing the required checks independently without being forced to re-check the range. It also includes comprehensive tests.
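To illustrate the split between the two checks, here is a hedged sketch (not dcrd's actual implementation; the compact-bits decoding follows the standard encoding Decred shares with Bitcoin, ignoring the sign bit for simplicity):

```python
def compact_to_target(bits: int) -> int:
    """Decode compact difficulty bits into the full target integer."""
    exponent = bits >> 24
    mantissa = bits & 0x007fffff
    if exponent <= 3:
        return mantissa >> (8 * (3 - exponent))
    return mantissa << (8 * (exponent - 3))

def check_proof_of_work_hash(hash_int: int, bits: int) -> bool:
    # Only the hash-vs-target comparison, with no range validation.
    return hash_int <= compact_to_target(bits)

def check_proof_of_work_range(bits: int, pow_limit: int) -> bool:
    # Only the range validation, performed independently.
    return 0 < compact_to_target(bits) <= pow_limit

pow_limit = compact_to_target(0x1d00ffff)
print(check_proof_of_work_range(0x1d00ffff, pow_limit))  # True
print(check_proof_of_work_hash(1 << 200, 0x1d00ffff))    # True
```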
This modifies the chain parameters for all networks to use a constant for the proof of work limit bits instead of hard coding the value directly.
Commit "chaingen: Add ASERT difficulty algorithm support" doesn't successfully run
This adds a new exported function to the blockchain/standalone module named CalcASERTDiff which can be used to calculate a required target difficulty using the Absolutely Scheduled Exponentially weighted Rising Targets (ASERT) algorithm defined in DCP0011. It also updates the documentation and includes comprehensive tests. Note that this commit only makes the new function available and does not implement any logic based on it.
This adds per-network parameters to chaincfg for the new difficulty algorithm including the initial starting difficulty for BLAKE3 and the half life.
This adds support for generating blocks that require the new difficulty algorithm defined in DCP0011 to the test chain generator. It accomplishes this by introducing a new type and method, named PowDifficultyAlgorithm and UsePowDiffAlgo, respectively, to the chain generator along with logic to calculate the appropriate difficulty accordingly. The method takes a parameter of the new type to allow the caller to selectively choose which difficulty algorithm to use. This will allow future code to test all edge conditions around switching the difficulty algorithm over to the new one defined in DCP0011.
This implements the agenda for voting on changing the hash function to BLAKE3, resetting the target difficulty, and changing the difficulty algorithm to ASERT (Absolutely Scheduled Exponentially weighted Rising Targets) as defined in DCP0011 along with consensus tests.

In terms of the overall effects, this includes updates to:

- The validation logic for the proof of work hash and required block difficulty
- Enforcement of block submission via the getwork and submitblock RPCs
- Mining template generation
- The output of the getwork and notifywork RPCs
- The internal CPU miner

Also note that this does not implement the block version bump that will ultimately be needed by the mining code since there are multiple consensus votes gated behind it and will therefore be done separately.

The following is an overview of the changes:

- Introduce convenience function for determining if the vote passed and is now active
- Introduce convenience function for determining whether or not the agenda is forced active on networks other than the main network
- Modify block validation to enforce BLAKE3 as the proof of work hash algorithm in accordance with the state of the vote
- Modify block validation to enforce target difficulties per the ASERT algorithm based on a reset initial starting difficulty in accordance with the state of the vote
- Update internal CPU miner to solve blocks with either BLAKE256 or BLAKE3 in accordance with the state of the vote
- Update the getwork, notifywork, and submitblock RPCs to support BLAKE3 in accordance with the state of the vote
- Add tests for determining if the agenda is active for both mainnet and testnet
- Add tests for the getwork RPC including submission of a block solved with BLAKE3
- Add tests to ensure proper behavior for the hash function and difficulty semantics before and after the vote agenda is active
Good catch, since I prefer for every intermediate commit to continue to build and pass tests. I've moved the relevant changes to add the new parameters to a separate commit.
Tests passing on all commits now!
This implements the agenda for voting on changing the hash function to BLAKE3, resetting the target difficulty, and changing the difficulty algorithm to ASERT (Absolutely Scheduled Exponentially weighted Rising Targets) as defined in DCP0011 along with consensus tests.
In terms of the overall effects, this includes updates to:

- Enforcement of block submission via the `getwork` and `submitblock` RPCs
- The output of the `getwork` and `notifywork` RPCs

Also note that this does not implement the block version bump that will ultimately be needed by the mining code since there are multiple consensus votes gated behind it and will therefore be done separately.
The following are some graphs that show the behavior and responsiveness of the difficulty algorithm for select data in the included reference tests. In particular, the tests which use main network parameters and random solve times per a Poisson distribution for a stable hashrate, rapidly increasing hashrate, and rapidly decreasing hashrate. The first graph shows the per-block difficulty for the three scenarios all combined. The next three graphs show the individual histograms of the solve times overlaid with the normalized Poisson distribution to show the test data for these tests follows the expected distribution as claimed:
The various changes have been separated into a series of commits to help make the review process easier.
The following is an overview of the changes:

- Add `PowHashV1` and `PowHashV2` methods to `wire` for computing the existing BLAKE256 and new BLAKE3 hashes, respectively
- Consolidate the logic that serializes the work data in the `getwork` RPC and `notifywork` WebSocket notification
- Add a new function to `blockchain/standalone` named `CheckProofOfWorkHash` to independently verify that a provided hash is less than a provided compact target difficulty without also checking that the compact difficulty is within a valid range
- Add a new function to `blockchain/standalone` for calculating a target difficulty using the new ASERT difficulty algorithm
- Add per-network parameters to `chaincfg` for the new difficulty algorithm including the initial starting difficulty for BLAKE3 and the half life
- Update the `getwork`, `notifywork`, and `submitblock` RPCs to support BLAKE3 in accordance with the state of the vote
- Add comprehensive tests for `CheckProofOfWorkHash` in `blockchain/standalone`
- Add tests for the `getwork` RPC including submission of a block solved with BLAKE3
- Update the `blockchain/standalone` documentation