multi: Implement DCP0011 PoW hash consensus vote. #3115
Changes from all commits
3e621ba
dbb6b78
b2878aa
66a59e0
6c323c1
42d34e0
07c52b7
d9b1921
a13ca3a
2db55eb
85e3701
@@ -1,5 +1,5 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2019 The Decred developers
// Copyright (c) 2015-2023 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@@ -182,22 +182,223 @@ func CheckProofOfWorkRange(difficultyBits uint32, powLimit *big.Int) error {
    return checkProofOfWorkRange(target, powLimit)
}

// CheckProofOfWork ensures the provided block hash is less than the provided
// compact target difficulty and that the target difficulty is in min/max range
// per the provided proof-of-work limit.
func CheckProofOfWork(blockHash *chainhash.Hash, difficultyBits uint32, powLimit *big.Int) error {
// checkProofOfWorkHash ensures the provided hash is less than the provided
// target difficulty.
func checkProofOfWorkHash(powHash *chainhash.Hash, target *big.Int) error {
    // The proof of work hash must be less than the target difficulty.
    hashNum := HashToBig(powHash)
    if hashNum.Cmp(target) > 0 {
        str := fmt.Sprintf("proof of work hash %064x is higher than "+
            "expected max of %064x", hashNum, target)
        return ruleError(ErrHighHash, str)
    }

    return nil
}

// CheckProofOfWorkHash ensures the provided hash is less than the provided
// compact target difficulty.
func CheckProofOfWorkHash(powHash *chainhash.Hash, difficultyBits uint32) error {
    target := CompactToBig(difficultyBits)
    return checkProofOfWorkHash(powHash, target)
}

// CheckProofOfWork ensures the provided hash is less than the provided compact
// target difficulty and that the target difficulty is in min/max range per the
// provided proof-of-work limit.
//
// This is semantically equivalent to and slightly more efficient than calling
// CheckProofOfWorkRange followed by CheckProofOfWorkHash.
func CheckProofOfWork(powHash *chainhash.Hash, difficultyBits uint32, powLimit *big.Int) error {
    target := CompactToBig(difficultyBits)
    if err := checkProofOfWorkRange(target, powLimit); err != nil {
        return err
    }

    // The block hash must be less than the target difficulty.
    hashNum := HashToBig(blockHash)
    if hashNum.Cmp(target) > 0 {
        str := fmt.Sprintf("block hash of %064x is higher than expected max "+
            "of %064x", hashNum, target)
        return ruleError(ErrHighHash, str)
    // The proof of work hash must be less than the target difficulty.
    return checkProofOfWorkHash(powHash, target)
}

// CalcASERTDiff calculates an absolutely scheduled exponentially weighted
// target difficulty for the given set of parameters using the algorithm defined
// in DCP0011.
//
// The Absolutely Scheduled Exponentially weighted Rising Targets (ASERT)
// algorithm defines an ideal schedule for block issuance and calculates the
// difficulty based on how far the most recent block's timestamp is ahead or
// behind that schedule.
//
// The target difficulty is set exponentially such that it is doubled or halved
// for every multiple of the half life the most recent block is ahead or behind
// the ideal schedule.
//
// The starting difficulty bits parameter is the initial target difficulty all
// calculations use as a reference. This value is defined on a per-chain basis.
// It must be non-zero and less than or equal to the provided proof of work
// limit or the function will panic.
//
// The time delta is the number of seconds that have elapsed between the most
// recent block and an initial reference timestamp.
//
// The height delta is the number of blocks between the most recent block height
// and an initial reference height. It must be non-negative or the function
// will panic.
//
// NOTE: This only performs the primary target difficulty calculation and does
// not include any additional special network rules such as enforcing a maximum
// allowed test network difficulty. It is up to the caller to impose any such
// additional restrictions.
//
// This function is safe for concurrent access.
func CalcASERTDiff(startDiffBits uint32, powLimit *big.Int, targetSecsPerBlock,
    timeDelta, heightDelta, halfLife int64) uint32 {

    // Ensure parameter assumptions are not violated.
    //
    // 1. The starting target difficulty must be in the range [1, powLimit]
    // 2. The height to calculate the difficulty for must come after the height
    //    of the reference block
    startDiff := CompactToBig(startDiffBits)
    if startDiff.Sign() <= 0 || startDiff.Cmp(powLimit) > 0 {
        panicf("starting difficulty %064x is not in the valid range [1, %064x]",
            startDiff, powLimit)
    }
    if heightDelta < 0 {
        panicf("provided height delta %d is negative", heightDelta)
    }

    return nil
    // Calculate the target difficulty by multiplying the provided starting
    // target difficulty by an exponential scaling factor that is determined
    // based on how far ahead or behind the ideal schedule the given time delta
    // is along with a half life that acts as a smoothing factor.
    //
    // Per DCP0011, the goal equation is:
    //
    //   nextDiff = min(max(startDiff * 2^((Δt - Δh*Ib)/halfLife), 1), powLimit)
    //
    // However, in order to avoid the need to perform floating point math which
    // is problematic across languages due to uncertainty in floating point math
    // libs, the formula is implemented using a combination of fixed-point
    // integer arithmetic and a cubic polynomial approximation to the 2^x term.
    //
    // In particular, the goal cubic polynomial approximation over the interval
    // 0 <= x < 1 is:
    //
    //   2^x ~= 1 + 0.695502049712533x + 0.2262697964x^2 + 0.0782318x^3
    //
    // This approximation provides an absolute error margin < 0.013% over the
    // aforementioned interval of [0,1) which is well under the 0.1% error
    // margin needed for good results. Note that since the input domain is not
    // constrained to that interval, the exponent is decomposed into an integer
    // part, n, and a fractional part, f, such that f is in the desired range of
    // [0,1). By exponent rules 2^(n + f) = 2^n * 2^f, so the strategy is to
    // calculate the result by applying the cubic polynomial approximation to
    // the fractional part and using the fact that multiplying by 2^n is
    // equivalent to an arithmetic left or right shift depending on the sign.
    //
    // In other words, start by calculating the exponent (x) using 64.16 fixed
    // point and decompose it into integer (n) and fractional (f) parts as
    // follows:
    //
    //       2^16 * (Δt - Δh*Ib)   (Δt - Δh*Ib) << 16
    //   x = ------------------- = ------------------
    //            halfLife              halfLife
    //
    //        x
    //   n = ---- = x >> 16
    //       2^16
    //
    //   f = x (mod 2^16) = x & 0xffff
    //
    // The use of 64.16 fixed point for the exponent means both the integer (n)
    // and fractional (f) parts have an additional factor of 2^16. Since the
    // fractional part of the exponent is cubed in the polynomial approximation
    // and (2^16)^3 = 2^48, the addition step in the approximation is internally
    // performed using 16.48 fixed point to compensate.
    //
    // In other words, the fixed point formulation of the goal cubic polynomial
    // approximation for the fractional part is:
    //
    //                  195766423245049*f + 971821376*f^2 + 5127*f^3 + 2^47
    //   2^f ~= 2^16 + ----------------------------------------------------
    //                                        2^48
    //
    // Finally, the final target difficulty is calculated using x.16 fixed point
    // and then clamped to the valid range as follows:
    //
    //              startDiff * 2^f * 2^n
    //   nextDiff = ---------------------
    //                       2^16
    //
    //   nextDiff = min(max(nextDiff, 1), powLimit)
    //
    // NOTE: The division by the half life uses Quo instead of Div because it
    // must be truncated division (which is truncated towards zero as Quo
    // implements) as opposed to the Euclidean division that Div implements.
    idealTimeDelta := heightDelta * targetSecsPerBlock
    exponentBig := big.NewInt(timeDelta - idealTimeDelta)
    exponentBig.Lsh(exponentBig, 16)
    exponentBig.Quo(exponentBig, big.NewInt(halfLife))
Comment on lines +338 to +341

Assuming the overall difficulty subsystem is indeed maintaining […]. So multiplying by a 2^16 factor bounds this to around 40 bits in the common case (with another 23 bits to spare to put it in the int64 at the next stage), so this checks out.

Yes, given the stated assumption, and for the main network, that's correct: about 24 bits. In fact, one of the reference test data sets I created involves generating a series of blocks starting from the hardest difficulty that are each spaced at the half life, so that it halves the difficulty each block until it gets to the easiest difficulty. The final time delta is 9744000, which results in a difference from the ideal schedule of 9676800, which is indeed 24 bits.
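As a quick sanity check of the magnitudes in this thread, here is a small standalone Go sketch. It only uses the figures quoted above (the 9676800 difference from the ideal schedule is taken from the comment, not re-derived), so it is purely illustrative and not part of the change:

package main

import (
    "fmt"
    "math/bits"
)

func main() {
    // Difference from the ideal schedule quoted above for the reference test
    // data set that walks from the hardest to the easiest difficulty.
    const deltaFromIdeal uint64 = 9676800

    fmt.Println(bits.Len64(deltaFromIdeal))       // 24 bits before scaling
    fmt.Println(bits.Len64(deltaFromIdeal << 16)) // 40 bits after the 2^16 factor
}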
    // Decompose the exponent into integer and fractional parts. Since the
    // exponent is using 64.16 fixed point, the bottom 16 bits are the
    // fractional part and the integer part is the exponent arithmetic right
    // shifted by 16.
    frac64 := uint64(exponentBig.Int64() & 0xffff)
    shifts := exponentBig.Rsh(exponentBig, 16).Int64()

    // Calculate 2^16 * 2^(fractional part) of the exponent.
    //
    // Note that a full unsigned 64-bit type is required to avoid overflow in
    // the internal 16.48 fixed point calculation. Also, the overall result is
    // guaranteed to be positive and a maximum of 17 bits, so it is safe to cast
    // to a uint32.
Additionally, the sign of the overall exponent is being tracked in […]

coef1 = 195766423245049
coef2 = 971821376
coef3 = 5127
frac64 = 0xffff
inner = ((coef1*frac64) + (coef2*frac64*frac64) + (coef3*frac64*frac64*frac64) + (1<<47))
inner.bit_length()
# answer: 64
((1<<16) + (inner >> 48)).bit_length()
# answer: 17

This only works because the coefficients (in particular, […]

Was all of this obvious enough from the comment, or do you think it would be beneficial to expand a bit?

Comment was good enough. I did break out into a Python REPL to verify the numbers (and it took a couple of tries because I made a typo in the coefficients, which was producing 65 bits instead of 64), but I was able to follow it.
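For completeness, the same worst-case bound can be checked with the constants exactly as they appear in the Go code. This is just a standalone sketch mirroring the Python above, not something the PR adds:

package main

import (
    "fmt"
    "math/bits"
)

func main() {
    // Worst case is the largest possible fractional part of the exponent.
    const (
        polyCoeff1 uint64 = 195766423245049
        polyCoeff2 uint64 = 971821376
        polyCoeff3 uint64 = 5127
    )
    frac64 := uint64(0xffff)

    inner := polyCoeff1*frac64 + polyCoeff2*frac64*frac64 +
        polyCoeff3*frac64*frac64*frac64 + 1<<47
    fmt.Println(bits.Len64(inner))             // 64, so no uint64 overflow
    fmt.Println(bits.Len64(1<<16 + inner>>48)) // 17, so the uint32 cast is safe
}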
    const (
        polyCoeff1 uint64 = 195766423245049 // ceil(0.695502049712533 * 2^48)
        polyCoeff2 uint64 = 971821376       // ceil(0.2262697964 * 2^32)
        polyCoeff3 uint64 = 5127            // ceil(0.0782318 * 2^16)
    )
    fracFactor := uint32(1<<16 + (polyCoeff1*frac64+
        polyCoeff2*frac64*frac64+
        polyCoeff3*frac64*frac64*frac64+
        1<<47)>>48)

    // Calculate the target difficulty per the previous discussion:
    //
    //              startDiff * 2^f * 2^n
    //   nextDiff = ---------------------
    //                       2^16
    //
    // Note that by exponent rules 2^n / 2^16 = 2^(n - 16). This takes
    // advantage of that property to reduce the multiplication by 2^n and
    // division by 2^16 to a single shift.
    //
    // This approach also has the benefit of lowering the maximum magnitude
    // relative to what would be the case when first left shifting by a larger
    // value and then right shifting after. Since arbitrary precision integers
    // are used for this implementation, it doesn't make any difference from a
    // correctness standpoint, however, it does potentially lower the amount of
    // memory for the arbitrary precision type and can be used to help prevent
    // overflow in implementations that use fixed precision types.
    nextDiff := new(big.Int).Set(startDiff)
    nextDiff.Mul(nextDiff, big.NewInt(int64(fracFactor)))
    shifts -= 16
    if shifts >= 0 {
        nextDiff.Lsh(nextDiff, uint(shifts))
    } else {
        nextDiff.Rsh(nextDiff, uint(-shifts))
    }

    // Limit the target difficulty to the valid hardest and easiest values.
    // The valid range is [1, powLimit].
    if nextDiff.Sign() == 0 {
Given that the last block of operations is a mul by the fractional (previously enforced to be positive) 2^f part, followed by a left or right shift (which doesn't change the sign in the case of big int), this is assured to be >= 0 (never negative), so checking for equality here is correct.
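A tiny aside illustrating the property relied on here (purely illustrative, not part of the diff): right shifting a positive big.Int truncates toward zero and never produces a negative value, so the result can only underflow to exactly zero before being clamped to 1:

package main

import (
    "fmt"
    "math/big"
)

func main() {
    // A positive value shifted right far enough becomes zero, never negative.
    n := big.NewInt(3)
    n.Rsh(n, 8)
    fmt.Println(n.Sign()) // 0, which is why the Sign() == 0 check suffices
}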
        // The hardest valid target difficulty is 1 since it would be impossible
        // to find a non-negative integer less than 0.
        nextDiff.SetInt64(1)
    } else if nextDiff.Cmp(powLimit) > 0 {
        nextDiff.Set(powLimit)
    }

    // Convert the difficulty to the compact representation and return it.
    return BigToCompact(nextDiff)
}
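To make the new API concrete, here is a rough usage sketch of CalcASERTDiff. The parameter values (proof of work limit, starting bits, target spacing, half life) are made-up placeholders for illustration, not the actual DCP0011 network parameters, and the import path assumes the function lands in the existing standalone package:

package main

import (
    "fmt"
    "math/big"

    "github.com/decred/dcrd/blockchain/standalone/v2"
)

func main() {
    // Assumed example parameters; real values come from the chain parameters.
    powLimit := new(big.Int).Sub(new(big.Int).Lsh(big.NewInt(1), 224), big.NewInt(1))
    // Use a starting difficulty well below the limit so the example result is
    // not clamped (again, purely illustrative).
    startDiffBits := standalone.BigToCompact(new(big.Int).Rsh(powLimit, 32))
    const (
        targetSecsPerBlock int64 = 300   // assumed 5 minute target spacing
        halfLife           int64 = 43200 // assumed smoothing half life in seconds
    )

    // Suppose 100 blocks have been produced since the reference point and they
    // took 32000 seconds, i.e. 2000 seconds behind the 30000 second ideal
    // schedule, so the returned target should be slightly easier (larger).
    timeDelta := int64(32000)
    heightDelta := int64(100)

    nextBits := standalone.CalcASERTDiff(startDiffBits, powLimit,
        targetSecsPerBlock, timeDelta, heightDelta, halfLife)
    fmt.Printf("next difficulty bits: %08x\n", nextBits)
}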
One thing that I notice is that this changes the, let's say, "locality" of the diff algo. While the previous algo, based on the work diff intervals, means the cumulative block time delta can drift from the lifetime target (and in fact, that is the case today, because the difference between the expected and actual block time is ~17 hours), the ASERT algo ensures the global produced lifetime should approach the expected one.
I'm still poking to see if I find any undesired consequences, especially in the far future as the magnitude of delta_t and delta_h increase, but this seems like a fair goal equation.
Also, something to surface here (that might not be obvious to anyone just glancing at the PRs) is that this changes diff retargeting to being performed every block, as opposed to only every 144 blocks (on mainnet).
Yes, the change to per-block difficulty is very intentional. I'm glad to see you noticed that. The DCP (which isn't up yet) calls this out more explicitly (though it still needs a bit more refinement) along with the motivation behind it.
In short though, your observation is correct. It improves the responsiveness and allows the algorithm to better maintain the target block time (aka ideal block schedule). Another important aspect is that it means the network can more easily adjust to large swings in the hash power, notably large drops, versus the current EMA approach with a larger retarget interval since it only needs a single block at the no-longer-ideal difficulty versus potentially an entire interval.
It also happens to be more efficient to calculate and doesn't require looping through a bunch of previous blocks and intervals.