
multi: Implement DCP0011 PoW hash consensus vote. #3115

Merged (11 commits, Jun 7, 2023)
504 changes: 435 additions & 69 deletions blockchain/chaingen/generator.go

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions blockchain/fullblocktests/generate.go
@@ -1,5 +1,5 @@
// Copyright (c) 2016 The btcsuite developers
// Copyright (c) 2016-2022 The Decred developers
// Copyright (c) 2016-2023 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

@@ -1689,7 +1689,7 @@ func Generate(includeLargeReorg bool) (tests [][]TestInstance, err error) {
// it's not solved and then replace it in the generator's state.
{
origHash := bmf3.BlockHash()
for chaingen.IsSolved(&bmf3.Header) {
for g.IsSolved(&bmf3.Header) {
bmf3.Header.Nonce++
}
g.UpdateBlockState("bmf3", origHash, "bmf3", bmf3)
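The hunk above reflects IsSolved moving from a package-level chaingen function to a method on the generator, since whether a header counts as solved now depends on which proof-of-work hash function is active. A minimal sketch of what such a method might look like; the powHashFn field is a hypothetical stand-in for however the generator tracks the active hash function:

	// IsSolved returns whether the header's proof-of-work hash is less than
	// or equal to its target difficulty (sketch; powHashFn is hypothetical).
	func (g *Generator) IsSolved(header *wire.BlockHeader) bool {
		target := standalone.CompactToBig(header.Bits)
		powHash := g.powHashFn(header)
		return standalone.HashToBig(&powHash).Cmp(target) <= 0
	}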
26 changes: 16 additions & 10 deletions blockchain/fullblocktests/params.go
@@ -106,22 +106,28 @@ var regNetParams = &chaincfg.Params{
DNSSeeds: nil, // NOTE: There must NOT be any seeds.

// Chain parameters
GenesisBlock: &regNetGenesisBlock,
GenesisHash: *newHashFromStr("2ced94b4ae95bba344cfa043268732d230649c640f92dce2d9518823d3057cb0"),
PowLimit: regNetPowLimit,
PowLimitBits: 0x207fffff,
ReduceMinDifficulty: false,
MinDiffReductionTime: 0, // Does not apply since ReduceMinDifficulty false
GenerateSupported: true,
MaximumBlockSizes: []int{1000000, 1310720},
MaxTxSize: 1000000,
TargetTimePerBlock: time.Second,
GenesisBlock: &regNetGenesisBlock,
GenesisHash: *newHashFromStr("2ced94b4ae95bba344cfa043268732d230649c640f92dce2d9518823d3057cb0"),
PowLimit: regNetPowLimit,
PowLimitBits: 0x207fffff,
ReduceMinDifficulty: false,
MinDiffReductionTime: 0, // Does not apply since ReduceMinDifficulty false
GenerateSupported: true,
MaximumBlockSizes: []int{1000000, 1310720},
MaxTxSize: 1000000,
TargetTimePerBlock: time.Second,

// Version 1 difficulty algorithm (EMA + BLAKE256) parameters.
WorkDiffAlpha: 1,
WorkDiffWindowSize: 8,
WorkDiffWindows: 4,
TargetTimespan: time.Second * 8, // TimePerBlock * WindowSize
RetargetAdjustmentFactor: 4,

// Version 2 difficulty algorithm (ASERT + BLAKE3) parameters.
WorkDiffV2Blake3StartBits: 0x207fffff,
WorkDiffV2HalfLifeSecs: 6, // 6 * TimePerBlock

// Subsidy parameters.
BaseSubsidy: 50000000000,
MulSubsidy: 100,
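For a sense of how the new version 2 fields are meant to be consumed (a sketch, not the actual call site in this PR), they map directly onto the arguments of the standalone.CalcASERTDiff function added later in this diff, assuming the field types line up:

	nextBits := standalone.CalcASERTDiff(
		params.WorkDiffV2Blake3StartBits,             // 0x207fffff on this test network
		params.PowLimit,                              // easiest allowed target
		int64(params.TargetTimePerBlock/time.Second), // 1 second here
		timeDelta, heightDelta,                       // vs. the activation reference block
		params.WorkDiffV2HalfLifeSecs,                // 6 seconds here
	)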
7 changes: 6 additions & 1 deletion blockchain/go.mod
@@ -20,6 +20,11 @@ require (
github.com/decred/dcrd/crypto/ripemd160 v1.0.1 // indirect
github.com/decred/dcrd/dcrec/edwards/v2 v2.0.2 // indirect
github.com/decred/slog v1.2.0 // indirect
github.com/klauspost/cpuid/v2 v2.0.9 // indirect
lukechampine.com/blake3 v1.2.1 // indirect
)

replace github.com/decred/dcrd/chaincfg/v3 => ../chaincfg
replace (
github.com/decred/dcrd/chaincfg/v3 => ../chaincfg
github.com/decred/dcrd/wire => ../wire
)
6 changes: 4 additions & 2 deletions blockchain/go.sum
@@ -23,7 +23,9 @@ github.com/decred/dcrd/dcrutil/v4 v4.0.0 h1:AY00fWy/ETrMHN0DNV3XUbH1aip2RG1AoTy5
github.com/decred/dcrd/dcrutil/v4 v4.0.0/go.mod h1:QQpX5WVH3/ixVtiW15xZMe+neugXX3l2bsrYgq6nz4M=
github.com/decred/dcrd/txscript/v4 v4.0.0 h1:BwaBUCMCmg58MCYoBhxVjL8ZZKUIfoJuxu/djmh8h58=
github.com/decred/dcrd/txscript/v4 v4.0.0/go.mod h1:OJtxNc5RqwQyfrRnG2gG8uMeNPo8IAJp+TD1UKXkqk8=
github.com/decred/dcrd/wire v1.5.0 h1:3SgcEzSjqAMQvOugP0a8iX7yQSpiVT1yNi9bc4iOXVg=
github.com/decred/dcrd/wire v1.5.0/go.mod h1:fzAjVqw32LkbAZIt5mnrvBR751GTa3e0rRQdOIhPY3w=
github.com/decred/slog v1.2.0 h1:soHAxV52B54Di3WtKLfPum9OFfWqwtf/ygf9njdfnPM=
github.com/decred/slog v1.2.0/go.mod h1:kVXlGnt6DHy2fV5OjSeuvCJ0OmlmTF6LFpEPMu/fOY0=
github.com/klauspost/cpuid/v2 v2.0.9 h1:lgaqFMSdTdQYdZ04uHyN2d/eKdOMyi2YLSvlQIBFYa4=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
lukechampine.com/blake3 v1.2.1 h1:YuqqRuaqsGV71BV/nm9xlI0MKUv4QC54jQnBChWbGnI=
lukechampine.com/blake3 v1.2.1/go.mod h1:0OFRp7fBtAylGVCO40o87sbupkyIGgbpv1+M1k1LM6k=
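The two new entries back the BLAKE3 proof-of-work hash: lukechampine.com/blake3 provides the hash itself, and github.com/klauspost/cpuid lets it select SIMD implementations at runtime. As a rough sketch of the kind of call this enables, assuming dcrd's wire.BlockHeader Bytes serialization and converting the 32-byte digest to a chainhash.Hash:

	serialized, err := header.Bytes()
	if err != nil {
		return err
	}
	powHash := chainhash.Hash(blake3.Sum256(serialized))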
1 change: 1 addition & 0 deletions blockchain/standalone/README.md
@@ -26,6 +26,7 @@ The provided functions fall into the following categories:
- Calculating work values based on the compact target difficulty
- Checking a block hash satisfies a target difficulty and that target
difficulty is within a valid range
- Calculating required target difficulties using the ASERT algorithm
- Merkle root calculation
- Calculation from individual leaf hashes
- Calculation from a slice of transactions
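As a quick usage sketch of the newly listed difficulty-checking capability (using the CheckProofOfWorkHash function added in this PR; powHash and header are assumed to already be in scope):

	// Verify a proof-of-work hash against a header's claimed compact target.
	if err := standalone.CheckProofOfWorkHash(&powHash, header.Bits); err != nil {
		// The hash does not satisfy the target difficulty.
	}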
3 changes: 2 additions & 1 deletion blockchain/standalone/doc.go
@@ -1,4 +1,4 @@
// Copyright (c) 2019-2022 The Decred developers
// Copyright (c) 2019-2023 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

@@ -34,6 +34,7 @@ The provided functions fall into the following categories:
- Calculating work values based on the compact target difficulty
- Checking a block hash satisfies a target difficulty and that target
difficulty is within a valid range
- Calculating required target difficulties using the ASERT algorithm

# Merkle root calculation

11 changes: 10 additions & 1 deletion blockchain/standalone/error.go
@@ -1,9 +1,18 @@
// Copyright (c) 2019-2022 The Decred developers
// Copyright (c) 2019-2023 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

package standalone

import "fmt"

// panicf is a convenience function that formats according to the given format
// specifier and arguments and panics with it.
func panicf(format string, args ...interface{}) {
str := fmt.Sprintf(format, args...)
panic(str)
}

// ErrorKind identifies a kind of error. It has full support for errors.Is and
// errors.As, so the caller can directly check against an error kind when
// determining the reason for an error.
225 changes: 213 additions & 12 deletions blockchain/standalone/pow.go
@@ -1,5 +1,5 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2019 The Decred developers
// Copyright (c) 2015-2023 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

@@ -182,22 +182,223 @@ func CheckProofOfWorkRange(difficultyBits uint32, powLimit *big.Int) error {
return checkProofOfWorkRange(target, powLimit)
}

// CheckProofOfWork ensures the provided block hash is less than the provided
// compact target difficulty and that the target difficulty is in min/max range
// per the provided proof-of-work limit.
func CheckProofOfWork(blockHash *chainhash.Hash, difficultyBits uint32, powLimit *big.Int) error {
// checkProofOfWorkHash ensures the provided hash is less than the provided
// target difficulty.
func checkProofOfWorkHash(powHash *chainhash.Hash, target *big.Int) error {
// The proof of work hash must be less than the target difficulty.
hashNum := HashToBig(powHash)
if hashNum.Cmp(target) > 0 {
str := fmt.Sprintf("proof of work hash %064x is higher than "+
"expected max of %064x", hashNum, target)
return ruleError(ErrHighHash, str)
}

return nil
}

// CheckProofOfWorkHash ensures the provided hash is less than the provided
// compact target difficulty.
func CheckProofOfWorkHash(powHash *chainhash.Hash, difficultyBits uint32) error {
target := CompactToBig(difficultyBits)
return checkProofOfWorkHash(powHash, target)
}

// CheckProofOfWork ensures the provided hash is less than the provided compact
// target difficulty and that the target difficulty is in min/max range per the
// provided proof-of-work limit.
//
// This is semantically equivalent to and slightly more efficient than calling
// CheckProofOfWorkRange followed by CheckProofOfWorkHash.
func CheckProofOfWork(powHash *chainhash.Hash, difficultyBits uint32, powLimit *big.Int) error {
target := CompactToBig(difficultyBits)
if err := checkProofOfWorkRange(target, powLimit); err != nil {
return err
}

// The block hash must be less than the target difficulty.
hashNum := HashToBig(blockHash)
if hashNum.Cmp(target) > 0 {
str := fmt.Sprintf("block hash of %064x is higher than expected max "+
"of %064x", hashNum, target)
return ruleError(ErrHighHash, str)
// The proof of work hash must be less than the target difficulty.
return checkProofOfWorkHash(powHash, target)
}

// CalcASERTDiff calculates an absolutely scheduled exponentially weighted
// target difficulty for the given set of parameters using the algorithm defined
// in DCP0011.
//
// The Absolutely Scheduled Exponentially weighted Rising Targets (ASERT)
// algorithm defines an ideal schedule for block issuance and calculates the
// difficulty based on how far the most recent block's timestamp is ahead or
// behind that schedule.
//
// The target difficulty is set exponentially such that it is doubled or halved
// for every multiple of the half life the most recent block is ahead or behind
// the ideal schedule.
//
// The starting difficulty bits parameter is the initial target difficulty all
// calculations use as a reference. This value is defined on a per-chain basis.
// It must be non-zero and less than or equal to the provided proof of work
// limit or the function will panic.
//
// The time delta is the number of seconds that have elapsed between the most
// recent block and an initial reference timestamp.
//
// The height delta is the number of blocks between the most recent block height
// and an initial reference height. It must be non-negative or the function
// will panic.
//
// NOTE: This only performs the primary target difficulty calculation and does
// not include any additional special network rules such as enforcing a maximum
// allowed test network difficulty. It is up to the caller to impose any such
// additional restrictions.
//
// This function is safe for concurrent access.
func CalcASERTDiff(startDiffBits uint32, powLimit *big.Int, targetSecsPerBlock,
timeDelta, heightDelta, halfLife int64) uint32 {

// Ensure parameter assumptions are not violated.
//
// 1. The starting target difficulty must be in the range [1, powLimit]
// 2. The height to calculate the difficulty for must come after the height
// of the reference block
startDiff := CompactToBig(startDiffBits)
if startDiff.Sign() <= 0 || startDiff.Cmp(powLimit) > 0 {
panicf("starting difficulty %064x is not in the valid range [1, %064x]",
startDiff, powLimit)
}
if heightDelta < 0 {
panicf("provided height delta %d is negative", heightDelta)
}

return nil
// Calculate the target difficulty by multiplying the provided starting
// target difficulty by an exponential scaling factor that is determined
// based on how far ahead or behind the ideal schedule the given time delta
// is along with a half life that acts as a smoothing factor.
//
// Per DCP0011, the goal equation is:
//
// nextDiff = min(max(startDiff * 2^((Δt - Δh*Ib)/halfLife), 1), powLimit)
Member:
One thing I notice is that this changes the, let's say, "locality" of the difficulty algorithm. While the previous algorithm, based on work diff intervals, lets the cumulative block time delta drift from the lifetime target (and in fact that is the case today, because the difference between the expected and actual cumulative block time is ~17 hours), the ASERT algorithm ensures the chain's lifetime block production approaches the expected schedule.

I'm still poking to see if I find any undesired consequences, especially in the far future as the magnitudes of delta_t and delta_h increase, but this seems like a fair goal equation.

Also, something to surface here (that might not be obvious to anyone just glancing at the PR) is that this changes difficulty retargeting to be performed every block, as opposed to only every 144 blocks (on mainnet).

davecgh (Member, Author) replied on May 25, 2023:
Yes, the change to per-block difficulty is very intentional. I'm glad to see you noticed that. The DCP (which isn't up yet) calls this out more explicitly (though it still needs a bit more refinement) along with the motivation behind it.

In short though, your observation is correct. It improves the responsiveness and allows the algorithm to better maintain the target block time (aka the ideal block schedule). Another important aspect is that the network can more easily adjust to large swings in hash power, notably large drops: only a single block needs to be found at the no-longer-ideal difficulty, versus potentially an entire interval under the current EMA approach with its larger retarget interval.

It also happens to be more efficient to calculate and doesn't require looping through a bunch of previous blocks and intervals.
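A worked example of that responsiveness, as a sketch against the CalcASERTDiff function added below (the 0x1d00ffff start bits are arbitrary, and the module path assumes the standalone/v2 module):

	package main

	import (
		"fmt"

		"github.com/decred/dcrd/blockchain/standalone/v2"
	)

	func main() {
		// Regnet values from this PR: 1-second blocks, 6-second half life.
		powLimit := standalone.CompactToBig(0x207fffff)

		// 100 blocks in exactly 100 seconds: on schedule, bits unchanged.
		onTime := standalone.CalcASERTDiff(0x1d00ffff, powLimit, 1, 100, 100, 6)
		fmt.Printf("%08x\n", onTime) // 1d00ffff

		// The same 100 blocks one half life (6s) behind schedule: the target
		// doubles (the difficulty halves) after just one block.
		late := standalone.CalcASERTDiff(0x1d00ffff, powLimit, 1, 106, 100, 6)
		fmt.Printf("%08x\n", late) // 1d01fffe
	}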

//
// However, in order to avoid the need to perform floating point math which
// is problematic across languages due to uncertainty in floating point math
// libs, the formula is implemented using a combination of fixed-point
// integer arithmetic and a cubic polynomial approximation to the 2^x term.
//
// In particular, the goal cubic polynomial approximation over the interval
// 0 <= x < 1 is:
//
// 2^x ~= 1 + 0.695502049712533x + 0.2262697964x^2 + 0.0782318x^3
//
// This approximation provides an absolute error margin < 0.013% over the
// aforementioned interval of [0,1) which is well under the 0.1% error
// margin needed for good results. Note that since the input domain is not
// constrained to that interval, the exponent is decomposed into an integer
// part, n, and a fractional part, f, such that f is in the desired range of
// [0,1). By exponent rules 2^(n + f) = 2^n * 2^f, so the strategy is to
// calculate the result by applying the cubic polynomial approximation to
// the fractional part and using the fact that multiplying by 2^n is
// equivalent to an arithmetic left or right shift depending on the sign.
//
// In other words, start by calculating the exponent (x) using 64.16 fixed
// point and decompose it into integer (n) and fractional (f) parts as
// follows:
//
//           2^16 * (Δt - Δh*Ib)   (Δt - Δh*Ib) << 16
//       x = ------------------- = ------------------
//                halfLife              halfLife
//
//            x
//       n = ---- = x >> 16
//           2^16
//
//       f = x (mod 2^16) = x & 0xffff
//
// The use of 64.16 fixed point for the exponent means both the integer (n)
// and fractional (f) parts have an additional factor of 2^16. Since the
// fractional part of the exponent is cubed in the polynomial approximation
// and (2^16)^3 = 2^48, the addition step in the approximation is internally
// performed using 16.48 fixed point to compensate.
//
// In other words, the fixed point formulation of the goal cubic polynomial
// approximation for the fractional part is:
//
//                 195766423245049*f + 971821376*f^2 + 5127*f^3 + 2^47
//   2^f ~= 2^16 + ---------------------------------------------------
//                                        2^48
//
// Finally, the final target difficulty is calculated using x.16 fixed point
// and then clamped to the valid range as follows:
//
//              startDiff * 2^f * 2^n
//   nextDiff = ---------------------
//                       2^16
//
// nextDiff = min(max(nextDiff, 1), powLimit)
//
// NOTE: The division by the half life uses Quo instead of Div because it
// must be truncated division (which is truncated towards zero as Quo
// implements) as opposed to the Euclidean division that Div implements.
idealTimeDelta := heightDelta * targetSecsPerBlock
exponentBig := big.NewInt(timeDelta - idealTimeDelta)
exponentBig.Lsh(exponentBig, 16)
exponentBig.Quo(exponentBig, big.NewInt(halfLife))
Comment on lines +338 to +341
Member:
Assuming the overall difficulty subsystem is indeed maintaining timeDelta close to idealTimeDelta as intended, the magnitude here will be small (say 24 bits at most?).

So multiplying by a 2^16 factor bounds this to around 40 bits in the common case (with another 23 bits to spare before overflowing the int64 at the next stage), so this checks out.

davecgh (Member, Author) replied:
Yes, given the stated assumption, and for the main network, that's correct: about 24 bits.

In fact, one of the reference test data sets I created involves generating a series of blocks, starting from the hardest difficulty, that are each spaced at the half life, so that the difficulty halves each block until it reaches the easiest difficulty. The final time delta is 9744000, giving a difference from the ideal schedule of 9676800, which is indeed 24 bits.
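As a side note on the decomposition that follows, it also has to behave for negative exponents (a chain ahead of schedule). A tiny sketch of the invariant using plain int64 arithmetic; the real code gets the same behavior from big.Int, whose Rsh acts as an arithmetic shift:

	x := int64(-100000)         // a negative 64.16 fixed-point exponent
	n := x >> 16                // arithmetic shift: floor(x / 2^16) == -2
	f := x & 0xffff             // fractional part is always in [0, 2^16): 31072
	fmt.Println(n*65536+f == x) // true: the decomposition is exact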


// Decompose the exponent into integer and fractional parts. Since the
// exponent is using 64.16 fixed point, the bottom 16 bits are the
// fractional part and the integer part is the exponent arithmetic right
// shifted by 16.
frac64 := uint64(exponentBig.Int64() & 0xffff)
shifts := exponentBig.Rsh(exponentBig, 16).Int64()

// Calculate 2^16 * 2^(fractional part) of the exponent.
//
// Note that a full unsigned 64-bit type is required to avoid overflow in
// the internal 16.48 fixed point calculation. Also, the overall result is
// guaranteed to be positive and a maximum of 17 bits, so it is safe to cast
// to a uint32.
Member:
Additionally, the sign of the overall exponent is being tracked in shifts (and frac was already cast into a uint64 in the previous step), so this checks out.

coef1 = 195766423245049
coef2 = 971821376
coef3 = 5127
frac64 = 0xffff  # worst-case fractional part
inner = coef1*frac64 + coef2*frac64**2 + coef3*frac64**3 + (1 << 47)
print(inner.bit_length())
# answer: 64

print(((1 << 16) + (inner >> 48)).bit_length())
# answer: 17

This only works because the coefficients (in particular, polyCoeff3) are kept small. A slightly higher polyCoeff3 could cause the inner addition to overflow 64 bits.

davecgh (Member, Author) replied:
Was all of this obvious enough from the comment, or do you think it would be beneficial to expand a bit?

Member replied:
The comment was good enough. I did break out into a Python REPL to verify the numbers (and it took a couple of tries because I made a typo in the coefficients, which produced 65 bits instead of 64), but I was able to follow it.
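For completeness, the same worst-case bound check expressed in Go (a sketch using math/bits and the coefficient values declared just below):

	f := uint64(0xffff) // worst-case fractional part
	inner := 195766423245049*f + 971821376*f*f + 5127*f*f*f + uint64(1)<<47
	fmt.Println(bits.Len64(inner))             // 64: no uint64 overflow
	fmt.Println(bits.Len64(1<<16 + inner>>48)) // 17: safe to cast to uint32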

const (
polyCoeff1 uint64 = 195766423245049 // ceil(0.695502049712533 * 2^48)
polyCoeff2 uint64 = 971821376 // ceil(0.2262697964 * 2^32)
polyCoeff3 uint64 = 5127 // ceil(0.0782318 * 2^16)
)
fracFactor := uint32(1<<16 + (polyCoeff1*frac64+
polyCoeff2*frac64*frac64+
polyCoeff3*frac64*frac64*frac64+
1<<47)>>48)

// Calculate the target difficulty per the previous discussion:
//
//              startDiff * 2^f * 2^n
//   nextDiff = ---------------------
//                       2^16
//
// Note that by exponent rules 2^n / 2^16 = 2^(n - 16). This takes
// advantage of that property to reduce the multiplication by 2^n and
// division by 2^16 to a single shift.
//
// This approach also has the benefit of lowering the maximum magnitude
// relative to what would be the case when first left shifting by a larger
// value and then right shifting after. Since arbitrary precision integers
// are used for this implementation, it doesn't make any difference from a
// correctness standpoint, however, it does potentially lower the amount of
// memory for the arbitrary precision type and can be used to help prevent
// overflow in implementations that use fixed precision types.
nextDiff := new(big.Int).Set(startDiff)
nextDiff.Mul(nextDiff, big.NewInt(int64(fracFactor)))
shifts -= 16
if shifts >= 0 {
nextDiff.Lsh(nextDiff, uint(shifts))
} else {
nextDiff.Rsh(nextDiff, uint(-shifts))
}

// Limit the target difficulty to the valid hardest and easiest values.
// The valid range is [1, powLimit].
if nextDiff.Sign() == 0 {
Member:
Given that the last block of operations is a multiplication by the fractional 2^f part (previously enforced to be positive), followed by a left or right shift (which doesn't change the sign of a big int), the result is assured to be >= 0 (never negative), so checking for equality with zero here is correct.

// The hardest valid target difficulty is 1 since it would be impossible
// to find a non-negative integer less than 0.
nextDiff.SetInt64(1)
} else if nextDiff.Cmp(powLimit) > 0 {
nextDiff.Set(powLimit)
}

// Convert the difficulty to the compact representation and return it.
return BigToCompact(nextDiff)
}
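Finally, a sketch demonstrating the clamping at both ends of the valid range, using the regnet values from this PR with deliberately extreme deltas:

	powLimit := standalone.CompactToBig(0x207fffff)

	// Far ahead of schedule: the target collapses and is clamped to 1.
	hard := standalone.CalcASERTDiff(0x207fffff, powLimit, 1, 0, 100000, 6)
	fmt.Printf("%08x\n", hard) // 01010000, i.e. BigToCompact of 1

	// Far behind schedule: the target grows and is clamped to powLimit.
	easy := standalone.CalcASERTDiff(0x207fffff, powLimit, 1, 1000000, 0, 6)
	fmt.Printf("%08x\n", easy) // 207fffff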