Get/Set aggregate public key from Clarity #4165

Merged: 43 commits merged into next on Dec 14, 2023

Conversation

jcnelson (Member)

This merges #4113 and #4126

jferrant and others added 30 commits December 11, 2023 16:55
call set-aggregate-public-key during append_block

pass reward cycle and agg pubkey as symbolic expressions to private contract call

read aggregate public key from parent reward cycle then set it in the following

remove TODO comment referencing the issue being fixed

use from instead of as for explicit cast

fmt fixes
…see if the key is set automatically for a new reward cycle
* Refactor some of the reused structs from `neon_node`
* Fix a logic-bug in `nakamoto::coordinator`: the first prepare phase information will be a Epoch2x block, so the reward set calculation has to handle that.
* Add `nakamoto_node` module based on `neon_node`
* Add simple integration test for `nakamoto_node`
…ommits at tenure_id changes, cargo fmt-stacks
@jcnelson (Member Author)

Actually, let me rebase this onto feat/naka-neon

codecov bot commented Dec 13, 2023

Codecov Report

Attention: 29 lines in your changes are missing coverage. Please review.

Comparison is base (e71444a) 82.70% compared to head (b63ca21) 82.89%.

| Files | Patch % | Lines |
|---|---|---|
| stackslib/src/chainstate/nakamoto/mod.rs | 91.56% | 14 Missing ⚠️ |
| stackslib/src/chainstate/burn/db/sortdb.rs | 50.00% | 6 Missing ⚠️ |
| testnet/stacks-node/src/mockamoto/tests.rs | 96.52% | 4 Missing ⚠️ |
| stackslib/src/clarity_vm/clarity.rs | 96.72% | 2 Missing ⚠️ |
| stackslib/src/net/mod.rs | 77.77% | 2 Missing ⚠️ |
| testnet/stacks-node/src/nakamoto_node/miner.rs | 90.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             next    #4165      +/-   ##
==========================================
+ Coverage   82.70%   82.89%   +0.19%     
==========================================
  Files         429      429              
  Lines      301662   302009     +347     
==========================================
+ Hits       249478   250362     +884     
+ Misses      52184    51647     -537     


@xoloki (Collaborator) left a comment

There's a stale comment in observe_set_aggregate_key, but otherwise LGTM. Thanks so much for untangling this, @jcnelson.

@@ -2484,6 +2493,16 @@ impl NakamotoChainState {
);
}

if !clarity_tx.config.mainnet {
Self::set_aggregate_public_key(
Collaborator

Yeah doing this here instead of append_block seems much more correct.

stackslib/src/chainstate/nakamoto/mod.rs (outdated comment, resolved)
})
.expect("FATAL: failed to start mockamoto main thread");

// complete within 5 seconds or abort (we are only observing one block)
Collaborator

This comment was in the original code, but we're now observing up to 100 blocks in 120 secs. Assuming that this is what we really want to be doing, we should fix this comment.
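A possible rewording of that stale comment, assuming the 100-block / 120-second bound described above (suggested wording only, not the merged text):

```rust
// observe up to 100 blocks; complete within 120 seconds or abort
```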

@@ -218,13 +225,49 @@ impl RunLoop {
.map(|e| (e.address.clone(), e.amount))
.collect();

// TODO (nakamoto-neon): check if we're trying to setup a self-signing network
// and set the right genesis data
let agg_pubkey_boot_callback = if let Some(self_signer) = self.config.self_signing() {
Collaborator

Ah this was the key I was needing. Was wondering how to set boot contract state, but couldn't seem to figure it out!
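For readers who hit the same question, here is a minimal sketch of the boot-callback pattern the snippet above relies on. All type and function names below are hypothetical stand-ins, not the actual stacks-node API.

```rust
// Hypothetical stand-ins for the stacks-node boot plumbing; illustrative only.
struct BootState {
    aggregate_public_key: Option<Vec<u8>>,
}

type BootCallback = Box<dyn FnOnce(&mut BootState)>;

// If the node is configured to self-sign, build a callback that seeds the
// boot contract's aggregate public key while genesis data is being set up.
fn agg_pubkey_boot_callback(self_signer_key: Option<Vec<u8>>) -> Option<BootCallback> {
    self_signer_key.map(|key| {
        let cb: BootCallback = Box::new(move |state: &mut BootState| {
            state.aggregate_public_key = Some(key);
        });
        cb
    })
}

fn main() {
    let mut state = BootState {
        aggregate_public_key: None,
    };
    // 33-byte compressed-point placeholder standing in for a real aggregate key
    if let Some(cb) = agg_pubkey_boot_callback(Some(vec![0x02; 33])) {
        cb(&mut state);
    }
    assert!(state.aggregate_public_key.is_some());
}
```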

…eward cycles_ because integration tests can run in epoch 2.x for many reward cycles before the epoch 3 transition (and, the neon boot code sets the initial aggregate public key)
Comment on lines 2591 to 2636
for parent_reward_cycle in (0..=my_reward_cycle).rev() {
    // carry forward the aggregate public key in the past reward cycle to the current
    // reward cycle. It may be several cycles back, such as in integration tests where
    // nakamoto boots up several reward cycles after the initial aggregate public key was set.
    // TODO: replace with signer voting
    debug!(
        "Try setting aggregate public key in reward cycle {}, parent {}",
        my_reward_cycle, parent_reward_cycle
    );
    // execute `set-aggregate-public-key` using `clarity-tx`
    let Some(aggregate_public_key) = clarity_tx
        .connection()
        .with_readonly_clarity_env(
            mainnet,
            chain_id,
            ClarityVersion::Clarity2,
            StacksAddress::burn_address(mainnet).into(),
            None,
            LimitedCostTracker::Free,
            |vm_env| {
                vm_env.execute_contract_allow_private(
                    &boot_code_id(POX_4_NAME, mainnet),
                    "get-aggregate-public-key",
                    &vec![SymbolicExpression::atom_value(Value::UInt(u128::from(
                        parent_reward_cycle,
                    )))],
                    true,
                )
            },
        )
        .ok()
        .map(|agg_key_value| {
            let agg_key_opt = agg_key_value.expect_optional().map(|agg_key_buff| {
                Value::buff_from(agg_key_buff.expect_buff(33))
                    .expect("failed to reconstruct buffer")
            });
            agg_key_opt
        })
        .flatten()
    else {
        debug!(
            "No aggregate public key in parent cycle {}",
            parent_reward_cycle
        );
        continue;
    };
Member

Looping over reward cycles should be unnecessary. If an integration test requires it, the integration test itself should be altered, rather than this code.

@jcnelson (Member Author) commented Dec 13, 2023

The problem that this fixes is that integration tests do the following:

  1. Install the aggregate public key in pox-4 for reward cycle 0
  2. Mine 200-ish Bitcoin blocks (which is many reward cycles)
  3. Mine some epoch 2.x blocks and instantiate pox-4 (more reward cycles)
  4. Start epoch 3, and try to build a block

Without searching back to reward cycle 0, this last step would fail with a panic because (1) there's no way to "pull forward" the aggregate public key from reward cycle 0 when the first Stacks block is mined without this search code, and (2) epoch 2.x block-production doesn't use this code.
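For clarity, the search being defended boils down to the following sketch; the lookup closure stands in for the pox-4 `get-aggregate-public-key` read shown in the diff above.

```rust
/// Walk back from the current reward cycle until some earlier cycle has an
/// aggregate public key set, returning that cycle and its key so it can be
/// carried forward into the current cycle. This mirrors the loop in the diff;
/// `key_for_cycle` stands in for the pox-4 read-only contract call.
fn find_prior_aggregate_key(
    current_cycle: u64,
    key_for_cycle: impl Fn(u64) -> Option<Vec<u8>>,
) -> Option<(u64, Vec<u8>)> {
    (0..=current_cycle)
        .rev()
        .find_map(|cycle| key_for_cycle(cycle).map(|key| (cycle, key)))
}
```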

Member Author

Remember, this whole code path goes away once aggregate public key voting comes online. Then, we'd have stackers vote for (and thus set) the aggregate public key in pox-4 in the penultimate reward cycle prior to epoch 3.0. This removes the need for this entire method.

Member

What you described sounds incorrect to me— the epoch 3.0 instantiation code sets all pre-pox-4 cycles’ agg key. Then the set-agg-key method is invoked in every block after that. This loop is masking an issue.

Member Author

Actually, it's the epoch 2.5 instantiation logic that populates the pre-pox-4 reward cycles' aggregate public key. And in the integration tests, epoch 2.5 starts at Bitcoin block height 6, which is well before the first Stacks block is mined.

jcnelson requested a review from kantai on December 13, 2023 at 20:02
.ok_or(ChainstateError::DBError(DBError::NotFoundError))?;
let aggregate_key_block_header =
Self::get_canonical_block_header(chainstate.db(), sortdb)?.unwrap();

Member

Can you add a TODO comment here with a reference to #4109? As currently implemented (which I think is fine for now), the at_block is variable: when a new Stacks block is received, get_canonical_block_header() returns a different Stacks block, meaning this get_aggregate_public_key() will return different values for the same block at different times.

Member Author

I'm actually no longer sure that #4109 is desirable, if we're going to allow stackers to re-try DKG in the middle of the cycle. If that were the case, then we'd want the current behavior.

Member

Maybe so, but that merits discussion and that discussion shouldn't stall this PR. Leaving a pointer to that issue makes it so that it's easy to find the code that assumes one approach or the other.

As written, this function may be somewhat dangerous -- I don't think it's strange for a developer to believe that the expected aggregate key for a given block is always the same, but this function will return different results at different times.
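A toy illustration of that concern, with hypothetical names rather than the real chainstate API: a lookup anchored at the canonical tip can give different answers for the same queried block over time, while an at_block-anchored lookup stays stable.

```rust
use std::collections::BTreeMap;

// Hypothetical model, not the stackslib API: which aggregate key is visible
// as of each block height.
struct Chain {
    key_as_of: BTreeMap<u64, Vec<u8>>,
    canonical_tip: u64,
}

impl Chain {
    /// As currently written: the queried block is ignored and the key is read
    /// at the canonical tip, so the answer for the same block can change once
    /// new blocks arrive and the tip advances.
    fn key_for_block_at_tip(&self, _block: u64) -> Option<&Vec<u8>> {
        self.key_as_of.get(&self.canonical_tip)
    }

    /// The #4109-style alternative: evaluate at the queried block itself, so
    /// repeated calls for that block always agree.
    fn key_for_block(&self, block: u64) -> Option<&Vec<u8>> {
        self.key_as_of.get(&block)
    }
}
```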

@@ -1803,6 +1800,29 @@ impl NakamotoChainState {
))
}

/// Get the aggregate public key for a block
Member

Maybe just expand this comment to note that this is fetching the expected stacker aggregate key for a given block.
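One possible expansion along those lines (suggested wording only, not the merged text):

```rust
/// Get the expected stacker aggregate public key for a given block, i.e. the
/// aggregate key that should have signed it. Note that the key is currently
/// read against the canonical chain tip, so the value returned for the same
/// block may change as new blocks arrive (see #4109).
```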

…egration test framework has mined some blocks
jcnelson merged commit 8cffc38 into next on Dec 14, 2023
2 checks passed