
feat(nano): implement consensus [part 13] #1290

Merged
jansegre merged 1 commit into master from feat/nano/consensus on Jun 23, 2025
Conversation

@glevco (Contributor) commented Jun 3, 2025

Motivation

Continue the nano merges, updating the consensus.

Acceptance Criteria

  • Update consensus to support nano. No changes should be made while the feature is not enabled.
  • Add missing resources that depended on the consensus update.
  • Update some tests.

Checklist

  • If you are requesting a merge into master, confirm this code is production-ready and can be included in future releases as soon as it gets merged

@glevco glevco self-assigned this Jun 3, 2025
@glevco glevco requested a review from jansegre as a code owner June 3, 2025 22:21
@glevco glevco requested a review from msbrogli as a code owner June 3, 2025 22:21
@glevco glevco changed the title feat(nano): implement consensus feat(nano): implement consensus [part 13] Jun 3, 2025
@glevco glevco moved this from Todo to In Progress (WIP) in Hathor Network Jun 3, 2025
@glevco glevco moved this from In Progress (WIP) to In Progress (Done) in Hathor Network Jun 3, 2025
@glevco glevco force-pushed the feat/nano/resources-and-events branch 2 times, most recently from e502727 to 010c4b9 Compare June 3, 2025 22:32
@glevco glevco force-pushed the feat/nano/consensus branch 2 times, most recently from 8ab5488 to 84a4de9 Compare June 3, 2025 22:36
codecov bot commented Jun 3, 2025

Codecov Report

Attention: Patch coverage is 54.71311% with 221 lines in your changes missing coverage. Please review.

Project coverage is 78.00%. Comparing base (6d625c6) to head (0519490).
Report is 2 commits behind head on master.

Files with missing lines | Patch % | Missing lines
hathor/nanocontracts/resources/state.py | 6.03% | 109 Missing ⚠️
hathor/nanocontracts/resources/nc_exec_logs.py | 18.51% | 44 Missing ⚠️
hathor/consensus/block_consensus.py | 81.65% | 24 Missing, 7 partials ⚠️
hathor/consensus/transaction_consensus.py | 65.38% | 13 Missing, 5 partials ⚠️
hathor/consensus/consensus.py | 79.31% | 8 Missing, 4 partials ⚠️
hathor/manager.py | 0.00% | 4 Missing ⚠️
hathor/consensus/consensus_settings.py | 93.33% | 1 partial ⚠️
hathor/transaction/vertex_parser.py | 50.00% | 1 partial ⚠️
hathor/verification/vertex_verifier.py | 0.00% | 1 partial ⚠️

❌ Your project status has failed because the head coverage (78.00%) is below the target coverage (82.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1290      +/-   ##
==========================================
+ Coverage   75.66%   78.00%   +2.34%     
==========================================
  Files         426      426              
  Lines       31455    31904     +449     
  Branches     4873     4950      +77     
==========================================
+ Hits        23800    24887    +1087     
+ Misses       6839     6066     -773     
- Partials      816      951     +135     

☔ View full report in Codecov by Sentry.

@glevco glevco force-pushed the feat/nano/resources-and-events branch 2 times, most recently from 0880346 to 758f0c5 Compare June 11, 2025 20:40
@glevco glevco force-pushed the feat/nano/consensus branch 2 times, most recently from 0b6f616 to d5fb072 Compare June 11, 2025 20:41
@glevco glevco force-pushed the feat/nano/resources-and-events branch 3 times, most recently from fdfe8f2 to 18d0a50 Compare June 17, 2025 15:46
@glevco glevco force-pushed the feat/nano/consensus branch from d5fb072 to d9685f0 Compare June 17, 2025 15:48
@glevco glevco force-pushed the feat/nano/resources-and-events branch from 18d0a50 to 4790263 Compare June 17, 2025 22:18
@glevco glevco moved this from In Progress (Done) to In Review (WIP) in Hathor Network Jun 17, 2025
@glevco glevco force-pushed the feat/nano/consensus branch from d9685f0 to d150a7a Compare June 17, 2025 22:18
@glevco glevco changed the base branch from feat/nano/resources-and-events to master June 17, 2025 23:28
@glevco glevco force-pushed the feat/nano/consensus branch 4 times, most recently from ffcb995 to d394b7b Compare June 18, 2025 15:37
github-actions bot commented Jun 18, 2025

🐰 Bencher Report

Branch: feat/nano/consensus
Testbed: ubuntu-22.04

Benchmark: sync-v2 (up to 20000 blocks)
Latency: 1.64 m (+0.37%), baseline 1.63 m
Lower Boundary: 1.47 m (89.66% of limit)
Upper Boundary: 1.79 m (91.25% of limit)

View full continuous benchmarking report in Bencher.

@glevco glevco force-pushed the feat/nano/consensus branch from d394b7b to 6b09bf8 Compare June 18, 2025 20:57
jansegre previously approved these changes Jun 18, 2025
Comment on lines +185 to +209
# Get fields.
fields: dict[str, NCValueSuccessResponse | NCValueErrorResponse] = {}
param_fields: list[str] = params.fields
for field in param_fields:
    key_field = self.get_key_for_field(field)
    if key_field is None:
        fields[field] = NCValueErrorResponse(errmsg='invalid format')
        continue

    try:
        field_type = blueprint_class.__annotations__[field]
    except KeyError:
        fields[field] = NCValueErrorResponse(errmsg='not a blueprint field')
        continue

    try:
        field_nc_type = make_nc_type_for_type(field_type)
        value = nc_storage.get_obj(key_field.encode(), field_nc_type)
    except KeyError:
        fields[field] = NCValueErrorResponse(errmsg='field not found')
        continue

    if type(value) is bytes:
        value = value.hex()
    fields[field] = NCValueSuccessResponse(value=value)
Member:

This API will have to change soon to properly support "deep keys" without custom string syntax.

Contributor Author:

Postponed thread.

Comment on lines 209 to 213
if not (self.soft_voided_tx_ids & voided_by):
    return voided_by
Member:

Is this "shortcut" correct? For example:

# assuming these:
# self.soft_voided_tx_ids = {"sv1"}
# self._settings.NC_EXECUTION_FAIL_ID = "ncf"
self._filter_out_soft_voided_entries({"ncf", "foo"}) == {"ncf", "foo"}  # will "shortcut" because it doesn't have "sv1"
self._filter_out_soft_voided_entries({"ncf", "foo", "sv1"}) == {"foo"}  # will skip "ncf" (not add it to `ret`)

It seems harmless because these other "failure ids" are removed later, but it does seem to me that the intended behavior of this method is not consistent: it removes failure ids in some cases but not others.
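The inconsistency can be reproduced with a standalone sketch. This is a hypothetical simplification: `SOFT_VOIDED` and `NC_EXECUTION_FAIL_ID` stand in for `self.soft_voided_tx_ids` and `self._settings.NC_EXECUTION_FAIL_ID`, and the loop body is assumed from the behavior described in the comment above.

```python
SOFT_VOIDED = {"sv1"}
NC_EXECUTION_FAIL_ID = "ncf"

def filter_out_soft_voided_entries(voided_by: set[str]) -> set[str]:
    # The "shortcut": if no soft-voided id is present, return the input
    # unchanged, so NC_EXECUTION_FAIL_ID survives on this path only.
    if not (SOFT_VOIDED & voided_by):
        return voided_by
    ret: set[str] = set()
    for h in voided_by:
        # On the non-shortcut path, both soft-voided ids and the fail id
        # are dropped, which is the inconsistency discussed above.
        if h in SOFT_VOIDED or h == NC_EXECUTION_FAIL_ID:
            continue
        ret.add(h)
    return ret

# The fail id survives one call but not the other.
assert filter_out_soft_voided_entries({"ncf", "foo"}) == {"ncf", "foo"}
assert filter_out_soft_voided_entries({"ncf", "foo", "sv1"}) == {"foo"}
```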

Contributor Author:

I agree it's weird.

Contributor Author:

Both _filter_out_soft_voided_entries and _filter_out_nc_fail_entries are used only once, in filter_out_voided_by_entries_from_parents, and do basically the same thing. I think they could be unified.
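One way the suggested unification could look, sketched with a hypothetical helper (`filter_out_entries` is not a real method; it just shows the shared shape of the two filters, taking the set of removable ids as a parameter and using no shortcut path):

```python
def filter_out_entries(voided_by: set[str], removable: set[str]) -> set[str]:
    """Return voided_by without any id in `removable`."""
    return {h for h in voided_by if h not in removable}

# Callers would pass soft-voided ids, the NC fail id, or both:
soft_voided = {"sv1"}
nc_fail = {"ncf"}
assert filter_out_entries({"ncf", "foo", "sv1"}, soft_voided | nc_fail) == {"foo"}
```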

Contributor Author:

Postponed thread.

Comment on lines +52 to +60
self.execute_nano_contracts(tx)

def execute_nano_contracts(self, tx: Transaction) -> None:
    """This method is called when the transaction is added to the mempool.

    The method is currently only executed when the transaction is confirmed by a block.
    Hence, we do nothing here.
    """
    pass
Member:

So this isn't needed and this code can be removed, right?

Contributor Author:

Postponed thread.

@jansegre jansegre moved this from In Review (WIP) to In Review (Done) in Hathor Network Jun 18, 2025
assert tx.storage is not None
tx2 = tx.storage.get_transaction(h)
tx2_meta = tx2.get_metadata()
tx2_voided_by: set[VertexId] = tx2_meta.voided_by or set()
Contributor Author:

FIX: I think we must assert tx2_meta.voided_by just like in line 280, instead of doing or set().

Contributor Author:

Line 226 of consensus.py also does `or set()` in a similar situation. So which one is correct?

Contributor Author:

Since this relates to code that already existed in the consensus before nano, I'm demoting this to a postponed thread.

@glevco glevco moved this from In Review (Done) to In Review (WIP) in Hathor Network Jun 18, 2025
Comment on lines +131 to +147
for tx_affected in context.txs_affected:
    if not tx_affected.is_nano_contract():
        # Not a nano tx? Skip!
        continue
    if tx_affected.get_metadata().first_block:
        # Not in mempool? Skip!
        continue
    assert isinstance(tx_affected, Transaction)
    nano_header = tx_affected.get_nano_header()
    try:
        nano_header.get_blueprint_id()
    except NanoContractDoesNotExist:
        from hathor.transaction.validation_state import ValidationState
        tx_affected.set_validation(ValidationState.INVALID)
Contributor Author:

FIX: I think it's bug-prone to use nano_header.get_blueprint_id() and check for NanoContractDoesNotExist. It's hard to reason about all possibilities. It would be better if this check was more explicit.

Maybe we should just add a comment explaining the purpose of this, which is checking whether the nc_id of this tx stopped existing after a reorg. Then we can improve the code later, as this will have to be refactored anyway to remove invalid txs from the storage.

Contributor Author (@glevco, Jun 23, 2025):

Added a comment for it in https://github.com/HathorNetwork/nano-hathor-core/pull/266 and cherry-picked it here on commit 39c9299. I'm leaving this thread open though, so we can revisit it during the post-merge pass. So it's a postponed thread.
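The more explicit check suggested above could look roughly like this. Everything here is hypothetical (`contract_still_exists`, the dict standing in for the tx storage); the point is to ask the storage directly whether the contract's creation tx still exists after a reorg, instead of catching `NanoContractDoesNotExist` from `get_blueprint_id()`:

```python
def contract_still_exists(storage: dict[str, object], nc_id: str) -> bool:
    # `storage` stands in for the tx storage; after a reorg, the tx that
    # created the contract `nc_id` may no longer be present.
    return nc_id in storage

storage = {"nc1": object()}
assert contract_still_exists(storage, "nc1")        # contract survives
assert not contract_still_exists(storage, "nc2")    # invalidate this tx instead
```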

assert isinstance(tx_affected, Transaction)
nano_header = tx_affected.get_nano_header()
try:
    nano_header.get_blueprint_id()
Contributor Author:

FIX: on a related note, I think there's at least one bug in NanoHeader.get_blueprint_id(). Its first lines are:

if self.is_creating_a_new_contract():
    blueprint_id = BlueprintId(NCVertexId(self.nc_id))
    return blueprint_id

But what if the initialize method of this contract calls self.syscall.set_blueprint()? It would mean the blueprint_id of this contract is not self.nc_id anymore after the method runs, and therefore the return above would be incorrect.

It doesn't look like this affects this call in the consensus because the tx is guaranteed to be in the mempool, therefore initialize hasn't run yet. But we must check other places that call get_blueprint_id() and check whether they could be affected. I think we should review all usages and all internal logic of this method.
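The pitfall can be illustrated with a hypothetical stand-in (the `Contract` class, its `_blueprint_id` field, and `initialize` here are illustrative, not the real NanoHeader API):

```python
class Contract:
    def __init__(self, nc_id: str) -> None:
        self.nc_id = nc_id
        self._blueprint_id: str | None = None  # set once initialize() runs

    def get_blueprint_id(self) -> str:
        # Mirrors the early return quoted above: before execution, assume
        # the blueprint is nc_id itself.
        if self._blueprint_id is None:
            return self.nc_id
        return self._blueprint_id

    def initialize(self, new_blueprint_id: str) -> None:
        # A set_blueprint()-style syscall inside initialize() changes the
        # answer, making the early return stale after execution.
        self._blueprint_id = new_blueprint_id

c = Contract(nc_id="bp-original")
before = c.get_blueprint_id()   # correct while the tx is in the mempool
c.initialize("bp-swapped")
after = c.get_blueprint_id()    # the nc_id-based early return would be wrong
assert before == "bp-original" and after == "bp-swapped"
```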

Contributor Author:

This bug is unrelated to the consensus and this PR, so I'm demoting it to a postponed thread.

Comment on lines +113 to +115
if cur_meta.nc_block_root_id is not None:
    # Reset nc_block_root_id to force re-execution.
    cur_meta.nc_block_root_id = None
Contributor Author:

FIX: Shouldn't this be an `assert cur_meta.nc_block_root_id is not None`? Except for the block currently being handled.

Contributor Author:

Postponing thread because I want Marcelo's confirmation.

Comment on lines +162 to +163
assert tx_conflict_meta.first_block is None
assert tx_conflict_meta.voided_by
Contributor Author:

FIX: Add a comment explaining why these asserts must hold. If the conflicting tx is also a nano tx, couldn't it have a first block?

Contributor Author:

Postponing thread because I want Marcelo's confirmation.

Comment on lines +252 to +254
case NCExecutionState.SKIPPED:
    assert tx_meta.voided_by
    assert self._settings.NC_EXECUTION_FAIL_ID not in tx_meta.voided_by
Contributor Author:

Also assert that at least one (or the only?) reason it's voided is a tx that should have NC_EXECUTION_FAIL_ID.
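A sketch of the stronger assertion proposed here, with hypothetical names (`check_skipped`, `failed_nano_txs`): besides checking the SKIPPED tx itself isn't marked with NC_EXECUTION_FAIL_ID, also require that its voidance traces back to at least one failed nano tx.

```python
NC_EXECUTION_FAIL_ID = "ncf"

def check_skipped(tx_voided_by: set[str], failed_nano_txs: set[str]) -> None:
    # Existing asserts from the hunk above.
    assert tx_voided_by
    assert NC_EXECUTION_FAIL_ID not in tx_voided_by
    # Proposed addition: at least one voiding id must be a failed nano tx.
    assert tx_voided_by & failed_nano_txs, \
        "SKIPPED tx is not voided by any failed nano tx"

check_skipped({"tx1"}, failed_nano_txs={"tx1", "tx9"})  # passes
```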

Contributor Author:

Postponed thread.

voided_by2.discard(parent.hash)
voided_by.update(self.context.consensus.filter_out_soft_voided_entries(parent, voided_by2))
voided_by.update(self.context.consensus.filter_out_voided_by_entries_from_parents(parent, voided_by2))
voided_by.discard(self._settings.NC_EXECUTION_FAIL_ID)
Contributor Author:

Maybe this discard is related to #1290 (comment)

Contributor Author:

Postponed thread.

pedroferreira1 previously approved these changes Jun 23, 2025
@glevco glevco moved this from In Review (WIP) to In Review (Done) in Hathor Network Jun 23, 2025
@glevco glevco force-pushed the feat/nano/consensus branch from 6b09bf8 to 13588c1 Compare June 23, 2025 18:07
@glevco glevco dismissed stale reviews from pedroferreira1 and jansegre via 39c9299 June 23, 2025 23:08
Co-authored-by: Marcelo Salhab Brogliato <msbrogli@gmail.com>
Co-authored-by: Jan Segre <jan@hathor.network>
@glevco glevco force-pushed the feat/nano/consensus branch from 39c9299 to 0519490 Compare June 23, 2025 23:14
@jansegre jansegre merged commit ad46131 into master Jun 23, 2025
7 checks passed
@jansegre jansegre deleted the feat/nano/consensus branch June 23, 2025 23:15
@github-project-automation github-project-automation bot moved this from In Review (Done) to Waiting to be deployed in Hathor Network Jun 23, 2025
@jansegre jansegre mentioned this pull request Jul 22, 2025
2 tasks
@jansegre jansegre moved this from Waiting to be deployed to Done in Hathor Network Jul 22, 2025
@jansegre jansegre mentioned this pull request Aug 7, 2025
2 tasks