Conversation
…environments

When HATHOR_DEV_MINER=true (or --dev-miner), the service runs in a simplified mode that replaces both cpuminer and the stratum protocol:
- Blocks are mined in-process via a background loop that polls the fullnode for templates, solves trivial PoW, and submits blocks at a configurable pace (HATHOR_BLOCK_INTERVAL / --block-interval)
- Transactions are mined synchronously without external miners
- The same HTTP API (submit-job, job-status, cancel-job, health) is preserved for drop-in compatibility with existing client libraries

New files:
- txstratum/dev/block_miner.py: Background block mining loop
- txstratum/dev/tx_miner.py: Synchronous PoW solver for transactions
- txstratum/dev/manager.py: DevMiningManager (replaces TxMiningManager)
- tests/test_dev_miner.py: Tests for all dev-miner components
- claude.md: Repository documentation for Claude Code

https://claude.ai/code/session_018N1SkY9C2UDKnorSkp32PB
Shows how to build and run the tx-mining-service with --dev-miner as a drop-in replacement for tx-mining-service + cpuminer. https://claude.ai/code/session_018N1SkY9C2UDKnorSkp32PB
…ode readiness

The fullnode /v1a/health endpoint can return status=fail even when the node is fully functional and serving API requests (e.g. its internal health check has stricter criteria about sync state or connected peers). Instead, try fetching a block template directly — if that succeeds, the fullnode is ready for mining.

https://claude.ai/code/session_018N1SkY9C2UDKnorSkp32PB
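The readiness check described above can be sketched as follows. This is a minimal illustration, not the project's actual code: `get_block_template` stands in for the fullnode client call, and the exception types and parameter names are assumptions.

```python
import asyncio


async def wait_for_fullnode(get_block_template, poll_interval: float = 1.0,
                            max_attempts: int = 30) -> bool:
    """Treat the fullnode as ready once a block template can be fetched,
    rather than trusting the stricter /v1a/health status."""
    for _ in range(max_attempts):
        try:
            await get_block_template()
            return True
        except (ConnectionError, OSError):
            # Transient connectivity/readiness error: wait and retry.
            await asyncio.sleep(poll_interval)
    return False
```

The key design point is that readiness is defined operationally ("can I do the thing I need?") instead of delegating to a health endpoint with different criteria.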
The block miner was sleeping the full block_interval after mining, making actual block-to-block time = mining_time + interval. Now it measures elapsed mining time and sleeps only the remainder. Also moves solve_block to a thread executor so the event loop stays responsive during the CPU-bound nonce search. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
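The corrected pacing amounts to subtracting elapsed mining time from the interval before sleeping. A minimal sketch, with illustrative names rather than the project's API:

```python
import asyncio
import time


def remaining_sleep(interval_s: float, elapsed_s: float) -> float:
    """Sleep only the remainder so block-to-block time is about interval_s,
    not mining_time + interval_s; clamp to zero if mining ran long."""
    return max(0.0, interval_s - elapsed_s)


async def mining_loop(mine_once, interval_s: float, stop: asyncio.Event) -> None:
    while not stop.is_set():
        start = time.monotonic()
        await mine_once()  # CPU-bound solving should be offloaded to an executor
        await asyncio.sleep(remaining_sleep(interval_s, time.monotonic() - start))
```

Using `time.monotonic()` (rather than wall-clock time) keeps the elapsed measurement immune to system clock adjustments.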
…ments Update fullnode image to hathor-core:test-mode-block-weight and add the --test-mode-block-weight flag so block mining is as trivial as tx mining (weight ~1, nonce found on first try). Split the block interval test into two scenarios: - trivial weight (realistic with --test-mode-block-weight) - slow mining (regression safeguard for environments without the flag) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
solve_tx used a 2^20 nonce limit which is insufficient for real transaction weights (~22), causing PoW to silently fail. Increased both solve_tx and solve_block to use the full 2^32 nonce space. Also moved solve_tx in DevMiningManager to run_in_executor so it no longer blocks the event loop during transaction mining. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
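To see why 2^20 nonces is too few: with the usual weight-to-target relation (target ≈ 2^(256 − weight)), a weight-22 transaction needs about 2^22 attempts on average, four times the old limit. The sketch below shows the mechanics only; it is not Hathor's actual serialization or hashing scheme.

```python
import hashlib
from typing import Optional

MAX_NONCE = 1 << 32  # full nonce space; 1 << 20 routinely fails at weight ~22


def solve(data: bytes, weight: float) -> Optional[int]:
    """Find a nonce whose double-SHA256 falls below the weight-derived target.
    Expected attempts grow as 2**weight, so the nonce limit must exceed that."""
    target = 1 << (256 - int(weight))
    for nonce in range(MAX_NONCE):
        digest = hashlib.sha256(
            hashlib.sha256(data + nonce.to_bytes(4, "big")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None
```

With the old 2^20 cap, the loop would usually exhaust its range and return None for realistic weights, which is exactly the silent failure the commit describes.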
Expand comments across all dev-miner files to explain design decisions and contrast with the production mining path, making the branch suitable for human review of the proposed changes. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove claude.md from repo (AI tool config, not project code) - Fix double docstring in test_dev_miner.py to match existing convention - Add comment explaining blocks_found: 0 in DevMiningManager.status() - Remove unused self.tx_filters from RunDevService, document omission - Document configure_logging reuse and signal handler duplication with rationale (avoid modifying production code) and future improvements - Extract shared MAX_NONCE constant to txstratum/dev/__init__.py Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add missing docstrings for uptime property and __init__ methods - Rephrase docstrings to use imperative mood - Remove unused JobStatus import from test_dev_miner.py Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

Adds an in-process dev-miner feature: transaction and block PoW solvers, a DevMiningManager, an interval-driven BlockMiner, CLI flags to run dev-miner, a Docker Compose dev-miner stack, protocol typing, and comprehensive tests exercising the new flow.
Sequence Diagram(s)

sequenceDiagram
participant Client as HTTP Client
participant API as Dev-Miner HTTP API
participant Manager as DevMiningManager
participant Executor as Thread Executor
participant Backend as Fullnode Backend
Client->>API: POST /submit-job (tx bytes)
API->>Manager: add_job(TxJob)
Manager->>Manager: schedule async mining task
Manager-->>API: Return job UUID
API-->>Client: Job UUID + status MINING
par Mining Task
Manager->>Executor: run_in_executor(solve_tx)
Executor->>Executor: iterate nonces, verify_pow()
Executor-->>Manager: nonce found / failed
Manager->>Backend: push_tx_or_block() [opt]
Backend-->>Manager: confirmation/error
Manager->>Manager: update job status & schedule cleanup
and Job Polling
Client->>API: GET /job-status?uuid=...
API->>Manager: get_job(uuid)
Manager-->>API: TxJob status
API-->>Client: JSON status
end
sequenceDiagram
participant BlockMiner as BlockMiner
participant Backend as Fullnode Backend
participant Executor as Thread Executor
BlockMiner->>BlockMiner: start()
loop Mining Loop
BlockMiner->>Backend: get_block_template()
Backend-->>BlockMiner: BlockTemplate
BlockMiner->>BlockMiner: construct Block
BlockMiner->>Executor: run_in_executor(solve_block)
Executor->>Executor: iterate nonces, verify_pow()
Executor-->>BlockMiner: Block with valid nonce
BlockMiner->>Backend: push_tx_or_block(block)
Backend-->>BlockMiner: confirmation
BlockMiner->>BlockMiner: sleep to maintain block_interval
end
BlockMiner->>BlockMiner: stop()
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
Actionable comments posted: 1
🧹 Nitpick comments (6)
txstratum/cli.py (2)
405-405: Consider extracting `configure_logging` to a standalone function.

The call `RunService.configure_logging(self, args)` works but is unconventional: it invokes an instance method as an unbound method with an unrelated `self`. The comment acknowledges this as a future improvement. For clarity, consider extracting it to a module-level function in a follow-up.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@txstratum/cli.py` at line 405, The call RunService.configure_logging(self, args) is using an instance method as an unbound method; extract the logic inside RunService.configure_logging into a new module-level function (e.g., configure_logging(args, logger=None)) and update RunService.configure_logging to delegate to that function (or replace callers to call the new function), ensuring all references (RunService.configure_logging, configure_logging) are updated and tests/usage in cli.py now call the module-level configure_logging with the appropriate arguments.
501-512: In-flight mining tasks may not be cleanly cancelled during shutdown.

When `_shutdown()` is called, any mining tasks spawned via `asyncio.ensure_future` in `DevMiningManager.add_job()` are not explicitly awaited or cancelled. This could result in "Task was destroyed but it is pending" warnings or race conditions if tasks try to access the backend after `backend.stop()`. Consider storing task references in `DevMiningManager` and cancelling them in `stop()`, similar to how `BlockMiner` handles its `_task`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@txstratum/cli.py` around lines 501 - 512, The shutdown flow currently doesn't ensure mining tasks started by DevMiningManager.add_job are cancelled before backend.stop; update DevMiningManager to track spawned tasks (store asyncio.Task objects similar to BlockMiner._task), implement cancellation+awaiting in DevMiningManager.stop() to cancel all tracked tasks and await their completion, and then change _shutdown to call await self.manager.stop() (so tasks are cancelled/awaited) before awaiting self.backend.stop(); ensure DevMiningManager.add_job returns or stores the Task reference so stop() can cancel it.

txstratum/dev/block_miner.py (2)
173-181: Use `log.exception` instead of `log.error` for better debugging.

When logging the "Block rejected by fullnode" error, using `log.exception` would include the traceback context, which is helpful for debugging unexpected rejection reasons.

Suggested change

```diff
 try:
     await self.backend.push_tx_or_block(bytes(block))
 except PushTxFailed:
-    self.log.error(
+    self.log.exception(
         "Block rejected by fullnode",
         height=template.height,
         hash=block.hash_hex,
     )
     return
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@txstratum/dev/block_miner.py` around lines 173 - 181, The except block catching PushTxFailed should log the traceback for better debugging: change the except clause in the block where await self.backend.push_tx_or_block(bytes(block)) is handled (currently "except PushTxFailed:") to capture the exception (e.g. "except PushTxFailed as e:") and call self.log.exception("Block rejected by fullnode", height=template.height, hash=block.hash_hex) (log.exception already includes the traceback), replacing the existing self.log.error call so the rejection includes exception context.
108-122: Consider narrowing the exception type.

The bare `except Exception` at line 120 catches all exceptions including unexpected ones. For a readiness poll, consider catching more specific exceptions (e.g., `aiohttp.ClientError`, `ConnectionError`) to avoid masking programming errors during development.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@txstratum/dev/block_miner.py` around lines 108 - 122, In _wait_for_fullnode, replace the broad "except Exception" that surrounds the call to backend.get_block_template with a narrower exception tuple (for example aiohttp.ClientError and ConnectionError or whatever transport/client exceptions backend.get_block_template can raise) so you only handle transient network/readiness errors; update imports to include the chosen exception types and keep the retry/logging behavior (self.log.info("Waiting for fullnode...")) for those specific exceptions while letting unexpected exceptions propagate so they aren't masked.

tests/test_dev_miner.py (1)
62-97: Test class setup creates unused fixtures.
`TestSolveTx.get_application()` creates a manager and app that aren't used by the test methods (`test_solve_tx_trivial_weight` and `test_solve_tx_standard_weight` only call `solve_tx` directly). Consider using a simpler base class or `unittest.TestCase` for these tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_dev_miner.py` around lines 62 - 97, The TestSolveTx class unnecessarily inherits AioHTTPTestCase and defines an async get_application that creates DevMiningManager and App fixtures which the test methods never use; change TestSolveTx to inherit from unittest.TestCase (or SimpleTestCase) instead of AioHTTPTestCase, remove the async get_application method and the manager/healthcheck/myapp attributes, and ensure imports are updated to import unittest so the test methods continue to call solve_tx(tx) directly (refer to TestSolveTx, get_application, DevMiningManager, App, solve_tx, test_solve_tx_trivial_weight, and test_solve_tx_standard_weight).

txstratum/dev/manager.py (1)
163-169: Consider catching more specific exceptions.

The bare `except Exception` will catch all errors including unexpected ones like `AttributeError` or `TypeError`. Consider catching network-related exceptions specifically, or at minimum log the exception type for debugging.

Suggested improvement

```diff
 try:
     parents: List[bytes] = await self.backend.get_tx_parents()
-except Exception as e:
+except Exception:
-    job.mark_as_failed(f"Failed to get parents: {e}")
+    self.log.exception("Failed to get tx parents", job_id=job.uuid.hex())
+    job.mark_as_failed("Failed to get parents")
     self.txs_failed += 1
     self._schedule_cleanup(job)
     return
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@txstratum/dev/manager.py` around lines 163 - 169, Replace the broad "except Exception" around the call to self.backend.get_tx_parents() in manager.py with more specific exception handling: catch network/IO-related errors that backend may raise (e.g., asyncio.TimeoutError, aiohttp.ClientError, OSError) and handle them by calling job.mark_as_failed, incrementing self.txs_failed, and calling self._schedule_cleanup(job); if you cannot enumerate all backend error types, at minimum include the exception type in the failure message (use type(e).__name__) when calling job.mark_as_failed so unexpected AttributeError/TypeError are obvious in logs and can be re-raised or handled separately.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@txstratum/dev/manager.py`:
- Around line 145-151: The code starts mining tasks with asyncio.ensure_future
in the blocks that call _mine_with_parents(job) and _mine_job(job) but never
stores references, so modify the manager to track active tasks (e.g., add a
self._tasks set initialized in __init__), create tasks via
asyncio.create_task(...) for _mine_with_parents and _mine_job, add each new task
to self._tasks and attach a done callback to remove it when finished, and update
stop() to cancel remaining tasks and await them (use task.cancel() and await
asyncio.gather(*self._tasks, return_exceptions=True)) to ensure clean shutdown
and avoid pending-task warnings.
---
Nitpick comments:
In `@tests/test_dev_miner.py`:
- Around line 62-97: The TestSolveTx class unnecessarily inherits
AioHTTPTestCase and defines an async get_application that creates
DevMiningManager and App fixtures which the test methods never use; change
TestSolveTx to inherit from unittest.TestCase (or SimpleTestCase) instead of
AioHTTPTestCase, remove the async get_application method and the
manager/healthcheck/myapp attributes, and ensure imports are updated to import
unittest so the test methods continue to call solve_tx(tx) directly (refer to
TestSolveTx, get_application, DevMiningManager, App, solve_tx,
test_solve_tx_trivial_weight, and test_solve_tx_standard_weight).
In `@txstratum/cli.py`:
- Line 405: The call RunService.configure_logging(self, args) is using an
instance method as an unbound method; extract the logic inside
RunService.configure_logging into a new module-level function (e.g.,
configure_logging(args, logger=None)) and update RunService.configure_logging to
delegate to that function (or replace callers to call the new function),
ensuring all references (RunService.configure_logging, configure_logging) are
updated and tests/usage in cli.py now call the module-level configure_logging
with the appropriate arguments.
- Around line 501-512: The shutdown flow currently doesn't ensure mining tasks
started by DevMiningManager.add_job are cancelled before backend.stop; update
DevMiningManager to track spawned tasks (store asyncio.Task objects similar to
BlockMiner._task), implement cancellation+awaiting in DevMiningManager.stop() to
cancel all tracked tasks and await their completion, and then change _shutdown
to call await self.manager.stop() (so tasks are cancelled/awaited) before
awaiting self.backend.stop(); ensure DevMiningManager.add_job returns or stores
the Task reference so stop() can cancel it.
In `@txstratum/dev/block_miner.py`:
- Around line 173-181: The except block catching PushTxFailed should log the
traceback for better debugging: change the except clause in the block where
await self.backend.push_tx_or_block(bytes(block)) is handled (currently "except
PushTxFailed:") to capture the exception (e.g. "except PushTxFailed as e:") and
call self.log.exception("Block rejected by fullnode", height=template.height,
hash=block.hash_hex) (log.exception already includes the traceback), replacing
the existing self.log.error call so the rejection includes exception context.
- Around line 108-122: In _wait_for_fullnode, replace the broad "except
Exception" that surrounds the call to backend.get_block_template with a narrower
exception tuple (for example aiohttp.ClientError and ConnectionError or whatever
transport/client exceptions backend.get_block_template can raise) so you only
handle transient network/readiness errors; update imports to include the chosen
exception types and keep the retry/logging behavior (self.log.info("Waiting for
fullnode...")) for those specific exceptions while letting unexpected exceptions
propagate so they aren't masked.
In `@txstratum/dev/manager.py`:
- Around line 163-169: Replace the broad "except Exception" around the call to
self.backend.get_tx_parents() in manager.py with more specific exception
handling: catch network/IO-related errors that backend may raise (e.g.,
asyncio.TimeoutError, aiohttp.ClientError, OSError) and handle them by calling
job.mark_as_failed, incrementing self.txs_failed, and calling
self._schedule_cleanup(job); if you cannot enumerate all backend error types, at
minimum include the exception type in the failure message (use type(e).__name__)
when calling job.mark_as_failed so unexpected AttributeError/TypeError are
obvious in logs and can be re-raised or handled separately.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- .gitignore
- docker-compose.dev-miner.yml
- tests/test_dev_miner.py
- txstratum/cli.py
- txstratum/dev/__init__.py
- txstratum/dev/block_miner.py
- txstratum/dev/manager.py
- txstratum/dev/tx_miner.py
Store asyncio.Task references in a set so stop() can cancel and await them, preventing pending-task warnings on shutdown. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
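The task-tracking pattern this commit describes can be reduced to a generic sketch (plain asyncio, not the project's actual classes):

```python
import asyncio
from typing import Coroutine, Set


class TaskTracker:
    """Hold references to spawned tasks so shutdown can cancel and await
    them, avoiding 'Task was destroyed but it is pending' warnings."""

    def __init__(self) -> None:
        self._tasks: Set[asyncio.Task] = set()

    def spawn(self, coro: Coroutine) -> asyncio.Task:
        task = asyncio.create_task(coro)
        self._tasks.add(task)
        # Self-cleaning: drop the reference once the task completes.
        task.add_done_callback(self._tasks.discard)
        return task

    async def stop(self) -> None:
        pending = list(self._tasks)  # snapshot; the set mutates as tasks finish
        for task in pending:
            task.cancel()
        # return_exceptions=True swallows CancelledError from each task.
        await asyncio.gather(*pending, return_exceptions=True)
```

Keeping a strong reference also guards against the garbage collector dropping fire-and-forget tasks mid-flight, a known asyncio pitfall.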
- Simplify TestSolveTx to inherit unittest.TestCase (no async fixtures needed) - Fix shutdown ordering: cancel mining tasks before closing backend session - Use log.exception for PushTxFailed to include traceback - Narrow _wait_for_fullnode catch to (ClientError, OSError, RuntimeError) - Include exception type name in _mine_with_parents failure message Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
txstratum/cli.py (1)
371-530: ⚠️ Potential issue | 🟡 Minor

CI is failing on formatting for this file.

The pipeline reports: Black formatting check would reformat this file. Please run `black` on `txstratum/cli.py` to unblock the test workflow.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@txstratum/cli.py` around lines 371 - 530, CI indicates this file fails Black formatting; run the Black formatter on the module containing RunDevService and main (the txstratum CLI implementation), reformat the file, stage and commit the changes so the file is Black-compliant, then push the commit to unblock the pipeline.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/test_dev_miner.py`:
- Around line 1-413: The file fails Black formatting; run the Black formatter
(e.g., `black tests/test_dev_miner.py`) to reformat the test file so it matches
the project's style. Ensure you reformat the whole file including classes
TestSolveTx, TestDevMiningManager, TestSolveBlock, TestBlockMinerInterval and
helper functions update_timestamp, _make_miner, _wait_for_blocks,
_assert_regular_intervals so CI's Black check passes, then commit the
reformatted file.
In `@txstratum/dev/block_miner.py`:
- Around line 77-83: In the BlockMiner initializer, validate the incoming
block_interval_ms before converting to seconds: in the constructor (where
block_interval_ms is assigned to block_interval_s) add a guard that raises a
ValueError if block_interval_ms is <= 0 with a clear message; keep the existing
conversion to block_interval_s = block_interval_ms / 1000.0 and ensure
references to block_interval_ms and block_interval_s remain unchanged so callers
and internal scheduling logic continue to work.
In `@txstratum/dev/manager.py`:
- Around line 65-66: cancel_job() currently only updates state and removes the
job record but never cancels the running asyncio.Task, so the mining coroutine
can continue and publish results; modify cancel_job to locate the corresponding
asyncio.Task in self._tasks (the Task created where the mining coroutine is
spawned — the code that adds to self._tasks when starting a job), call
task.cancel(), and await or schedule its cancellation (handle
asyncio.CancelledError in the mining coroutine) before removing it from
self._tasks and updating state; also ensure the code path that spawns the mining
task (the block that creates and adds the Task to self._tasks) consistently
stores a reference keyed by job id so cancel_job(job_id) can find and cancel the
correct Task.
- Around line 212-219: The propagation block only catches PushTxFailed, so other
exceptions from self.backend.push_tx_or_block can leave a job stuck in MINING;
update the try/except around the call in the job.propagate branch to catch any
Exception (in addition to PushTxFailed), and in that generic-except call
job.mark_as_failed("Fullnode rejected transaction" or include exception
message), increment self.txs_failed, invoke self._schedule_cleanup(job), and log
the caught exception; target the block around self.backend.push_tx_or_block and
reuse job.mark_as_failed, self.txs_failed, and self._schedule_cleanup to ensure
all failures are treated the same.
---
Outside diff comments:
In `@txstratum/cli.py`:
- Around line 371-530: CI indicates this file fails Black formatting; run the
Black formatter on the module containing RunDevService and main (the txstratum
CLI implementation), reformat the file, stage and commit the changes so the file
is Black-compliant, then push the commit to unblock the pipeline.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- tests/test_dev_miner.py
- txstratum/cli.py
- txstratum/dev/block_miner.py
- txstratum/dev/manager.py
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Change _tasks from set to dict keyed by job.uuid so cancel_job can find and cancel the corresponding asyncio.Task - Broaden push_tx_or_block catch to Exception (not just PushTxFailed) so network errors don't leave jobs stuck in MINING - Add block_interval_ms > 0 validation in BlockMiner.__init__ - Run Black on test_dev_miner.py, manager.py, and block_miner.py Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 4
🧹 Nitpick comments (1)
txstratum/cli.py (1)
443-451: Fail fast or wire through unsupported flags in `--dev-miner` mode.

`RunDevService` currently builds `App(...)` with a reduced argument set, while the CLI still accepts filter/version-related options. Consider either passing those options through or explicitly rejecting them in dev-miner mode to avoid silent behavior differences.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@txstratum/cli.py` around lines 443 - 451, RunDevService builds the App using a reduced argument set but the CLI still accepts filter/version-related flags; update the RunDevService code path where api_app = App(...) is constructed to either forward the remaining CLI flags (e.g., pass through self.args.max_filter_bits, self.args.filter_version, any version/filter flags) into the App constructor alongside existing args (max_tx_weight, max_timestamp_delta, tx_timeout, fix_invalid_timestamp, only_standard_script) or explicitly validate and fail fast: detect when --dev-miner is active and any unsupported flags (the filter/version-related CLI args) are set and raise a clear error (ValueError or click.UsageError) naming the offending flags so users know they are incompatible with dev-miner mode; implement one behavior consistently and add a short unit/test to assert the new validation or forwarding works.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@txstratum/cli.py`:
- Line 422: The HealthCheck and App constructors are typed to TxMiningManager
but receive DevMiningManager in dev-miner mode causing type errors; introduce a
shared protocol (e.g., define a MiningManagerProtocol with the common methods
used by HealthCheck and App) or change the constructors' type hints to
Union[TxMiningManager, DevMiningManager]; update the type annotations for
HealthCheck.__init__ and App.__init__ (and any local variables like self.manager
where used) to use the new MiningManagerProtocol or the union so both
DevMiningManager and TxMiningManager satisfy static type checks.
- Around line 402-405: Call RunService.configure_logging with a typed cast of
self to satisfy mypy: import cast from typing and change the invocation to
RunService.configure_logging(cast(RunService, self), args). This keeps the
existing behavior while making the cross-class method call type-safe; reference
the RunService.configure_logging call and the RunDevService instance in the
change.
In `@txstratum/dev/block_miner.py`:
- Line 90: The _task attribute is currently annotated as a bare asyncio.Task
which triggers strict typing errors; update its type annotation to
asyncio.Task[None] to reflect that the _run coroutine returns None. Locate the
class attribute declaration for self._task in block_miner.py (the symbol _task)
and change its annotation to use the generic type asyncio.Task[None]; ensure
imports/types remain compatible with existing asyncio usage and that the _run
method signature remains as returning None so the type argument is correct.
In `@txstratum/dev/manager.py`:
- Line 64: The _tasks attribute uses an unparameterized asyncio.Task which fails
strict type checks; update the annotation on self._tasks in Manager (symbol:
_tasks) from Dict[bytes, asyncio.Task] to Dict[bytes, asyncio.Task[None]] to
reflect that stored coroutines (_mine_with_parents and _mine_job) return None,
and ensure imports/typing are compatible (e.g., Python version/types) so the
subscripted asyncio.Task is accepted by the type checker.
---
Nitpick comments:
In `@txstratum/cli.py`:
- Around line 443-451: RunDevService builds the App using a reduced argument set
but the CLI still accepts filter/version-related flags; update the RunDevService
code path where api_app = App(...) is constructed to either forward the
remaining CLI flags (e.g., pass through self.args.max_filter_bits,
self.args.filter_version, any version/filter flags) into the App constructor
alongside existing args (max_tx_weight, max_timestamp_delta, tx_timeout,
fix_invalid_timestamp, only_standard_script) or explicitly validate and fail
fast: detect when --dev-miner is active and any unsupported flags (the
filter/version-related CLI args) are set and raise a clear error (ValueError or
click.UsageError) naming the offending flags so users know they are incompatible
with dev-miner mode; implement one behavior consistently and add a short
unit/test to assert the new validation or forwarding works.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- tests/test_dev_miner.py
- txstratum/cli.py
- txstratum/dev/block_miner.py
- txstratum/dev/manager.py
- Add MiningManager Protocol so HealthCheck and App accept both TxMiningManager and DevMiningManager without type errors - Make configure_logging a @staticmethod (it never uses self) - Parameterize asyncio.Task[None] in manager.py and block_miner.py Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
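Structural typing with `typing.Protocol`, as this commit applies it, looks roughly like the sketch below. The names are illustrative stand-ins, not the project's actual definitions:

```python
from typing import Any, Dict, Protocol, runtime_checkable


@runtime_checkable
class MiningManagerLike(Protocol):
    """Structural interface: any object with these members qualifies, so
    HealthCheck/App can accept either manager without a Union type hint."""

    def status(self) -> Dict[str, Any]: ...


class DevManagerStub:
    # No inheritance needed; matching the shape is enough for the protocol.
    def status(self) -> Dict[str, Any]:
        return {"miners_count": 0, "txs_solved": 0}


def health_snapshot(manager: MiningManagerLike) -> Dict[str, Any]:
    return manager.status()
```

Compared with `Union[TxMiningManager, DevMiningManager]`, a protocol keeps the consumers (`HealthCheck`, `App`) decoupled from the concrete manager classes, so a third manager could be added without touching their signatures.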
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@txstratum/dev/manager.py`:
- Around line 188-203: The solve step currently calls solve_tx via
loop.run_in_executor without honoring job.timeout, which can leave a job stuck
in MINING; wrap the blocking call with an asyncio timeout (e.g.,
asyncio.wait_for around loop.run_in_executor(None, solve_tx, tx) using
job.timeout) and catch asyncio.TimeoutError to call job.mark_as_failed("PoW
timed out") (or similar), increment self.txs_timeout, call
self._schedule_cleanup(job), and return; keep existing failure handling for a
non-solved result intact and ensure job.started_at/status are unchanged except
when marking failed on timeout.
- Around line 103-113: The status dict in DevMiningManager currently hardcodes
"blocks_found": 0; instead, accept and use a real getter for
BlockMiner.blocks_found (e.g., a callable or lambda stored on the manager) so
the status method returns the live count, and when constructing DevMiningManager
in txstratum.cli pass a getter referencing self.block_miner.blocks_found (or a
small wrapper) so /mining-status reflects actual block production; update
DevMiningManager to store that getter (e.g., self.get_blocks_found) and use it
in place of the literal 0.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
txstratum/api.pytxstratum/cli.pytxstratum/dev/block_miner.pytxstratum/dev/manager.pytxstratum/healthcheck/healthcheck.pytxstratum/protocols.py
```python
return {
    "miners": [],
    "miners_count": 0,
    "total_hashrate_ghs": 0,
    "started_at": self.started_at,
    "txs_solved": self.txs_solved,
    "txs_timeout": self.txs_timeout,
    # Block mining is handled by BlockMiner (separate component);
    # this manager only tracks transaction mining statistics.
    "blocks_found": 0,
    "uptime": self.uptime,
```
Expose real blocks_found in dev manager status.
Line 112 hardcodes "blocks_found": 0, but block production is actually tracked by txstratum/dev/block_miner.py (BlockMiner.blocks_found). This makes /mining-status inaccurate in dev-miner mode.
🔧 Proposed direction
```diff
-from typing import TYPE_CHECKING, Any, Dict, List, Optional
+from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional
 ...
-    def __init__(self, backend: "HathorClient"):
+    def __init__(
+        self,
+        backend: "HathorClient",
+        *,
+        get_blocks_found: Optional[Callable[[], int]] = None,
+    ):
 ...
+        self._get_blocks_found = get_blocks_found or (lambda: 0)
 ...
-        "blocks_found": 0,
+        "blocks_found": self._get_blocks_found(),
```

Then wire it in txstratum/cli.py when creating DevMiningManager (pass a getter for self.block_miner.blocks_found).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@txstratum/dev/manager.py` around lines 103 - 113, The status dict in
DevMiningManager currently hardcodes "blocks_found": 0; instead, accept and use
a real getter for BlockMiner.blocks_found (e.g., a callable or lambda stored on
the manager) so the status method returns the live count, and when constructing
DevMiningManager in txstratum.cli pass a getter referencing
self.block_miner.blocks_found (or a small wrapper) so /mining-status reflects
actual block production; update DevMiningManager to store that getter (e.g.,
self.get_blocks_found) and use it in place of the literal 0.
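The getter-injection idea suggested here can be shown as a small generic sketch (class names are hypothetical, not the project's):

```python
from typing import Callable, Dict, Optional


class StatusReporter:
    """Report live stats from a sibling component via an injected getter,
    instead of hardcoding a value the reporter cannot know itself."""

    def __init__(self, get_blocks_found: Optional[Callable[[], int]] = None) -> None:
        # Default keeps the old behavior (always 0) when no getter is wired in.
        self._get_blocks_found = get_blocks_found or (lambda: 0)

    def status(self) -> Dict[str, int]:
        return {"blocks_found": self._get_blocks_found()}


class FakeBlockMiner:
    def __init__(self) -> None:
        self.blocks_found = 0


miner = FakeBlockMiner()
reporter = StatusReporter(lambda: miner.blocks_found)
miner.blocks_found = 3  # the getter reads the live value on every status() call
```

Passing a callable rather than the miner instance keeps the reporter free of any dependency on the miner's type.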
This is acceptable for now: it is the first iteration of this DevMiner. It should stay simple enough to cover most use cases, rather than being complete but too complex to review.
@tuliomir, understood — keeping it simple for the first iteration makes sense. I'll drop this suggestion.
```python
job.status = JobStatus.MINING
job.started_at = txstratum.time.time()
start = txstratum.time.time()

tx = job.get_tx()
# Update timestamp to current time — the fullnode validates that tx
# timestamps are within an acceptable delta of the current time.
tx.timestamp = int(txstratum.time.time())

loop = asyncio.get_event_loop()
solved = await loop.run_in_executor(None, solve_tx, tx)
if not solved:
    job.mark_as_failed("Failed to solve PoW")
    self.txs_failed += 1
    self._schedule_cleanup(job)
    return
```
Honor job.timeout during solve to preserve API behavior parity.
Line 198 runs PoW without a timeout guard. A timed request can stay MINING indefinitely, and txs_timeout never changes, which diverges from expected timeout semantics.
⏱️ Proposed fix
loop = asyncio.get_event_loop()
- solved = await loop.run_in_executor(None, solve_tx, tx)
+ solve_future = loop.run_in_executor(None, solve_tx, tx)
+ try:
+ solved = (
+ await asyncio.wait_for(solve_future, timeout=job.timeout)
+ if job.timeout is not None
+ else await solve_future
+ )
+ except asyncio.TimeoutError:
+ job.status = JobStatus.TIMEOUT
+ job.message = "Mining timeout"
+ job.total_time = txstratum.time.time() - job.created_at
+ self.txs_timeout += 1
+ self._schedule_cleanup(job)
+ return
+
if not solved:
job.mark_as_failed("Failed to solve PoW")
self.txs_failed += 1
self._schedule_cleanup(job)
return
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@txstratum/dev/manager.py` around lines 188 - 203, The solve step currently
calls solve_tx via loop.run_in_executor without honoring job.timeout, which can
leave a job stuck in MINING; wrap the blocking call with an asyncio timeout
(e.g., asyncio.wait_for around loop.run_in_executor(None, solve_tx, tx) using
job.timeout) and catch asyncio.TimeoutError to call job.mark_as_failed("PoW
timed out") (or similar), increment self.txs_timeout, call
self._schedule_cleanup(job), and return; keep existing failure handling for a
non-solved result intact and ensure job.started_at/status are unchanged except
when marking failed on timeout.
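As a standalone illustration, the wait_for-around-executor pattern proposed above behaves like this. slow_solve is a stand-in for solve_tx, and the durations are illustrative only:

```python
import asyncio
import time


def slow_solve() -> bool:
    # Stand-in for solve_tx: blocking work of fixed duration.
    time.sleep(0.3)
    return True


async def mine_with_timeout(timeout: float) -> str:
    loop = asyncio.get_running_loop()
    future = loop.run_in_executor(None, slow_solve)
    try:
        await asyncio.wait_for(future, timeout=timeout)
        return "solved"
    except asyncio.TimeoutError:
        # The executor thread keeps running to completion, but the
        # job can be marked TIMEOUT immediately, so the API caller
        # gets timely timeout semantics.
        return "timeout"


print(asyncio.run(mine_with_timeout(0.05)))  # timeout
print(asyncio.run(mine_with_timeout(2.0)))   # solved
```

Note that asyncio.wait_for cancels only the awaiting side: the thread running slow_solve is not interrupted, which is exactly the "mining will be stuck anyway" caveat discussed below for high-weight submissions.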
This is acceptable for now: it is the first iteration of this DevMiner. It should be simple enough to solve most use cases, not complete and too complex to review. If a high weight block or transaction is submitted, the mining will be stuck anyway.
@tuliomir, understood! That's a fair trade-off for a first iteration aimed at lightweight dev/test scenarios — where transactions and blocks will use test-mode weights. No objection to keeping it simple for now.
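As an aside, a toy sketch of what a trivial test-mode nonce search looks like. The sha256-below-target rule here is an assumption for illustration, not Hathor's actual PoW, and solve_nonce is a hypothetical name:

```python
import hashlib
from typing import Optional


def solve_nonce(payload: bytes, target: int, max_tries: int = 1 << 16) -> Optional[int]:
    """Brute-force a nonce such that sha256(payload || nonce) < target."""
    for nonce in range(max_tries):
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None


# A near-maximal target plays the role of test-mode weight ~1:
# the search succeeds almost immediately, usually on the first nonce.
print(solve_nonce(b"block-template", 1 << 255))
```

With a realistic (small) target the same loop would spin for a long time, which is why the dev-miner only makes sense alongside the fullnode's test-mode weight flags.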
✏️ Learnings added
Learnt from: tuliomir
Repo: HathorNetwork/tx-mining-service PR: 163
File: txstratum/dev/manager.py:188-203
Timestamp: 2026-03-03T23:25:05.197Z
Learning: In `txstratum/dev/manager.py`, `DevMiningManager._mine_job` intentionally does not honor `job.timeout` or guard the `solve_tx` executor call with `asyncio.wait_for`. This is a deliberate design decision for the first iteration: DevMiner is only meant for dev/test environments with low-weight transactions (using fullnode --test-mode-tx-weight), and a high-weight submission would block mining regardless. Timeout semantics parity with TxMiningManager is deferred to a future iteration.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
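For reference, the timing compensation described for block_miner.py below (measure elapsed mining time and sleep only the remainder of the interval) can be sketched standalone. The function name run_blocks and its signature are hypothetical:

```python
import time
from typing import Callable, List


def run_blocks(interval: float, mine: Callable[[], None], rounds: int) -> List[float]:
    """Produce `rounds` blocks, keeping block-to-block time near `interval`."""
    timestamps: List[float] = []
    for _ in range(rounds):
        start = time.monotonic()
        mine()  # variable-duration work (PoW + submit)
        timestamps.append(time.monotonic())
        elapsed = time.monotonic() - start
        # Sleep only the remainder, so cadence is ~interval rather
        # than interval + mining time.
        time.sleep(max(0.0, interval - elapsed))
    return timestamps


# Mining takes ~30 ms per block; with a 100 ms interval the gap
# between consecutive blocks stays near 100 ms, not 130 ms.
ts = run_blocks(0.1, lambda: time.sleep(0.03), 3)
print([round(b - a, 2) for a, b in zip(ts, ts[1:])])
```

The max(0.0, ...) clamp also covers the case where mining overruns the interval: the loop then starts the next block immediately instead of sleeping a negative amount.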
Motivation
Integration tests in client apps like the hathor-wallet-lib currently require three Docker containers (fullnode + tx-mining-service + cpuminer) and suffer from non-deterministic block timing, causing test flakiness around reward lock releases and confirmation accumulation. See RFC 102 for full context and architectural decisions.
Description
Adds a --dev-miner mode that replaces the production mining stack (stratum server + cpuminer) with in-process mining. This eliminates one container and makes block production deterministic:
Before:
fullnode ──stratum──▶ tx-mining-service ◀──stratum──── cpuminer
After:
fullnode ◀──HTTP──── tx-mining-service (--dev-miner)
Three new modules in txstratum/dev/:
- tx_miner.py — Solves transaction PoW by brute-forcing nonces. With --test-mode-tx-weight on the fullnode, the first nonce almost always works.
- block_miner.py — Background loop that produces blocks at a configurable interval (--block-interval), with timing compensation to maintain steady cadence even when PoW isn't trivial.
- manager.py — DevMiningManager, a drop-in replacement for TxMiningManager. Implements the same interface consumed by the HTTP API, so txstratum/api.py requires no changes.
The CLI routes to RunDevService (instead of RunService) when --dev-miner is passed. No production code paths are modified.
Depends on: hathor-core#1609 (--test-mode-block-weight CLI flag)
Acceptance Criteria
- --dev-miner enables in-process transaction and block mining without stratum
- --block-interval controls block production cadence in milliseconds (default: 1000)
- The HTTP API (submit-job, job-status, mining-status, health) works identically to production
- MiningManager class )
Checklist
- master, confirm this code is production-ready and can be included in future releases as soon as it gets merged
Summary by CodeRabbit
New Features
Tests
Chores