
perf(daemon): release pooled connections instead of destroying them #399

Merged
andreabadesso merged 18 commits into master from perf/daemon-pool-release on Apr 24, 2026

Conversation

@andreabadesso
Collaborator

@andreabadesso andreabadesso commented Apr 17, 2026

Motivation

Tier 1 fix #1 from #395. Every handler in services/index.ts ended with mysql.destroy() in its finally block, which closes the underlying TCP socket and defeats the connection pool — every subsequent call paid a fresh TCP + MySQL auth handshake (plus TLS in prod). Effectively the pool size was 0.

Stacked on #398 (the CI bench workflow) so the workflow can fire against this change and produce its first real comparison report. When the stack merges, the final result lands on master together.

Acceptance Criteria

  • All 11 mysql.destroy() calls in services/index.ts replaced with mysql.release()
  • getDbConnection() return type corrected to PoolConnection so .release() is available without casts
  • No behavior change for callers — connection is still unavailable to the handler after the finally block, just returned to the pool rather than destroyed
  • No regression in integration test fixtures the local bench can reach

Measured impact

Local bench against VOIDED_TOKEN_AUTHORITY (66 events), 5 baseline runs vs 8 candidate runs:

| metric | baseline p50 (ms) | candidate p50 (ms) | Δ | 95% CI |
|---|---:|---:|---:|---|
| totalMs | 1253.6 | 967.0 | -22.9% | [-28.8%, -2.7%] 🟢 |
| metadataDiff | 138.3 | 42.40 | -69.4% | [-73.4%, -55.8%] 🟢 |
| handleTxFirstBlock | 26.79 | 15.47 | -42.3% | [-45.4%, -25.8%] 🟢 |
| handleVertexAccepted | 490.6 | 370.0 | -24.6% | [-31.3%, +2.6%] ⚪ (directional) |

The handlers that did only 1–2 queries each show the largest relative gain — connection acquire was a dominant fraction of their total cost. handleVertexAccepted does more DB work so the relative effect is smaller, but directionally improved.

These numbers are local (not CI), so absolute values aren't apples-to-apples with the CI runner — but the direction and magnitude are consistent with issue #395's estimate of "~10–50ms savings per event, depending on RDS latency and TLS."

Checklist

  • If you are requesting a merge into master, confirm this code is production-ready and can be included in future releases as soon as it gets merged
  • Make sure either the unit tests and/or the QA tests are capable of testing the new features (integration test suite covers these code paths — behavior is unchanged, only the connection lifecycle is different)
  • Make sure you do not include new dependencies in the project unless strictly necessary and do not include dev-dependencies as production ones. (no new deps)

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Bug Fixes

    • Connections are now reliably returned to the pool and cleaned up on errors, preventing resource leaks and improving event processing reliability.
  • Refactor

    • Transaction and error handling for event processing reorganized so rollbacks are isolated and connections are always released back to the pool.
  • Tests

    • Unit tests updated to match the new pooled connection lifecycle and to cover rollback/error scenarios.

@coderabbitai

coderabbitai Bot commented Apr 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Refactored daemon DB handling to use pooled connections: types updated to PoolConnection, handlers acquire and release pooled connections, transaction flow isolated in an inner try/catch with rollback handling, and tests updated to expect release() and to use adjusted rollback-triggering fixtures.

Changes

| Cohort / File(s) | Summary |
|---|---|
| DB helper: packages/daemon/src/db/index.ts | Return type changed from Promise<MysqlConnection> to Promise<PoolConnection>; added PoolConnection import; runtime still returns pool.getConnection(). |
| Service handlers: packages/daemon/src/services/index.ts | Handlers now use pooled PoolConnection objects and release them back to the pool instead of destroying them. |
| Tests: packages/daemon/__tests__/services/services.test.ts | Mocks and assertions switched from destroy to release; adjusted a handleVertexAccepted rollback test to throw from a later DB call and updated event fixture data shape to reach the new rollback path. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Event
  participant Service as "Daemon Service\n(handler)"
  participant Pool as "DB Pool\n(getDbConnection)"
  participant DB as "PoolConnection\n/ Database"

  Event->>Service: deliver event (e.g., vertexAccepted)
  Service->>Pool: getDbConnection()
  Pool-->>Service: PoolConnection (mysql)
  Service->>DB: optional read-only checks (e.g., getTransactionById)
  alt needs transaction
    Service->>DB: beginTransaction()
    Service->>DB: transactional queries/updates
    alt success
      Service->>DB: commit()
    else error
      Service->>DB: rollback()
      Note right of DB: if rollback fails, connection destroyed and error rethrown
    end
  end
  Service->>DB: release()
  DB-->>Service: released
  Service-->>Event: handler done / rethrow on error
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • pedroferreira1
  • luislhl

Poem

🐰 I hopped to the pool and left destroy behind,
I begin, I commit, or a gentle rollback I find,
I release what I borrow, then bound off on my way,
Soft paws tidy state, and connections saved for play.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 inconclusive)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Linked Issues check | ❓ Inconclusive | The PR is stacked on #398 and addresses daemon connection pooling performance, but linked issue #1 is too broad (initial implementation) to validate specific requirements. | Clarify which issue(s) define the specific requirements for this pooled connection lifecycle refactor, or confirm #398 is the primary reference. |
✅ Passed checks (4 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately describes the main change: replacing mysql.destroy() with mysql.release() for pooled connection lifecycle management. |
| Out of Scope Changes check | ✅ Passed | All changes are scoped to connection lifecycle management: replacing destroy() with release(), updating return types, and adding rollback error handling. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |


@andreabadesso andreabadesso self-assigned this Apr 17, 2026
@andreabadesso andreabadesso added the bug Something isn't working label Apr 17, 2026
@andreabadesso andreabadesso added enhancement New feature or request and removed bug Something isn't working labels Apr 17, 2026
@github-actions

github-actions Bot commented Apr 17, 2026

```
➤ YN0000: · Yarn 4.7.0
➤ YN0000: ┌ Resolution step
➤ YN0085: │ + ts-node@npm:10.9.2, @cspotcode/source-map-support@npm:0.8.1, @jridgewell/resolve-uri@npm:3.1.2, @jridgewell/sourcemap-codec@npm:1.5.5, and 13 more.
➤ YN0000: └ Completed
➤ YN0000: ┌ Post-resolution validation
➤ YN0002: │ root-workspace-0b6124@workspace:. doesn't provide @types/node (p3fc06), requested by ts-node.
➤ YN0002: │ root-workspace-0b6124@workspace:. doesn't provide typescript (p98e8b), requested by ts-node.
➤ YN0086: │ Some peer dependencies are incorrectly met by your project; run yarn explain peer-requirements <hash> for details, where <hash> is the six-letter p-prefixed code.
➤ YN0000: └ Completed
➤ YN0000: ┌ Fetch step
➤ YN0013: │ 6 packages were added to the project (+ 410.36 KiB).
➤ YN0000: └ Completed in 0s 239ms
➤ YN0000: ┌ Link step
➤ YN0000: └ Completed
➤ YN0000: · Done with warnings in 0s 534ms
```

Sync benchmark comparison

Scenario: VOIDED_TOKEN_AUTHORITY (66 events)
Runs: baseline=5 (label: baseline), candidate=5 (label: candidate), warmup=1/1
Bootstrap samples: 10000, seed: 42
Verdict: 🟢 3 improvements · 🔴 2 regressions · ⚪ 14 noise · ⚠️ 0 skipped

| metric | baseline p50 (ms) | candidate p50 (ms) | Δ | 95% CI |
|---|---:|---:|---:|---|
| totalMs | 840.7 | 683.8 | -18.7% | [-25.8%, -8.2%] 🟢 |
| addOrUpdateTx | 8.702 | 9.877 | +13.5% | [+2.3%, +17.7%] 🔴 |
| addUtxos | 8.767 | 9.140 | +4.3% | [-4.3%, +12.4%] |
| clearTxProposalForVoidedTx | 0.365 | 0.355 | -2.6% | [-15.4%, +6.9%] |
| getAddressWalletInfo | 9.424 | 9.580 | +1.6% | [-6.2%, +9.2%] |
| getTransactionById | 21.77 | 24.57 | +12.9% | [-6.0%, +34.2%] |
| getTxOutputsFromTx | 1.522 | 1.642 | +7.9% | [-15.3%, +107.3%] |
| handleTxFirstBlock | 11.25 | 6.968 | -38.1% | [-47.5%, -35.3%] 🟢 |
| handleVertexAccepted | 194.8 | 161.8 | -17.0% | [-23.7%, +6.8%] |
| handleVoidedTx | 34.69 | 30.27 | -12.7% | [-27.5%, +10.6%] |
| markUtxosAsVoided | 1.465 | 1.550 | +5.8% | [-59.4%, +94.0%] |
| metadataDiff | 54.62 | 18.68 | -65.8% | [-67.4%, -56.7%] 🟢 |
| unspendInputs | 0.750 | 0.781 | +4.1% | [-14.0%, +12.7%] |
| updateAddressTablesWithTx | 29.61 | 31.76 | +7.3% | [-7.5%, +15.1%] |
| updateTxOutputSpentBy | 1.287 | 1.397 | +8.5% | [-1.8%, +14.8%] |
| updateWalletTablesWithTx | 0.257 | 0.222 | -13.6% | [-21.4%, +24.0%] |
| voidAddressTransaction | 9.302 | 9.802 | +5.4% | [-21.6%, +29.0%] |
| voidTransaction | 1.351 | 1.715 | +26.9% | [+5.0%, +138.9%] 🔴 |
| voidWalletTransaction | 1.397 | 1.441 | +3.1% | [-4.1%, +12.9%] |

🟢/🔴 mean the 95% CI is fully on one side of 0. ⚪ means the CI crosses 0 — the difference is indistinguishable from noise at this run count. This report is informational only; CI runner variance makes hard gates unreliable at the run counts we can afford in CI.

andreabadesso and others added 2 commits April 20, 2026 12:25
Tier 1 fix #1 from #395. Every handler in services/index.ts ended with
mysql.destroy() in its finally block, which closes the underlying TCP
socket and defeats the pool — every subsequent call paid a fresh TCP +
MySQL auth handshake. Effectively the connection pool size was 0.

Replaced all 11 occurrences with mysql.release() so connections return
to the pool. Also corrected the getDbConnection() return type to
PoolConnection (from its alias Connection) so .release() is available
without casts.

Measured against VOIDED_TOKEN_AUTHORITY (66 events, 5 baseline runs vs
8 candidate runs):

| metric | baseline p50 | candidate p50 | Δ | 95% CI |
|---|---:|---:|---:|---|
| totalMs | 1253.6 | 967.0 | -22.9% | [-28.8%, -2.7%] 🟢 |
| metadataDiff | 138.3 | 42.40 | -69.4% | [-73.4%, -55.8%] 🟢 |
| handleTxFirstBlock | 26.79 | 15.47 | -42.3% | [-45.4%, -25.8%] 🟢 |

handleVertexAccepted directionally improved (-24.6%) but CI crosses 0
at this run count. The small handlers that do only 1–2 queries show
the largest relative gain — connection acquire was previously a
dominant fraction of their cost.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
services.test.ts was mocking getDbConnection to return objects with a
destroy jest.fn(), and asserting destroy was called. After the pool
lifecycle fix (handlers now call release() instead of destroy()), those
mocks no longer match and all 44 service tests fail with
"mysql.release is not a function".

Update the mocks and assertions to track release() calls instead.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@andreabadesso andreabadesso force-pushed the perf/daemon-pool-release branch from 2fa0ffe to 5b4cdf3 Compare April 20, 2026 15:26
@andreabadesso andreabadesso changed the base branch from feat/daemon-bench-ci to master April 20, 2026 15:26
@andreabadesso andreabadesso moved this from Todo to In Progress (Done) in Hathor Network Apr 20, 2026

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
packages/daemon/__tests__/services/services.test.ts (1)

1764-1855: ⚠️ Potential issue | 🟡 Minor

Assert release() in the handleNcExecVoided tests too.

This suite now mocks release, but none of these paths verify it, so a cleanup regression in this handler could slip through.

🧪 Proposed test assertions
@@
     expect(addOrUpdateTx).not.toHaveBeenCalled();
     expect(mockDb.commit).toHaveBeenCalled();
+    expect(mockDb.release).toHaveBeenCalled();
@@
     expect(deleteTokens).toHaveBeenCalledWith(mockDb, [nanoToken1, nanoToken2]);
     expect(addOrUpdateTx).not.toHaveBeenCalled();
     expect(mockDb.commit).toHaveBeenCalled();
+    expect(mockDb.release).toHaveBeenCalled();
@@
     await handleNcExecVoided(context as any);
 
     expect(deleteTokens).not.toHaveBeenCalled();
     expect(mockDb.commit).toHaveBeenCalled();
+    expect(mockDb.release).toHaveBeenCalled();
@@
     await handleNcExecVoided(context as any);
 
     expect(deleteTokens).toHaveBeenCalledWith(mockDb, [nanoToken]);
     expect(mockDb.commit).toHaveBeenCalled();
+    expect(mockDb.release).toHaveBeenCalled();
@@
     expect(mockDb.rollback).toHaveBeenCalled();
     expect(mockDb.commit).not.toHaveBeenCalled();
     expect(logger.error).toHaveBeenCalledWith('handleNcExecVoided error: ', error);
+    expect(mockDb.release).toHaveBeenCalled();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/daemon/__tests__/services/services.test.ts` around lines 1764 -
1855, The tests for handleNcExecVoided currently mock mockDb.release but never
assert it; update each spec to assert resource cleanup by adding
expect(mockDb.release).toHaveBeenCalled() alongside the existing commit/rollback
assertions (e.g., in the "should not delete any tokens...", "should delete only
nano-created tokens...", "should NOT delete traditional...", "should delete nano
tokens...", and "should rollback on error and rethrow" tests); ensure the error
test asserts rollback and then expect(mockDb.release).toHaveBeenCalled() as well
so release is verified in both success and failure paths.
packages/daemon/src/services/index.ts (2)

342-356: ⚠️ Potential issue | 🔴 Critical

Add rollback before early returns in handleVertexAccepted.

Lines 349 and 356 return early after beginTransaction() without calling commit or rollback. The finally block at line 588 calls release(), which returns an active transaction to the pool instead of closing it properly.

Proposed fix
       if (dbTx) {
         const daemonUptime = getDaemonUptime();
         // We do not log if the daemon has just started, because it's expected that
         // we receive an initial duplicate transaction from the fullnode in this case.
         if (daemonUptime < DUPLICATE_TX_ALERT_GRACE_PERIOD) {
+          await mysql.rollback();
           return;
         }
 
         logger.error(`Transaction ${hash} already in the database and the daemon has not been recently restarted (uptime of ${daemonUptime} seconds). This is unexpected.`);
 
         // This might happen if the service has been recently restarted,
         // so we should raise the alert and just ignore the tx
+        await mysql.rollback();
         return;
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/daemon/src/services/index.ts` around lines 342 - 356, The early
returns in handleVertexAccepted after starting a DB transaction (when calling
beginTransaction and then checking dbTx from getTransactionById) can leave an
active transaction open; update the control flow so that before any early return
(e.g., the daemon uptime grace-period check and the duplicate-tx path where
logger.error is called) you explicitly call rollback() on the active
DbTransaction and then release() it (or ensure the finally block sees a closed
transaction) — locate the transaction created by beginTransaction in
handleVertexAccepted and add rollback calls for both the "daemonUptime <
DUPLICATE_TX_ALERT_GRACE_PERIOD" path and the duplicate transaction path to
ensure transactions are not returned to the pool as active.

968-990: ⚠️ Potential issue | 🟠 Major

Release these pooled connections in finally blocks.

If getLastSyncedEvent throws in updateLastSyncedEvent, the connection is never released. The same applies to dbUpdateLastSyncedEvent and the context validation. In fetchInitialState, if either getLastSyncedEvent or fetchMinRewardBlocks throws, the connection is leaked.

Wrap the function bodies in try-finally to ensure mysql.release() is always called, matching the pattern used elsewhere in this file (e.g., handleVoidedTx).

Proposed fix
 export const updateLastSyncedEvent = async (context: Context) => {
   const mysql = await getDbConnection();
 
-  const lastDbSyncedEvent: LastSyncedEvent | null = await getLastSyncedEvent(mysql);
+  try {
+    const lastDbSyncedEvent: LastSyncedEvent | null = await getLastSyncedEvent(mysql);
 
-  if (!context.event) {
-    throw new Error('Tried to update last synced event but no event in context');
-  }
+    if (!context.event) {
+      throw new Error('Tried to update last synced event but no event in context');
+    }
 
-  const lastEventId = context.event.event.id;
+    const lastEventId = context.event.event.id;
 
-  if (lastDbSyncedEvent
-    && lastDbSyncedEvent.last_event_id > lastEventId) {
-    logger.error('Tried to store an event lower than the one on the database', {
-      lastEventId,
-      lastDbSyncedEvent: JSONBigInt.stringify(lastDbSyncedEvent),
-    });
-    mysql.release();
-    throw new Error('Event lower than stored one.');
-  }
-  await dbUpdateLastSyncedEvent(mysql, lastEventId);
-
-  mysql.release();
+    if (lastDbSyncedEvent
+      && lastDbSyncedEvent.last_event_id > lastEventId) {
+      logger.error('Tried to store an event lower than the one on the database', {
+        lastEventId,
+        lastDbSyncedEvent: JSONBigInt.stringify(lastDbSyncedEvent),
+      });
+      throw new Error('Event lower than stored one.');
+    }
+    await dbUpdateLastSyncedEvent(mysql, lastEventId);
+  } finally {
+    mysql.release();
+  }
 };
 
 export const fetchInitialState = async () => {
   const mysql = await getDbConnection();
-  const lastEvent = await getLastSyncedEvent(mysql);
+  let lastEvent: LastSyncedEvent | null;
+  try {
+    lastEvent = await getLastSyncedEvent(mysql);
+  } finally {
+    mysql.release();
+  }
+
   const rewardMinBlocks = await fetchMinRewardBlocks();
-
-  mysql.release();
 
   return {
     lastEventId: lastEvent?.last_event_id,
     rewardMinBlocks,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/daemon/src/services/index.ts` around lines 968 - 990, The connection
acquired via getDbConnection() in updateLastSyncedEvent (and similarly in
fetchInitialState) must be released even if any subsequent call throws; wrap the
logic after obtaining mysql in a try...finally and call mysql.release() in the
finally block so getLastSyncedEvent, dbUpdateLastSyncedEvent, context
validation, or fetchMinRewardBlocks cannot leak the pooled connection—follow the
same pattern used in handleVoidedTx by moving all work into try and placing
mysql.release() in finally while keeping existing error handling and rethrows
intact.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: afabfc9d-34ea-427f-a83d-bc5d26db33bf

📥 Commits

Reviewing files that changed from the base of the PR and between 179abcd and 5b4cdf3.

📒 Files selected for processing (3)
  • packages/daemon/__tests__/services/services.test.ts
  • packages/daemon/src/db/index.ts
  • packages/daemon/src/services/index.ts

pedroferreira1
pedroferreira1 previously approved these changes Apr 20, 2026
});
};

export const updateLastSyncedEvent = async (context: Context) => {
Member


updateLastSyncedEvent and fetchInitialState — connection leak on throw

  // updateLastSyncedEvent, lines 968-991
  const mysql = await getDbConnection();
  const lastDbSyncedEvent = await getLastSyncedEvent(mysql);   // ← throws? leak
  // ...
  await dbUpdateLastSyncedEvent(mysql, lastEventId);           // ← throws? leak
  mysql.release();                                             // ← never runs

  // fetchInitialState, lines 1010-1021
  const mysql = await getDbConnection();
  const lastEvent = await getLastSyncedEvent(mysql);           // ← throws? leak
  const rewardMinBlocks = await fetchMinRewardBlocks();        // ← throws? leak (also holds a DB conn during an HTTP call)
  mysql.release();                                             // ← never runs

No transaction state concern here (no beginTransaction), but under pooling, leaking an unreleased connection is strictly worse than leaking a destroyed one — the pool counts it as checked-out forever. Enough of these and the pool blocks on getDbConnection().

* @param context - The context containing the event and other metadata
* @param _event - The event being processed (unused, context.event is used instead)
*/
export const handleVertexAccepted = async (context: Context, _event: Event) => {
Member


handleVertexAccepted — transaction leak on two paths                                                                                                                                         
                                                                                                                                                                                               
  Source at packages/daemon/src/services/index.ts:299-592:                                                                                                                                     
                                                                                                                                                                                               
  const mysql = await getDbConnection();                                                                                                                                                       
  await mysql.beginTransaction();   // ← line 302, BEGIN issued                                                                                                                                
  const { NETWORK, ... } = getConfig();                                                                                                                                                        
                                                                                                                                                                                               
  try {                                                                                                                                                                                        
    // ...                                                                                                                                                                                     
    if (dbTx) {                                                                                                                                                                                
      const daemonUptime = getDaemonUptime();                                                                                                                                                  
      if (daemonUptime < DUPLICATE_TX_ALERT_GRACE_PERIOD) {                                                                                                                                    
        return;                     // ← line 349, returns with open transaction                                                                                                               
      }                                                                                                                                                                                        
      logger.error(...);                                                                                                                                                                       
      return;                       // ← line 356, returns with open transaction                                                                                                               
    }                                                                                                                                                                                          
    // ...                                                                                                                                                                                     
    await mysql.commit();                                                                                                                                                                      
  } catch (e) {                                                                                                                                                                                
    await mysql.rollback();                                                                                                                                                                    
    throw e;                                                                                                                                                                                   
  } finally {                                                                                                                                                                                  
    mysql.destroy();                // ← after PR: mysql.release()                                                                                                                             
  }                                                                                                                                                                                            
                                                                                                                                                                                               
  Both return statements at lines 349 and 356 skip commit/rollback entirely. Under destroy() the socket is killed and the transaction dies with it — safe but wasteful. Under release() the connection is put back in the pool with the BEGIN still active. The next handler to check out that connection inherits the open transaction. This is a real correctness bug introduced by the PR.
                                                                                                                                                                                               
  Bonus fragility: beginTransaction() is outside the try block (line 302 vs try at 310). If the BEGIN itself succeeds but something throws before entering try (unlikely with getConfig(), but 
  still), the connection leaks with no finally at all.
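
The contamination mechanism is easy to reproduce with a toy pool. This is an illustrative model, not mysql2's API — the ToyConnection/ToyPool shapes exist only to show why a pool hands the dirty connection to the next caller:

```typescript
// Toy model of a connection pool showing why release() with an open
// BEGIN contaminates the next checkout. Not mysql2's actual API.
interface ToyConnection {
  inTransaction: boolean;
}

class ToyPool {
  private idle: ToyConnection[] = [];

  getConnection(): ToyConnection {
    // Reuse an idle connection if available, else create a fresh one.
    return this.idle.pop() ?? { inTransaction: false };
  }

  release(conn: ToyConnection): void {
    // Pools do not roll back for you: the connection goes back as-is.
    this.idle.push(conn);
  }
}

const pool = new ToyPool();

// Handler 1: begins a transaction, then hits an early return path
// that skips commit/rollback before the finally releases.
const conn1 = pool.getConnection();
conn1.inTransaction = true; // BEGIN
pool.release(conn1);        // early return → finally → release()

// Handler 2 checks out the same pooled connection and inherits the BEGIN.
const conn2 = pool.getConnection();
const inheritedOpenTransaction = conn2.inTransaction; // true — contaminated
```

Under destroy() the equivalent of `pool.release(conn1)` never happens — the socket dies and the next checkout gets a fresh connection, which is why the bug only surfaces with the switch to release().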


Copilot AI left a comment


Pull request overview

This PR fixes a performance bug in the daemon by returning MySQL pooled connections back to the pool (instead of destroying the underlying socket), enabling real connection reuse across handlers.

Changes:

  • Replaced mysql.destroy() with mysql.release() across service handlers to preserve connection pooling.
  • Updated getDbConnection() to return PoolConnection so .release() is available without casts.
  • Updated service tests to assert .release() is called instead of .destroy().

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.

File Description
packages/daemon/src/services/index.ts Switches handlers to release() pooled connections instead of destroying sockets.
packages/daemon/src/db/index.ts Changes getDbConnection() return type to PoolConnection.
packages/daemon/__tests__/services/services.test.ts Updates mocks/expectations from destroy() to release().


andreabadesso and others added 2 commits April 21, 2026 18:15
With pooled connections, leaving a transaction open when release() is
called returns the connection to the pool with BEGIN still active. The
next handler that checks it out inherits the open transaction — a real
correctness hazard introduced by the destroy() → release() switch.

Wrap each handler in an outer try/finally that owns release(), and an
inner try/catch around the transaction body that guarantees
rollback() on error (with a nested try/catch so a rollback failure
doesn't mask the original throw). Moves beginTransaction() inside the
try so a failure there still reaches the finally.

handleVertexAccepted: the duplicate-tx check is now performed before
beginTransaction() — it's a read-only lookup, and keeping it outside
the transaction means its two early returns can no longer leave a
BEGIN open on a pooled connection.

updateLastSyncedEvent and fetchInitialState previously released only
on the happy path; both now use try/finally.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
… duplicate check

The duplicate-tx check now runs before beginTransaction(), so throwing
from getTransactionById no longer exercises the rollback path. Throw
from getLockedUtxoFromInputs instead (first DB call inside the
transaction for a non-block tx) and supply the metadata fields that
the handler now reads before the first DB call.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@andreabadesso
Collaborator Author

andreabadesso commented Apr 21, 2026

@pedroferreira1 @luislhl addressed in 5adc8dd + 0e7edad.

Each transactional handler is now wrapped in let mysql: PoolConnection | undefined + outer try/finally that owns release(), with an inner try/catch around the transaction body that guarantees rollback() on error (rollback itself wrapped in try/catch so a failure there can't mask the original throw). beginTransaction() moved inside the try so a failure there still hits the finally.

For handleVertexAccepted specifically: the duplicate-tx check is now done before beginTransaction() — it's a read-only lookup, and keeping it outside the transaction means the two early returns can no longer leave a BEGIN open on a pooled connection.

updateLastSyncedEvent and fetchInitialState now also use try/finally so release() runs on all paths (removed the manual release at the former line 985 to avoid a double-release).
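
The lifecycle shape described above can be sketched with a stubbed connection. The `Conn` interface and `runInTransaction` helper are illustrative stand-ins, not the PR's actual code or mysql2's types:

```typescript
// Sketch of: outer try/finally owns release(), inner try/catch owns
// rollback(), BEGIN lives inside the try. Conn is an assumed shape.
interface Conn {
  beginTransaction(): Promise<void>;
  commit(): Promise<void>;
  rollback(): Promise<void>;
  release(): void;
}

async function runInTransaction(
  getConn: () => Promise<Conn>,
  body: (conn: Conn) => Promise<void>,
): Promise<void> {
  let conn: Conn | undefined;
  try {
    conn = await getConn();
    await conn.beginTransaction(); // inside try: a failed BEGIN still reaches finally
    try {
      await body(conn);
      await conn.commit();
    } catch (e) {
      try {
        await conn.rollback();
      } catch {
        // swallow so the original error is the one thrown
      }
      throw e;
    }
  } finally {
    if (conn) conn.release(); // outer finally owns release(): runs on every path
  }
}

// Recording stub: the body throws, so commit is skipped but
// rollback and release still run, and the original error escapes.
const calls: string[] = [];
const stub: Conn = {
  beginTransaction: async () => { calls.push('begin'); },
  commit: async () => { calls.push('commit'); },
  rollback: async () => { calls.push('rollback'); },
  release: () => { calls.push('release'); },
};

let caught = '';
try {
  await runInTransaction(async () => stub, async () => { throw new Error('boom'); });
} catch (e) {
  caught = (e as Error).message;
}
```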


@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
packages/daemon/src/services/index.ts (1)

584-594: ⚠️ Potential issue | 🟠 Major

Swallowed rollback failure leaves an open transaction in the pool.

When rollback() fails in the inner catch block, the error is silently swallowed and the outer finally calls mysql.release(), returning the connection to the pool with an open BEGIN transaction still active. The next handler to obtain that connection inherits the pending transaction — a pool contamination risk. mysql2/promise does not auto-rollback on release().

Mark the connection as unusable when rollback fails so the outer finally doesn't return it to the pool. The same pattern appears in handleVertexRemoved (643), handleVoidedTx (837), handleUnvoidedTx (872), handleTxFirstBlock (920), handleNcExecVoided (975), and handleTokenCreated (1180).

🛡️ Proposed pattern
       } catch (e) {
-        try { await mysql.rollback(); } catch { /* swallow rollback error so the original is thrown */ }
+        try {
+          await mysql.rollback();
+        } catch (rollbackErr) {
+          // Rollback failed — connection may still have an open transaction.
+          // Destroy it so it isn't returned to the pool in a dirty state.
+          logger.error('Rollback failed; destroying connection to avoid pool contamination', rollbackErr);
+          try { await mysql.destroy(); } catch { /* ignore */ }
+          mysql = undefined;
+        }
         span.setStatus({ code: SpanStatusCode.ERROR, message: String(e) });
         span.recordException(e as Error);
         logger.error('Error handling vertex accepted', e);

         throw e;
       }
     } finally {
       if (mysql) mysql.release();
       span.end();
     }
🧹 Nitpick comments (1)
packages/daemon/src/services/index.ts (1)

1032-1046: fetchInitialState holds a pooled DB connection across an HTTP round-trip.

fetchMinRewardBlocks() performs an axios.get to the fullnode /version endpoint while the pooled connection is still checked out. Under the old destroy() model this was cheap because each call spun up its own socket; under pool semantics it pins one of a small number of connections for the duration of the HTTP request, which can starve other handlers if the fullnode is slow.

Release the DB connection before making the HTTP call:

♻️ Proposed refactor
 export const fetchInitialState = async () => {
-  let mysql: PoolConnection | undefined;
-  try {
-    mysql = await getDbConnection();
-    const lastEvent = await getLastSyncedEvent(mysql);
-    const rewardMinBlocks = await fetchMinRewardBlocks();
-
-    return {
-      lastEventId: lastEvent?.last_event_id,
-      rewardMinBlocks,
-    };
-  } finally {
-    if (mysql) mysql.release();
-  }
+  let lastEvent: LastSyncedEvent | null;
+  const mysql = await getDbConnection();
+  try {
+    lastEvent = await getLastSyncedEvent(mysql);
+  } finally {
+    mysql.release();
+  }
+  const rewardMinBlocks = await fetchMinRewardBlocks();
+  return {
+    lastEventId: lastEvent?.last_event_id,
+    rewardMinBlocks,
+  };
 };

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: eff0350c-4aac-4921-9e29-08790b8d5aaf

📥 Commits

Reviewing files that changed from the base of the PR and between 5b4cdf3 and 0e7edad.

📒 Files selected for processing (2)
  • packages/daemon/__tests__/services/services.test.ts
  • packages/daemon/src/services/index.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/daemon/__tests__/services/services.test.ts

@andreabadesso andreabadesso moved this from In Progress (Done) to In Review (WIP) in Hathor Network Apr 22, 2026
…HTTP in fetchInitialState

Two correctness items from CodeRabbit's review of PR #399:

- Swallowed rollback failure could pool-contaminate. If `mysql.rollback()`
  throws, the previous code swallowed the error and the outer `finally`
  still called `release()`, returning the connection to the pool with an
  open `BEGIN` still active. The next handler to check it out would
  inherit the pending transaction. Now when rollback throws we log,
  `destroy()` the connection, and null out the handle so `finally`
  skips `release()`. Same pattern applied to all 7 transactional
  handlers.

- `fetchInitialState` held the pooled connection across the axios call
  to the fullnode `/version` endpoint. Under `destroy()` that didn't
  matter (each call had its own socket), but under pool semantics a
  slow fullnode response pins a pool slot. Release the connection
  after `getLastSyncedEvent` and before `fetchMinRewardBlocks` so the
  HTTP call doesn't hold a slot.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@andreabadesso
Collaborator Author

Addressed the CodeRabbit findings from that review in 2f10685:

Swallowed rollback failure (applied to all 7 transactional handlers). When mysql.rollback() itself throws we now log, destroy() the connection, and null out the handle so the outer finally skips release(). A connection with a pending BEGIN can no longer re-enter the pool and contaminate the next caller.

fetchInitialState holding a pool slot across HTTP. Restructured to getDbConnection → getLastSyncedEvent → release before fetchMinRewardBlocks. The axios call to the fullnode no longer pins a pool slot.

Type check clean, 59/59 services tests pass.
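
The "destroy on rollback failure" rule can be sketched with a stub whose rollback always fails. The `DirtyConn` shape and `rollbackOrDestroy` helper are illustrative, not the PR's actual code:

```typescript
interface DirtyConn {
  rollback(): Promise<void>;
  destroy(): void;
  release(): void;
}

// If rollback() itself throws, the connection may still hold an open
// BEGIN, so it must never re-enter the pool: destroy it and return
// undefined so the caller's finally skips release().
async function rollbackOrDestroy(conn: DirtyConn): Promise<DirtyConn | undefined> {
  try {
    await conn.rollback();
    return conn; // clean: safe for the outer finally to release()
  } catch {
    conn.destroy();
    return undefined;
  }
}

const events: string[] = [];
const failing: DirtyConn = {
  rollback: async () => { throw new Error('rollback failed'); },
  destroy: () => { events.push('destroy'); },
  release: () => { events.push('release'); },
};

const handle: DirtyConn | undefined = await rollbackOrDestroy(failing);
// Equivalent of the outer finally:
if (handle) handle.release();
```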

luislhl previously approved these changes Apr 23, 2026
@andreabadesso andreabadesso moved this from In Review (WIP) to In Review (Done) in Hathor Network Apr 23, 2026
…ation test

da65de9 swapped `!maxAmongAddresses || !maxWalletIndex` for an explicit
`== null` check per @Copilot's suggestion. Correct in principle — index
0 is a legitimate first-derived address — but it opens up a code path
that was previously dead in the test fixture and causes
`balances.test.ts > VOIDED_TOKEN_AUTHORITY > full sync` to time out at
30s.

The correctness angle is real but out of scope for PR #399 (which only
touches the connection lifecycle). Reverting just this hunk; the other
changes in da65de9 (outer span error catch) and 2f10685 (rollback
destroy, fetchInitialState release-before-HTTP) stay.

Copilot's finding will be followed up separately with proper coverage
for the 0-index case.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
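
The difference between the two guards comes down to how each treats index 0. Function names below are illustrative, not the daemon's actual helpers:

```typescript
// The old guard used falsiness; the fixed guard checks for null/undefined.
function skipsGapDiscoveryFalsy(maxWalletIndex: number | null): boolean {
  return !maxWalletIndex; // 0 is falsy, so a wallet at index 0 is skipped
}

function skipsGapDiscoveryNullCheck(maxWalletIndex: number | null): boolean {
  return maxWalletIndex == null; // only null/undefined mean "not loaded"
}
```

With `!value`, a wallet whose highest derived index is 0 looks identical to one that was never loaded, which is exactly the bug Copilot flagged.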
Comment thread packages/daemon/src/services/index.ts Outdated
// Release the pooled DB connection before the HTTP call to the fullnode
// so a slow fullnode response doesn't pin a pool slot while we wait.
let lastEvent: LastSyncedEvent | null;
const mysql = await getDbConnection();
Member


We should keep it inside the try block

Collaborator Author


Done in aa600eb — acquire and HTTP call both back inside the single try, release in finally. I'd split it off earlier addressing a CodeRabbit nitpick about holding a pool slot during the axios call, but you're right that the complexity isn't worth it here (fetchInitialState runs once at startup).

Comment thread packages/daemon/src/services/index.ts Outdated
mysql.release();
}

const rewardMinBlocks = await fetchMinRewardBlocks();
Member


Why did you remove this from the try block?

Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Restored in aa600eb — it's back inside the single try. I'd moved it out to release the pool connection before the axios call (CodeRabbit nitpick about holding a slot during HTTP), but agreed that's not worth the shape change for a once-per-startup function.

andreabadesso and others added 4 commits April 23, 2026 17:00
…on timeout

Reverting the earlier revert. Copilot's finding is correct: the old
`!maxAmongAddresses || !maxWalletIndex` guard silently skipped gap
discovery for any wallet whose highest touched/derived address index
was `0` — a valid state (first derived address), not "missing."

Investigation into why this broke `balances.test.ts > VOIDED_TOKEN_AUTHORITY`:

- The test fixture seeds `cafecafe` with a single address at index 0
  and `deafbeef` with addresses at 0/1/2.
- Old code: every tx touching those wallets hit the `!value` guard
  (treating 0 as "not loaded") and skipped gap discovery entirely.
- New code: correctly recognizes these as valid states and fires gap
  discovery, which derives up to maxGap (20) addresses via pure-JS ECC
  per affected wallet per tx.
- Cumulative effect across 66 events × multiple wallets blew past the
  30s test timeout — not a hang, just slower (correct) work.

Raised the timeout to 90s with an inline comment explaining why. The
underlying perf concern (amortize gap discovery instead of firing
per-tx) is pre-existing and orthogonal to #399; it'd just been masked
by the bug.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ngle try/finally

Per @pedroferreira1's review. The earlier split (release DB before the
axios call) was addressing a CodeRabbit nitpick about holding a pool
slot across HTTP, but fetchInitialState runs once at startup against a
warm pool — the complexity isn't worth the saving. Single try/finally
is cleaner.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Second investigation: the real reason `balances.test.ts >
VOIDED_TOKEN_AUTHORITY` fails under the `== null` check is that the
test fixture has two wallets (cafecafe, deafbeef) sharing the same
xpubkey. The `address` table's PRIMARY KEY is just `address`
(not `(address, wallet_id)`). When both wallets trigger gap discovery,
both derive the same addresses from the shared xpubkey, and the second
addNewAddresses INSERT hits `Duplicate entry ... for key
'address.PRIMARY'`. Handler rethrows, state machine retries, loop
until timeout.

Proper fix requires separate xpubkeys per fixture wallet and
re-deriving the seeded addresses, which is out of scope for #399
(connection lifecycle only). Reverting the null-check fix + timeout
bump. Copilot's finding is valid; will address in a follow-up with
proper fixture work.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
packages/daemon/src/services/index.ts (1)

1052-1077: Nit: validate context.event before acquiring a pool connection.

getLastSyncedEvent is executed before the !context.event guard, so an invalid context still costs a round-trip and briefly pins a pool slot. Not a correctness issue (the finally releases), but since this function is also called on the hot state-machine path, the check is cheap to move up.

♻️ Optional reordering
 export const updateLastSyncedEvent = async (context: Context) => {
+  if (!context.event) {
+    throw new Error('Tried to update last synced event but no event in context');
+  }
   let mysql: PoolConnection | undefined;
   try {
     mysql = await getDbConnection();

     const lastDbSyncedEvent: LastSyncedEvent | null = await getLastSyncedEvent(mysql);

-    if (!context.event) {
-      throw new Error('Tried to update last synced event but no event in context');
-    }
-
     const lastEventId = context.event.event.id;
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/daemon/src/services/index.ts`:
- Around line 858-859: The handleVoidedTx flow currently calls
dbUpdateLastSyncedEvent after await mysql.commit(), which differs from sibling
handlers and can cause replay-driven corruption; move the await
dbUpdateLastSyncedEvent(mysql, fullNodeEvent.event.id) call to occur inside the
transaction immediately before await mysql.commit() (i.e., same placement
pattern used by handleVertexAccepted, handleVertexRemoved, handleTxFirstBlock,
handleNcExecVoided, handleTokenCreated), ensure the surrounding try/catch still
rolls back the transaction on dbUpdateLastSyncedEvent failure, and keep
voidAddressTransaction/voidWalletTransaction invocation and commit semantics
unchanged except for the reordered sync-pointer update.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e041f7f9-3790-41ce-b592-0df48ddd3764

📥 Commits

Reviewing files that changed from the base of the PR and between da65de9 and 8845811.

📒 Files selected for processing (1)
  • packages/daemon/src/services/index.ts

Comment thread packages/daemon/src/services/index.ts Outdated
andreabadesso and others added 5 commits April 23, 2026 17:56
…tinct xpubs

Re-apply @Copilot's fix — `if (maxAmongAddresses == null || maxWalletIndex == null)` so
index 0 (a valid first-derived address) is no longer treated as "missing"
and silently skipping gap discovery.

Fixes the integration test fallout in the same commit:

The VOIDED_TOKEN_AUTHORITY fixture previously had both `cafecafe` and
`deafbeef` sharing a single xpubkey. The `address` table's primary key
is just `address` (not `(address, wallet_id)`), so when the corrected
check lets gap discovery run for both wallets, they both derive the
same addresses from the shared xpubkey, and the second INSERT hits
`Duplicate entry ... for key 'address.PRIMARY'`. The old `!value` guard
silently skipped gap discovery for these fixtures (their highest
touched/derived index was 0), which masked the duplicate xpubkey.

Switched `cafecafe` to the sibling xpub (path /7 of deafbeef's xpub)
so gap discovery derives a disjoint address set. The seeded address
strings (HFtz, HJQb, HRQe, HRXV) are unchanged — they're just fixture
pointers for the fullnode simulator's events and don't need to line up
with real xpub derivations.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…lease

# Conflicts:
#	packages/daemon/__tests__/services/services.test.ts
Every other handler calls dbUpdateLastSyncedEvent before mysql.commit()
so the pointer advance is atomic with the void. handleVoidedTx was the
odd one out — it committed first, then updated the pointer outside the
transaction. If dbUpdateLastSyncedEvent threw (transient connection,
deadlock), voidTx would already be committed but the state machine
would see the pointer unchanged and re-deliver the VOIDED event. voidTx
isn't fully idempotent — voidAddressTransaction / voidWalletTransaction
recompute balances from current UTXO state, so a second application
against an already-voided tx can corrupt balances.

Swap the two calls so dbUpdateLastSyncedEvent runs inside the tx,
before commit, matching every other handler.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -298,16 +298,16 @@ export function isNanoContract(headers: EventTxHeader[]) {
*/
export const handleVertexAccepted = async (context: Context, _event: Event) => {
Member


This is a 300 line function that runs on all new transactions, maybe we should break it into smaller sections to improve readability.

I understand this is not in the scope of this PR, but it made reviewing harder.
Maybe we should mark this in a comment so we get around to it later as a KTLO?

Collaborator Author


@github-project-automation github-project-automation Bot moved this from In Review (Done) to In Review (WIP) in Hathor Network Apr 24, 2026
@andreabadesso andreabadesso merged commit cd92d0f into master Apr 24, 2026
4 of 5 checks passed
@github-project-automation github-project-automation Bot moved this from In Review (WIP) to Waiting to be deployed in Hathor Network Apr 24, 2026
@andreabadesso andreabadesso deleted the perf/daemon-pool-release branch April 24, 2026 16:53
This was referenced May 4, 2026

Labels

enhancement New feature or request

Projects

Status: Waiting to be deployed

Development

Successfully merging this pull request may close these issues.

5 participants