perf(daemon): release pooled connections instead of destroying them #399
Conversation
Note: Reviews paused — it looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough: Refactored daemon DB handling to use pooled connections — `getDbConnection()` now returns `PoolConnection`, and handlers release connections back to the pool instead of destroying them.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Event
    participant Service as "Daemon Service\n(handler)"
    participant Pool as "DB Pool\n(getDbConnection)"
    participant DB as "PoolConnection\n/ Database"
    Event->>Service: deliver event (e.g., vertexAccepted)
    Service->>Pool: getDbConnection()
    Pool-->>Service: PoolConnection (mysql)
    Service->>DB: optional read-only checks (e.g., getTransactionById)
    alt needs transaction
        Service->>DB: beginTransaction()
        Service->>DB: transactional queries/updates
        alt success
            Service->>DB: commit()
        else error
            Service->>DB: rollback()
            Note right of DB: if rollback fails, connection destroyed and error rethrown
        end
    end
    Service->>DB: release()
    DB-->>Service: released
    Service-->>Event: handler done / rethrow on error
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks
❌ Failed checks (1 inconclusive)
✅ Passed checks (4 passed)
YN0000: · Yarn 4.7.0

Sync benchmark comparison

🟢/🔴 mean the 95% CI is fully on one side of 0. ⚪ means the CI crosses 0 — the difference is indistinguishable from noise at this run count. This report is informational only; CI runner variance makes hard gates unreliable at the run counts we can afford in CI.
Tier 1 fix #1 from #395. Every handler in services/index.ts ended with `mysql.destroy()` in its finally block, which closes the underlying TCP socket and defeats the pool — every subsequent call paid a fresh TCP + MySQL auth handshake. Effectively the connection pool size was 0. Replaced all 11 occurrences with `mysql.release()` so connections return to the pool. Also corrected the `getDbConnection()` return type to `PoolConnection` (from its alias `Connection`) so `.release()` is available without casts.

Measured against VOIDED_TOKEN_AUTHORITY (66 events, 5 baseline runs vs 8 candidate runs):

| metric | baseline p50 | candidate p50 | Δ | 95% CI |
|---|---:|---:|---:|---|
| totalMs | 1253.6 | 967.0 | -22.9% | [-28.8%, -2.7%] 🟢 |
| metadataDiff | 138.3 | 42.40 | -69.4% | [-73.4%, -55.8%] 🟢 |
| handleTxFirstBlock | 26.79 | 15.47 | -42.3% | [-45.4%, -25.8%] 🟢 |

handleVertexAccepted directionally improved (-24.6%) but the CI crosses 0 at this run count. The small handlers that do only 1–2 queries show the largest relative gain — connection acquire was previously a dominant fraction of their cost.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
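The destroy-vs-release distinction can be illustrated with a toy pool — a sketch only, with a hypothetical `ToyPool` class rather than the real mysql2 API: `destroy()` closes the socket so every acquire pays the cold-start cost, while `release()` returns the connection for reuse.

```typescript
// Toy model of pool semantics (NOT the mysql2 API): each "cold" acquire
// pays a handshake, while a released connection is reused for free.
type Conn = { id: number };

class ToyPool {
  private idle: Conn[] = [];
  public handshakes = 0; // counts fresh TCP + auth handshakes

  acquire(): Conn {
    const conn = this.idle.pop();
    if (conn) return conn; // reuse: no handshake
    this.handshakes += 1;  // cold: pay the handshake
    return { id: this.handshakes };
  }

  release(conn: Conn): void {
    this.idle.push(conn);  // connection goes back to the pool
  }

  destroy(_conn: Conn): void {
    // socket closed; nothing returns to the pool
  }
}

// 11 sequential handler calls, destroy() style: every call pays a handshake.
const destroyPool = new ToyPool();
for (let i = 0; i < 11; i++) destroyPool.destroy(destroyPool.acquire());

// Same 11 calls, release() style: only the first call pays a handshake.
const releasePool = new ToyPool();
for (let i = 0; i < 11; i++) releasePool.release(releasePool.acquire());
```

Under this model the destroy-style loop performs 11 handshakes and the release-style loop performs 1, which is the effect the benchmark numbers above reflect.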
services.test.ts was mocking getDbConnection to return objects with a destroy jest.fn(), and asserting destroy was called. After the pool lifecycle fix (handlers now call release() instead of destroy()), those mocks no longer match and all 44 service tests fail with "mysql.release is not a function". Update the mocks and assertions to track release() calls instead. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
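The mock shape change can be sketched without jest — `trackedFn` and `fakeHandler` below are hypothetical stand-ins for `jest.fn()` and the real handlers: the mocked connection now exposes `release()`, and tests count its calls instead of `destroy()`.

```typescript
// Plain-TS stand-in for jest.fn(): records every call's arguments.
type Tracked = { (...args: unknown[]): void; calls: unknown[][] };

function trackedFn(): Tracked {
  const calls: unknown[][] = [];
  const fn = ((...args: unknown[]) => { calls.push(args); }) as Tracked;
  fn.calls = calls;
  return fn;
}

// Before the fix the mock exposed destroy: trackedFn(); after it, release().
const mockDb = {
  beginTransaction: trackedFn(),
  commit: trackedFn(),
  rollback: trackedFn(),
  release: trackedFn(), // replaces destroy: trackedFn()
};

// A handler following the new lifecycle leaves exactly one release() call.
function fakeHandler(db: typeof mockDb): void {
  db.beginTransaction();
  try {
    db.commit();
  } finally {
    db.release();
  }
}

fakeHandler(mockDb);
```

In the real suite the equivalent assertion is `expect(mockDb.release).toHaveBeenCalled()` where `expect(mockDb.destroy)...` used to be.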
Force-pushed 2fa0ffe to 5b4cdf3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
packages/daemon/__tests__/services/services.test.ts (1)
1764-1855: ⚠️ Potential issue | 🟡 Minor — Assert `release()` in the `handleNcExecVoided` tests too.

This suite now mocks `release`, but none of these paths verify it, so a cleanup regression in this handler could slip through.

🧪 Proposed test assertions

```diff
@@
   expect(addOrUpdateTx).not.toHaveBeenCalled();
   expect(mockDb.commit).toHaveBeenCalled();
+  expect(mockDb.release).toHaveBeenCalled();
@@
   expect(deleteTokens).toHaveBeenCalledWith(mockDb, [nanoToken1, nanoToken2]);
   expect(addOrUpdateTx).not.toHaveBeenCalled();
   expect(mockDb.commit).toHaveBeenCalled();
+  expect(mockDb.release).toHaveBeenCalled();
@@
   await handleNcExecVoided(context as any);
   expect(deleteTokens).not.toHaveBeenCalled();
   expect(mockDb.commit).toHaveBeenCalled();
+  expect(mockDb.release).toHaveBeenCalled();
@@
   await handleNcExecVoided(context as any);
   expect(deleteTokens).toHaveBeenCalledWith(mockDb, [nanoToken]);
   expect(mockDb.commit).toHaveBeenCalled();
+  expect(mockDb.release).toHaveBeenCalled();
@@
   expect(mockDb.rollback).toHaveBeenCalled();
   expect(mockDb.commit).not.toHaveBeenCalled();
   expect(logger.error).toHaveBeenCalledWith('handleNcExecVoided error: ', error);
+  expect(mockDb.release).toHaveBeenCalled();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/daemon/__tests__/services/services.test.ts` around lines 1764-1855: the tests for handleNcExecVoided currently mock mockDb.release but never assert it; update each spec to assert resource cleanup by adding expect(mockDb.release).toHaveBeenCalled() alongside the existing commit/rollback assertions (e.g., in the "should not delete any tokens...", "should delete only nano-created tokens...", "should NOT delete traditional...", "should delete nano tokens...", and "should rollback on error and rethrow" tests); ensure the error test asserts rollback and then expect(mockDb.release).toHaveBeenCalled() as well so release is verified in both success and failure paths.

packages/daemon/src/services/index.ts (2)
342-356: ⚠️ Potential issue | 🔴 Critical — Add rollback before early returns in handleVertexAccepted.

Lines 349 and 356 return early after `beginTransaction()` without calling commit or rollback. The finally block at line 588 calls `release()`, which returns an active transaction to the pool instead of closing it properly.

Proposed fix

```diff
 if (dbTx) {
   const daemonUptime = getDaemonUptime();
   // We do not log if the daemon has just started, because it's expected that
   // we receive an initial duplicate transaction from the fullnode in this case.
   if (daemonUptime < DUPLICATE_TX_ALERT_GRACE_PERIOD) {
+    await mysql.rollback();
     return;
   }
   logger.error(`Transaction ${hash} already in the database and the daemon has not been recently restarted (uptime of ${daemonUptime} seconds). This is unexpected.`);
   // This might happen if the service has been recently restarted,
   // so we should raise the alert and just ignore the tx
+  await mysql.rollback();
   return;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/daemon/src/services/index.ts` around lines 342 - 356, The early returns in handleVertexAccepted after starting a DB transaction (when calling beginTransaction and then checking dbTx from getTransactionById) can leave an active transaction open; update the control flow so that before any early return (e.g., the daemon uptime grace-period check and the duplicate-tx path where logger.error is called) you explicitly call rollback() on the active DbTransaction and then release() it (or ensure the finally block sees a closed transaction) — locate the transaction created by beginTransaction in handleVertexAccepted and add rollback calls for both the "daemonUptime < DUPLICATE_TX_ALERT_GRACE_PERIOD" path and the duplicate transaction path to ensure transactions are not returned to the pool as active.
968-990: ⚠️ Potential issue | 🟠 Major — Release these pooled connections in `finally` blocks.

If `getLastSyncedEvent` throws in `updateLastSyncedEvent`, the connection is never released. The same applies to `dbUpdateLastSyncedEvent` and the context validation. In `fetchInitialState`, if either `getLastSyncedEvent` or `fetchMinRewardBlocks` throws, the connection is leaked.

Wrap the function bodies in try-finally to ensure `mysql.release()` is always called, matching the pattern used elsewhere in this file (e.g., `handleVoidedTx`).

Proposed fix

```diff
 export const updateLastSyncedEvent = async (context: Context) => {
   const mysql = await getDbConnection();
-  const lastDbSyncedEvent: LastSyncedEvent | null = await getLastSyncedEvent(mysql);
+  try {
+    const lastDbSyncedEvent: LastSyncedEvent | null = await getLastSyncedEvent(mysql);

-  if (!context.event) {
-    throw new Error('Tried to update last synced event but no event in context');
-  }
+    if (!context.event) {
+      throw new Error('Tried to update last synced event but no event in context');
+    }

-  const lastEventId = context.event.event.id;
+    const lastEventId = context.event.event.id;

-  if (lastDbSyncedEvent
-    && lastDbSyncedEvent.last_event_id > lastEventId) {
-    logger.error('Tried to store an event lower than the one on the database', {
-      lastEventId,
-      lastDbSyncedEvent: JSONBigInt.stringify(lastDbSyncedEvent),
-    });
-    mysql.release();
-    throw new Error('Event lower than stored one.');
-  }
-  await dbUpdateLastSyncedEvent(mysql, lastEventId);
-
-  mysql.release();
+    if (lastDbSyncedEvent
+      && lastDbSyncedEvent.last_event_id > lastEventId) {
+      logger.error('Tried to store an event lower than the one on the database', {
+        lastEventId,
+        lastDbSyncedEvent: JSONBigInt.stringify(lastDbSyncedEvent),
+      });
+      throw new Error('Event lower than stored one.');
+    }
+    await dbUpdateLastSyncedEvent(mysql, lastEventId);
+  } finally {
+    mysql.release();
+  }
 };

 export const fetchInitialState = async () => {
   const mysql = await getDbConnection();
-  const lastEvent = await getLastSyncedEvent(mysql);
+  let lastEvent: LastSyncedEvent | null;
+  try {
+    lastEvent = await getLastSyncedEvent(mysql);
+  } finally {
+    mysql.release();
+  }
+
   const rewardMinBlocks = await fetchMinRewardBlocks();
-
-  mysql.release();

   return {
     lastEventId: lastEvent?.last_event_id,
     rewardMinBlocks,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/daemon/src/services/index.ts` around lines 968 - 990, The connection acquired via getDbConnection() in updateLastSyncedEvent (and similarly in fetchInitialState) must be released even if any subsequent call throws; wrap the logic after obtaining mysql in a try...finally and call mysql.release() in the finally block so getLastSyncedEvent, dbUpdateLastSyncedEvent, context validation, or fetchMinRewardBlocks cannot leak the pooled connection—follow the same pattern used in handleVoidedTx by moving all work into try and placing mysql.release() in finally while keeping existing error handling and rethrows intact.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: afabfc9d-34ea-427f-a83d-bc5d26db33bf
📒 Files selected for processing (3)
- packages/daemon/__tests__/services/services.test.ts
- packages/daemon/src/db/index.ts
- packages/daemon/src/services/index.ts
```ts
  });
};

export const updateLastSyncedEvent = async (context: Context) => {
```
updateLastSyncedEvent and fetchInitialState — connection leak on throw

```ts
// updateLastSyncedEvent, lines 968-991
const mysql = await getDbConnection();
const lastDbSyncedEvent = await getLastSyncedEvent(mysql); // ← throws? leak
// ...
await dbUpdateLastSyncedEvent(mysql, lastEventId); // ← throws? leak
mysql.release(); // ← never runs

// fetchInitialState, lines 1010-1021
const mysql = await getDbConnection();
const lastEvent = await getLastSyncedEvent(mysql); // ← throws? leak
const rewardMinBlocks = await fetchMinRewardBlocks(); // ← throws? leak (also holds DB conn during HTTP call)
mysql.release(); // ← never runs
```

No transaction state concern here (no beginTransaction), but leaking a connection under release() semantics is strictly worse than leaking one under destroy() — the pool counts it as checked-out forever. Enough of these and the pool blocks on getDbConnection().
```ts
 * @param context - The context containing the event and other metadata
 * @param _event - The event being processed (unused, context.event is used instead)
 */
export const handleVertexAccepted = async (context: Context, _event: Event) => {
```
handleVertexAccepted — transaction leak on two paths

Source at packages/daemon/src/services/index.ts:299-592:

```ts
const mysql = await getDbConnection();
await mysql.beginTransaction(); // ← line 302, BEGIN issued
const { NETWORK, ... } = getConfig();
try {
  // ...
  if (dbTx) {
    const daemonUptime = getDaemonUptime();
    if (daemonUptime < DUPLICATE_TX_ALERT_GRACE_PERIOD) {
      return; // ← line 349, returns with open transaction
    }
    logger.error(...);
    return; // ← line 356, returns with open transaction
  }
  // ...
  await mysql.commit();
} catch (e) {
  await mysql.rollback();
  throw e;
} finally {
  mysql.destroy(); // ← after PR: mysql.release()
}
```

Both return statements at lines 349 and 356 skip commit/rollback entirely. Under destroy() the socket is killed and the transaction dies with it — safe but wasteful. Under release() the connection is put back in the pool with the BEGIN still active. The next handler to check out that connection inherits the open transaction. This is a real correctness bug introduced by the PR.

Bonus fragility: beginTransaction() is outside the try block (line 302 vs try at 310). If the BEGIN itself succeeds but something throws before entering try (unlikely with getConfig(), but still), the connection leaks with no finally at all.
Pull request overview
This PR fixes a performance bug in the daemon by returning MySQL pooled connections back to the pool (instead of destroying the underlying socket), enabling real connection reuse across handlers.
Changes:
- Replaced `mysql.destroy()` with `mysql.release()` across service handlers to preserve connection pooling.
- Updated `getDbConnection()` to return `PoolConnection` so `.release()` is available without casts.
- Updated service tests to assert `.release()` is called instead of `.destroy()`.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| packages/daemon/src/services/index.ts | Switches handlers to release() pooled connections instead of destroying sockets. |
| packages/daemon/src/db/index.ts | Changes getDbConnection() return type to PoolConnection. |
| packages/daemon/__tests__/services/services.test.ts | Updates mocks/expectations from destroy() to release(). |
With pooled connections, leaving a transaction open when release() is called returns the connection to the pool with BEGIN still active. The next handler that checks it out inherits the open transaction — a real correctness hazard introduced by the destroy() → release() switch. Wrap each handler in an outer try/finally that owns release(), and an inner try/catch around the transaction body that guarantees rollback() on error (with a nested try/catch so a rollback failure doesn't mask the original throw). Moves beginTransaction() inside the try so a failure there still reaches the finally. handleVertexAccepted: the duplicate-tx check is now performed before beginTransaction() — it's a read-only lookup, and keeping it outside the transaction means its two early returns can no longer leave a BEGIN open on a pooled connection. updateLastSyncedEvent and fetchInitialState previously released only on the happy path; both now use try/finally. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
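The lifecycle described above can be sketched as a helper — a sketch under assumptions: `Conn` is a stub of mysql2's `PoolConnection`, and `withTransaction` is a hypothetical name (the real handlers inline this shape rather than share a helper):

```typescript
// Stub of the subset of mysql2's PoolConnection used here.
interface Conn {
  beginTransaction(): Promise<void>;
  commit(): Promise<void>;
  rollback(): Promise<void>;
  release(): void;
}

async function withTransaction<T>(conn: Conn, body: (c: Conn) => Promise<T>): Promise<T> {
  try { // outer try owns release()
    await conn.beginTransaction(); // inside the try: a failed BEGIN still reaches finally
    try {
      const result = await body(conn);
      await conn.commit();
      return result;
    } catch (e) {
      // Nested try/catch so a rollback failure doesn't mask the original error.
      try { await conn.rollback(); } catch { /* logged in the real code */ }
      throw e;
    }
  } finally {
    conn.release(); // always return the connection to the pool
  }
}
```

On success the call order is begin → commit → release; on error it is begin → rollback → release with the original error rethrown.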
… duplicate check The duplicate-tx check now runs before beginTransaction(), so throwing from getTransactionById no longer exercises the rollback path. Throw from getLockedUtxoFromInputs instead (first DB call inside the transaction for a non-block tx) and supply the metadata fields that the handler now reads before the first DB call. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@pedroferreira1 @luislhl addressed in 5adc8dd + 0e7edad. Each transactional handler is now wrapped in the outer try/finally + inner try/catch pattern.
♻️ Duplicate comments (1)
packages/daemon/src/services/index.ts (1)
584-594: ⚠️ Potential issue | 🟠 Major — Swallowed rollback failure leaves an open transaction in the pool.

When `rollback()` fails in the inner `catch` block, the error is silently swallowed and the outer `finally` calls `mysql.release()`, returning the connection to the pool with an open `BEGIN` transaction still active. The next handler to obtain that connection inherits the pending transaction — a pool contamination risk. `mysql2/promise` does not auto-rollback on `release()`.

Mark the connection as unusable when rollback fails so the outer `finally` doesn't return it to the pool. The same pattern appears in `handleVertexRemoved` (643), `handleVoidedTx` (837), `handleUnvoidedTx` (872), `handleTxFirstBlock` (920), `handleNcExecVoided` (975), and `handleTokenCreated` (1180).

🛡️ Proposed pattern

```diff
   } catch (e) {
-    try { await mysql.rollback(); } catch { /* swallow rollback error so the original is thrown */ }
+    try {
+      await mysql.rollback();
+    } catch (rollbackErr) {
+      // Rollback failed — connection may still have an open transaction.
+      // Destroy it so it isn't returned to the pool in a dirty state.
+      logger.error('Rollback failed; destroying connection to avoid pool contamination', rollbackErr);
+      try { await mysql.destroy(); } catch { /* ignore */ }
+      mysql = undefined;
+    }
     span.setStatus({ code: SpanStatusCode.ERROR, message: String(e) });
     span.recordException(e as Error);
     logger.error('Error handling vertex accepted', e);
     throw e;
   }
 } finally {
   if (mysql) mysql.release();
   span.end();
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/daemon/src/services/index.ts` around lines 584 - 594, The rollback catch currently swallows rollback failures and then always calls mysql.release(), risking returning a connection with an open transaction; change the inner catch to mark/close the connection instead: when await mysql.rollback() throws, call mysql.destroy() (or mysql.end()/close equivalent) to remove the connection from the pool and set a local flag (e.g., closed=true) so the outer finally only calls mysql.release() if not closed; apply the same pattern in handleVertexRemoved, handleVoidedTx, handleUnvoidedTx, handleTxFirstBlock, handleNcExecVoided, and handleTokenCreated, and keep logging the rollback error via logger.error while preserving/throwing the original error afterward.
🧹 Nitpick comments (1)
packages/daemon/src/services/index.ts (1)
1032-1046: `fetchInitialState` holds a pooled DB connection across an HTTP round-trip.

`fetchMinRewardBlocks()` performs an `axios.get` to the fullnode `/version` endpoint while the pooled connection is still checked out. Under the old `destroy()` model this was cheap because each call spun up its own socket; under pool semantics it pins one of a small number of connections for the duration of the HTTP request, which can starve other handlers if the fullnode is slow.

Release the DB connection before making the HTTP call:

♻️ Proposed refactor

```diff
 export const fetchInitialState = async () => {
-  let mysql: PoolConnection | undefined;
-  try {
-    mysql = await getDbConnection();
-    const lastEvent = await getLastSyncedEvent(mysql);
-    const rewardMinBlocks = await fetchMinRewardBlocks();
-
-    return {
-      lastEventId: lastEvent?.last_event_id,
-      rewardMinBlocks,
-    };
-  } finally {
-    if (mysql) mysql.release();
-  }
+  let lastEvent: LastSyncedEvent | null;
+  const mysql = await getDbConnection();
+  try {
+    lastEvent = await getLastSyncedEvent(mysql);
+  } finally {
+    mysql.release();
+  }
+
+  const rewardMinBlocks = await fetchMinRewardBlocks();
+
+  return {
+    lastEventId: lastEvent?.last_event_id,
+    rewardMinBlocks,
+  };
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/daemon/src/services/index.ts` around lines 1032 - 1046, fetchInitialState currently holds the pooled DB connection while calling fetchMinRewardBlocks (which does an axios.get); change the flow to acquire the connection with getDbConnection, call getLastSyncedEvent(mysql) to read lastEvent, release the connection immediately via mysql.release() (ensure release happens in a try/finally around the DB-only work so release always occurs), then call fetchMinRewardBlocks() after the connection is released and return the combined result; ensure you don't call release twice and that errors during the HTTP call still propagate correctly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: eff0350c-4aac-4921-9e29-08790b8d5aaf
📒 Files selected for processing (2)
- packages/daemon/__tests__/services/services.test.ts
- packages/daemon/src/services/index.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/daemon/__tests__/services/services.test.ts
…HTTP in fetchInitialState Two correctness items from CodeRabbit's review of PR #399: - Swallowed rollback failure could pool-contaminate. If `mysql.rollback()` throws, the previous code swallowed the error and the outer `finally` still called `release()`, returning the connection to the pool with an open `BEGIN` still active. The next handler to check it out would inherit the pending transaction. Now when rollback throws we log, `destroy()` the connection, and null out the handle so `finally` skips `release()`. Same pattern applied to all 7 transactional handlers. - `fetchInitialState` held the pooled connection across the axios call to the fullnode `/version` endpoint. Under `destroy()` that didn't matter (each call had its own socket), but under pool semantics a slow fullnode response pins a pool slot. Release the connection after `getLastSyncedEvent` and before `fetchMinRewardBlocks` so the HTTP call doesn't hold a slot. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
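The rollback-failure guard described in the commit above can be distilled into a small sketch — `safeRollback` and `DbConn` are hypothetical names; the real code inlines this logic in each handler and logs the failure before destroying:

```typescript
// Stub of the subset of mysql2's PoolConnection relevant here.
interface DbConn {
  rollback(): Promise<void>;
  destroy(): void;
  release(): void;
}

// Returns the connection if rollback succeeded (caller's finally may
// release() it), or undefined if rollback failed and the connection was
// destroyed — never re-pool a connection with an open BEGIN.
async function safeRollback(conn: DbConn): Promise<DbConn | undefined> {
  try {
    await conn.rollback();
    return conn;      // clean: safe to return to the pool
  } catch {
    conn.destroy();   // dirty: drop the socket instead of re-pooling it
    return undefined; // caller's finally skips release()
  }
}
```

The caller's `finally` then becomes `if (mysql) mysql.release()`, so a destroyed connection is never released back to the pool.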
Addressed the CodeRabbit findings from that review in 2f10685: the swallowed rollback failure (applied to all 7 transactional handlers) — when `rollback()` throws, the connection is now destroyed instead of released — and the `fetchInitialState` release-before-HTTP change.

Type check clean, 59/59 services tests pass.
…ation test da65de9 swapped `!maxAmongAddresses || !maxWalletIndex` for an explicit `== null` check per @Copilot's suggestion. Correct in principle — index 0 is a legitimate first-derived address — but it opens up a code path that was previously dead in the test fixture and causes `balances.test.ts > VOIDED_TOKEN_AUTHORITY > full sync` to time out at 30s. The correctness angle is real but out of scope for PR #399 (which only touches the connection lifecycle). Reverting just this hunk; the other changes in da65de9 (outer span error catch) and 2f10685 (rollback destroy, fetchInitialState release-before-HTTP) stay. Copilot's finding will be followed up separately with proper coverage for the 0-index case. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
```ts
// Release the pooled DB connection before the HTTP call to the fullnode
// so a slow fullnode response doesn't pin a pool slot while we wait.
let lastEvent: LastSyncedEvent | null;
const mysql = await getDbConnection();
```
We should keep it inside the try block
Done in aa600eb — acquire and HTTP call both back inside the single try, release in finally. I'd split it off earlier addressing a CodeRabbit nitpick about holding a pool slot during the axios call, but you're right that the complexity isn't worth it here (fetchInitialState runs once at startup).
```ts
  mysql.release();
}

const rewardMinBlocks = await fetchMinRewardBlocks();
```
Why did you remove this from the try block?
Restored in aa600eb — it's back inside the single try. I'd moved it out to release the pool connection before the axios call (CodeRabbit nitpick about holding a slot during HTTP), but agreed that's not worth the shape change for a once-per-startup function.
…on timeout Reverting the earlier revert. Copilot's finding is correct: the old `!maxAmongAddresses || !maxWalletIndex` guard silently skipped gap discovery for any wallet whose highest touched/derived address index was `0` — a valid state (first derived address), not "missing." Investigation into why this broke `balances.test.ts > VOIDED_TOKEN_AUTHORITY`: - The test fixture seeds `cafecafe` with a single address at index 0 and `deafbeef` with addresses at 0/1/2. - Old code: every tx touching those wallets hit the `!value` guard (treating 0 as "not loaded") and skipped gap discovery entirely. - New code: correctly recognizes these as valid states and fires gap discovery, which derives up to maxGap (20) addresses via pure-JS ECC per affected wallet per tx. - Cumulative effect across 66 events × multiple wallets blew past the 30s test timeout — not a hang, just slower (correct) work. Raised the timeout to 90s with an inline comment explaining why. The underlying perf concern (amortize gap discovery instead of firing per-tx) is pre-existing and orthogonal to #399; it'd just been masked by the bug. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ngle try/finally Per @pedroferreira1's review. The earlier split (release DB before the axios call) was addressing a CodeRabbit nitpick about holding a pool slot across HTTP, but fetchInitialState runs once at startup against a warm pool — the complexity isn't worth the saving. Single try/finally is cleaner. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Second investigation: the real reason `balances.test.ts > VOIDED_TOKEN_AUTHORITY` fails under the `== null` check is that the test fixture has two wallets (cafecafe, deafbeef) sharing the same xpubkey. The `address` table's PRIMARY KEY is just `address` (not `(address, wallet_id)`). When both wallets trigger gap discovery, both derive the same addresses from the shared xpubkey, and the second addNewAddresses INSERT hits `Duplicate entry ... for key 'address.PRIMARY'`. Handler rethrows, state machine retries, loop until timeout. Proper fix requires separate xpubkeys per fixture wallet and re-deriving the seeded addresses, which is out of scope for #399 (connection lifecycle only). Reverting the null-check fix + timeout bump. Copilot's finding is valid; will address in a follow-up with proper fixture work. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
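The collision described above can be modeled with a `Set` standing in for the `address` table's single-column primary key — a toy model, where `addNewAddresses` is a hypothetical stand-in for the real DB helper:

```typescript
// Toy model: the address table's PRIMARY KEY is just `address` (not
// `(address, wallet_id)`), so the wallet performing the insert is irrelevant
// to uniqueness — two wallets deriving the same address collide.
const addressTable = new Set<string>();

function addNewAddresses(_walletId: string, addresses: string[]): void {
  for (const addr of addresses) {
    if (addressTable.has(addr)) {
      // Mirrors MySQL's ER_DUP_ENTRY on the single-column primary key.
      throw new Error(`Duplicate entry '${addr}' for key 'address.PRIMARY'`);
    }
    addressTable.add(addr);
  }
}
```

With a shared xpubkey both wallets derive the same address list, so the second wallet's insert throws; with disjoint xpubs (the fixture fix) the derived sets don't overlap and both inserts succeed.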
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/daemon/src/services/index.ts (1)
1052-1077: Nit: validate `context.event` before acquiring a pool connection.

`getLastSyncedEvent` is executed before the `!context.event` guard, so an invalid context still costs a round-trip and briefly pins a pool slot. Not a correctness issue (the `finally` releases), but since this function is also called on the hot state-machine path, the check is cheap to move up.

♻️ Optional reordering

```diff
 export const updateLastSyncedEvent = async (context: Context) => {
+  if (!context.event) {
+    throw new Error('Tried to update last synced event but no event in context');
+  }
   let mysql: PoolConnection | undefined;
   try {
     mysql = await getDbConnection();
     const lastDbSyncedEvent: LastSyncedEvent | null = await getLastSyncedEvent(mysql);

-    if (!context.event) {
-      throw new Error('Tried to update last synced event but no event in context');
-    }
-
     const lastEventId = context.event.event.id;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/daemon/src/services/index.ts` around lines 1052 - 1077, Move the cheap validation of context.event to the top of updateLastSyncedEvent so you don't acquire a DB pool connection unnecessarily: check if context.event exists and throw the same error before calling getDbConnection(), then proceed to call getDbConnection(), getLastSyncedEvent, and dbUpdateLastSyncedEvent as before; updateLastSyncedEvent, getDbConnection, getLastSyncedEvent and dbUpdateLastSyncedEvent are the relevant symbols to change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/daemon/src/services/index.ts`:
- Around line 858-859: The handleVoidedTx flow currently calls
dbUpdateLastSyncedEvent after await mysql.commit(), which differs from sibling
handlers and can cause replay-driven corruption; move the await
dbUpdateLastSyncedEvent(mysql, fullNodeEvent.event.id) call to occur inside the
transaction immediately before await mysql.commit() (i.e., same placement
pattern used by handleVertexAccepted, handleVertexRemoved, handleTxFirstBlock,
handleNcExecVoided, handleTokenCreated), ensure the surrounding try/catch still
rolls back the transaction on dbUpdateLastSyncedEvent failure, and keep
voidAddressTransaction/voidWalletTransaction invocation and commit semantics
unchanged except for the reordered sync-pointer update.
---
Nitpick comments:
In `@packages/daemon/src/services/index.ts`:
- Around line 1052-1077: Move the cheap validation of context.event to the top
of updateLastSyncedEvent so you don't acquire a DB pool connection
unnecessarily: check if context.event exists and throw the same error before
calling getDbConnection(), then proceed to call getDbConnection(),
getLastSyncedEvent, and dbUpdateLastSyncedEvent as before;
updateLastSyncedEvent, getDbConnection, getLastSyncedEvent and
dbUpdateLastSyncedEvent are the relevant symbols to change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e041f7f9-3790-41ce-b592-0df48ddd3764
📒 Files selected for processing (1)
packages/daemon/src/services/index.ts
…tinct xpubs Re-apply @Copilot's fix — `if (maxAmongAddresses == null || maxWalletIndex == null)` — so index 0 (a valid first-derived address) is no longer treated as "missing", which silently skipped gap discovery.

Fixes the integration test fallout in the same commit: the VOIDED_TOKEN_AUTHORITY fixture previously had both `cafecafe` and `deafbeef` sharing a single xpubkey. The `address` table's primary key is just `address` (not `(address, wallet_id)`), so when the corrected check lets gap discovery run for both wallets, they both derive the same addresses from the shared xpubkey, and the second INSERT hits `Duplicate entry ... for key 'address.PRIMARY'`. The old `!value` guard silently skipped gap discovery for these fixtures (their highest touched/derived index was 0), which masked the duplicate xpubkey.

Switched `cafecafe` to the sibling xpub (path /7 of deafbeef's xpub) so gap discovery derives a disjoint address set. The seeded address strings (HFtz, HJQb, HRQe, HRXV) are unchanged — they're just fixture pointers for the fullnode simulator's events and don't need to line up with real xpub derivations.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…lease

# Conflicts:
#	packages/daemon/__tests__/services/services.test.ts
Every other handler calls dbUpdateLastSyncedEvent before mysql.commit() so the pointer advance is atomic with the void. handleVoidedTx was the odd one out — it committed first, then updated the pointer outside the transaction. If dbUpdateLastSyncedEvent threw (transient connection, deadlock), voidTx would already be committed but the state machine would see the pointer unchanged and re-deliver the VOIDED event.

voidTx isn't fully idempotent — voidAddressTransaction / voidWalletTransaction recompute balances from current UTXO state, so a second application against an already-voided tx can corrupt balances.

Swap the two calls so dbUpdateLastSyncedEvent runs inside the tx, before commit, matching every other handler.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
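The ordering guarantee this commit restores can be sketched with a toy transactional connection. This is a simplified, synchronous stand-in (the real mysql2 calls are async, and `handleVoidedTx` here is a hypothetical skeleton, not the actual handler): with the pointer update inside the transaction, the void and the pointer advance commit or roll back together.

```typescript
// Toy transactional connection: `query` stages work, `commit` makes it
// durable, `rollback` discards everything staged since beginTransaction.
class FakeConn {
  committed: string[] = [];
  private pending: string[] = [];
  beginTransaction(): void { this.pending = []; }
  query(op: string): void { this.pending.push(op); }
  commit(): void { this.committed.push(...this.pending); this.pending = []; }
  rollback(): void { this.pending = []; }
}

// Hypothetical skeleton of the corrected handleVoidedTx ordering.
function handleVoidedTx(mysql: FakeConn, eventId: number, pointerFails: boolean): void {
  mysql.beginTransaction();
  try {
    mysql.query('voidTx');
    if (pointerFails) throw new Error('transient failure in dbUpdateLastSyncedEvent');
    mysql.query(`dbUpdateLastSyncedEvent:${eventId}`); // inside the tx, before commit
    mysql.commit();
  } catch (e) {
    mysql.rollback(); // voidTx and the pointer advance are undone together
    throw e;
  }
}

const ok = new FakeConn();
handleVoidedTx(ok, 42, false);

const bad = new FakeConn();
let threw = false;
try { handleVoidedTx(bad, 43, true); } catch { threw = true; }
// Failure path: nothing is committed, so the VOIDED event can be safely re-delivered.
```

With the old ordering (commit first, pointer after), the failure path would leave `voidTx` committed while the pointer stayed behind — exactly the replay hazard described above.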
```diff
@@ -298,16 +298,16 @@ export function isNanoContract(headers: EventTxHeader[]) {
  */
 export const handleVertexAccepted = async (context: Context, _event: Event) => {
```
This is a 300-line function that runs on all new transactions; maybe we should break it into smaller sections to improve readability.
I understand this is not in the scope of this PR, but it made reviewing harder.
Maybe we should mark this in a comment so we get around to it later as a KTLO?
Motivation
Tier 1 fix #1 from #395. Every handler in `services/index.ts` ended with `mysql.destroy()` in its `finally` block, which closes the underlying TCP socket and defeats the connection pool — every subsequent call paid a fresh TCP + MySQL auth handshake (plus TLS in prod). Effectively the pool size was 0.

Stacked on #398 (the CI bench workflow) so the workflow can fire against this change and produce its first real comparison report. When the stack merges, the final result lands on master together.
Acceptance Criteria
- `mysql.destroy()` calls in `services/index.ts` replaced with `mysql.release()`
- `getDbConnection()` return type corrected to `PoolConnection` so `.release()` is available without casts
- Connections are still released in the `finally` block, just returned to the pool rather than destroyed

Measured impact
Local bench against `VOIDED_TOKEN_AUTHORITY` (66 events), 5 baseline runs vs 8 candidate runs:

The handlers that did only 1–2 queries each show the largest relative gain — connection acquire was a dominant fraction of their total cost.
`handleVertexAccepted` does more DB work, so the relative effect is smaller, but directionally improved.

These numbers are local (not CI), so absolute values aren't apples-to-apples with the CI runner — but the direction and magnitude are consistent with issue #395's estimate of "~10–50ms savings per event, depending on RDS latency and TLS."
Checklist
master, confirm this code is production-ready and can be included in future releases as soon as it gets merged

🤖 Generated with Claude Code
Summary by CodeRabbit
Bug Fixes
Refactor
Tests