
feat: shielded outputs integration #1059

Open
pedroferreira1 wants to merge 38 commits into master from feat/shielded-outputs-integration

Conversation

@pedroferreira1
Member

@pedroferreira1 pedroferreira1 commented Apr 9, 2026

Acceptance Criteria

  • Include here all things that this PR should solve

Security Checklist

  • Make sure you do not include new dependencies in the project unless strictly necessary, and do not include dev-dependencies as production dependencies. More dependencies increase the chance of one of them being hijacked and affecting us.

Summary by CodeRabbit

  • New Features

    • Full user-facing shielded transactions: shielded address format, send/receive flows, wallet integration (scan/spend keys), per-shielded-output fees, auto-conversion when sending, unshielding, and recovery after restart.
  • Tests

    • Extensive new unit and integration suites validating shielded address encoding, output creation/serialization/round-trip, end-to-end shielded send/receive, wallet behavior, and error cases.
  • Chores

    • Added shielded constants, types, crypto provider wiring, storage/wallet APIs, and optional PIN support for history processing.

@pedroferreira1 pedroferreira1 self-assigned this Apr 9, 2026
@pedroferreira1 pedroferreira1 moved this from Todo to In Progress (Done) in Hathor Network Apr 9, 2026
@pedroferreira1 pedroferreira1 moved this from In Progress (Done) to In Progress (WIP) in Hathor Network Apr 9, 2026
@coderabbitai

coderabbitai bot commented Apr 9, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds end-to-end shielded transaction support: new shielded types, crypto provider and creation/processing logic, header serialization/deserialization, wallet/storage chain support and PIN propagation, send/prepare/sign changes, history/schema updates, many tests, and integration/docker adjustments.

Changes

  • Shielded types & entry (src/shielded/types.ts, src/shielded/index.ts): new ShieldedOutputMode enum and comprehensive shielded interfaces; module entrypoint exports types and helpers.
  • Shielded crypto provider (src/shielded/provider.ts, package.json): adds createDefaultShieldedCryptoProvider() wrapping the native @hathor/ct-crypto-node; dependency added (local file: reference).
  • Shielded creation & proofs (src/shielded/creation.ts, __tests__/shielded/creation.test.ts): createShieldedOutputs() implements blinding, balancing, and surjection proof generation; tests cover happy and failure paths.
  • Shielded processing & rewind (src/shielded/processing.ts, __tests__/shielded/processing.test.ts): resolveTokenUid() and processShieldedOutputs() handle token resolution, scan-key derivation, rewind/decrypt, cross-checks, and logging/skip semantics.
  • Model, header & transaction integration (src/models/shielded_output.ts, src/headers/shielded_outputs.ts, src/headers/types.ts, src/models/transaction.ts): new ShieldedOutput model (serialize/serializeSighash), ShieldedOutputsHeader (serialize/deserialize), header id added, Transaction.shieldedOutputs integrated and validated.
  • Wallet send flow & types (src/new/sendTransaction.ts, src/new/types.ts): new ISendShieldedOutput, type guard, phantom transparent outputs for UTXO selection, shielded fee accounting, createShieldedOutputs() call integration.
  • Wallet APIs, PIN & provider wiring (src/new/wallet.ts, src/wallet/types.ts): address-chain options (shielded vs legacy) in address APIs, PIN propagation into history/process flows, dynamic shielded provider wiring, new setShieldedCryptoProvider().
  • Storage & memory store, chain-aware (src/storage/storage.ts, src/storage/memory_store.ts): added shieldedCryptoProvider, shielded key accessors, per-chain address indexes/cursors, getUtxo/selectUtxos shielded filtering, separate cursor updates, and persistence changes.
  • Address utilities & Base58 shielded (src/utils/shieldedAddress.ts, src/utils/address.ts, src/models/address.ts): encode/derive shielded Base58 addresses, deriveShieldedAddressFromStorage, Address gains isShielded/getScanPubkey/getSpendPubkey/getSpendAddress and related logic.
  • Schemas & history processing (src/schemas.ts, src/utils/storage.ts, src/utils/transaction.ts, src/sync/stream.ts): history schemas accept shielded variants; processNewTx/processHistory accept an optional pinCode and decrypt/process shielded outputs when a provider and PIN are available; stream sync updated.
  • Headers parser & exports (src/headers/parser.ts, src/lib.ts): HeaderParser supports the new shielded outputs header id; src/lib.ts exports the shielded namespace and re-exports shielded types.
  • Tests & mocks (many __tests__/**, new and updated): extensive unit and integration tests added/updated for shielded outputs, headers, creation, processing, wallet flows, and storage; mocks extended.
  • Integration config & docker (__tests__/integration/configuration/docker-compose.yml, __tests__/integration/configuration/privnet.yml): updated images/commands, added a cpuminer service, attached tx-mining-service to privnet, added the ENABLE_SHIELDED_TRANSACTIONS flag, and changed ports.
  • Small robustness tweaks (src/utils/bigint.ts, setupTests-integration.js): minor internal robustness and whitespace tweaks.

Sequence Diagram(s)

sequenceDiagram
    participant Wallet as HathorWallet
    participant Storage as Storage
    participant Crypto as ShieldedCryptoProvider
    participant Network as Fullnode

    rect rgba(100,200,100,0.5)
    Note over Wallet,Network: Construct & send shielded transaction
    Wallet->>Storage: getScanXPubKey/getSpendXPubKey / selectUtxos
    Storage-->>Wallet: xpubs / UTXOs
    Wallet->>Crypto: createShieldedOutputs(defs, inputBlinding...)
    Crypto-->>Crypto: generate blinding factors, commitments, proofs
    Crypto-->>Wallet: created outputs (commitments, proofs, ephemeralPubkey)
    Wallet->>Network: sendTransaction(tx with ShieldedOutputsHeader)
    Network-->>Wallet: tx accepted
    end

    rect rgba(100,100,200,0.5)
    Note over Network,Wallet: Receive & process shielded transaction
    Network-->>Wallet: onNewTx(tx with shielded header)
    Wallet->>Storage: getScanXPrivKey(pinCode?)
    Storage-->>Wallet: scan xpriv (or error)
    Wallet->>Crypto: rewindAmount/rewindFull(scanPriv, ephemeralPubkey, ...)
    Crypto-->>Wallet: decrypted value, tokenUid, blinding factors
    Wallet->>Storage: processShieldedOutputs -> save UTXO/update balances
    Storage-->>Wallet: processed outputs merged into history
    end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

Suggested labels

enhancement, tests

Suggested reviewers

  • raul-oliveira
  • tuliomir
  • r4mmer

Poem

🐰 I nibble keys and hide my hops,
commitments snug in secret crops,
proofs and blinding stitched so neat,
wallets hum and ledgers beat,
hop—shielded bytes make privacy hops.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 57.89%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the PR title 'feat: shielded outputs integration' directly describes the main change: adding comprehensive shielded outputs support throughout the codebase, including new models, cryptographic operations, address handling, transaction processing, and integration tests.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 16

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/types.ts (1)

277-291: ⚠️ Potential issue | 🟠 Major

Split IHistoryOutput into separate transparent and shielded shapes.

The interface declares value, token_data, script, decoded, token, and spent_by as required, but the comment correctly states that shielded entries (where type === 'shielded') have these "instead of value/token"—meaning they lack these fields entirely. This creates a type safety gap: TypeScript won't prevent code from accessing output.value on a shielded entry.

The codebase works around this with runtime guards (isShieldedOutputEntry() checks) and unsafe casts (as any). Instead, use a discriminated union keyed by type, or separate the types in IHistoryTx.outputs to reflect the actual mixed transparent/shielded reality until they're split (as already happens in new/wallet.ts). The Zod schema already makes these fields optional as a workaround; the TypeScript types should match.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/types.ts` around lines 277 - 291, The IHistoryOutput should be split into
a discriminated union so shielded entries are typed separately: create two
interfaces (e.g., TransparentHistoryOutput with required fields value,
token_data, script, decoded, token, spent_by and maybe selected_as_input, and
ShieldedHistoryOutput with type: 'shielded' and the shielded-only fields
commitment, range_proof, ephemeral_pubkey, asset_commitment, surjection_proof
and without value/token fields), then change IHistoryOutput to
TransparentHistoryOutput | ShieldedHistoryOutput and update usages such as
IHistoryTx.outputs and any code using isShieldedOutputEntry() to rely on the
discriminant instead of unsafe casts; also align the new Zod schema shape in
new/wallet.ts to these types so TypeScript and runtime validation match.
src/models/address.ts (1)

94-119: ⚠️ Potential issue | 🟠 Major

Correlate the payload length with the version byte.

validateAddress() currently accepts any known version byte with either 25-byte or 71-byte payloads. That lets a checksummed 71-byte payload with the p2pkh/p2sh byte, or a 25-byte payload with the shielded byte, pass validation, and getType() will then classify it from the version byte alone.

🛡️ Suggested guard
     const firstByte = addressBytes[0];
     if (!this.network.isVersionByteValid(firstByte)) {
       throw new AddressError(
         `${errorMessage} Invalid network byte. Expected: ${this.network.versionBytes.p2pkh}, ${this.network.versionBytes.p2sh}, or ${this.network.versionBytes.shielded} and received ${firstByte}.`
       );
     }
+
+    const isLegacyVersion =
+      firstByte === this.network.versionBytes.p2pkh ||
+      firstByte === this.network.versionBytes.p2sh;
+    const isShieldedVersion = firstByte === this.network.versionBytes.shielded;
+
+    if (
+      (addressBytes.length === LEGACY_ADDR_LENGTH && !isLegacyVersion) ||
+      (addressBytes.length === SHIELDED_ADDR_LENGTH && !isShieldedVersion)
+    ) {
+      throw new AddressError(`${errorMessage} Address length does not match its version byte.`);
+    }

Also applies to: 136-149

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/models/address.ts` around lines 94 - 119, validateAddress() currently
accepts any known version byte with either payload length; enforce that the
version byte and payload length correlate: after computing firstByte (version
byte) and addressBytes.length, check network.versionBytes.p2pkh and .p2sh
require LEGACY_ADDR_LENGTH, and network.versionBytes.shielded requires
SHIELDED_ADDR_LENGTH (or vice-versa if other version bytes exist), and if
mismatch throw AddressError using the existing errorMessage; update the same
guard in the later block that also validates version bytes (the code paths
referenced by validateAddress, getType, LEGACY_ADDR_LENGTH,
SHIELDED_ADDR_LENGTH, network.versionBytes, and AddressError).
src/utils/storage.ts (1)

327-346: ⚠️ Potential issue | 🔴 Critical

Guard index-limit mode when shielded derivation is unavailable.

lastLoaded now always uses shieldedLastLoadedAddressIndex, even for wallets where deriveShieldedAddressFromStorage() will always return null. In that case the shielded cursor never advances from its sentinel value, so checkIndexLimit() keeps returning the same range and sync can loop forever.

Suggested fix
-  // If either chain is below the end index, load more.
-  // Use min so the lagging chain gets its addresses loaded.
-  const lastLoaded = Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex);
+  const hasShieldedKeys = !!(await storage.getAccessData())?.spendXpubkey;
+  // If shielded derivation is unavailable, only the legacy cursor can advance.
+  const lastLoaded = hasShieldedKeys
+    ? Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex)
+    : lastLoadedAddressIndex;
   if (lastLoaded < limits.endIndex) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/storage.ts` around lines 327 - 346, checkIndexLimit() uses
shieldedLastLoadedAddressIndex unconditionally which causes an infinite loop
when deriveShieldedAddressFromStorage() is unavailable; update the logic in
checkIndexLimit (where storage.getWalletData(), isIndexLimitScanPolicy() and
storage.getIndexLimit() are used) to only include shieldedLastLoadedAddressIndex
when shielded derivation is supported (e.g., detect via
deriveShieldedAddressFromStorage() or a flag from storage), otherwise fall back
to using only lastLoadedAddressIndex so the nextIndex/count are computed from
the non-shielded cursor and the scan advances correctly.
🧹 Nitpick comments (7)
src/new/types.ts (1)

330-344: Model shielded outputs as a separate variant.

shielded?: ShieldedOutputMode documents the invariant, but it still allows a legacy/base58 address plus shielded: FULLY_SHIELDED to type-check. Since this is a public API, a discriminated union would catch that misuse earlier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/new/types.ts` around lines 330 - 344, ProposedOutput currently allows any
address with an optional shielded flag so code can wrongly pair a legacy/base58
address with shielded modes; replace ProposedOutput with a discriminated union
(e.g. LegacyOutput and ShieldedOutput) so the discriminator is the shielded
field: LegacyOutput has shielded?: undefined and accepts legacy Address type,
while ShieldedOutput has shielded: ShieldedOutputMode (or a specific literal
like "AMOUNT_SHIELDED"|"FULLY_SHIELDED") and its address type is the
ShieldedAddress type (or otherwise constrained); update usages expecting
ProposedOutput to accept the union and ensure ShieldedOutput includes the same
value/timelock/token properties.
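A sketch of what such a discriminated union could look like. The names LegacyOutput, ShieldedSendOutput, and the mode literals are illustrative, not the PR's actual types.

```typescript
// Hypothetical mode values, mirroring the ShieldedOutputMode enum in the PR description.
type ShieldedOutputMode = 'AMOUNT_SHIELDED' | 'FULLY_SHIELDED';

interface LegacyOutput {
  shielded?: undefined;          // discriminant: legacy outputs carry no mode
  address: string;               // base58 P2PKH/P2SH address
  value: bigint;
  token: string;
}

interface ShieldedSendOutput {
  shielded: ShieldedOutputMode;  // discriminant
  address: string;               // a shielded-format address is expected here
  value: bigint;
  token: string;
}

type ProposedOutput = LegacyOutput | ShieldedSendOutput;

// User-defined type guard: callers branch once and get the right shape.
function isShieldedProposed(o: ProposedOutput): o is ShieldedSendOutput {
  return o.shielded !== undefined;
}
```

Pairing a legacy address type with a shielded mode then fails at compile time rather than at send time.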
src/schemas.ts (1)

87-111: Consider using a discriminated union for stricter validation.

Making all fields optional accommodates both shapes but loses validation strictness. A shielded entry could pass without required shielded fields (e.g., commitment), or a transparent output could pass without value.

Consider using z.discriminatedUnion on the type field to enforce shape-specific requirements:

♻️ Suggested approach using discriminated union
const TransparentOutputSchema = z.object({
  value: bigIntCoercibleSchema,
  token_data: z.number(),
  script: z.string(),
  decoded: IHistoryOutputDecodedSchema,
  token: z.string(),
  spent_by: z.string().nullable().optional(),
  selected_as_input: z.boolean().optional(),
}).passthrough();

const ShieldedOutputSchema = z.object({
  type: z.literal('shielded'),
  commitment: z.string(),
  range_proof: z.string(),
  ephemeral_pubkey: z.string(),
  asset_commitment: z.string(),
  surjection_proof: z.string(),
}).passthrough();

export const IHistoryOutputSchema = z.union([
  TransparentOutputSchema,
  ShieldedOutputSchema,
]) as ZodSchema<IHistoryOutput>;

If the fullnode can return partially populated shielded entries or mixed shapes that don't fit a discriminated union, the current permissive approach is acceptable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/schemas.ts` around lines 87 - 111, The schema IHistoryOutputSchema
currently marks all fields optional which weakens validation and allows
malformed transparent or shielded entries; update it to a discriminated union
based on the type field by creating two schemas (e.g., TransparentOutputSchema
and ShieldedOutputSchema) where TransparentOutputSchema requires fields like
value (bigIntCoercibleSchema) and related transparent properties and
ShieldedOutputSchema requires type: z.literal('shielded') and required shielded
fields such as commitment, range_proof, ephemeral_pubkey, asset_commitment,
surjection_proof, then export IHistoryOutputSchema as
z.union([TransparentOutputSchema, ShieldedOutputSchema]) (retain .passthrough()
if needed) so each shape is strictly validated.
src/utils/transaction.ts (1)

83-84: Prefer undefined over null for optional state here.

These new null sentinel types go against the repo-wide TS migration rule and make the helper/cache state less consistent with the rest of the module.

Based on learnings, avoid using null to indicate optional/unprovided parameters in TypeScript; use undefined and the ? optional syntax instead.

Also applies to: 193-194

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/transaction.ts` around lines 83 - 84, The helper is using null as a
sentinel for optional state; update the parameter and checks to use undefined
instead: change the type annotations from `{ type?: string | null }` to `{
type?: string }` in isShieldedOutputEntry and the other occurrence around the
code flagged (lines ~193-194), and update any strict equality checks or guards
that compare to null to instead check for undefined or use truthiness
(`output.type !== undefined` or `!!output.type`) so the module follows the
repo-wide TS convention of using undefined for optional/unprovided values.
__tests__/integration/configuration/docker-compose.yml (1)

11-11: Pin these shielded test images by digest to ensure reproducible integration environments.

Both new tags are mutable prerelease-style labels. If either image is republished, the integration environment can change underneath the same commit and make shielded test failures harder to reproduce locally or in CI. The images can be pinned by their digests instead:

  • hathornetwork/hathor-core:experimental-shielded-outputs-alpha-v1@sha256:c1d372698f0b549dbb2e2e6955ae27ec1f9cf728948c6d758651636b4c3ff5f5
  • hathornetwork/tx-mining-service:shielded-outputs-v1@sha256:9e78ce25741fa07b092c5ab55ad18d6f76f40a28d1b28e6580bb7fd133d511b4

This applies to lines 11 and 45.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/configuration/docker-compose.yml` at line 11, Replace
the mutable image tags with their content-addressable digests to make
integration tests reproducible: update the value currently using
"hathornetwork/hathor-core:experimental-shielded-outputs-alpha-v1" (referenced
via ${HATHOR_LIB_INTEGRATION_TESTS_FULLNODE_IMAGE:-...}) to
"hathornetwork/hathor-core:experimental-shielded-outputs-alpha-v1@sha256:c1d372698f0b549dbb2e2e6955ae27ec1f9cf728948c6d758651636b4c3ff5f5"
and likewise pin "hathornetwork/tx-mining-service:shielded-outputs-v1" to
"hathornetwork/tx-mining-service:shielded-outputs-v1@sha256:9e78ce25741fa07b092c5ab55ad18d6f76f40a28d1b28e6580bb7fd133d511b4"
so the Docker Compose entries for those images are fixed by digest.
src/storage/memory_store.ts (1)

469-476: Consider defining a proper type for shielded output detection.

The cast (output as any).type !== 'shielded' works but bypasses type safety. Consider extending the output type in IHistoryTx or using a type guard function for cleaner detection.

♻️ Example type guard
// In a types or utils file
function isShieldedHistoryOutput(output: unknown): boolean {
  return (output as { type?: string }).type === 'shielded';
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/storage/memory_store.ts` around lines 469 - 476, The code uses an unsafe
cast "(output as any).type !== 'shielded'" to detect shielded outputs; instead
add a proper type or type guard and use it: extend the output type in IHistoryTx
(or create a dedicated IHistoryOutput with optional "type?: string") and replace
the any-cast check with a type guard function (e.g.,
isShieldedHistoryOutput(output): boolean) or a typed property check so the
condition in the loop uses the guard (or the typed property) to determine
shielded outputs rather than casting to any; update references in the loop
around the variables "output", "IHistoryTx", and the file "memory_store.ts" to
call the new guard/typed property.
src/models/shielded_output.ts (1)

44-64: Consider using an options object for constructor parameters.

The constructor has 9 positional parameters with value positioned between assetCommitment and surjectionProof. This ordering is error-prone and the current signature makes it easy to accidentally swap assetCommitment and value. Consider using a named options object pattern.

♻️ Example refactor using options object
-  constructor(
-    mode: ShieldedOutputMode,
-    commitment: Buffer,
-    rangeProof: Buffer,
-    tokenData: number,
-    script: Buffer,
-    ephemeralPubkey: Buffer,
-    assetCommitment?: Buffer,
-    value: OutputValueType = 0n,
-    surjectionProof?: Buffer,
-  ) {
+  constructor(opts: {
+    mode: ShieldedOutputMode;
+    commitment: Buffer;
+    rangeProof: Buffer;
+    tokenData: number;
+    script: Buffer;
+    ephemeralPubkey: Buffer;
+    assetCommitment?: Buffer;
+    surjectionProof?: Buffer;
+    value?: OutputValueType;
+  }) {
+    this.mode = opts.mode;
+    this.commitment = opts.commitment;
+    // ... etc
+    this.value = opts.value ?? 0n;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/models/shielded_output.ts` around lines 44 - 64, The constructor on class
ShieldedOutput accepts many positional args (mode, commitment, rangeProof,
tokenData, script, ephemeralPubkey, assetCommitment, value, surjectionProof)
which is error-prone (especially assetCommitment vs value); refactor the
constructor to accept a single options object (e.g., { mode, commitment,
rangeProof, tokenData, script, ephemeralPubkey, assetCommitment?, value?,
surjectionProof? }) and update the constructor body to pull values from that
object, update any call sites to pass a named options object, and keep the same
property names on the instance (this.mode, this.commitment,
this.assetCommitment, this.value, this.surjectionProof) to preserve behavior.
src/storage/storage.ts (1)

83-91: Consider clearing pinCode after use for better security hygiene.

Storing pinCode in memory on the Storage instance means it persists for the lifetime of the wallet session. While this may be intentional for convenience during shielded output decryption, consider whether the pin should be cleared after processing completes or after a timeout, to reduce the window of exposure if the process memory is compromised.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/storage/storage.ts` around lines 83 - 91, The Storage class currently
keeps the pinCode property in memory for the wallet lifetime; update the code so
any method that consumes pinCode (e.g., the shielded output decryption routines
that use shieldedCryptoProvider / the promise used to chain processing of
unlocked UTXOs) clears this.pinCode when finished — use a try/finally around the
decryption/processing logic to set this.pinCode = undefined (or null) in the
finally block, and optionally add an async timeout-based clear (clear after X
ms) where pinCode is set to reduce exposure.
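The try/finally hygiene suggested above can be sketched with a hypothetical PinHolder class (shown synchronously for brevity; the same pattern applies to the async processing path in Storage):

```typescript
// Illustrative sketch, not the actual Storage implementation.
class PinHolder {
  private pinCode?: string;

  setPinCode(pin?: string): void {
    // Calling with no argument clears the cached PIN.
    this.pinCode = pin;
  }

  hasPin(): boolean {
    return this.pinCode !== undefined;
  }

  processWithPin<T>(work: (pin: string) => T): T {
    if (this.pinCode === undefined) {
      throw new Error('PIN not set');
    }
    try {
      return work(this.pinCode);
    } finally {
      // Drop the PIN from memory even if processing throws.
      this.pinCode = undefined;
    }
  }
}
```

The finally block guarantees the PIN's lifetime is bounded by one processing pass, whether it succeeds or fails.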
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@__tests__/integration/hathorwallet_facade.test.ts`:
- Around line 77-78: The test currently asserts
expect(tx.nonce).toBeGreaterThan(0) which is flaky; change it to assert a
guaranteed mining side effect (for example, assert tx.hash is truthy or the
transaction status/confirmation exists) and only perform a type check on the
nonce. Specifically, update the assertions after sendTx.runFromMining() to
something like: assert tx.hash (or tx.status/confirmation) is present to prove
it was mined, and replace the strict numeric assertion with a type check like
typeof tx.nonce === 'number' (referencing sendTx.runFromMining and tx.nonce).

In `@__tests__/shielded/processing.test.ts`:
- Around line 156-176: The test currently mocks getScanXPrivKey to reject, which
prevents calling provider.rewindAmountShieldedOutput; change the storage mock
for getScanXPrivKey in this test to resolve with a valid scan xpriv (e.g. a
dummy string) so processShieldedOutputs will invoke rewindAmountShieldedOutput,
then assert that storage.logger.debug (or the logger used inside
processShieldedOutputs) was called when rewind throws; update the expectations
to check for the debug log instead of relying on an empty branch that never
executes.
- Around line 179-210: The test currently has both wallet-owned addresses fail
by making storage.getScanXPrivKey always reject, so the provider decryption path
is never exercised; update the test to have getScanXPrivKey resolve for one
owned address (e.g., return a mock xpriv for bip32AddressIndex 0) and continue
rejecting for the other (index 2), and mock the provider's rewind/decryption
method (the provider used in processShieldedOutputs) to return a successful
rewind result for the address that gets a valid key so the test exercises mixed
filtering and decryption behavior; keep storage.getAddressInfo as-is and ensure
expectations assert the successful decrypted output alongside excluded/failed
ones accordingly.

In `@src/headers/shielded_outputs.ts`:
- Around line 92-105: If `mode` is not one of the handled enums, stop parsing
instead of falling through: in the ShieldedOutput parsing branch (where
ShieldedOutputMode is checked and tokenData, assetCommitment and surjectionProof
are set) add an explicit else that throws an Error (e.g. `throw new
Error(\`Unknown ShieldedOutputMode: ${mode}\`)`) so you don't consume the
ephemeral pubkey or misalign `buf` and you fail fast when an unexpected mode is
encountered; ensure this change is made alongside the existing checks that set
tokenData, assetCommitment, and surjectionProof before constructing the
ShieldedOutput.
- Around line 53-125: The deserialize method for ShieldedOutputsHeader accepts
truncated buffers because it uses buf.subarray() without length checks; update
ShieldedOutputsHeader.deserialize to validate buffer length before every
fixed-size read (33-byte commitments, 33-byte ephemeralPubkey, 33-byte
assetCommitment) and after each unpackToInt for length-prefixed fields (range
proof rpLen, scriptLen, surjection proof spLen) ensure rpLen/scriptLen/spLen <=
buf.length before slicing, and throw an Error on insufficient bytes; also
validate that numOutputs is feasible given remaining buf length (or check
minimal per-output remaining bytes inside the loop) so ShieldedOutput
construction only happens when all required bytes are present.
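Both points can be illustrated together with a small defensive-parsing sketch; readExact, parseOutputHead, and the mode constants are assumptions for illustration, not the library's API.

```typescript
// Hypothetical mode bytes; the real header defines its own values.
const MODE_AMOUNT_SHIELDED = 1;
const MODE_FULLY_SHIELDED = 2;

// Bounds-checked read: throws instead of returning a silently short subarray.
function readExact(buf: Buffer, offset: number, len: number): [Buffer, number] {
  if (len < 0 || offset + len > buf.length) {
    throw new Error(
      `Truncated buffer: need ${len} bytes at offset ${offset}, have ${buf.length - offset}`
    );
  }
  return [buf.subarray(offset, offset + len), offset + len];
}

function parseOutputHead(
  buf: Buffer,
  offset: number
): { mode: number; commitment: Buffer; next: number } {
  const [modeByte, afterMode] = readExact(buf, offset, 1);
  const mode = modeByte[0];
  if (mode !== MODE_AMOUNT_SHIELDED && mode !== MODE_FULLY_SHIELDED) {
    // Failing fast here prevents consuming later fields from a misaligned buffer.
    throw new Error(`Unknown ShieldedOutputMode: ${mode}`);
  }
  const [commitment, next] = readExact(buf, afterMode, 33); // 33-byte commitment
  return { mode, commitment, next };
}
```

The same readExact-style check would wrap every fixed-size field (ephemeral pubkey, asset commitment) and every length-prefixed slice (range proof, script, surjection proof).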

In `@src/models/transaction.ts`:
- Around line 368-371: The loop summing shielded outputs uses the mutable field
this.shieldedOutputs which isn’t rebuilt during deserialization; update
deserialization or the summation to use the canonical shielded headers instead.
Specifically, ensure createFromBytes() and getHeadersFromBytes() populate
this.shieldedOutputs (or remove reliance on it) so methods like the
weight/validation loop that uses sumOutputs and the shieldedOutputs array (and
the similar block around lines 417-420) either derive values directly from the
parsed shielded header structures or have the array hydrated from the serialized
headers before being read.

In `@src/new/wallet.ts`:
- Around line 837-845: getCurrentAddress returns shielded addresses when
opts?.legacy === false but still always calls getAddressPathForIndex(), which
builds transparent P2PKH/P2SH paths; update the call so the returned addressPath
matches the address type: detect legacy vs shielded via opts?.legacy (or by
checking the address format after getCurrentAddress), and call a shielded-path
builder instead of getAddressPathForIndex for non-legacy addresses — e.g., add
or use a method like getShieldedAddressPathForIndex(index) or extend
getAddressPathForIndex to accept a flag (legacy: boolean) and return the correct
chain path for shielded addresses, then return that path in getCurrentAddress
(same change required for the other occurrence at lines 852-855).
- Around line 903-923: Add an early-exit check in _drainLiveQueue to avoid
processing after the wallet has been closed: at the start of _drainLiveQueue
verify walletStopped (or state === CLOSED) and return immediately if true so you
never call onNewTx on a closed wallet, and perform the same check before calling
_scheduleLiveQueueDrain() (or before rescheduling) to avoid restarting drains
after stop() has been called; ensure _inLiveDrain/_liveQueueDraining state is
set appropriately when exiting early to keep invariants consistent.
- Around line 1538-1563: The code extracts shielded entries into shieldedEntries
and then replaces newTx.outputs with transparentOutputs, which shifts on-chain
output indexes; instead, keep newTx.outputs unchanged and only assign
newTx.shielded_outputs = shieldedEntries (i.e., remove the reassignment
newTx.outputs = transparentOutputs), or if you must mark removed entries
preserve array length by leaving placeholders so indices remain stable; update
the block that builds shieldedEntries (uses
transactionUtils.isShieldedOutputEntry, shieldedEntries, transparentOutputs,
newTx.shielded_outputs, newTx.outputs) to stop compacting/reindexing the
outputs.

In `@src/shielded/provider.ts`:
- Around line 29-35: The createAmountShieldedOutput function (and the similar
call sites mentioned) currently only checks value > Number.MAX_SAFE_INTEGER but
still allows negative or out-of-range BigInt values that will be rounded when
cast to Number; add a shared helper (e.g., assertBigIntInSafeRange or
ensureSafeBigIntToNumber) and use it wherever you call Number(value) — the
helper should verify the input is a BigInt, is >= 0 (reject negatives), and <=
Number.MAX_SAFE_INTEGER, and throw a clear error otherwise; replace the inlined
check in createAmountShieldedOutput and the other instances (the
ct.createAmountShieldedOutput callers at the noted locations) to call this
helper and then cast with Number(...) only after validation.
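A sketch of the shared validation helper the comment proposes; the name ensureSafeBigIntToNumber is illustrative, not the PR's actual API.

```typescript
// Validate a bigint before narrowing it to a JS number for a native binding.
function ensureSafeBigIntToNumber(value: bigint): number {
  if (typeof value !== 'bigint') {
    // Guards JS callers that bypass the TypeScript types.
    throw new Error(`Expected bigint, got ${typeof value}`);
  }
  if (value < 0n) {
    throw new Error(`Negative value not allowed: ${value}`);
  }
  if (value > BigInt(Number.MAX_SAFE_INTEGER)) {
    // Past this point Number(value) silently rounds, corrupting amounts.
    throw new Error(`Value ${value} exceeds Number.MAX_SAFE_INTEGER`);
  }
  return Number(value);
}
```

Call sites would invoke this helper once and only then hand the result to the native crypto functions.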

In `@src/types.ts`:
- Around line 652-654: The interface exposes pinCode?: string but
setPinCode(pinCode: string) forces a value; change setPinCode to accept an
optional parameter (setPinCode(pinCode?: string): void) so callers can clear the
cached PIN by calling setPinCode() with no args; update any implementing
classes/functions that declare setPinCode to match the signature (and ensure
they handle undefined to clear the stored pin) — target the setPinCode method
and any implementations that reference pinCode.
- Around line 715-719: Change the getSpendXPrivKey signature to return
Promise<string | undefined> to reflect optional spendMainKey in
IWalletAccessData; update the declaration of getSpendXPrivKey() in the interface
alongside the existing getScanXPrivKey(), getScanXPubKey(), and
getSpendXPubKey() declarations so callers can statically handle missing keys
(ensure any implementing classes and usages are updated to satisfy the new
Promise<string | undefined> return type).

In `@src/utils/bigint.ts`:
- Around line 34-38: When the reviver runs in Hermes and context?.source is
missing, do not silently return numeric values; instead detect numeric values
that are outside Number.isSafeInteger and immediately throw a descriptive error
to fail fast. Update the reviver in src/utils/bigint.ts (the branch that
currently does "if (!context?.source) { return value; }") to check if typeof
value === 'number' and !Number.isSafeInteger(value) and throw a clear error
mentioning the unsafe integer and the reviver/context, otherwise return the
value as before. This ensures bigint-valued fields are not silently corrupted
when the reviver context is unavailable.

In `@src/utils/storage.ts`:
- Around line 377-400: The code still uses shieldedLastLoadedAddressIndex in
calculations even when shielded keys are disabled, causing minLastLoaded and
shieldedTarget to be wrong (e.g., -1) and forcing re-fetch from index 0; fix by
conditioning those calculations on hasShieldedKeys: when hasShieldedKeys is
false, treat shieldedLastLoadedAddressIndex as equal to lastLoadedAddressIndex
(or simply ignore the shielded chain) so shieldedTarget and minLastLoaded only
reflect active chains; update the computations for shieldedTarget,
minLastLoaded, nextIndex, and count (used by loadAddresses) to use the adjusted
value so gap-extension starts at the correct suffix rather than at 0.
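  The adjusted computation can be sketched like this; variable names mirror the review text, but the real arithmetic in utils/storage.ts may differ.

  ```typescript
  // When shielded keys are disabled the shielded cursor may be -1; treat
  // it as equal to the transparent cursor so it cannot drag the range
  // back to index 0.
  function computeLoadRange(
    lastLoadedAddressIndex: number,
    shieldedLastLoadedAddressIndex: number,
    hasShieldedKeys: boolean,
    gapLimit: number
  ): { nextIndex: number; count: number } {
    const effectiveShielded = hasShieldedKeys
      ? shieldedLastLoadedAddressIndex
      : lastLoadedAddressIndex;
    const minLastLoaded = Math.min(lastLoadedAddressIndex, effectiveShielded);
    const target = Math.max(lastLoadedAddressIndex, effectiveShielded) + gapLimit;
    return { nextIndex: minLastLoaded + 1, count: target - minLastLoaded };
  }
  ```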

In `@src/utils/transaction.ts`:
- Around line 668-695: The shielded output builder in the
txData.shieldedOutputs.map block currently only validates common crypto fields;
update the validation to also check mode-specific fields before constructing
ShieldedOutput: if so.mode === 'FULLY_SHIELDED' (or the enum value used) ensure
so.assetCommitment and so.surjectionProof are present and throw a clear Error if
missing. Modify the map in the same function where ShieldedOutput is
instantiated to perform this additional check (alongside the existing checks for
commitment/rangeProof/script/ephemeralPubkey) so malformed FULLY_SHIELDED
entries fail fast with a descriptive message.

In `@src/utils/wallet.ts`:
- Around line 623-631: The code is deriving and persisting single-sig shielded
keys even for multisig wallets; update the guard around the derivation (the
block using argXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH) that sets
accessData.spendXpubkey and accessData.spendMainKey via encryptData) to skip
this entire derivation when walletType === WalletType.MULTISIG (or equivalent
multisig indicator), so multisig wallets never get single-sig shielded keys
stored; apply the same check to the other similar branch mentioned (the other
place that derives shielded keys).

---

Outside diff comments:
In `@src/models/address.ts`:
- Around lines 94-119: validateAddress() currently accepts any known version
byte with either payload length. Enforce that the version byte and payload
length correlate: after computing firstByte (version byte) and
addressBytes.length, require LEGACY_ADDR_LENGTH for network.versionBytes.p2pkh
and .p2sh, and SHIELDED_ADDR_LENGTH for network.versionBytes.shielded; on a
mismatch, throw AddressError using the existing errorMessage. Apply the same
guard in the later block that also validates version bytes (the code paths
referenced by validateAddress, getType, LEGACY_ADDR_LENGTH,
SHIELDED_ADDR_LENGTH, network.versionBytes, and AddressError).
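  A sketch of the correlation check; the constant values and the `versionBytes` shape here are assumptions for illustration, not the library's real numbers.

  ```typescript
  const LEGACY_ADDR_LENGTH = 25;   // hypothetical
  const SHIELDED_ADDR_LENGTH = 58; // hypothetical

  interface VersionBytes {
    p2pkh: number;
    p2sh: number;
    shielded: number;
  }

  // Reject any version byte paired with the wrong payload length,
  // and any version byte the network does not know at all.
  function validateVersionAndLength(
    firstByte: number,
    payloadLength: number,
    versionBytes: VersionBytes
  ): void {
    const isLegacyVersion =
      firstByte === versionBytes.p2pkh || firstByte === versionBytes.p2sh;
    if (isLegacyVersion && payloadLength !== LEGACY_ADDR_LENGTH) {
      throw new Error(`Legacy version byte with payload length ${payloadLength}`);
    }
    if (firstByte === versionBytes.shielded && payloadLength !== SHIELDED_ADDR_LENGTH) {
      throw new Error(`Shielded version byte with payload length ${payloadLength}`);
    }
    if (!isLegacyVersion && firstByte !== versionBytes.shielded) {
      throw new Error(`Unknown version byte ${firstByte}`);
    }
  }
  ```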

In `@src/types.ts`:
- Around line 277-291: The IHistoryOutput should be split into a discriminated
union so shielded entries are typed separately: create two interfaces (e.g.,
TransparentHistoryOutput with required fields value, token_data, script,
decoded, token, spent_by and maybe selected_as_input, and ShieldedHistoryOutput
with type: 'shielded' and the shielded-only fields commitment, range_proof,
ephemeral_pubkey, asset_commitment, surjection_proof and without value/token
fields), then change IHistoryOutput to TransparentHistoryOutput |
ShieldedHistoryOutput and update usages such as IHistoryTx.outputs and any code
using isShieldedOutputEntry() to rely on the discriminant instead of unsafe
casts; also align the new Zod schema shape in new/wallet.ts to these types so
TypeScript and runtime validation match.

In `@src/utils/storage.ts`:
- Around line 327-346: checkIndexLimit() uses shieldedLastLoadedAddressIndex
unconditionally which causes an infinite loop when
deriveShieldedAddressFromStorage() is unavailable; update the logic in
checkIndexLimit (where storage.getWalletData(), isIndexLimitScanPolicy() and
storage.getIndexLimit() are used) to only include shieldedLastLoadedAddressIndex
when shielded derivation is supported (e.g., detect via
deriveShieldedAddressFromStorage() or a flag from storage), otherwise fall back
to using only lastLoadedAddressIndex so the nextIndex/count are computed from
the non-shielded cursor and the scan advances correctly.

---

Nitpick comments:
In `@__tests__/integration/configuration/docker-compose.yml`:
- Line 11: Replace the mutable image tags with their content-addressable digests
to make integration tests reproducible: update the value currently using
"hathornetwork/hathor-core:experimental-shielded-outputs-alpha-v1" (referenced
via ${HATHOR_LIB_INTEGRATION_TESTS_FULLNODE_IMAGE:-...}) to
"hathornetwork/hathor-core:experimental-shielded-outputs-alpha-v1@sha256:c1d372698f0b549dbb2e2e6955ae27ec1f9cf728948c6d758651636b4c3ff5f5"
and likewise pin "hathornetwork/tx-mining-service:shielded-outputs-v1" to
"hathornetwork/tx-mining-service:shielded-outputs-v1@sha256:9e78ce25741fa07b092c5ab55ad18d6f76f40a28d1b28e6580bb7fd133d511b4"
so the Docker Compose entries for those images are fixed by digest.

In `@src/models/shielded_output.ts`:
- Around line 44-64: The constructor on class ShieldedOutput accepts many
positional args (mode, commitment, rangeProof, tokenData, script,
ephemeralPubkey, assetCommitment, value, surjectionProof) which is error-prone
(especially assetCommitment vs value); refactor the constructor to accept a
single options object (e.g., { mode, commitment, rangeProof, tokenData, script,
ephemeralPubkey, assetCommitment?, value?, surjectionProof? }) and update the
constructor body to pull values from that object, update any call sites to pass
a named options object, and keep the same property names on the instance
(this.mode, this.commitment, this.assetCommitment, this.value,
this.surjectionProof) to preserve behavior.
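  A sketch of the options-object refactor; the field list mirrors the review, but the real ShieldedOutput class carries more behavior than shown here.

  ```typescript
  interface ShieldedOutputOptions {
    mode: string;
    commitment: Buffer;
    rangeProof: Buffer;
    tokenData: number;
    script: Buffer;
    ephemeralPubkey: Buffer;
    assetCommitment?: Buffer; // FULLY_SHIELDED only
    value?: bigint;           // AMOUNT_SHIELDED only
    surjectionProof?: Buffer; // FULLY_SHIELDED only
  }

  class ShieldedOutput {
    mode: string;
    commitment: Buffer;
    rangeProof: Buffer;
    tokenData: number;
    script: Buffer;
    ephemeralPubkey: Buffer;
    assetCommitment?: Buffer;
    value?: bigint;
    surjectionProof?: Buffer;

    // Named options make call sites self-documenting, so assetCommitment
    // can no longer be silently swapped with value by position.
    constructor(options: ShieldedOutputOptions) {
      this.mode = options.mode;
      this.commitment = options.commitment;
      this.rangeProof = options.rangeProof;
      this.tokenData = options.tokenData;
      this.script = options.script;
      this.ephemeralPubkey = options.ephemeralPubkey;
      this.assetCommitment = options.assetCommitment;
      this.value = options.value;
      this.surjectionProof = options.surjectionProof;
    }
  }
  ```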

In `@src/new/types.ts`:
- Around line 330-344: ProposedOutput currently allows any address with an
optional shielded flag so code can wrongly pair a legacy/base58 address with
shielded modes; replace ProposedOutput with a discriminated union (e.g.
LegacyOutput and ShieldedOutput) so the discriminator is the shielded field:
LegacyOutput has shielded?: undefined and accepts legacy Address type, while
ShieldedOutput has shielded: ShieldedOutputMode (or a specific literal like
"AMOUNT_SHIELDED"|"FULLY_SHIELDED") and its address type is the ShieldedAddress
type (or otherwise constrained); update usages expecting ProposedOutput to
accept the union and ensure ShieldedOutput includes the same
value/timelock/token properties.

In `@src/schemas.ts`:
- Around line 87-111: The schema IHistoryOutputSchema currently marks all fields
optional which weakens validation and allows malformed transparent or shielded
entries; update it to a discriminated union based on the type field by creating
two schemas (e.g., TransparentOutputSchema and ShieldedOutputSchema) where
TransparentOutputSchema requires fields like value (bigIntCoercibleSchema) and
related transparent properties and ShieldedOutputSchema requires type:
z.literal('shielded') and required shielded fields such as commitment,
range_proof, ephemeral_pubkey, asset_commitment, surjection_proof, then export
IHistoryOutputSchema as z.union([TransparentOutputSchema, ShieldedOutputSchema])
(retain .passthrough() if needed) so each shape is strictly validated.

In `@src/storage/memory_store.ts`:
- Around line 469-476: The code uses an unsafe cast "(output as any).type !==
'shielded'" to detect shielded outputs; instead add a proper type or type guard
and use it: extend the output type in IHistoryTx (or create a dedicated
IHistoryOutput with optional "type?: string") and replace the any-cast check
with a type guard function (e.g., isShieldedHistoryOutput(output): boolean) or a
typed property check so the condition in the loop uses the guard (or the typed
property) to determine shielded outputs rather than casting to any; update
references in the loop around the variables "output", "IHistoryTx", and the file
"memory_store.ts" to call the new guard/typed property.
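  A sketch of that type guard with a simplified, assumed shape for the output union; the real IHistoryOutput has more fields.

  ```typescript
  interface TransparentHistoryOutput {
    type?: undefined;
    value: bigint;
    token: string;
  }

  interface ShieldedHistoryOutput {
    type: 'shielded';
    commitment: string;
  }

  type HistoryOutput = TransparentHistoryOutput | ShieldedHistoryOutput;

  // Type guard: callers get narrowing instead of an `as any` cast.
  function isShieldedHistoryOutput(
    output: HistoryOutput
  ): output is ShieldedHistoryOutput {
    return output.type === 'shielded';
  }

  function sumTransparentValues(outputs: HistoryOutput[]): bigint {
    let total = 0n;
    for (const output of outputs) {
      if (isShieldedHistoryOutput(output)) continue; // skip shielded entries
      total += output.value; // narrowed: `value` is known to exist here
    }
    return total;
  }
  ```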

In `@src/storage/storage.ts`:
- Around line 83-91: The Storage class currently keeps the pinCode property in
memory for the wallet lifetime; update the code so any method that consumes
pinCode (e.g., the shielded output decryption routines that use
shieldedCryptoProvider / the promise used to chain processing of unlocked UTXOs)
clears this.pinCode when finished — use a try/finally around the
decryption/processing logic to set this.pinCode = undefined (or null) in the
finally block, and optionally add an async timeout-based clear (clear after X
ms) where pinCode is set to reduce exposure.
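  The try/finally pattern can be sketched as below; the class and method names are assumptions loosely mirroring Storage, not its actual API.

  ```typescript
  class PinCache {
    pinCode?: string;

    setPinCode(pinCode?: string): void {
      this.pinCode = pinCode;
    }

    // Run `work` with the cached PIN and clear it afterwards, even on
    // error, so the PIN does not survive the operation that needed it.
    withPin<T>(work: (pin: string) => T): T {
      if (this.pinCode === undefined) {
        throw new Error('PIN not set');
      }
      try {
        return work(this.pinCode);
      } finally {
        this.pinCode = undefined;
      }
    }
  }
  ```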

In `@src/utils/transaction.ts`:
- Around line 83-84: The helper is using null as a sentinel for optional state;
update the parameter and checks to use undefined instead: change the type
annotations from `{ type?: string | null }` to `{ type?: string }` in
isShieldedOutputEntry and the other occurrence around the code flagged (lines
~193-194), and update any strict equality checks or guards that compare to null
to instead check for undefined or use truthiness (`output.type !== undefined` or
`!!output.type`) so the module follows the repo-wide TS convention of using
undefined for optional/unprovided values.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 480066df-3fde-4f7a-b670-fd2088d6a737

📥 Commits

Reviewing files that changed from the base of the PR and between 0ccc910 and f208a71.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (43)
  • __tests__/headers/shielded_outputs.test.ts
  • __tests__/integration/configuration/docker-compose.yml
  • __tests__/integration/configuration/privnet.yml
  • __tests__/integration/hathorwallet_facade.test.ts
  • __tests__/integration/shielded_transactions.test.ts
  • __tests__/models/address.test.ts
  • __tests__/models/network.test.ts
  • __tests__/models/shielded_output.test.ts
  • __tests__/shielded/processing.test.ts
  • __tests__/storage/storage.test.ts
  • __tests__/wallet/walletServiceStorageProxy.test.ts
  • package.json
  • setupTests-integration.js
  • src/constants.ts
  • src/headers/parser.ts
  • src/headers/shielded_outputs.ts
  • src/headers/types.ts
  • src/lib.ts
  • src/models/address.ts
  • src/models/network.ts
  • src/models/shielded_output.ts
  • src/models/transaction.ts
  • src/new/sendTransaction.ts
  • src/new/types.ts
  • src/new/wallet.ts
  • src/schemas.ts
  • src/shielded/index.ts
  • src/shielded/processing.ts
  • src/shielded/provider.ts
  • src/shielded/types.ts
  • src/storage/memory_store.ts
  • src/storage/storage.ts
  • src/sync/stream.ts
  • src/types.ts
  • src/utils/address.ts
  • src/utils/bigint.ts
  • src/utils/shieldedAddress.ts
  • src/utils/storage.ts
  • src/utils/transaction.ts
  • src/utils/wallet.ts
  • src/wallet/types.ts
  • src/wallet/wallet.ts
  • src/wallet/walletServiceStorageProxy.ts
💤 Files with no reviewable changes (1)
  • setupTests-integration.js

Comment on lines 77 to +78
await sendTx.runFromMining();
// After mining and pushing, the hash should be set (it was null before mining)
expect(tx.hash).not.toBeNull();
expect(typeof tx.nonce).toBe('number');
expect(tx.nonce).toBeGreaterThan(0);
⚠️ Potential issue | 🟡 Minor

Line 78: avoid asserting nonce > 0.

A mined transaction can legitimately end up with nonce 0, so this makes the test flaky. Assert a guaranteed mining side effect instead and keep nonce as a type check only.

Suggested change
     await sendTx.runFromMining();
-    expect(tx.nonce).toBeGreaterThan(0);
+    expect(tx.nonce).toEqual(expect.any(Number));
+    expect(tx.parents.length).toBeGreaterThan(0);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/hathorwallet_facade.test.ts` around lines 77 - 78, The
test currently asserts expect(tx.nonce).toBeGreaterThan(0) which is flaky;
change it to assert a guaranteed mining side effect (for example, assert tx.hash
is truthy or the transaction status/confirmation exists) and only perform a type
check on the nonce. Specifically, update the assertions after
sendTx.runFromMining() to something like: assert tx.hash (or
tx.status/confirmation) is present to prove it was mined, and replace the strict
numeric assertion with a type check like typeof tx.nonce === 'number'
(referencing sendTx.runFromMining and tx.nonce).

Comment thread __tests__/shielded/processing.test.ts
Comment thread __tests__/shielded/processing.test.ts Outdated
Comment thread src/headers/shielded_outputs.ts
Comment thread src/headers/shielded_outputs.ts
Comment thread src/types.ts
Comment thread src/utils/bigint.ts Outdated
Comment on lines +34 to +38
// Hermes (React Native) doesn't support the context parameter in JSON.parse reviver.
// Fall back to returning the value as-is when context is unavailable.
if (!context?.source) {
return value;
}
⚠️ Potential issue | 🟠 Major


Fail fast on unsafe integers when reviver context is missing.

On line 36, Hermes will return the already-parsed number unchanged. Any integer outside the safe range loses precision during JSON.parse before the reviver runs, so bigint-valued fields become silently corrupted instead of failing deterministically.

💡 Minimal fix
     // Hermes (React Native) doesn't support the context parameter in JSON.parse reviver.
     // Fall back to returning the value as-is when context is unavailable.
     if (!context?.source) {
+      if (Number.isInteger(value) && !Number.isSafeInteger(value)) {
+        throw new Error('JSONBigInt.parse cannot safely recover an unsafe integer in this runtime');
+      }
       return value;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/bigint.ts` around lines 34 - 38, When the reviver runs in Hermes
and context?.source is missing, do not silently return numeric values; instead
detect numeric values that are outside Number.isSafeInteger and immediately
throw a descriptive error to fail fast. Update the reviver in
src/utils/bigint.ts (the branch that currently does "if (!context?.source) {
return value; }") to check if typeof value === 'number' and
!Number.isSafeInteger(value) and throw a clear error mentioning the unsafe
integer and the reviver/context, otherwise return the value as before. This
ensures bigint-valued fields are not silently corrupted when the reviver context
is unavailable.

Comment thread src/utils/storage.ts
Comment thread src/utils/transaction.ts
Comment on lines +668 to +695
if (txData.shieldedOutputs && txData.shieldedOutputs.length > 0) {
const shieldedModels = txData.shieldedOutputs.map(so => {
if (!so.commitment || !so.rangeProof || !so.script || !so.ephemeralPubkey) {
throw new Error('Shielded output missing required crypto fields (commitment, rangeProof, script, ephemeralPubkey)');
}
const tokenData = this.getTokenDataFromOutput(
{
type: 'p2pkh',
token: so.token,
value: so.value,
authorities: 0n,
address: so.address,
timelock: null,
},
txData.tokens,
);

return new ShieldedOutput(
so.mode,
so.commitment,
so.rangeProof,
tokenData,
Buffer.from(so.script, 'hex'),
so.ephemeralPubkey,
so.assetCommitment,
so.value,
so.surjectionProof,
);
⚠️ Potential issue | 🟠 Major

Validate FULLY_SHIELDED-specific fields before building the model.

Only the shared fields are checked here. A FULLY_SHIELDED entry with missing assetCommitment or surjectionProof will still construct a ShieldedOutput and fail later when the tx is serialized or processed.

🧪 Suggested fix
         const shieldedModels = txData.shieldedOutputs.map(so => {
           if (!so.commitment || !so.rangeProof || !so.script || !so.ephemeralPubkey) {
             throw new Error('Shielded output missing required crypto fields (commitment, rangeProof, script, ephemeralPubkey)');
           }
+          if (
+            so.mode === ShieldedOutputMode.FULLY_SHIELDED &&
+            (!so.assetCommitment || !so.surjectionProof)
+          ) {
+            throw new Error(
+              'Fully shielded output missing required crypto fields (assetCommitment, surjectionProof)'
+            );
+          }
           const tokenData = this.getTokenDataFromOutput(
             {
               type: 'p2pkh',
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/transaction.ts` around lines 668 - 695, The shielded output builder
in the txData.shieldedOutputs.map block currently only validates common crypto
fields; update the validation to also check mode-specific fields before
constructing ShieldedOutput: if so.mode === 'FULLY_SHIELDED' (or the enum value
used) ensure so.assetCommitment and so.surjectionProof are present and throw a
clear Error if missing. Modify the map in the same function where ShieldedOutput
is instantiated to perform this additional check (alongside the existing checks
for commitment/rangeProof/script/ephemeralPubkey) so malformed FULLY_SHIELDED
entries fail fast with a descriptive message.

Comment thread src/utils/wallet.ts Outdated
Comment on lines +623 to +631
// Derive shielded spend key if root key is available
// Scan key uses the same account as legacy (m/44'/280'/0'/0), so no separate derivation needed.
if (argXpriv.depth === 0) {
const spendAcctXpriv = argXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
const spendXpriv2 = spendAcctXpriv.deriveNonCompliantChild(0);

accessData.spendXpubkey = spendXpriv2.xpubkey;
accessData.spendMainKey = encryptData(spendXpriv2.xprivkey, pin);
}
⚠️ Potential issue | 🔴 Critical

Do not derive single-sig shielded keys for multisig wallets.

These branches also run for WalletType.MULTISIG, but m/44'/280'/1'/0 is a plain single-sig path unrelated to the wallet’s m/45' multisig account. Persisting it makes a multisig wallet look shielded-capable with keys that one signer controls alone, which breaks the wallet’s signing model if those paths are ever used.

🔒 Safer guard
-    if (argXpriv.depth === 0) {
+    if (argXpriv.depth === 0 && walletType === WalletType.P2PKH) {
       const spendAcctXpriv = argXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
       const spendXpriv2 = spendAcctXpriv.deriveNonCompliantChild(0);

       accessData.spendXpubkey = spendXpriv2.xpubkey;
       accessData.spendMainKey = encryptData(spendXpriv2.xprivkey, pin);
     }
-    const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
-    const spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+    let spendXpriv: HDPrivateKey | undefined;
+    if (walletType === WalletType.P2PKH) {
+      const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
+      spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+    }
...
-      spendXpubkey: spendXpriv.xpubkey,
-      spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+      ...(spendXpriv && {
+        spendXpubkey: spendXpriv.xpubkey,
+        spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+      }),

Also applies to: 704-719

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/wallet.ts` around lines 623 - 631, The code is deriving and
persisting single-sig shielded keys even for multisig wallets; update the guard
around the derivation (the block using
argXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH) that sets
accessData.spendXpubkey and accessData.spendMainKey via encryptData) to skip
this entire derivation when walletType === WalletType.MULTISIG (or equivalent
multisig indicator), so multisig wallets never get single-sig shielded keys
stored; apply the same check to the other similar branch mentioned (the other
place that derives shielded keys).

@@ -4,18 +4,16 @@ services:
# All the following services are related to the core of the Private Network
Member Author
TODO we must make it work without cpuminer, using the dev miner

  - Fix addTx normalization so shielded outputs are processed during wallet
    restart/history sync (not just websocket path)
  - Split MAX_OUTPUTS validation: transparent (255) and shielded (32)
    limits are independent, matching hathor-core
  - Fix FullShielded serialization to throw on missing required fields
    instead of silently using zero buffers
  - Revert unnecessary enqueueOnNewTx live queue rewrite back to original
    promise chain
  - Revert unnecessary bigIntReviver Hermes workaround
  - Extract shielded output creation logic into src/shielded/creation.ts
  - Extract publicKeyToP2PKH helper, reuse in shieldedAddress.ts
  - Deduplicate header serialization via serializeWith helper
  - Reuse tokenUtils.getTokenIndexFromData in resolveTokenUid
  - Split Zod output schema into Transparent | AmountShielded | FullShielded
  - Add integration tests for wallet restart, FullShielded balance, mixed
    modes, fee validation, and custom token shielded outputs (skipped with
    known bug documented)
  - Add TODO comments for IHistoryOutput type refactor, pinCode clearing,
    and other follow-up items
  - Improve code comments throughout for clarity

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@pedroferreira1 pedroferreira1 force-pushed the feat/shielded-outputs-integration branch from 89a0a76 to 62612cc Compare April 14, 2026 13:15
@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/new/sendTransaction.ts (1)

314-330: ⚠️ Potential issue | 🟠 Major

Use the final fee when validating extra HTR inputs.

Now that shielded fees are added here, the requiresFees guard just above still checks the transparent-only fee. A custom-token shielded send with user-supplied HTR fee inputs will still raise INVALID_INPUT even when totalFee > 0.

🛠️ Proposed fix
-    if (requiresFees.length > 0 && fee === 0n) {
-      const err = new SendTxError(ErrorMessages.INVALID_INPUT);
-      err.errorData = requiresFees;
-      throw err;
-    }
-
     // Calculate shielded output fee
     let shieldedFee = 0n;
     for (const def of shieldedOutputDefs) {
       if (def.shieldedMode === ShieldedOutputMode.FULLY_SHIELDED) {
         shieldedFee += FEE_PER_FULL_SHIELDED_OUTPUT;
@@
 
     const totalFee = fee + shieldedFee;
+
+    if (requiresFees.length > 0 && totalFee === 0n) {
+      const err = new SendTxError(ErrorMessages.INVALID_INPUT);
+      err.errorData = requiresFees;
+      throw err;
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/new/sendTransaction.ts` around lines 314 - 330, The current validation
uses the transparent-only fee variable (`fee`) to decide if HTR-paying inputs
are required (`requiresFees`), but shielded output fees (computed via
`shieldedOutputDefs` into `shieldedFee`) are added afterwards to produce
`totalFee`; update the guard that throws
`SendTxError(ErrorMessages.INVALID_INPUT)` to consider the combined `totalFee`
instead of `fee` (i.e., compute `shieldedFee` first or compute `totalFee` before
checking `requiresFees`) so that a nonzero `totalFee` prevents the invalid-input
error when `requiresFees.length > 0`; keep the `err.errorData = requiresFees`
behavior and the `SendTxError`/`ErrorMessages.INVALID_INPUT` usage.
src/types.ts (1)

277-297: ⚠️ Potential issue | 🟠 Major

IHistoryOutput still encodes impossible required fields for shielded entries.

Line 283 through Line 288 keep transparent-only fields required, but shielded entries can omit them. This leaves the public type unsound and encourages unsafe field access.

Proposed typing fix (discriminated union)
-export interface IHistoryOutput {
-  value: OutputValueType;
-  token_data: number;
-  script: string;
-  decoded: IHistoryOutputDecoded;
-  token: string;
-  spent_by: string | null;
-  selected_as_input?: boolean;
-  // Shielded output entries (appended by fullnode's to_json_extended) have these instead of value/token:
-  type?: string;           // 'shielded' for shielded output entries
-  commitment?: string;     // hex
-  range_proof?: string;    // hex
-  ephemeral_pubkey?: string; // hex
-  asset_commitment?: string; // hex (FullShielded only)
-  surjection_proof?: string; // hex (FullShielded only)
-}
+export interface IHistoryTransparentOutput {
+  type?: undefined;
+  value: OutputValueType;
+  token_data: number;
+  script: string;
+  decoded: IHistoryOutputDecoded;
+  token: string;
+  spent_by: string | null;
+  selected_as_input?: boolean;
+}
+
+export interface IHistoryShieldedOutputEntry {
+  type: 'shielded';
+  commitment: string;
+  range_proof: string;
+  ephemeral_pubkey: string;
+  asset_commitment?: string;
+  surjection_proof?: string;
+  // explicitly absent on shielded entries from to_json_extended
+  value?: never;
+  token_data?: never;
+  script?: never;
+  decoded?: never;
+  token?: never;
+  spent_by?: never;
+}
+
+export type IHistoryOutput = IHistoryTransparentOutput | IHistoryShieldedOutputEntry;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/types.ts` around lines 277 - 297, The current IHistoryOutput mixes
required transparent-only fields with shielded entries; change it to a
discriminated union to make the types sound: define a common base (e.g.,
HistoryOutputBase) for shared optional fields, then create ITransparentOutput
(no "type" field or type?: undefined) with required fields value, token_data,
script, decoded, token, spent_by, and create IShieldedOutputEntry with type:
'shielded' and the shielded-specific fields (commitment, range_proof,
ephemeral_pubkey, asset_commitment, surjection_proof) and omit the
transparent-only fields; finally set IHistoryOutput = ITransparentOutput |
IShieldedOutputEntry and ensure callers narrow on the discriminant (type ===
'shielded') before accessing shielded-only fields or assume transparent fields
otherwise.
♻️ Duplicate comments (5)
src/headers/shielded_outputs.ts (1)

53-109: ⚠️ Potential issue | 🟠 Major

Reject malformed shielded headers instead of slicing through them.

Every field here is read with subarray() and no bounds checks, and unsupported mode values still fall through to the ephemeral-pubkey read. A truncated or unknown payload can desync buf and deserialize as a bogus header instead of failing fast. Apply the same guard to every fixed-size and length-prefixed read.

🛠️ Suggested direction
       let mode: number;
       [mode, buf] = unpackToInt(1, false, buf);
+      if (
+        mode !== ShieldedOutputMode.AMOUNT_SHIELDED &&
+        mode !== ShieldedOutputMode.FULLY_SHIELDED
+      ) {
+        throw new Error(`Unsupported shielded output mode: ${mode}`);
+      }
 
       // Commitment (33 bytes)
+      if (buf.length < 33) {
+        throw new Error('Truncated shielded output commitment');
+      }
       const commitment = Buffer.from(buf.subarray(0, 33));
       buf = buf.subarray(33);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/headers/shielded_outputs.ts` around lines 53 - 109, The deserialize
function is vulnerable to out-of-bounds reads and silent desyncs; before every
fixed-size read (33-byte commitments, ephemeralPubkey, assetCommitment) and
before consuming variable-length fields (after unpackToInt for rpLen, scriptLen,
spLen) assert buf.length >= requiredSize and throw an Error if not, and
explicitly validate the mode read from unpackToInt against ShieldedOutputMode
(handle AMOUNT_SHIELDED and FULLY_SHIELDED, otherwise throw) so unsupported
modes do not fall through; update logic in deserialize, referencing
getVertexHeaderIdFromBuffer, unpackToInt, ShieldedOutputMode, and the local
variables commitment, rangeProof, script, assetCommitment, surjectionProof,
ephemeralPubkey to perform these length checks and fail fast on malformed input.
src/new/wallet.ts (1)

1477-1502: ⚠️ Potential issue | 🔴 Critical

Preserve on-chain output indexes.

Replacing newTx.outputs with a compacted transparent-only array renumbers every transparent output after the first shielded entry. The rest of the wallet still uses array position as the canonical output index, so mixed transactions will save/spend the wrong UTXOs.

🛠️ Suggested fix
     if (!newTx.shielded_outputs) {
       const shieldedEntries: IHistoryShieldedOutput[] = [];
-      const transparentOutputs: IHistoryOutput[] = [];
       for (const output of newTx.outputs) {
         if (transactionUtils.isShieldedOutputEntry(output)) {
           shieldedEntries.push({
             mode: output.asset_commitment ? 2 : 1, // FullShielded=2, AmountShielded=1
             commitment: output.commitment!, // already hex
@@
             surjection_proof: output.surjection_proof
               ? Buffer.from(output.surjection_proof, 'base64').toString('hex')
               : undefined,
           });
-        } else {
-          transparentOutputs.push(output);
         }
       }
       if (shieldedEntries.length > 0) {
         newTx.shielded_outputs = shieldedEntries;
-        newTx.outputs = transparentOutputs;
       }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/new/wallet.ts` around lines 1477 - 1502, The current code compacts
newTx.outputs into transparentOutputs, which shifts array indexes and breaks
UTXO indexing; instead, keep newTx.outputs intact and only extract shielded
entries into newTx.shielded_outputs without reassigning newTx.outputs. Locate
the block using newTx, shielded_outputs, outputs and
transactionUtils.isShieldedOutputEntry and change the logic so you push shielded
entries into shieldedEntries but do not replace newTx.outputs (or if you must
remove shielded entries, replace them with a stable placeholder object that
preserves the original array length/indexes) and then set newTx.shielded_outputs
= shieldedEntries when non-empty.
src/utils/storage.ts (1)

328-347: ⚠️ Potential issue | 🟠 Major

Ignore shielded cursors when shielded derivation is disabled.

These calculations still fold shieldedLastLoadedAddressIndex into Math.min(...) / target selection even when the wallet has no shielded keys. With the default -1, both policies restart from index 0 and re-fetch the whole prefix instead of only the missing suffix.

🛠️ Suggested direction
-  const lastLoaded = Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex);
+  const hasShieldedKeys = !!(await storage.getAccessData())?.spendXpubkey;
+  const effectiveShieldedLastLoaded = hasShieldedKeys
+    ? shieldedLastLoadedAddressIndex
+    : lastLoadedAddressIndex;
+  const lastLoaded = Math.min(lastLoadedAddressIndex, effectiveShieldedLastLoaded);

Use the same guard for shieldedTarget / minLastLoaded in checkGapLimit().

Also applies to: 378-399

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/storage.ts` around lines 328 - 347, The code currently uses
Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex) even when
shielded derivation is disabled (shieldedLastLoadedAddressIndex is -1), causing
unnecessary re-fetches; update the logic where lastLoaded/minLastLoaded and
shieldedTarget are computed to ignore shieldedLastLoadedAddressIndex when it's
negative (treat <0 as "no shielded keys") — e.g., in the block using
lastLoadedAddressIndex and shieldedLastLoadedAddressIndex replace the Math.min
call with a guarded computation that uses shieldedLastLoadedAddressIndex only if
shieldedLastLoadedAddressIndex >= 0, and apply the same guard in checkGapLimit()
where shieldedTarget/minLastLoaded are calculated so shielded cursors are
skipped when shielded derivation is disabled.
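The cursor guard the prompt describes reduces to a small pure helper; a sketch assuming a negative shielded cursor means shielded derivation is disabled (the helper name is illustrative):

```typescript
// Hypothetical helper: a negative shielded cursor (the -1 default) means
// the wallet has no shielded keys, so only the transparent cursor counts.
function effectiveLastLoaded(
  lastLoadedAddressIndex: number,
  shieldedLastLoadedAddressIndex: number
): number {
  if (shieldedLastLoadedAddressIndex < 0) {
    return lastLoadedAddressIndex;
  }
  return Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex);
}
```

Both call sites (the load-addresses path and `checkGapLimit()`) could then share the same helper instead of duplicating the guard.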
src/types.ts (2)

657-660: ⚠️ Potential issue | 🟠 Major

Allow clearing cached PIN via the interface.

Line 658 declares pinCode?: string, but Line 659 requires a string argument. This prevents explicit in-memory secret clearing through the API.

Minimal fix
-  setPinCode(pinCode: string): void;
+  setPinCode(pinCode?: string): void;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/types.ts` around lines 657 - 660, The interface currently defines
pinCode?: string but setPinCode(pinCode: string): void requires a string,
preventing callers from clearing the cached PIN; change the setPinCode signature
to accept an optional/nullable value (e.g. setPinCode(pinCode?: string | null):
void) and update any implementations of setPinCode and callers to handle
undefined/null as “clear the in-memory PIN” while preserving existing behavior
when a string is passed; update types in src/types.ts for the pinCode and
setPinCode symbols accordingly.

720-724: ⚠️ Potential issue | 🟠 Major

getSpendXPrivKey return type should model old-wallet absence.

spendMainKey is optional in IWalletAccessData (Line 446), so Line 722 should not guarantee Promise<string>.

Minimal fix
-  getSpendXPrivKey(pinCode: string): Promise<string>;
+  getSpendXPrivKey(pinCode: string): Promise<string | undefined>;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/types.ts` around lines 720 - 724, The TypeScript interface declares
getSpendXPrivKey(pinCode: string): Promise<string> but IWalletAccessData marks
spendMainKey as optional, so update the getSpendXPrivKey signature to return
Promise<string | undefined> to model absence of the old-wallet key; locate and
change the method declaration for getSpendXPrivKey in the interface in
src/types.ts (and adjust any dependent types/usages to handle undefined) to keep
signatures consistent with IWalletAccessData.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/utils/storage.ts`:
- Around line 840-847: processShieldedOutputs(...) returns an object containing
an outputs array, but the code currently iterates the wrapper (shieldedResults)
which causes a runtime error; change the handling after calling
processShieldedOutputs in src/utils/storage.ts (the variable shieldedResults) to
access and iterate the outputs array (e.g., shieldedResults.outputs or
destructure { outputs }) and then loop over those decrypted outputs to record
shielded UTXOs and balances instead of iterating shieldedResults directly.
- Around line 752-760: The code checks addressInfo.addressType for
'shielded-spend' but deriveShieldedAddressFromStorage() persists the
discriminator as 'shielded_spend', causing shielded addresses to be
misclassified and shieldedMaxIndexUsed to never advance; update the comparisons
that test addressInfo.addressType (the occurrences around the block that updates
shieldedMaxIndexUsed and legacyMaxIndexUsed and the other two occurrences
mentioned) to use 'shielded_spend' instead of 'shielded-spend' so shielded
addresses are recognized consistently; verify the three locations (the branch
updating shieldedMaxIndexUsed, and the two similar branches around lines
~891-899 and ~920-928) are changed to the underscore form.

---

Outside diff comments:
In `@src/new/sendTransaction.ts`:
- Around line 314-330: The current validation uses the transparent-only fee
variable (`fee`) to decide if HTR-paying inputs are required (`requiresFees`),
but shielded output fees (computed via `shieldedOutputDefs` into `shieldedFee`)
are added afterwards to produce `totalFee`; update the guard that throws
`SendTxError(ErrorMessages.INVALID_INPUT)` to consider the combined `totalFee`
instead of `fee` (i.e., compute `shieldedFee` first or compute `totalFee` before
checking `requiresFees`) so that a nonzero `totalFee` prevents the invalid-input
error when `requiresFees.length > 0`; keep the `err.errorData = requiresFees`
behavior and the `SendTxError`/`ErrorMessages.INVALID_INPUT` usage.

In `@src/types.ts`:
- Around line 277-297: The current IHistoryOutput mixes required
transparent-only fields with shielded entries; change it to a discriminated
union to make the types sound: define a common base (e.g., HistoryOutputBase)
for shared optional fields, then create ITransparentOutput (no "type" field or
type?: undefined) with required fields value, token_data, script, decoded,
token, spent_by, and create IShieldedOutputEntry with type: 'shielded' and the
shielded-specific fields (commitment, range_proof, ephemeral_pubkey,
asset_commitment, surjection_proof) and omit the transparent-only fields;
finally set IHistoryOutput = ITransparentOutput | IShieldedOutputEntry and
ensure callers narrow on the discriminant (type === 'shielded') before accessing
shielded-only fields or assume transparent fields otherwise.

---

Duplicate comments:
In `@src/headers/shielded_outputs.ts`:
- Around line 53-109: The deserialize function is vulnerable to out-of-bounds
reads and silent desyncs; before every fixed-size read (33-byte commitments,
ephemeralPubkey, assetCommitment) and before consuming variable-length fields
(after unpackToInt for rpLen, scriptLen, spLen) assert buf.length >=
requiredSize and throw an Error if not, and explicitly validate the mode read
from unpackToInt against ShieldedOutputMode (handle AMOUNT_SHIELDED and
FULLY_SHIELDED, otherwise throw) so unsupported modes do not fall through;
update logic in deserialize, referencing getVertexHeaderIdFromBuffer,
unpackToInt, ShieldedOutputMode, and the local variables commitment, rangeProof,
script, assetCommitment, surjectionProof, ephemeralPubkey to perform these
length checks and fail fast on malformed input.

In `@src/new/wallet.ts`:
- Around line 1477-1502: The current code compacts newTx.outputs into
transparentOutputs, which shifts array indexes and breaks UTXO indexing;
instead, keep newTx.outputs intact and only extract shielded entries into
newTx.shielded_outputs without reassigning newTx.outputs. Locate the block using
newTx, shielded_outputs, outputs and transactionUtils.isShieldedOutputEntry and
change the logic so you push shielded entries into shieldedEntries but do not
replace newTx.outputs (or if you must remove shielded entries, replace them with
a stable placeholder object that preserves the original array length/indexes)
and then set newTx.shielded_outputs = shieldedEntries when non-empty.

In `@src/types.ts`:
- Around line 657-660: The interface currently defines pinCode?: string but
setPinCode(pinCode: string): void requires a string, preventing callers from
clearing the cached PIN; change the setPinCode signature to accept an
optional/nullable value (e.g. setPinCode(pinCode?: string | null): void) and
update any implementations of setPinCode and callers to handle undefined/null as
“clear the in-memory PIN” while preserving existing behavior when a string is
passed; update types in src/types.ts for the pinCode and setPinCode symbols
accordingly.
- Around line 720-724: The TypeScript interface declares
getSpendXPrivKey(pinCode: string): Promise<string> but IWalletAccessData marks
spendMainKey as optional, so update the getSpendXPrivKey signature to return
Promise<string | undefined> to model absence of the old-wallet key; locate and
change the method declaration for getSpendXPrivKey in the interface in
src/types.ts (and adjust any dependent types/usages to handle undefined) to keep
signatures consistent with IWalletAccessData.

In `@src/utils/storage.ts`:
- Around line 328-347: The code currently uses Math.min(lastLoadedAddressIndex,
shieldedLastLoadedAddressIndex) even when shielded derivation is disabled
(shieldedLastLoadedAddressIndex is -1), causing unnecessary re-fetches; update
the logic where lastLoaded/minLastLoaded and shieldedTarget are computed to
ignore shieldedLastLoadedAddressIndex when it's negative (treat <0 as "no
shielded keys") — e.g., in the block using lastLoadedAddressIndex and
shieldedLastLoadedAddressIndex replace the Math.min call with a guarded
computation that uses shieldedLastLoadedAddressIndex only if
shieldedLastLoadedAddressIndex >= 0, and apply the same guard in checkGapLimit()
where shieldedTarget/minLastLoaded are calculated so shielded cursors are
skipped when shielded derivation is disabled.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9ff1cdb9-b037-47d6-a154-6121c002c565

📥 Commits

Reviewing files that changed from the base of the PR and between f208a71 and 89a0a76.

📒 Files selected for processing (20)
  • __tests__/integration/hathorwallet_facade.test.ts
  • __tests__/integration/shielded_transactions.test.ts
  • __tests__/models/network.test.ts
  • __tests__/models/shielded_output.test.ts
  • src/constants.ts
  • src/headers/shielded_outputs.ts
  • src/models/shielded_output.ts
  • src/models/transaction.ts
  • src/new/sendTransaction.ts
  • src/new/wallet.ts
  • src/schemas.ts
  • src/shielded/creation.ts
  • src/shielded/processing.ts
  • src/storage/memory_store.ts
  • src/storage/storage.ts
  • src/types.ts
  • src/utils/address.ts
  • src/utils/bigint.ts
  • src/utils/shieldedAddress.ts
  • src/utils/storage.ts
✅ Files skipped from review due to trivial changes (2)
  • src/models/shielded_output.ts
  • __tests__/models/shielded_output.test.ts
🚧 Files skipped from review as they are similar to previous changes (10)
  • __tests__/models/network.test.ts
  • src/schemas.ts
  • __tests__/integration/hathorwallet_facade.test.ts
  • src/utils/bigint.ts
  • src/shielded/processing.ts
  • src/utils/address.ts
  • src/storage/memory_store.ts
  • __tests__/integration/shielded_transactions.test.ts
  • src/storage/storage.ts
  • src/utils/shieldedAddress.ts

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (10)
src/types.ts (2)

737-741: ⚠️ Potential issue | 🟠 Major

Return type mismatch for getSpendXPrivKey.

The spendMainKey field is optional in IWalletAccessData, meaning older wallets won't have it. However, getSpendXPrivKey() returns Promise<string> rather than Promise<string | undefined>, which doesn't encode this possibility. The public key accessors (getScanXPubKey, getSpendXPubKey) correctly return Promise<string | undefined>.

-  getSpendXPrivKey(pinCode: string): Promise<string>;
+  getSpendXPrivKey(pinCode: string): Promise<string | undefined>;

Also apply the same fix to getScanXPrivKey for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/types.ts` around lines 737 - 741, The return types for the shielded
private-key accessors are incorrect: change getScanXPrivKey(pinCode: string) and
getSpendXPrivKey(pinCode: string) to return Promise<string | undefined>
(matching getScanXPubKey/getSpendXPubKey and the optional spendMainKey in
IWalletAccessData) so callers can handle missing keys; update the method
signatures in src/types.ts accordingly (keep the parameter names the same).

674-676: ⚠️ Potential issue | 🟠 Major

setPinCode should accept optional parameter to allow clearing.

The pinCode property is optional (pinCode?: string), but setPinCode(pinCode: string) requires a value, providing no interface-level way to clear the cached PIN. This unnecessarily extends the secret's lifetime in memory.

-  setPinCode(pinCode: string): void;
+  setPinCode(pinCode?: string): void;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/types.ts` around lines 674 - 676, The setter signature for pinCode is too
strict: the interface has pinCode?: string but setPinCode(pinCode: string)
requires a non-optional value, preventing callers from clearing the PIN; change
setPinCode to accept an optional value (e.g., pinCode?: string or pinCode:
string | undefined) on the interface/type that declares pinCode and setPinCode
(referencing pinCode and setPinCode in src/types.ts), and then update any
implementing classes/functions and their callers to pass undefined or omit the
argument when clearing the cached PIN.
src/utils/transaction.ts (1)

668-696: ⚠️ Potential issue | 🟠 Major

Missing validation for FULLY_SHIELDED-specific fields.

The validation only checks common fields (commitment, rangeProof, script, ephemeralPubkey). A FULLY_SHIELDED entry missing assetCommitment or surjectionProof will pass validation here but fail during serialization.

Add mode-specific validation:

         if (!so.commitment || !so.rangeProof || !so.script || !so.ephemeralPubkey) {
           throw new Error('Shielded output missing required crypto fields (commitment, rangeProof, script, ephemeralPubkey)');
         }
+        if (
+          so.mode === ShieldedOutputMode.FULLY_SHIELDED &&
+          (!so.assetCommitment || !so.surjectionProof)
+        ) {
+          throw new Error(
+            'Fully shielded output missing required crypto fields (assetCommitment, surjectionProof)'
+          );
+        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/transaction.ts` around lines 668 - 696, The current validation in
the txData.shieldedOutputs mapping only enforces common crypto fields and misses
mode-specific requirements; update the validation inside the
txData.shieldedOutputs.map callback (where ShieldedOutput is constructed) to
additionally check when so.mode === 'FULLY_SHIELDED' that so.assetCommitment and
so.surjectionProof are present (non-null/defined) and throw a clear Error if
they are missing so construction/serialization won't fail later.
src/shielded/provider.ts (2)

29-35: ⚠️ Potential issue | 🟠 Major

Missing validation for negative bigint values.

The range check only rejects values above MAX_SAFE_INTEGER but allows negative values through. Since these represent output amounts, negative values should also be rejected to prevent unexpected behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shielded/provider.ts` around lines 29 - 35, The
createAmountShieldedOutput function currently only checks for values above
Number.MAX_SAFE_INTEGER but allows negative BigInt inputs; add a validation to
reject value < 0n and throw a clear Error (e.g., "Shielded output value must be
non-negative") before calling ct.createAmountShieldedOutput. Update the guard
around value (the parameter named value used in createAmountShieldedOutput and
passed to ct.createAmountShieldedOutput) to enforce 0n <= value <=
BigInt(Number.MAX_SAFE_INTEGER) and keep the existing error behavior for
out-of-range values.
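A minimal sketch of the combined guard (the function name and error text are illustrative, not the provider's actual API):

```typescript
// Hypothetical range guard: rejects negative amounts as well as amounts
// that cannot be represented exactly as a JS number.
function toSafeAmount(value: bigint): number {
  if (value < 0n || value > BigInt(Number.MAX_SAFE_INTEGER)) {
    throw new Error('Shielded output value out of safe range');
  }
  return Number(value);
}
```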

87-92: ⚠️ Potential issue | 🟠 Major

toSafeNumber helper also misses negative check.

The helper function defined in computeBalancingBlindingFactor has the same issue - it should also reject negative values for inputs/outputs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shielded/provider.ts` around lines 87 - 92, The helper toSafeNumber
currently only checks for values greater than Number.MAX_SAFE_INTEGER but misses
negative values; update the toSafeNumber used in the shielded code (and the
duplicate helper inside computeBalancingBlindingFactor) to also validate that v
is >= 0 and throw a clear Error (e.g., 'Shielded output value out of safe
range') when v is negative or exceeds MAX_SAFE_INTEGER so both conversion sites
reject negative bigint inputs.
src/utils/wallet.ts (2)

704-719: ⚠️ Potential issue | 🟠 Major

Same multisig concern applies to generateAccessDataFromSeed.

This function unconditionally derives shielded spend keys even when walletType === WalletType.MULTISIG. Apply the same guard as suggested for generateAccessDataFromXpriv.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/wallet.ts` around lines 704 - 719, In generateAccessDataFromSeed,
avoid unconditionally deriving shielded spend keys when walletType ===
WalletType.MULTISIG: wrap the spendAcctXpriv/spendXpriv derivation and the use
of spendXpubkey and spendMainKey (and encryptData(spendXpriv.xprivkey, pin))
behind a guard that skips derivation for multisig wallets and instead returns
null/undefined (or an appropriate empty value) for spendXpubkey and
spendMainKey; reference the existing symbols spendAcctXpriv, spendXpriv,
spendXpubkey, spendMainKey, walletType and WalletType.MULTISIG to locate and
change the logic.
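Both access-data generators can share one predicate for this guard; a sketch with stand-in names (the real code would use the library's own WalletType enum):

```typescript
// Stand-in for the library's wallet-type discriminator.
enum WalletType {
  P2PKH = 'p2pkh',
  MULTISIG = 'multisig',
}

// Shielded spend keys are single-sig by construction, so derive them only
// for P2PKH wallets; multisig wallets skip derivation entirely.
function shouldDeriveShieldedKeys(walletType: WalletType): boolean {
  return walletType === WalletType.P2PKH;
}
```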

623-631: ⚠️ Potential issue | 🟠 Major

Shielded keys are derived for all wallet types including multisig.

This code derives single-sig shielded keys (m/44'/280'/1'/0) even for WalletType.MULTISIG wallets. This could break the multisig signing model if these keys are ever used, as one signer would control them alone.

Consider guarding against multisig:

-    if (argXpriv.depth === 0) {
+    if (argXpriv.depth === 0 && walletType === WalletType.P2PKH) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/wallet.ts` around lines 623 - 631, The current block
unconditionally derives shielded spend keys from argXpriv (using
SHIELDED_SPEND_ACCT_PATH and storing into accessData.spendXpubkey /
accessData.spendMainKey), which will produce single-sig shielded keys even for
multisig wallets; update the logic to skip deriving/encrypting these keys when
the wallet type is WalletType.MULTISIG (or any non-single-sig type) by checking
the wallet type before calling deriveNonCompliantChild and encryptData, so
multisig flows do not get private shielded keys created for a single signer.
src/models/transaction.ts (1)

369-372: ⚠️ Potential issue | 🟠 Major

shieldedOutputs is not populated during deserialization.

The getOutputsSum() method now includes this.shieldedOutputs, but createFromBytes() and getHeadersFromBytes() never rebuild this array from the serialized ShieldedOutputsHeader. Transactions parsed from bytes will have an empty shieldedOutputs array, causing incorrect weight calculations and validation.

Either populate shieldedOutputs from the header during deserialization, or derive the sum directly from the parsed header.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/models/transaction.ts` around lines 369 - 372, createFromBytes() and
getHeadersFromBytes() currently ignore the parsed ShieldedOutputsHeader so
this.shieldedOutputs stays empty; update those deserializers to rebuild/populate
the shieldedOutputs array from the parsed ShieldedOutputsHeader (or at minimum
compute and set the shielded outputs sum) so getOutputsSum() sees the real
shielded values. Specifically, in getHeadersFromBytes()/createFromBytes(),
locate where ShieldedOutputsHeader is parsed and either (a) iterate the header
entries to create shielded output objects (with .value populated) and assign
them to this.shieldedOutputs, or (b) derive and store the shielded sum from the
header into a field used by getOutputsSum(); ensure the produced shielded output
objects match the shape expected by getOutputsSum().
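Option (b) of the suggestion, deriving the sum from the parsed header rather than rebuilding full output objects, could look like this sketch (the entry shape is an assumption; in the real header the plaintext value would have to come from decoding the entry):

```typescript
// Assumed minimal shape of a parsed shielded header entry; only the
// decoded value matters for getOutputsSum().
interface ParsedShieldedEntry {
  value: bigint;
}

// Fold the parsed entries into one sum that getOutputsSum() can add to
// the transparent total after deserialization.
function sumShieldedEntries(entries: ParsedShieldedEntry[]): bigint {
  return entries.reduce((acc, entry) => acc + entry.value, 0n);
}
```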
src/new/wallet.ts (1)

864-872: ⚠️ Potential issue | 🟠 Major

Return a shielded-chain path for shielded current addresses.

When opts?.legacy === false, this method can return a shielded address but still derives addressPath through getAddressPathForIndex(), which only builds transparent P2PKH/P2SH paths. Any caller that displays or re-derives from the returned path gets the wrong chain. getNextAddress() inherits the same bug.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/new/wallet.ts` around lines 864 - 872, The current getCurrentAddress (and
likewise getNextAddress) always computes addressPath via
getAddressPathForIndex(), which only builds transparent P2PKH/P2SH paths and
thus returns incorrect paths for shielded addresses when opts?.legacy === false;
update both methods to detect shielded addresses (e.g., via opts?.legacy flag or
the address format returned from storage.getCurrentAddress), and when a shielded
address is returned call the appropriate shielded-path builder (e.g.,
getShieldedAddressPathForIndex or equivalent) or otherwise compute a shielded
path instead of getAddressPathForIndex; also handle null/undefined index from
getAddressIndex() gracefully and keep transparent flows unchanged.
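A path builder covering both chains might look like the following sketch. The shielded account 1' comes from the m/44'/280'/1'/0 path cited in the wallet-keys comment above; the transparent account 0' and the function name are assumptions:

```typescript
// Hypothetical builder: account 1' for the shielded chain (per the
// m/44'/280'/1'/0 path mentioned in the review), account 0' assumed for
// the transparent single-sig chain.
function addressPathForIndex(index: number, shielded: boolean): string {
  const account = shielded ? 1 : 0;
  return `m/44'/280'/${account}'/0/${index}`;
}
```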
src/headers/shielded_outputs.ts (1)

67-109: ⚠️ Potential issue | 🟠 Major

Reject truncated and unsupported shielded entries during parsing.

This loop still slices fixed-size and length-prefixed fields without checking the remaining bytes, and it still accepts unknown mode values by falling through to the ephemeral-pubkey read. A short or invalid header can therefore build undersized ShieldedOutputs or misalign the buffer for the next entry instead of failing fast.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/headers/shielded_outputs.ts` around lines 67 - 109, The parsing loop for
shielded outputs must reject truncated or unknown entries: before each
fixed-size slice (commitment, assetCommitment, ephemeralPubkey) and before each
length-prefixed read (rangeProof, script, surjectionProof) check that buf.length
is >= the required bytes and that unpackToInt returned a sane length
(non-negative and <= buf.length); if not, throw a parse error. Also validate the
mode value read via unpackToInt against known ShieldedOutputMode values
(ShieldedOutputMode.AMOUNT_SHIELDED and ShieldedOutputMode.FULLY_SHIELDED) and
throw on unknown modes instead of falling through. Apply these checks around the
unpackToInt calls and the subsequent Buffer.subarray uses (the loop iterating
numOutputs, variables: mode, rpLen, scriptLen, spLen, commitment, rangeProof,
script, assetCommitment, surjectionProof, ephemeralPubkey) so you fail fast on
truncation or unsupported modes and avoid misaligning subsequent entries.
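One way to make every read fail fast is to funnel all slicing through a bounds-checked reader; a self-contained sketch (the class name and error text are illustrative):

```typescript
// Sketch of a fail-fast reader: every slice goes through take(), which
// throws on truncated input instead of returning a short buffer and
// desyncing the fields that follow.
class SafeReader {
  private buf: Uint8Array;

  constructor(buf: Uint8Array) {
    this.buf = buf;
  }

  remaining(): number {
    return this.buf.length;
  }

  take(n: number): Uint8Array {
    if (n < 0 || this.buf.length < n) {
      throw new Error(
        `Truncated shielded header: need ${n} bytes, have ${this.buf.length}`
      );
    }
    const out = this.buf.subarray(0, n);
    this.buf = this.buf.subarray(n);
    return out;
  }
}
```

Fixed-size fields (33-byte commitments, the ephemeral pubkey) and length-prefixed fields (range proof, script, surjection proof) would all go through `take()`, so a short buffer surfaces as an immediate parse error rather than a misaligned next entry.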
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@__tests__/integration/configuration/docker-compose.yml`:
- Around line 240-241: Update the hardcoded port numbers from 3000/3001 to
3100/3101 where the test helpers construct service URLs (the variables/constants
that build the localhost URLs in the service-facade helper, e.g., the SERVICE_*
or base URL variables used on lines that create "http://localhost:3000" and
"http://localhost:3001") and change the documentation comment in the adapters
types file that mentions the old ports to reflect 3100/3101; ensure any tests or
helper utilities that derive ports from these values use the updated ports and
run the integration tests to verify connectivity.

In `@src/models/shielded_output.ts`:
- Around line 100-105: The current serializers in the ShieldedOutput
serialization code silently replace a missing or wrong-sized ephemeralPubkey
with 33 zero bytes; instead, validate that this.ephemeralPubkey exists and its
length === EPHEMERAL_PUBKEY_SIZE (using the EPHEMERAL_PUBKEY_SIZE constant) and
fail fast (throw an Error or return a serialization error) if validation fails
rather than pushing Buffer.alloc(...). Apply this change to the two
serialization sites that reference ephemeralPubkey (the shown arr.push branch
and the similar block at the 130-135 location) so malformed keys are rejected
instead of being zero-padded.

In `@src/new/sendTransaction.ts`:
- Around line 369-395: The filtered-out phantom outputs removed the only
references to custom-token amounts for shielded outputs so fullTxData.tokens no
longer contains those token ids; fix by ensuring tokens in fullTxData include
both tokenMap keys and any token ids used by shieldedOutputs (created by
createShieldedOutputs) — e.g. after shieldedOutputs is built, merge its token
ids into the token set used for fullTxData.tokens (or update tokenMap
accordingly) so the IDataTx produced in this.fullTxData contains the
custom-token identifiers referenced by shieldedOutputs and avoids melt/authority
mismatches.

In `@src/storage/storage.ts`:
- Around line 186-192: Storage currently only exposes setPinCode(pinCode:
string) and retains plaintext PIN; add a way to clear the cached PIN and ensure
teardown clears it: modify the Storage class by either (a) adding a
clearPinCode() method that nulls/undefines this.pinCode, or (b) change
setPinCode to accept null/undefined to clear the field; then call the new clear
method (or setPinCode(undefined)) from handleStop to remove the plaintext PIN.
Update any callers/tests that assume only a setter so the PIN is reliably
cleared on shutdown/rebind.

---

Duplicate comments:
In `@src/headers/shielded_outputs.ts`:
- Around line 67-109: The parsing loop for shielded outputs must reject
truncated or unknown entries: before each fixed-size slice (commitment,
assetCommitment, ephemeralPubkey) and before each length-prefixed read
(rangeProof, script, surjectionProof) check that buf.length is >= the required
bytes and that unpackToInt returned a sane length (non-negative and <=
buf.length); if not, throw a parse error. Also validate the mode value read via
unpackToInt against known ShieldedOutputMode values
(ShieldedOutputMode.AMOUNT_SHIELDED and ShieldedOutputMode.FULLY_SHIELDED) and
throw on unknown modes instead of falling through. Apply these checks around the
unpackToInt calls and the subsequent Buffer.subarray uses (the loop iterating
numOutputs, variables: mode, rpLen, scriptLen, spLen, commitment, rangeProof,
script, assetCommitment, surjectionProof, ephemeralPubkey) so you fail fast on
truncation or unsupported modes and avoid misaligning subsequent entries.

In `@src/models/transaction.ts`:
- Around line 369-372: createFromBytes() and getHeadersFromBytes() currently
ignore the parsed ShieldedOutputsHeader so this.shieldedOutputs stays empty;
update those deserializers to rebuild/populate the shieldedOutputs array from
the parsed ShieldedOutputsHeader (or at minimum compute and set the shielded
outputs sum) so getOutputsSum() sees the real shielded values. Specifically, in
getHeadersFromBytes()/createFromBytes(), locate where ShieldedOutputsHeader is
parsed and either (a) iterate the header entries to create shielded output
objects (with .value populated) and assign them to this.shieldedOutputs, or (b)
derive and store the shielded sum from the header into a field used by
getOutputsSum(); ensure the produced shielded output objects match the shape
expected by getOutputsSum().

In `@src/new/wallet.ts`:
- Around line 864-872: The current getCurrentAddress (and likewise
getNextAddress) always computes addressPath via getAddressPathForIndex(), which
only builds transparent P2PKH/P2SH paths and thus returns incorrect paths for
shielded addresses when opts?.legacy === false; update both methods to detect
shielded addresses (e.g., via opts?.legacy flag or the address format returned
from storage.getCurrentAddress), and when a shielded address is returned call
the appropriate shielded-path builder (e.g., getShieldedAddressPathForIndex or
equivalent) or otherwise compute a shielded path instead of
getAddressPathForIndex; also handle null/undefined index from getAddressIndex()
gracefully and keep transparent flows unchanged.

In `@src/shielded/provider.ts`:
- Around line 29-35: The createAmountShieldedOutput function currently only
checks for values above Number.MAX_SAFE_INTEGER but allows negative BigInt
inputs; add a validation to reject value < 0n and throw a clear Error (e.g.,
"Shielded output value must be non-negative") before calling
ct.createAmountShieldedOutput. Update the guard around value (the parameter
named value used in createAmountShieldedOutput and passed to
ct.createAmountShieldedOutput) to enforce 0n <= value <=
BigInt(Number.MAX_SAFE_INTEGER) and keep the existing error behavior for
out-of-range values.
- Around line 87-92: The helper toSafeNumber currently only checks for values
greater than Number.MAX_SAFE_INTEGER but misses negative values; update the
toSafeNumber used in the shielded code (and the duplicate helper inside
computeBalancingBlindingFactor) to also validate that v is >= 0 and throw a
clear Error (e.g., 'Shielded output value out of safe range') when v is negative
or exceeds MAX_SAFE_INTEGER so both conversion sites reject negative bigint
inputs.

In `@src/types.ts`:
- Around line 737-741: The return types for the shielded private-key accessors
are incorrect: change getScanXPrivKey(pinCode: string) and
getSpendXPrivKey(pinCode: string) to return Promise<string | undefined>
(matching getScanXPubKey/getSpendXPubKey and the optional spendMainKey in
IWalletAccessData) so callers can handle missing keys; update the method
signatures in src/types.ts accordingly (keep the parameter names the same).
- Around line 674-676: The setter signature for pinCode is too strict: the
interface has pinCode?: string but setPinCode(pinCode: string) requires a
non-optional value, preventing callers from clearing the PIN; change setPinCode
to accept an optional value (e.g., pinCode?: string or pinCode: string |
undefined) on the interface/type that declares pinCode and setPinCode
(referencing pinCode and setPinCode in src/types.ts), and then update any
implementing classes/functions and their callers to pass undefined or omit the
argument when clearing the cached PIN.

In `@src/utils/transaction.ts`:
- Around line 668-696: The current validation in the txData.shieldedOutputs
mapping only enforces common crypto fields and misses mode-specific
requirements; update the validation inside the txData.shieldedOutputs.map
callback (where ShieldedOutput is constructed) to additionally check when
so.mode === 'FULLY_SHIELDED' that so.assetCommitment and so.surjectionProof are
present (non-null/defined) and throw a clear Error if they are missing so
construction/serialization won't fail later.

In `@src/utils/wallet.ts`:
- Around line 704-719: In generateAccessDataFromSeed, avoid unconditionally
deriving shielded spend keys when walletType === WalletType.MULTISIG: wrap the
spendAcctXpriv/spendXpriv derivation and the use of spendXpubkey and
spendMainKey (and encryptData(spendXpriv.xprivkey, pin)) behind a guard that
skips derivation for multisig wallets and instead returns null/undefined (or an
appropriate empty value) for spendXpubkey and spendMainKey; reference the
existing symbols spendAcctXpriv, spendXpriv, spendXpubkey, spendMainKey,
walletType and WalletType.MULTISIG to locate and change the logic.
- Around line 623-631: The current block unconditionally derives shielded spend
keys from argXpriv (using SHIELDED_SPEND_ACCT_PATH and storing into
accessData.spendXpubkey / accessData.spendMainKey), which will produce
single-sig shielded keys even for multisig wallets; update the logic to skip
deriving/encrypting these keys when the wallet type is WalletType.MULTISIG (or
any non-single-sig type) by checking the wallet type before calling
deriveNonCompliantChild and encryptData, so multisig flows do not get private
shielded keys created for a single signer.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 8882e195-138b-4dec-83a6-1d7c58d503a7

📥 Commits

Reviewing files that changed from the base of the PR and between 89a0a76 and aac3e62.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (44)
  • __tests__/headers/shielded_outputs.test.ts
  • __tests__/integration/configuration/docker-compose.yml
  • __tests__/integration/configuration/privnet.yml
  • __tests__/integration/hathorwallet_facade.test.ts
  • __tests__/integration/shielded_transactions.test.ts
  • __tests__/models/address.test.ts
  • __tests__/models/network.test.ts
  • __tests__/models/shielded_output.test.ts
  • __tests__/shielded/processing.test.ts
  • __tests__/storage/storage.test.ts
  • __tests__/wallet/walletServiceStorageProxy.test.ts
  • package.json
  • setupTests-integration.js
  • src/constants.ts
  • src/headers/parser.ts
  • src/headers/shielded_outputs.ts
  • src/headers/types.ts
  • src/lib.ts
  • src/models/address.ts
  • src/models/network.ts
  • src/models/shielded_output.ts
  • src/models/transaction.ts
  • src/new/sendTransaction.ts
  • src/new/types.ts
  • src/new/wallet.ts
  • src/schemas.ts
  • src/shielded/creation.ts
  • src/shielded/index.ts
  • src/shielded/processing.ts
  • src/shielded/provider.ts
  • src/shielded/types.ts
  • src/storage/memory_store.ts
  • src/storage/storage.ts
  • src/sync/stream.ts
  • src/types.ts
  • src/utils/address.ts
  • src/utils/bigint.ts
  • src/utils/shieldedAddress.ts
  • src/utils/storage.ts
  • src/utils/transaction.ts
  • src/utils/wallet.ts
  • src/wallet/types.ts
  • src/wallet/wallet.ts
  • src/wallet/walletServiceStorageProxy.ts
💤 Files with no reviewable changes (1)
  • setupTests-integration.js
✅ Files skipped from review due to trivial changes (8)
  • package.json
  • __tests__/integration/hathorwallet_facade.test.ts
  • __tests__/wallet/walletServiceStorageProxy.test.ts
  • __tests__/integration/configuration/privnet.yml
  • src/shielded/index.ts
  • __tests__/shielded/processing.test.ts
  • __tests__/models/address.test.ts
  • src/wallet/wallet.ts
🚧 Files skipped from review as they are similar to previous changes (16)
  • src/headers/types.ts
  • src/headers/parser.ts
  • src/lib.ts
  • src/wallet/walletServiceStorageProxy.ts
  • __tests__/models/network.test.ts
  • src/utils/bigint.ts
  • src/sync/stream.ts
  • src/models/network.ts
  • __tests__/headers/shielded_outputs.test.ts
  • src/schemas.ts
  • __tests__/storage/storage.test.ts
  • src/wallet/types.ts
  • src/new/types.ts
  • src/shielded/processing.ts
  • __tests__/models/shielded_output.test.ts
  • src/storage/memory_store.ts

Comment thread src/models/shielded_output.ts Outdated
Comment thread src/new/sendTransaction.ts
Comment thread src/storage/storage.ts Outdated
Comment on lines +186 to +192
  /**
   * Set the pin code for shielded output decryption during tx processing.
   * @param pinCode The pin code
   */
  setPinCode(pinCode: string): void {
    this.pinCode = pinCode;
  }

⚠️ Potential issue | 🟠 Major

Provide a way to clear the cached PIN.

Storage now keeps the plaintext PIN on a long-lived object, but this API only ever sets it. Because handleStop() does not clear the field, a reused Storage instance can retain a usable PIN across wallet shutdown or rebinding.

Suggested direction
-  setPinCode(pinCode: string): void {
+  setPinCode(pinCode?: string): void {
     this.pinCode = pinCode;
   }

Then call setPinCode() during teardown to clear it.
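A minimal sketch of that direction; `MiniStorage`, `hasPinCode`, and the `handleStop` body are illustrative stand-ins, not the real Storage API:

```typescript
// Sketch: an optional setter lets callers clear the cached PIN, and teardown
// uses it so a reused instance cannot retain a plaintext PIN.
class MiniStorage {
  private pinCode?: string;

  // Calling setPinCode() with no argument clears the cached PIN.
  setPinCode(pinCode?: string): void {
    this.pinCode = pinCode;
  }

  hasPinCode(): boolean {
    return this.pinCode !== undefined;
  }

  // Teardown clears the plaintext PIN before the instance can be rebound.
  handleStop(): void {
    this.setPinCode();
  }
}
```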

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/storage/storage.ts` around lines 186 - 192, Storage currently only
exposes setPinCode(pinCode: string) and retains plaintext PIN; add a way to
clear the cached PIN and ensure teardown clears it: modify the Storage class by
either (a) adding a clearPinCode() method that nulls/undefines this.pinCode, or
(b) change setPinCode to accept null/undefined to clear the field; then call the
new clear method (or setPinCode(undefined)) from handleStop to remove the
plaintext PIN. Update any callers/tests that assume only a setter so the PIN is
reliably cleared on shutdown/rebind.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/utils/storage.ts (1)

376-421: ⚠️ Potential issue | 🟡 Minor

Consider whether shieldedLastLoadedAddressIndex should be synchronized with lastLoadedAddressIndex when shielded keys are unavailable.

The minLastLoaded calculation correctly gates the shieldedNeedMore check on hasShieldedKeys, but when no shielded keys exist, shieldedLastLoadedAddressIndex remains at 0. This causes minLastLoaded = 0 even if lastLoadedAddressIndex is 19+, resulting in nextIndex = 1. While loadAddresses safely deduplicates by reusing existing addresses, this forces unnecessary re-validation of already-generated addresses. If a wallet lacks shielded keys from the start, consider initializing shieldedLastLoadedAddressIndex to match lastLoadedAddressIndex to avoid redundant checks.
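The gating described above can be sketched as follows; the function and parameter names are illustrative, not the real wallet-lib symbols:

```typescript
// Sketch: when a wallet has no shielded keys, treat the shielded cursor as
// caught up with the transparent one so Math.min does not drag the next
// index back to the start of the chain.
function nextLoadIndex(
  lastLoaded: number,
  shieldedLastLoaded: number,
  hasShieldedKeys: boolean
): number {
  const effectiveShielded = hasShieldedKeys ? shieldedLastLoaded : lastLoaded;
  return Math.min(lastLoaded, effectiveShielded) + 1;
}
```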

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/storage.ts` around lines 376 - 421, The checkGapLimit function can
force redundant revalidation when shielded keys are absent because
shieldedLastLoadedAddressIndex stays at 0; update the logic in checkGapLimit to
treat shieldedLastLoadedAddressIndex (and shieldedLastUsedAddressIndex) as equal
to lastLoadedAddressIndex (and lastUsedAddressIndex) when hasShieldedKeys is
false: detect hasShieldedKeys from storage.getAccessData(), and if false, assign
shieldedLastLoadedAddressIndex = lastLoadedAddressIndex and
shieldedLastUsedAddressIndex = lastUsedAddressIndex before computing
legacyNeedMore/shieldedNeedMore and minLastLoaded so minLastLoaded reflects the
actual legacy progress and avoids unnecessary reloads.
♻️ Duplicate comments (6)
src/headers/shielded_outputs.ts (2)

93-106: ⚠️ Potential issue | 🟠 Major

Fail on unknown shielded output modes.

If mode is neither AMOUNT_SHIELDED nor FULLY_SHIELDED, this parser still consumes an ephemeral pubkey and constructs a ShieldedOutput. That desynchronizes the remaining buffer for the next entry instead of rejecting the header immediately.
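A sketch of rejecting unknown modes up front; the enum values mirror the ones named above but should be treated as assumptions:

```typescript
// Sketch: validate the mode before consuming any further bytes, so an
// unknown entry rejects the header instead of desynchronizing the buffer.
enum ShieldedOutputMode {
  AMOUNT_SHIELDED = 1,
  FULLY_SHIELDED = 2,
}

function assertKnownMode(mode: number): ShieldedOutputMode {
  if (
    mode !== ShieldedOutputMode.AMOUNT_SHIELDED &&
    mode !== ShieldedOutputMode.FULLY_SHIELDED
  ) {
    throw new Error(`Unknown shielded output mode: ${mode}`);
  }
  return mode as ShieldedOutputMode;
}
```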

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/headers/shielded_outputs.ts` around lines 93 - 106, The parser currently
only handles ShieldedOutputMode.AMOUNT_SHIELDED and FULLY_SHIELDED and silently
proceeds for other values, causing it to consume ephemeralPubkey and
desynchronize the buffer; update the logic around the mode switch (the code
handling mode, tokenData, assetCommitment, surjectionProof, and ephemeralPubkey)
to explicitly reject unknown modes by throwing an error (or returning a failure)
as soon as mode is not one of ShieldedOutputMode.AMOUNT_SHIELDED or
ShieldedOutputMode.FULLY_SHIELDED, ensuring no buffer consumption occurs and
preventing construction of a ShieldedOutput for invalid modes.

53-125: ⚠️ Potential issue | 🟠 Major

Reject truncated shielded headers during parsing.

All fixed-size and length-prefixed reads here slice buf without checking that enough bytes remain first. A truncated payload can therefore produce short commitments/proofs/pubkeys instead of failing fast, which makes malformed wire data look valid.
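One way to make every read fail fast is a bounds-checked cursor; the `SafeReader` API here is an illustrative sketch, not the existing deserializer:

```typescript
// Sketch: a cursor that verifies remaining length before every slice, so a
// truncated header throws instead of yielding short commitments/proofs.
class SafeReader {
  private pos = 0;

  constructor(private buf: Buffer) {}

  take(len: number): Buffer {
    if (len < 0 || this.pos + len > this.buf.length) {
      throw new Error(
        `Truncated shielded header: need ${len} bytes, have ${this.buf.length - this.pos}`
      );
    }
    const out = this.buf.subarray(this.pos, this.pos + len);
    this.pos += len;
    return out;
  }
}
```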

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/headers/shielded_outputs.ts` around lines 53 - 125, The deserialize
method in static deserialize (shielded_outputs.ts) reads fixed-size fields and
length-prefixed blobs without verifying remaining buffer length, so truncated
inputs can produce short slices instead of errors; update static deserialize to
validate buf.length before every fixed-size read (33-byte
commitments/ephemeralPubkey, 1/2-byte header fields) and before each
variable-length read returned by unpackToInt (rpLen, scriptLen, spLen) and throw
a clear Error if insufficient bytes remain; ensure you also validate numOutputs
and any unpacked lengths are non-negative and within remaining buffer bounds
before slicing so ShieldedOutput construction only receives complete buffers.
src/utils/transaction.ts (1)

668-697: ⚠️ Potential issue | 🟠 Major

Validate FULLY_SHIELDED-specific fields before building the model.

This block only checks the shared crypto fields. A FULLY_SHIELDED entry with missing assetCommitment or surjectionProof still becomes a ShieldedOutput and only blows up later when it is serialized or signed.

🛠️ Suggested fix
         const shieldedModels = txData.shieldedOutputs.map(so => {
           if (!so.commitment || !so.rangeProof || !so.script || !so.ephemeralPubkey) {
             throw new Error(
               'Shielded output missing required crypto fields (commitment, rangeProof, script, ephemeralPubkey)'
             );
           }
+          if (
+            so.mode === ShieldedOutputMode.FULLY_SHIELDED &&
+            (!so.assetCommitment || !so.surjectionProof)
+          ) {
+            throw new Error(
+              'Fully shielded output missing required crypto fields (assetCommitment, surjectionProof)'
+            );
+          }
           const tokenData = this.getTokenDataFromOutput(
             {
               type: 'p2pkh',
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/transaction.ts` around lines 668 - 697, The shielded output
constructor currently only checks shared crypto fields; add a mode-specific
validation so that for any txData.shieldedOutputs entry where so.mode ===
'FULLY_SHIELDED' you also verify presence (and non-empty values) of
so.assetCommitment and so.surjectionProof before creating a new ShieldedOutput;
if either is missing throw a clear Error referencing FULLY_SHIELDED,
assetCommitment, and surjectionProof to fail fast (update the validation block
around txData.shieldedOutputs and ShieldedOutput construction).
src/models/shielded_output.ts (1)

101-106: ⚠️ Potential issue | 🟠 Major

Do not serialize an invalid ephemeral pubkey as zeros.

Zero-padding here turns a missing or wrong-sized ephemeral key into a different payload/sighash instead of surfacing the bug. Please fail fast when ephemeralPubkey.length !== EPHEMERAL_PUBKEY_SIZE.

Also applies to: 131-136
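A sketch of the fail-fast check, following the duplicate comment's nuance of zero-filling only when the key is genuinely absent; the helper name is hypothetical:

```typescript
// Sketch: zero-fill only when the ephemeral pubkey is absent (the
// backward-compat case); throw on a present-but-wrong-sized key instead of
// silently producing a different payload/sighash.
const EPHEMERAL_PUBKEY_SIZE = 33;

function serializeEphemeralPubkey(ephemeralPubkey?: Buffer): Buffer {
  if (ephemeralPubkey === undefined) {
    return Buffer.alloc(EPHEMERAL_PUBKEY_SIZE); // legacy: key not present
  }
  if (ephemeralPubkey.length !== EPHEMERAL_PUBKEY_SIZE) {
    throw new Error(
      `Invalid ephemeral pubkey size: ${ephemeralPubkey.length}, expected ${EPHEMERAL_PUBKEY_SIZE}`
    );
  }
  return ephemeralPubkey;
}
```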

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/models/shielded_output.ts` around lines 101 - 106, The serializer
currently silently zero-pads an invalid ephemeralPubkey which hides bugs; update
the serialization logic around ephemeralPubkey and EPHEMERAL_PUBKEY_SIZE so that
if this.ephemeralPubkey is null/undefined you still push
Buffer.alloc(EPHEMERAL_PUBKEY_SIZE) for backward-compat, but if
this.ephemeralPubkey is present and this.ephemeralPubkey.length !==
EPHEMERAL_PUBKEY_SIZE throw a descriptive Error (or assert) instead of
zero-padding; make the same change in the second occurrence handling the
ephemeralPubkey (lines 131-136 equivalent) so both code paths fail fast on
wrong-sized keys.
src/new/wallet.ts (2)

1526-1551: ⚠️ Potential issue | 🔴 Critical

Do not compact newTx.outputs; it breaks output indexes.

Replacing newTx.outputs with only transparent entries renumbers every output after the first shielded one. The rest of the wallet still uses array position as the on-chain output index, so spends and UTXO bookkeeping will point at the wrong output on shielded transactions.

🛠️ Suggested fix
     if (!newTx.shielded_outputs) {
       const shieldedEntries: IHistoryShieldedOutput[] = [];
-      const transparentOutputs: IHistoryOutput[] = [];
       for (const output of newTx.outputs) {
         if (transactionUtils.isShieldedOutputEntry(output)) {
           shieldedEntries.push({
             mode: output.asset_commitment ? 2 : 1, // FullShielded=2, AmountShielded=1
             commitment: output.commitment!, // already hex
@@
             surjection_proof: output.surjection_proof
               ? Buffer.from(output.surjection_proof, 'base64').toString('hex')
               : undefined,
           });
-        } else {
-          transparentOutputs.push(output);
         }
       }
       if (shieldedEntries.length > 0) {
         newTx.shielded_outputs = shieldedEntries;
-        newTx.outputs = transparentOutputs;
       }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/new/wallet.ts` around lines 1526 - 1551, The current code compacts
newTx.outputs removing shielded entries which shifts on-chain output indexes;
instead, build shielded_entries by iterating newTx.outputs via
transactionUtils.isShieldedOutputEntry (as is done) and assign
newTx.shielded_outputs = shieldedEntries but do NOT reassign or filter
newTx.outputs (leave the original outputs array intact so array indices remain
stable); if you need to mark shielded items for later logic, add a
non-index-affecting marker (e.g., a boolean property) to the output objects
rather than removing them.

875-899: ⚠️ Potential issue | 🟠 Major

Return a chain-correct addressPath for shielded addresses.

When opts?.legacy === false, these methods can return a shielded address, but addressPath still comes from getAddressPathForIndex(), which always builds the legacy P2PKH/P2SH path. Any caller that uses the returned path to display or derive the address will get the wrong chain.
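A sketch of selecting the path builder per address chain; both paths and the coin-type constant are illustrative stand-ins for the real wallet helpers:

```typescript
// Sketch: pick the derivation-path builder based on the chain the address
// came from, rather than always building the legacy P2PKH/P2SH path.
type AddressChain = 'transparent' | 'shielded';

function addressPathForIndex(chain: AddressChain, index: number): string {
  if (chain === 'shielded') {
    // Hypothetical shielded chain under the same account.
    return `m/44'/280'/0'/2/${index}`;
  }
  return `m/44'/280'/0'/0/${index}`;
}
```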

🧹 Nitpick comments (1)
src/shielded/processing.ts (1)

77-104: Consider adding debug logging when outputs are silently skipped.

Outputs are silently skipped when:

  • address is missing (line 91)
  • addressInfo is null (line 95)
  • privkey derivation fails (line 104)

While the last case logs at warn level, the first two cases might benefit from debug-level logging to aid troubleshooting when shielded outputs unexpectedly don't appear in wallet balances.
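A sketch of the suggested logging; the `Logger` shape, `addressBook`, and the output type are illustrative, not the real processing code:

```typescript
// Sketch: log skipped shielded outputs at debug level instead of dropping
// them silently, so missing balances are traceable.
interface Logger {
  debug(msg: string): void;
}

function resolveAddress(
  output: { decoded?: { address?: string } },
  idx: number,
  addressBook: Map<string, number>,
  logger: Logger
): number | null {
  const address = output.decoded?.address;
  if (!address) {
    logger.debug(`Skipping shielded output ${idx}: no decoded address`);
    return null;
  }
  const info = addressBook.get(address);
  if (info === undefined) {
    logger.debug(`Skipping shielded output ${idx}: address ${address} not in wallet`);
    return null;
  }
  return info;
}
```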

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shielded/processing.ts` around lines 77 - 104, In processShieldedOutputs,
add debug-level logging where outputs are currently silently skipped: when
address is falsy after shieldedOutput.decoded?.address (log shielded output
index idx and any decoded info), and when storage.getAddressInfo(address)
returns null (log address and idx); keep the existing warn for failed privkey
derivation (privkey) but ensure its message includes address, idx and tx
identifier for context. Use the storage.logger (or existing logger) and the same
surrounding variables (shieldedOutput, idx, address, addressInfo, privkey,
transparentCount) so skipped cases are traceable without changing control flow.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@__tests__/shielded/creation.test.ts`:
- Around line 1-145: The test file containing the
describe("createShieldedOutputs") suite (which defines makeMockProvider,
makeDef, TEST_ADDRESS, and imports createShieldedOutputs) is not
Prettier-formatted; run the project's Prettier formatter (e.g. npm/yarn script
or npx prettier --write) on that test file, commit the resulting formatting
changes, and ensure the linter job passes before merging.

In `@src/shielded/processing.ts`:
- Around line 118-147: The code uses a non-null assertion on
shieldedOutput.asset_commitment when isFullShielded is true which can crash on
malformed responses; before calling
Buffer.from(shieldedOutput.asset_commitment!, 'hex') (and before invoking
cryptoProvider.rewindFullShieldedOutput), explicitly validate that
shieldedOutput.asset_commitment exists and is a non-empty string, and if missing
log a clear warning via storage.logger.warn (or throw a descriptive Error) and
skip processing that output (continue) so subsequent calls to
rewindFullShieldedOutput, deriveTag, and createAssetCommitment are never
executed with undefined data.

In `@src/wallet/types.ts`:
- Around line 363-371: Update the IHathorWallet type so getNextAddress mirrors
the implementation by adding the optional opts parameter: change the
getNextAddress signature to getNextAddress(opts?: IAddressChainOptions):
AddressInfoObject | Promise<AddressInfoObject>; so callers can pass
shielded-chain progression options (refer to IAddressChainOptions and
getNextAddress in src/new/wallet.ts for parity).

---

Outside diff comments:
In `@src/utils/storage.ts`:
- Around line 376-421: The checkGapLimit function can force redundant
revalidation when shielded keys are absent because
shieldedLastLoadedAddressIndex stays at 0; update the logic in checkGapLimit to
treat shieldedLastLoadedAddressIndex (and shieldedLastUsedAddressIndex) as equal
to lastLoadedAddressIndex (and lastUsedAddressIndex) when hasShieldedKeys is
false: detect hasShieldedKeys from storage.getAccessData(), and if false, assign
shieldedLastLoadedAddressIndex = lastLoadedAddressIndex and
shieldedLastUsedAddressIndex = lastUsedAddressIndex before computing
legacyNeedMore/shieldedNeedMore and minLastLoaded so minLastLoaded reflects the
actual legacy progress and avoids unnecessary reloads.

---

Duplicate comments:
In `@src/headers/shielded_outputs.ts`:
- Around line 93-106: The parser currently only handles
ShieldedOutputMode.AMOUNT_SHIELDED and FULLY_SHIELDED and silently proceeds for
other values, causing it to consume ephemeralPubkey and desynchronize the
buffer; update the logic around the mode switch (the code handling mode,
tokenData, assetCommitment, surjectionProof, and ephemeralPubkey) to explicitly
reject unknown modes by throwing an error (or returning a failure) as soon as
mode is not one of ShieldedOutputMode.AMOUNT_SHIELDED or
ShieldedOutputMode.FULLY_SHIELDED, ensuring no buffer consumption occurs and
preventing construction of a ShieldedOutput for invalid modes.
- Around line 53-125: The deserialize method in static deserialize
(shielded_outputs.ts) reads fixed-size fields and length-prefixed blobs without
verifying remaining buffer length, so truncated inputs can produce short slices
instead of errors; update static deserialize to validate buf.length before every
fixed-size read (33-byte commitments/ephemeralPubkey, 1/2-byte header fields)
and before each variable-length read returned by unpackToInt (rpLen, scriptLen,
spLen) and throw a clear Error if insufficient bytes remain; ensure you also
validate numOutputs and any unpacked lengths are non-negative and within
remaining buffer bounds before slicing so ShieldedOutput construction only
receives complete buffers.

In `@src/models/shielded_output.ts`:
- Around line 101-106: The serializer currently silently zero-pads an invalid
ephemeralPubkey which hides bugs; update the serialization logic around
ephemeralPubkey and EPHEMERAL_PUBKEY_SIZE so that if this.ephemeralPubkey is
null/undefined you still push Buffer.alloc(EPHEMERAL_PUBKEY_SIZE) for
backward-compat, but if this.ephemeralPubkey is present and
this.ephemeralPubkey.length !== EPHEMERAL_PUBKEY_SIZE throw a descriptive Error
(or assert) instead of zero-padding; make the same change in the second
occurrence handling the ephemeralPubkey (lines 131-136 equivalent) so both code
paths fail fast on wrong-sized keys.

In `@src/new/wallet.ts`:
- Around line 1526-1551: The current code compacts newTx.outputs removing
shielded entries which shifts on-chain output indexes; instead, build
shielded_entries by iterating newTx.outputs via
transactionUtils.isShieldedOutputEntry (as is done) and assign
newTx.shielded_outputs = shieldedEntries but do NOT reassign or filter
newTx.outputs (leave the original outputs array intact so array indices remain
stable); if you need to mark shielded items for later logic, add a
non-index-affecting marker (e.g., a boolean property) to the output objects
rather than removing them.

In `@src/utils/transaction.ts`:
- Around line 668-697: The shielded output constructor currently only checks
shared crypto fields; add a mode-specific validation so that for any
txData.shieldedOutputs entry where so.mode === 'FULLY_SHIELDED' you also verify
presence (and non-empty values) of so.assetCommitment and so.surjectionProof
before creating a new ShieldedOutput; if either is missing throw a clear Error
referencing FULLY_SHIELDED, assetCommitment, and surjectionProof to fail fast
(update the validation block around txData.shieldedOutputs and ShieldedOutput
construction).

---

Nitpick comments:
In `@src/shielded/processing.ts`:
- Around line 77-104: In processShieldedOutputs, add debug-level logging where
outputs are currently silently skipped: when address is falsy after
shieldedOutput.decoded?.address (log shielded output index idx and any decoded
info), and when storage.getAddressInfo(address) returns null (log address and
idx); keep the existing warn for failed privkey derivation (privkey) but ensure
its message includes address, idx and tx identifier for context. Use the
storage.logger (or existing logger) and the same surrounding variables
(shieldedOutput, idx, address, addressInfo, privkey, transparentCount) so
skipped cases are traceable without changing control flow.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 55e764f0-8be6-4684-b0e9-8c02093c1f2b

📥 Commits

Reviewing files that changed from the base of the PR and between aac3e62 and 30f4078.

📒 Files selected for processing (27)
  • __tests__/headers/shielded_outputs.test.ts
  • __tests__/integration/shielded_transactions.test.ts
  • __tests__/models/network.test.ts
  • __tests__/models/shielded_output.test.ts
  • __tests__/models/transaction.test.ts
  • __tests__/shielded/creation.test.ts
  • __tests__/shielded/processing.test.ts
  • src/headers/shielded_outputs.ts
  • src/models/address.ts
  • src/models/network.ts
  • src/models/shielded_output.ts
  • src/new/sendTransaction.ts
  • src/new/wallet.ts
  • src/shielded/creation.ts
  • src/shielded/processing.ts
  • src/shielded/provider.ts
  • src/shielded/types.ts
  • src/storage/memory_store.ts
  • src/storage/storage.ts
  • src/sync/stream.ts
  • src/types.ts
  • src/utils/address.ts
  • src/utils/bigint.ts
  • src/utils/shieldedAddress.ts
  • src/utils/storage.ts
  • src/utils/transaction.ts
  • src/wallet/types.ts
✅ Files skipped from review due to trivial changes (2)
  • __tests__/models/network.test.ts
  • __tests__/models/shielded_output.test.ts
🚧 Files skipped from review as they are similar to previous changes (10)
  • src/utils/bigint.ts
  • src/sync/stream.ts
  • src/models/network.ts
  • __tests__/headers/shielded_outputs.test.ts
  • src/models/address.ts
  • src/utils/shieldedAddress.ts
  • __tests__/shielded/processing.test.ts
  • src/types.ts
  • src/new/sendTransaction.ts
  • src/shielded/types.ts

Comment thread __tests__/shielded/creation.test.ts
Comment thread src/shielded/processing.ts
Comment thread src/wallet/types.ts Outdated
  Part 1: Decoded shielded outputs are now merged into tx.outputs after decryption, allowing the existing transparent output loop to handle UTXO creation, balance updates, and metadata uniformly. This removes the separate shielded UTXO/metadata block and fixes a bug where processMetadataChanged skipped shielded outputs (spent_by updates were never processed for shielded UTXOs).

  Part 2: IHistoryOutput is now a discriminated union of ITransparentOutput and IShieldedOutputEntry. Both share value/token/decoded/spent_by fields. The discriminant is type: 'shielded'. isShieldedOutputEntry() is now a proper type guard. Removed all isShieldedOutputEntry skip patterns and unsafe (output as any).type casts across the codebase.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

  The surjection proof was created with only the output's token generator in the domain, but the fullnode verifies against ALL input generators. Fixed createShieldedOutputs to accept inputTokenUids and build the full domain from all transparent inputs.

  Also unskipped custom token integration tests now that hathor-core includes the to_json_extended tokens fix.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/utils/storage.ts (1)

342-363: ⚠️ Potential issue | 🟠 Major

Ignore the shielded cursor in index-limit mode when the wallet has no shielded keys.

On legacy-only wallets, shieldedLastLoadedAddressIndex stays at its default, so Math.min(...) pulls nextIndex back to 1 and re-fetches the prefix on every extension. This calculation needs the same key-availability gate used in the gap-limit path.

Suggested fix
-  const lastLoaded = Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex);
+  const hasShieldedKeys = !!(await storage.getAccessData())?.spendXpubkey;
+  const lastLoaded = hasShieldedKeys
+    ? Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex)
+    : lastLoadedAddressIndex;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/storage.ts` around lines 342 - 363, The Math.min-based calculation
for nextIndex incorrectly uses shieldedLastLoadedAddressIndex even when the
wallet has no shielded keys, causing nextIndex to jump back; update the logic in
the index-limit branch (after storage.getWalletData and isIndexLimitScanPolicy)
to only consider shieldedLastLoadedAddressIndex when the wallet actually has
shielded keys (i.e., use the same key-availability gate used in the gap-limit
path), otherwise use lastLoadedAddressIndex alone before comparing to
limits.endIndex so nextIndex/count are computed from the correct cursor.
♻️ Duplicate comments (3)
src/shielded/processing.ts (1)

117-125: ⚠️ Potential issue | 🟠 Major

Validate asset_commitment before rewinding FullShielded outputs.

IHistoryShieldedOutput.asset_commitment is optional, so this still treats malformed fullnode data as a generic rewind failure and silently drops the output. Please warn and continue explicitly before calling rewindFullShieldedOutput().

Suggested fix
       if (isFullShielded) {
         // FullShielded: rewind recovers token UID and asset blinding factor
-        const assetCommitment = Buffer.from(shieldedOutput.asset_commitment!, 'hex');
+        if (!shieldedOutput.asset_commitment) {
+          storage.logger.warn(
+            'FullShielded output missing asset_commitment',
+            tx.tx_id,
+            transparentCount + idx
+          );
+          continue;
+        }
+        const assetCommitment = Buffer.from(shieldedOutput.asset_commitment, 'hex');
         const result = await cryptoProvider.rewindFullShieldedOutput(
           privkey,
           ephPk,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shielded/processing.ts` around lines 117 - 125, The code calls
cryptoProvider.rewindFullShieldedOutput for FullShielded outputs without
validating shieldedOutput.asset_commitment; check that
shieldedOutput.asset_commitment exists and is a valid hex string before doing
Buffer.from(...) and before calling rewindFullShieldedOutput(privkey, ephPk,
commitment, rangeProof, assetCommitment); if it's missing or malformed, emit a
warning (e.g., using your logger) that includes identifying context and continue
to the next output instead of attempting the rewind.
src/storage/storage.ts (1)

186-192: ⚠️ Potential issue | 🟠 Major

Allow the cached PIN to be cleared.

This setter only ever writes a plaintext PIN, so a reused Storage instance can retain a usable PIN after shutdown. Please make the argument optional and clear it from handleStop().

Suggested direction
-  setPinCode(pinCode: string): void {
+  setPinCode(pinCode?: string): void {
     this.pinCode = pinCode;
   }

Then call this.setPinCode() during teardown.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/storage/storage.ts` around lines 186 - 192, Change setPinCode(pinCode:
string) to accept an optional value (e.g., setPinCode(pinCode?: string)) and
have it clear the stored PIN when called with undefined (set this.pinCode =
pinCode). Update the Storage instance state so the pinCode field can be
undefined/nullable. In handleStop(), call this.setPinCode() with no argument to
clear the cached PIN during teardown. Ensure references that read this.pinCode
handle the nullable/undefined case appropriately.
src/utils/storage.ts (1)

395-420: ⚠️ Potential issue | 🟠 Major

Gap-limit expansion still regresses to the prefix when shielded is disabled.

shieldedNeedMore is gated correctly, but shieldedTarget and minLastLoaded still consume the inactive shielded cursor. On wallets without shielded keys this makes suffix loading restart from the prefix instead of only fetching new addresses.

Suggested fix
   const legacyTarget = legacyNeedMore ? lastUsedAddressIndex + gapLimit : lastLoadedAddressIndex;
-  const shieldedTarget = shieldedNeedMore
-    ? shieldedLastUsedAddressIndex + gapLimit
-    : shieldedLastLoadedAddressIndex;
+  const shieldedTarget = hasShieldedKeys
+    ? (shieldedNeedMore
+        ? shieldedLastUsedAddressIndex + gapLimit
+        : shieldedLastLoadedAddressIndex)
+    : legacyTarget;
   const maxTarget = Math.max(legacyTarget, shieldedTarget);
-  const minLastLoaded = Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex);
+  const minLastLoaded = hasShieldedKeys
+    ? Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex)
+    : lastLoadedAddressIndex;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/storage.ts` around lines 395 - 420, The code uses
shieldedLastLoadedAddressIndex even when shielded keys are absent, causing
minLastLoaded and shieldedTarget to pull the inactive shielded cursor and reset
suffix loading; update the logic to ignore shielded values when hasShieldedKeys
is false: compute shieldedTarget using hasShieldedKeys (if false, set
shieldedTarget to lastLoadedAddressIndex) and compute minLastLoaded as
hasShieldedKeys ? Math.min(lastLoadedAddressIndex,
shieldedLastLoadedAddressIndex) : lastLoadedAddressIndex, so
shieldedLastLoadedAddressIndex is only considered when hasShieldedKeys is true
(affecting shieldedNeedMore, shieldedTarget, and minLastLoaded).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/utils/storage.ts`:
- Around line 764-803: Make the shielded-output merge idempotent and
index-stable by capturing the original tx.outputs length before appending and by
skipping already-merged shielded entries: compute const originalOutputsLen =
tx.outputs.length before the for-loop and use const so =
tx.shielded_outputs[result.index - originalOutputsLen] so index math doesn't
shift as you push; before pushing, check whether an equivalent decoded shielded
output already exists (e.g. tx.outputs.some(o => o.type === 'shielded' &&
o.commitment === so?.commitment && o.decoded?.address === result.address &&
o.token === (result.tokenUid === NATIVE_TOKEN_UID_HEX ? NATIVE_TOKEN_UID :
result.tokenUid))) and only push if not found; finally call await
store.saveTx(tx) only when you actually appended any outputs.

---

Outside diff comments:
In `@src/utils/storage.ts`:
- Around line 342-363: The Math.min-based calculation for nextIndex incorrectly
uses shieldedLastLoadedAddressIndex even when the wallet has no shielded keys,
causing nextIndex to jump back; update the logic in the index-limit branch
(after storage.getWalletData and isIndexLimitScanPolicy) to only consider
shieldedLastLoadedAddressIndex when the wallet actually has shielded keys (i.e.,
use the same key-availability gate used in the gap-limit path), otherwise use
lastLoadedAddressIndex alone before comparing to limits.endIndex so
nextIndex/count are computed from the correct cursor.

---

Duplicate comments:
In `@src/shielded/processing.ts`:
- Around line 117-125: The code calls cryptoProvider.rewindFullShieldedOutput
for FullShielded outputs without validating shieldedOutput.asset_commitment;
check that shieldedOutput.asset_commitment exists and is a valid hex string
before doing Buffer.from(...) and before calling
rewindFullShieldedOutput(privkey, ephPk, commitment, rangeProof,
assetCommitment); if it's missing or malformed, emit a warning (e.g., using your
logger) that includes identifying context and continue to the next output
instead of attempting the rewind.

In `@src/storage/storage.ts`:
- Around line 186-192: Change setPinCode(pinCode: string) to accept an optional
value (e.g., setPinCode(pinCode?: string)) and have it clear the stored PIN when
called with undefined (set this.pinCode = pinCode). Update the Storage instance
state so the pinCode field can be undefined/nullable. In handleStop(), call
this.setPinCode() with no argument to clear the cached PIN during teardown.
Ensure references that read this.pinCode handle the nullable/undefined case
appropriately.

In `@src/utils/storage.ts`:
- Around line 395-420: The code uses shieldedLastLoadedAddressIndex even when
shielded keys are absent, causing minLastLoaded and shieldedTarget to pull the
inactive shielded cursor and reset suffix loading; update the logic to ignore
shielded values when hasShieldedKeys is false: compute shieldedTarget using
hasShieldedKeys (if false, set shieldedTarget to lastLoadedAddressIndex) and
compute minLastLoaded as hasShieldedKeys ? Math.min(lastLoadedAddressIndex,
shieldedLastLoadedAddressIndex) : lastLoadedAddressIndex, so
shieldedLastLoadedAddressIndex is only considered when hasShieldedKeys is true
(affecting shieldedNeedMore, shieldedTarget, and minLastLoaded).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 073ac695-dcbd-4dd8-9c2b-2f3679554527

📥 Commits

Reviewing files that changed from the base of the PR and between 30f4078 and c01f7c1.

📒 Files selected for processing (12)
  • __tests__/integration/shielded_transactions.test.ts
  • src/constants.ts
  • src/new/sendTransaction.ts
  • src/new/wallet.ts
  • src/shielded/creation.ts
  • src/shielded/processing.ts
  • src/storage/memory_store.ts
  • src/storage/storage.ts
  • src/types.ts
  • src/utils/storage.ts
  • src/utils/transaction.ts
  • src/utils/wallet.ts
🚧 Files skipped from review as they are similar to previous changes (5)
  • src/utils/transaction.ts
  • src/utils/wallet.ts
  • src/constants.ts
  • src/new/wallet.ts
  • src/new/sendTransaction.ts

Comment thread src/utils/storage.ts
  - Add shielded filter option to IUtxoFilterOptions and selectUtxos
  - Add getUtxo lookup method to IStore and MemoryStore
  - Extract shielded input blinding factors in prepareTxData and pass
    to createShieldedOutputs for the homomorphic balance equation
  - Mark UTXOs from decoded shielded outputs with shielded: true flag
  - Make IHistoryInput fields optional for shielded inputs (spent output
    value/token hidden in commitments)
  - Handle missing input.decoded safely with optional chaining
  - Fix test helper updateInputsSpentBy to skip shielded inputs
  - Add unshielding integration test (shielded -> transparent)
  - Replace all approximate assertions with exact expected values

  Blocked on hathor-core: HTR surplus check in _check_token_permissions_and_deposits
  must skip for shielded transactions (patched in Docker for testing).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  - Throw on unknown shielded output mode in deserializer
  - Add buffer length validation in shielded output deserializer
  - Validate asset_commitment before non-null assertion for FullShielded
  - Throw on invalid ephemeral pubkey instead of zero-filling
  - Expose opts on getNextAddress interface
  - Make setPinCode parameter optional for clearing
  - Change getScanXPrivKey/getSpendXPrivKey return type to string | undefined
  - Return correct addressPath for shielded chain derivation
  - Run Prettier on creation.test.ts

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coderabbitai

coderabbitai bot commented Apr 15, 2026

@pedroferreira1 Sure! I'll resolve all my existing comments and kick off a fresh full review of the latest changes now.

✅ Actions performed

Full review triggered.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 14

♻️ Duplicate comments (8)
src/utils/bigint.ts (1)

29-53: ⚠️ Potential issue | 🟠 Major

Guard missing reviver context before BigInt(context.source)

Line 36 still dereferences context unconditionally. In engines where the reviver context is unavailable, numeric parsing throws before reaching the new null-safe comparison on Line 47.

💡 Minimal fix
-  bigIntReviver(_key: string, value: any, context: { source: string }): any {
+  bigIntReviver(_key: string, value: any, context?: { source: string }): any {
     if (typeof value !== 'number') {
       // No special handling needed for non-number values.
       return value;
     }

+    // Some runtimes don't provide reviver context.
+    if (!context?.source) {
+      if (Number.isInteger(value) && !Number.isSafeInteger(value)) {
+        throw new Error('JSONBigInt.parse cannot safely recover an unsafe integer without reviver context');
+      }
+      return value;
+    }
+
     try {
       const bigIntValue = BigInt(context.source);
Is the third `context` parameter in `JSON.parse` reviver (including `context.source`) supported in Hermes/React Native as of 2026, and what is the recommended fallback behavior when it is missing?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/bigint.ts` around lines 29 - 53, The reviver bigIntReviver
currently dereferences context.source unguarded; modify bigIntReviver to first
check that context and context.source exist and are strings (e.g., let source =
(context && typeof context.source === 'string') ? context.source :
String(value)) before calling BigInt(source), then proceed with the existing
try/catch and the numeric range check using bigIntValue; ensure all references
to context.source are replaced with this guarded source variable so engines
without the reviver context (Hermes/React Native) fall back safely.
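Because the guarded reviver no longer depends on the engine providing reviver context, it can be exercised directly with a mock context object. A simplified stand-in (without the original's try/catch around BigInt()):

```typescript
// Simplified stand-in for the guarded reviver. It is invoked directly with a
// mock context here, since the native third reviver argument (context.source)
// only exists on newer engines.
function bigIntReviver(_key: string, value: unknown, context?: { source: string }): unknown {
  if (typeof value !== 'number') return value;
  if (!context?.source) {
    // No reviver context: safe integers pass through, unsafe ones fail loudly.
    if (Number.isInteger(value) && !Number.isSafeInteger(value)) {
      throw new Error('cannot safely recover an unsafe integer without reviver context');
    }
    return value;
  }
  const bigIntValue = BigInt(context.source);
  const inSafeRange =
    bigIntValue >= BigInt(Number.MIN_SAFE_INTEGER) &&
    bigIntValue <= BigInt(Number.MAX_SAFE_INTEGER);
  return inSafeRange ? value : bigIntValue;
}
```

Throwing on unsafe integers without context is the conservative choice: silently returning a rounded number would corrupt token amounts.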
__tests__/integration/hathorwallet_facade.test.ts (1)

70-70: ⚠️ Potential issue | 🟡 Minor

Avoid flaky nonce assertion on mined transaction.

Line 70 can fail intermittently because mined nonce may legitimately be 0. Keep nonce as a type check and assert a stable mining side effect instead (e.g., hash/parents).

Suggested fix
-    expect(tx.nonce).toBeGreaterThan(0);
+    expect(tx.nonce).toEqual(expect.any(Number));
+    expect(tx.parents.length).toBeGreaterThan(0);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/hathorwallet_facade.test.ts` at line 70, The assertion
expecting tx.nonce > 0 is flaky because mined nonce can be 0; change the check
to a stable type/integrity assertion and assert a reliable mining side-effect
instead: verify tx.nonce is a number (e.g., Number.isInteger or typeof check on
the tx.nonce field) and replace the >0 assertion with an assertion that the
transaction has a valid hash or non-empty parents array (e.g., check tx.hash
exists/has length or tx.parents.length > 0) so the test validates mining
deterministically; update the test containing the tx variable (the assertion at
expect(tx.nonce).toBeGreaterThan(0)) accordingly.
src/models/shielded_output.ts (1)

134-138: ⚠️ Potential issue | 🟠 Major

Inconsistent ephemeralPubkey validation between serialize() and serializeSighash().

serialize() (lines 102-107) correctly throws on an invalid ephemeralPubkey, but serializeSighash() silently substitutes 33 zero bytes. This inconsistency can mask upstream bugs and produce an incorrect sighash for malformed outputs.

Proposed fix
-    // Always include ephemeral pubkey in sighash
-    arr.push(
-      this.ephemeralPubkey && this.ephemeralPubkey.length === EPHEMERAL_PUBKEY_SIZE
-        ? this.ephemeralPubkey
-        : Buffer.alloc(EPHEMERAL_PUBKEY_SIZE)
-    );
+    // Always include ephemeral pubkey in sighash
+    if (!this.ephemeralPubkey || this.ephemeralPubkey.length !== EPHEMERAL_PUBKEY_SIZE) {
+      throw new Error(
+        `Invalid ephemeral pubkey: expected ${EPHEMERAL_PUBKEY_SIZE} bytes, ` +
+          `got ${this.ephemeralPubkey?.length ?? 0}`
+      );
+    }
+    arr.push(this.ephemeralPubkey);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/models/shielded_output.ts` around lines 134 - 138, serializeSighash
currently substitutes a zeroed buffer when this.ephemeralPubkey is missing or
wrong-sized, while serialize throws on invalid ephemeralPubkey; update
serializeSighash to perform the same validation as serialize by checking
this.ephemeralPubkey and its length against EPHEMERAL_PUBKEY_SIZE and throwing
an error (with a clear message referencing ephemeralPubkey) instead of silently
using Buffer.alloc; locate the logic in the serializeSighash method and mirror
the validation behavior from serialize to ensure malformed outputs fail fast.
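Both serializers can share one validator so they fail identically. requireEphemeralPubkey is a hypothetical helper, shown with Uint8Array standing in for Node's Buffer:

```typescript
const EPHEMERAL_PUBKEY_SIZE = 33;

// Hypothetical shared validator so serialize() and serializeSighash() fail the
// same way on a malformed output (Uint8Array stands in for Node's Buffer).
function requireEphemeralPubkey(pubkey?: Uint8Array): Uint8Array {
  if (!pubkey || pubkey.length !== EPHEMERAL_PUBKEY_SIZE) {
    throw new Error(
      `Invalid ephemeral pubkey: expected ${EPHEMERAL_PUBKEY_SIZE} bytes, got ${pubkey?.length ?? 0}`
    );
  }
  return pubkey;
}
```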
src/storage/storage.ts (1)

381-410: ⚠️ Potential issue | 🔴 Critical

Don't compact tx.outputs when extracting shielded entries.

Reassigning tx.outputs to transparentOutputs renumbers every transparent output after the first shielded one. The rest of the wallet still uses the original on-chain output index, so later spends/UTXO lookups can hit the wrong output metadata.

🛠️ Suggested fix
     if (!tx.shielded_outputs) {
       const shieldedEntries: IHistoryShieldedOutput[] = [];
-      const transparentOutputs: IHistoryTx['outputs'] = [];
       for (const output of tx.outputs) {
         if (transactionUtils.isShieldedOutputEntry(output)) {
           shieldedEntries.push({
             mode: output.asset_commitment ? 2 : 1,
             commitment: output.commitment,
@@
             surjection_proof: output.surjection_proof
               ? Buffer.from(output.surjection_proof, 'base64').toString('hex')
               : undefined,
           });
-        } else {
-          transparentOutputs.push(output);
         }
       }
       if (shieldedEntries.length > 0) {
         // eslint-disable-next-line no-param-reassign
         tx.shielded_outputs = shieldedEntries;
-        // eslint-disable-next-line no-param-reassign
-        tx.outputs = transparentOutputs;
       }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/storage/storage.ts` around lines 381 - 410, The code currently removes
shielded entries from tx.outputs (reassigning tx.outputs = transparentOutputs)
which shifts on-chain indexes and breaks downstream UTXO/spend lookups; instead,
keep tx.outputs unchanged and only populate tx.shielded_outputs with the
extracted shielded entries (built via transactionUtils.isShieldedOutputEntry),
so downstream consumers still see original output indexing; update the block
that fills tx.shielded_outputs to stop modifying tx.outputs (remove
transparentOutputs and the assignment to tx.outputs) and only set
tx.shielded_outputs when shieldedEntries.length > 0.
src/utils/storage.ts (2)

845-857: ⚠️ Potential issue | 🟠 Major

Use the persisted shielded spend discriminator consistently.

These branches check shielded-spend, but the stored spend-address discriminator is shielded_spend. When they miss, shielded activity never updates shieldedMaxIndexUsed, so the shielded current/last-used cursors stay stale.

🛠️ Suggested fix
-    if (addressInfo.addressType === 'shielded-spend') {
+    if (addressInfo.addressType === 'shielded_spend') {
@@
-    if (addressInfo.addressType === 'shielded-spend') {
+    if (addressInfo.addressType === 'shielded_spend') {

Also applies to: 951-963

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/storage.ts` around lines 845 - 857, The branch is checking
addressInfo.addressType for 'shielded-spend' but the persisted discriminator is
'shielded_spend', so shielded addresses never update shieldedMaxIndexUsed;
update the comparisons in the branches that reference addressInfo.addressType
(the blocks that set shieldedMaxIndexUsed and legacyMaxIndexUsed) to use
'shielded_spend' instead of 'shielded-spend' and apply the same fix to the other
occurrence that mirrors this logic (the later block that also updates
shieldedMaxIndexUsed / legacyMaxIndexUsed) so shielded activity correctly
updates the shieldedMaxIndexUsed cursor.

786-825: ⚠️ Potential issue | 🔴 Critical

Make the shielded-output merge idempotent before saving the tx.

This block appends decoded shielded outputs into tx.outputs and persists that mutated tx. On the next processHistory() run, transparentCount is computed from an already-expanded array, so the same shielded results are appended again and tx.shielded_outputs[result.index - transparentCount] starts drifting. That duplicates balances/UTXOs on every reprocess.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/storage.ts` around lines 786 - 825, Make the shielded-output merge
idempotent by detecting and skipping already-appended decoded shielded outputs
before pushing them and before calling store.saveTx; e.g., in the block that
calls processShieldedOutputs (around processShieldedOutputs, transparentCount,
shieldedResults, tx.outputs, and store.saveTx) check for a unique identifier on
each decoded shielded output (commitment or ephemeral_pubkey or a new flag like
decoded_shielded_appended) and only push results that are not already present,
or set a per-tx marker (decoded_shielded_appended = true) after first merge and
skip merging on subsequent runs so tx.outputs and subsequent indexing remain
stable.
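A sketch of the commitment-keyed dedupe the prompt suggests; mergeShieldedOutputs and its types are hypothetical stand-ins for the processHistory merge:

```typescript
type MergedOutput = { type?: 'shielded'; commitment?: string; value: number };

// Hypothetical idempotent merge: decoded shielded outputs are keyed by their
// commitment, so re-running processHistory cannot append them twice.
function mergeShieldedOutputs(
  outputs: MergedOutput[],
  decoded: { commitment: string; value: number }[]
): number {
  const existing = new Set(
    outputs.filter(o => o.type === 'shielded').map(o => o.commitment)
  );
  let appended = 0;
  for (const d of decoded) {
    if (!existing.has(d.commitment)) {
      outputs.push({ type: 'shielded', commitment: d.commitment, value: d.value });
      existing.add(d.commitment);
      appended += 1;
    }
  }
  return appended; // persist the tx only when this is > 0
}
```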
src/new/wallet.ts (1)

1543-1567: ⚠️ Potential issue | 🔴 Critical

Don't compact live newTx.outputs; it renumbers on-chain indexes.

Replacing newTx.outputs with transparentOutputs shifts every transparent output after the first shielded one. Later spend/UTXO code still uses the original output index, so it can point at the wrong output.

🛠️ Suggested fix
     if (!newTx.shielded_outputs) {
       const shieldedEntries: IHistoryShieldedOutput[] = [];
-      const transparentOutputs: IHistoryOutput[] = [];
       for (const output of newTx.outputs) {
         if (transactionUtils.isShieldedOutputEntry(output)) {
           shieldedEntries.push({
             mode: output.asset_commitment ? 2 : 1, // FullShielded=2, AmountShielded=1
             commitment: output.commitment, // already hex
@@
             surjection_proof: output.surjection_proof
               ? Buffer.from(output.surjection_proof, 'base64').toString('hex')
               : undefined,
           });
-        } else {
-          transparentOutputs.push(output);
         }
       }
       if (shieldedEntries.length > 0) {
         newTx.shielded_outputs = shieldedEntries;
-        newTx.outputs = transparentOutputs;
       }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/new/wallet.ts` around lines 1543 - 1567, The code currently overwrites
newTx.outputs with a compacted transparentOutputs array which shifts on-chain
output indexes; instead, keep newTx.outputs length/ordering intact and only
extract shielded entries into newTx.shielded_outputs. Use
transactionUtils.isShieldedOutputEntry when iterating and either (a) leave the
original shielded output objects in newTx.outputs untouched, or (b) replace
shielded slots with explicit placeholders (e.g., null or {removed_shielded:
true, original_index: i}) so indexes remain stable for later UTXO/spend logic;
ensure newTx.shielded_outputs is set to the collected shieldedEntries and do not
assign transparentOutputs back to newTx.outputs.
src/storage/memory_store.ts (1)

313-337: ⚠️ Potential issue | 🔴 Critical

shielded_spend is being routed into the legacy chain.

These checks use shielded-spend, so persisted spend addresses fall through the legacy branch. That can overwrite addressIndexes at the same BIP32 index and later make legacy address/cursor APIs return the spend address instead of the transparent one.

🛠️ Suggested fix
-    } else if (info.addressType !== 'shielded-spend') {
+    } else if (info.addressType !== 'shielded_spend') {
@@
-      info.addressType === 'shielded' || info.addressType === 'shielded-spend';
+      info.addressType === 'shielded' || info.addressType === 'shielded_spend';
@@
-        if (addrInfo.addressType === 'shielded-spend') {
+        if (addrInfo.addressType === 'shielded_spend') {

Also applies to: 508-539

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/storage/memory_store.ts` around lines 313 - 337, The code is using an
inconsistent addressType string ('shielded-spend') which causes spend addresses
to be misrouted; update all checks from 'shielded-spend' to the correct
'shielded_spend' and treat that value as part of the shielded chain.
Specifically, change the routing logic around addressType (the first if/else
that updates shieldedAddressIndexes and addressIndexes) to exclude
'shielded_spend' from the legacy branch and ensure shieldedAddressIndexes (and
shielded chain updates) include info.addressType === 'shielded' ||
info.addressType === 'shielded_spend'; also update the isShieldedChain
computation and subsequent updates to walletData.shieldedCurrentAddressIndex,
walletData.shieldedLastLoadedAddressIndex, and calls to setCurrentAddressIndex
so spend addresses are tracked on the shielded chain rather than the legacy
addressIndexes.
🧹 Nitpick comments (4)
__tests__/integration/helpers/wallet.helper.ts (1)

337-337: Prefer ?? undefined when forwarding the PIN.

|| undefined also rewrites '' to “absent”, which changes behavior for any intentionally empty test PIN. hWallet.pinCode ?? undefined keeps the undefined-normalization without collapsing other falsy values.

Suggested change
-    await hWallet.storage.processHistory(hWallet.pinCode || undefined);
+    await hWallet.storage.processHistory(hWallet.pinCode ?? undefined);

Based on learnings, this repo prefers undefined/? for optional values and reserves null for explicit semantics.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/helpers/wallet.helper.ts` at line 337, The call to
hWallet.storage.processHistory currently normalizes falsy pin values using
"hWallet.pinCode || undefined" which incorrectly converts empty-string PINs to
undefined; change this to "hWallet.pinCode ?? undefined" so only undefined/null
are normalized while preserving other falsy values like ''. Update the
invocation in the hWallet.storage.processHistory call to use the nullish
coalescing operator (hWallet.pinCode ?? undefined) and run tests to verify
behavior.
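The behavioral difference between the two operators, reduced to two hypothetical helpers:

```typescript
// `||` treats every falsy value (including '') as absent; `??` only rewrites
// null/undefined, so an intentionally empty test PIN survives the call.
function normalizeWithOr(pin: string | null | undefined): string | undefined {
  return pin || undefined;
}

function normalizeWithNullish(pin: string | null | undefined): string | undefined {
  return pin ?? undefined;
}
```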
__tests__/wallet/walletServiceStorageProxy.test.ts (1)

20-20: Add one explicit shielded-branch test case in this suite.

This mock hardcodes non-shielded behavior, so the new passthrough branch is not validated here. Add one case with isShieldedOutputEntry returning true and assert shielded outputs are preserved as-is.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/wallet/walletServiceStorageProxy.test.ts` at line 20, Add a test in
the walletServiceStorageProxy.test.ts suite that covers the shielded-output
passthrough branch by mocking isShieldedOutputEntry to return true (replace or
augment the existing mock: isShieldedOutputEntry:
jest.fn().mockReturnValue(true)), call the same function under test that
processes outputs, and assert that shielded outputs are returned/preserved
unchanged (exactly matching the input shielded output object) to validate the
shielded branch logic.
src/new/sendTransaction.ts (1)

374-379: Phantom output removal relies on reference equality — document this invariant.

The removal of phantom outputs using Set.has() depends on shuffle() and spread operators preserving object references. While this works correctly, a future refactor that clones objects could silently break this logic.

Consider adding a defensive comment or using a more explicit marker (e.g., a _phantom property) to make this invariant clearer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/new/sendTransaction.ts` around lines 374 - 379, The phantom output
removal uses reference-equality via phantomOutputs.has(out) inside
sendTransaction (variables: phantomOutputs, outputs) and relies on shuffle() and
spread preserving object references; make this invariant explicit by either
adding a defensive comment above that block mentioning the reliance on object
identity or, more robustly, mark phantom outputs when created (e.g., set a
_phantom boolean on the output objects) and change the removal to check that
marker (or use a unique id field) so future cloning/shallow-copy refactors
cannot break phantom filtering.
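The reference-equality invariant can be demonstrated directly; the names below are illustrative, not the sendTransaction internals:

```typescript
// Set.has() uses reference identity, so the phantom filter only works while
// the exact same objects flow through shuffle/spread untouched.
type PhantomOut = { address: string; value: number };

const phantom: PhantomOut = { address: 'H-phantom', value: 0 };
const real: PhantomOut = { address: 'H-real', value: 10 };
const phantomOutputs = new Set<PhantomOut>([phantom]);

// Same references: the phantom is removed as intended.
const kept = [phantom, real].filter(o => !phantomOutputs.has(o));

// A cloning refactor (e.g. map + spread) silently defeats the removal.
const keptAfterClone = [phantom, real].map(o => ({ ...o })).filter(o => !phantomOutputs.has(o));
```

This is why a `_phantom` marker or unique id, checked by value, would survive refactors that a reference-based Set cannot.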
__tests__/integration/shielded_transactions.test.ts (1)

33-36: Test timeout override is acceptable but consider a cleaner approach.

Casting constants to any to override TIMEOUT works but suppresses type safety. A cleaner alternative would be to export a setTestTimeout function from constants or use Jest's built-in request mocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/shielded_transactions.test.ts` around lines 33 - 36,
The test suppresses type safety by casting `constants` to `any` to override
`TIMEOUT`; add a typed setter in the constants module (e.g., export function
setTestTimeout(ms: number)) and change the test to call `setTestTimeout(30000)`
instead of mutating `constants as any`. Update the constants file to keep
`TIMEOUT` read-only by default and have `setTestTimeout` modify the internal
value only for tests; reference the existing `constants` and `TIMEOUT` symbols
when implementing and calling the new `setTestTimeout` API.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@__tests__/headers/shielded_outputs.test.ts`:
- Line 11: Remove the unused import VertexHeaderId from the test file: locate
the import statement "import { VertexHeaderId } from '../../src/headers/types';"
in __tests__/headers/shielded_outputs.test.ts and delete VertexHeaderId (or
remove the entire import line if nothing else is imported) so the linter warning
is resolved.

In `@__tests__/integration/configuration/docker-compose.yml`:
- Around line 240-241: Update the hardcoded localhost ports to match the
docker-compose change: replace usages of
config.setWalletServiceBaseUrl('http://localhost:3000/dev/') with port 3100 and
config.setWalletServiceBaseWsUrl('ws://localhost:3001/') with port 3101 (i.e.,
'http://localhost:3100/dev/' and 'ws://localhost:3101/'), and update the
explanatory comment that mentions "localhost:3000" to reference the new host
port 3100 so tests and adapters point to the correct service endpoints.

In `@__tests__/models/shielded_output.test.ts`:
- Line 10: Remove the unused import intToBytes from the test file; locate the
import statement that references intToBytes (import { intToBytes } from
'../../src/utils/buffer';) in __tests__/models/shielded_output.test.ts and
delete that symbol (or the whole import line if it only imports intToBytes) so
the linter warning is resolved and the test file no longer imports an unused
utility.

In `@__tests__/models/transaction.test.ts`:
- Line 637: Replace the explicit "any" casts used in the test (the occurrences
currently written as "as any") with the concrete element type from
Transaction['shieldedOutputs']; specifically cast to
Transaction['shieldedOutputs'][number] (or the equivalent ShieldedOutput element
type) for both the occurrence at the block ending with "})) as any;" and the
similar cast at the other occurrence so the tests comply with
`@typescript-eslint/no-explicit-any`.

In `@__tests__/shielded/creation.test.ts`:
- Around line 15-26: The test mock defines an unused callCount variable that is
incremented inside the createAmountShieldedOutput mock but never read; remove
the callCount declaration and the callCount++ increment from the mock
implementation (leave generateRandomBlindingFactor and
createAmountShieldedOutput mocks intact) so the test relies on Jest's call
assertions (e.g., toHaveBeenCalled) instead of a manual counter.

In `@__tests__/shielded/processing.test.ts`:
- Around line 179-180: Replace the inline CommonJS require of bitcore-lib with
an ES import: add "import { HDPrivateKey } from 'bitcore-lib';" to the file's
top-level imports, then remove the inline require usages and update the two test
lines that call "new HDPrivateKey().deriveNonCompliantChild(0).xprivkey" (e.g.,
the variables named mockXpriv) to use the imported HDPrivateKey; ensure both
occurrences (the one around line 179 and the one around line 219) are updated
accordingly so ESLint no longer flags require().

In `@package.json`:
- Line 17: The package.json currently lists "@hathor/ct-crypto-node":
"file:../hathor-core/hathor-ct-crypto/npm" in dependencies which will break
installs for consumers; move that dependency into devDependencies if it's only
required for local development/build, or replace the file: reference with a
published semver string (e.g., "@hathor/ct-crypto-node": "1.2.3") after
publishing the package to npm so production installs can resolve it; update the
package.json entry for "@hathor/ct-crypto-node" accordingly and run an install
to verify no other code (build scripts or runtime imports) depends on it at
runtime.

In `@src/models/address.ts`:
- Around line 8-14: Remove the unused "crypto" import from the bitcore-lib
import list in this module; update the import statement that currently declares
"crypto, encoding, util, Address as BitcoreAddress, PublicKey as
bitcorePublicKey" to omit "crypto" so only used symbols (encoding, util,
BitcoreAddress, bitcorePublicKey) remain, which will clear the linter warning.

In `@src/new/wallet.ts`:
- Around line 1825-1837: The catch block for the dynamic import of
createDefaultShieldedCryptoProvider can itself throw when it blindly passes the
caught exception as a second argument to this.logger.info, so update the catch
to avoid forwarding the raw error object: log a plain descriptive string
(optionally include e?.message or String(e) interpolated) and do not rethrow;
keep the fallback path that leaves this.storage.shieldedCryptoProvider unset.
Change the code around the import/try-catch (the block using
createDefaultShieldedCryptoProvider, this.storage.setShieldedCryptoProvider, and
this.logger.info) to ensure the logger call is safe and start() cannot throw
when the optional provider is missing.
- Around line 3454-3459: The parameter order in syncHistory violates ESLint
because a defaulted parameter precedes an optional one; change the signature so
shouldProcessHistory is optional (remove the = false from the parameter list and
mark it optional) and keep pinCode optional, then set the default inside the
function body (e.g., if shouldProcessHistory is undefined, assign false at the
top of syncHistory) so behavior remains the same while complying with the
airbnb-base rule; locate the syncHistory method to implement this change.

In `@src/shielded/types.ts`:
- Around line 20-31: IShieldedOutput should be a discriminated union on the
existing mode field: create a shared base (common fields like mode:
ShieldedOutputMode, commitment, range_proof, script, ephemeral_pubkey, decoded)
and then two concrete interfaces (e.g., AmountShieldedOutput with mode:
'AMOUNT_SHIELDED' and a required token_data: number, and FullyShieldedOutput
with mode: 'FULLY_SHIELDED' and required asset_commitment and surjection_proof
strings). Replace the current IShieldedOutput with a union type
(AmountShieldedOutput | FullyShieldedOutput) so token_data is only present for
AMOUNT_SHIELDED and asset_commitment/surjection_proof are required for
FULLY_SHIELDED; keep IShieldedOutputDecoded and ShieldedOutputMode names
unchanged.
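A minimal sketch of the discriminated union described above, with string-literal modes standing in for the real `ShieldedOutputMode` enum (an assumption for illustration):

```typescript
type ShieldedOutputMode = 'AMOUNT_SHIELDED' | 'FULLY_SHIELDED';

interface IShieldedOutputBase {
  commitment: string;
  range_proof: string;
  script: string;
  ephemeral_pubkey: string;
}

// Each variant requires exactly the fields its protocol mode needs.
type IShieldedOutput =
  | (IShieldedOutputBase & { mode: 'AMOUNT_SHIELDED'; token_data: number })
  | (IShieldedOutputBase & {
      mode: 'FULLY_SHIELDED';
      asset_commitment: string;
      surjection_proof: string;
    });

// Narrowing on `mode` gives type-safe access to the variant-specific fields.
function describeOutput(out: IShieldedOutput): string {
  return out.mode === 'AMOUNT_SHIELDED'
    ? `token_data=${out.token_data}`
    : `asset_commitment=${out.asset_commitment}`;
}
```

With this shape, a `FULLY_SHIELDED` payload missing `surjection_proof` fails to type-check instead of passing silently.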

In `@src/sync/stream.ts`:
- Around line 265-266: The parameter order/default violation comes from the
signature using shouldProcessHistory: boolean = false before an optional
pinCode?: string; change both call sites (the signature at the shown block and
the other occurrence) to make both parameters optional and handle the default
inside the function: replace the signature with shouldProcessHistory?: boolean,
pinCode?: string and inside the function derive const _shouldProcessHistory =
shouldProcessHistory ?? false (or similar) so ESLint is satisfied without a
breaking API change.
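The signature change can be sketched as follows (a minimal stand-in, not the real `syncHistory` body):

```typescript
// Both parameters are optional, and the default is derived inside the body,
// satisfying ESLint's default-param-last rule without breaking callers.
function syncHistory(shouldProcessHistory?: boolean, pinCode?: string): string {
  const processHistory = shouldProcessHistory ?? false;
  // The real implementation would sync history here; we just echo the flags.
  return `process=${processHistory} pin=${pinCode ?? 'none'}`;
}
```

Existing call sites such as `syncHistory()` and `syncHistory(true, '1234')` behave exactly as before.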

In `@src/utils/transaction.ts`:
- Line 39: Remove the unused imports IDataShieldedOutput and ShieldedOutputMode
from the import list in this module (they are flagged by the linter and not
referenced anywhere); locate the import statement that currently includes
IDataShieldedOutput and ShieldedOutputMode, delete those two identifiers from
the import clause, and run the linter/typecheck to confirm no other references
remain or re-add them only if they are actually needed by functions/types in
this file.

In `@src/utils/wallet.ts`:
- Around line 770-773: The PIN-change routine currently decrypts and re-encrypts
only data.spendMainKey using decryptData(oldPin) and encryptData(newPin); add
the same flow for data.scanMainKey so both keys are re-encrypted when the PIN
changes—i.e., if data.scanMainKey exists, call decryptData(data.scanMainKey,
oldPin) and then set data.scanMainKey = encryptData(decryptedScanKey, newPin)
using the same helper functions to mirror the spendMainKey handling.
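The symmetric re-encryption flow can be sketched as below. The `encryptData`/`decryptData` stand-ins here are illustrative only (the real helpers use proper encryption); the point is that both keys go through the same decrypt-with-old-PIN / re-encrypt-with-new-PIN path:

```typescript
// Toy stand-ins for the real crypto helpers, for illustration only.
const encryptData = (plain: string, pin: string): string => `${pin}:${plain}`;
const decryptData = (cipher: string, pin: string): string => {
  const prefix = `${pin}:`;
  if (!cipher.startsWith(prefix)) throw new Error('Invalid PIN');
  return cipher.slice(prefix.length);
};

interface AccessData {
  spendMainKey?: string;
  scanMainKey?: string;
}

function changePin(data: AccessData, oldPin: string, newPin: string): void {
  if (data.spendMainKey) {
    data.spendMainKey = encryptData(decryptData(data.spendMainKey, oldPin), newPin);
  }
  // Mirror the same flow for scanMainKey so both keys track the new PIN.
  if (data.scanMainKey) {
    data.scanMainKey = encryptData(decryptData(data.scanMainKey, oldPin), newPin);
  }
}
```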

---

Duplicate comments:
In `@__tests__/integration/hathorwallet_facade.test.ts`:
- Line 70: The assertion expecting tx.nonce > 0 is flaky because mined nonce can
be 0; change the check to a stable type/integrity assertion and assert a
reliable mining side-effect instead: verify tx.nonce is a number (e.g.,
Number.isInteger or typeof check on the tx.nonce field) and replace the >0
assertion with an assertion that the transaction has a valid hash or non-empty
parents array (e.g., check tx.hash exists/has length or tx.parents.length > 0)
so the test validates mining deterministically; update the test containing the
tx variable (the assertion at expect(tx.nonce).toBeGreaterThan(0)) accordingly.

In `@src/models/shielded_output.ts`:
- Around line 134-138: serializeSighash currently substitutes a zeroed buffer
when this.ephemeralPubkey is missing or wrong-sized, while serialize throws on
invalid ephemeralPubkey; update serializeSighash to perform the same validation
as serialize by checking this.ephemeralPubkey and its length against
EPHEMERAL_PUBKEY_SIZE and throwing an error (with a clear message referencing
ephemeralPubkey) instead of silently using Buffer.alloc; locate the logic in the
serializeSighash method and mirror the validation behavior from serialize to
ensure malformed outputs fail fast.
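The fail-fast check can be sketched as follows, assuming the 33-byte size the comment references; `validateEphemeralPubkey` is a hypothetical helper name, not an existing API:

```typescript
const EPHEMERAL_PUBKEY_SIZE = 33; // assumed size from the review comment

// Throws on a missing or wrong-sized ephemeral pubkey instead of silently
// substituting a zeroed buffer, so sighash computation fails fast.
function validateEphemeralPubkey(ephemeralPubkey?: Buffer): Buffer {
  if (!ephemeralPubkey || ephemeralPubkey.length !== EPHEMERAL_PUBKEY_SIZE) {
    throw new Error(
      `Invalid ephemeralPubkey: expected ${EPHEMERAL_PUBKEY_SIZE} bytes`
    );
  }
  return ephemeralPubkey;
}
```

Both `serialize` and `serializeSighash` would route through the same check, keeping their validation behavior consistent.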

In `@src/new/wallet.ts`:
- Around line 1543-1567: The code currently overwrites newTx.outputs with a
compacted transparentOutputs array which shifts on-chain output indexes;
instead, keep newTx.outputs length/ordering intact and only extract shielded
entries into newTx.shielded_outputs. Use transactionUtils.isShieldedOutputEntry
when iterating and either (a) leave the original shielded output objects in
newTx.outputs untouched, or (b) replace shielded slots with explicit
placeholders (e.g., null or {removed_shielded: true, original_index: i}) so
indexes remain stable for later UTXO/spend logic; ensure newTx.shielded_outputs
is set to the collected shieldedEntries and do not assign transparentOutputs
back to newTx.outputs.

In `@src/storage/memory_store.ts`:
- Around line 313-337: The code is using an inconsistent addressType string
('shielded-spend') which causes spend addresses to be misrouted; update all
checks from 'shielded-spend' to the correct 'shielded_spend' and treat that
value as part of the shielded chain. Specifically, change the routing logic
around addressType (the first if/else that updates shieldedAddressIndexes and
addressIndexes) to exclude 'shielded_spend' from the legacy branch and ensure
shieldedAddressIndexes (and shielded chain updates) include info.addressType ===
'shielded' || info.addressType === 'shielded_spend'; also update the
isShieldedChain computation and subsequent updates to
walletData.shieldedCurrentAddressIndex,
walletData.shieldedLastLoadedAddressIndex, and calls to setCurrentAddressIndex
so spend addresses are tracked on the shielded chain rather than the legacy
addressIndexes.
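The corrected discriminator check can be sketched as below; the `AddressType` union is an assumption for illustration:

```typescript
type AddressType = 'legacy' | 'shielded' | 'shielded_spend';

// 'shielded_spend' (underscore, the persisted spelling) belongs to the
// shielded chain; the buggy 'shielded-spend' (hyphen) check never matched.
function isShieldedChain(addressType: AddressType): boolean {
  return addressType === 'shielded' || addressType === 'shielded_spend';
}
```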

In `@src/storage/storage.ts`:
- Around line 381-410: The code currently removes shielded entries from
tx.outputs (reassigning tx.outputs = transparentOutputs) which shifts on-chain
indexes and breaks downstream UTXO/spend lookups; instead, keep tx.outputs
unchanged and only populate tx.shielded_outputs with the extracted shielded
entries (built via transactionUtils.isShieldedOutputEntry), so downstream
consumers still see original output indexing; update the block that fills
tx.shielded_outputs to stop modifying tx.outputs (remove transparentOutputs and
the assignment to tx.outputs) and only set tx.shielded_outputs when
shieldedEntries.length > 0.

In `@src/utils/bigint.ts`:
- Around line 29-53: The reviver bigIntReviver currently dereferences
context.source unguarded; modify bigIntReviver to first check that context and
context.source exist and are strings (e.g., let source = (context && typeof
context.source === 'string') ? context.source : String(value)) before calling
BigInt(source), then proceed with the existing try/catch and the numeric range
check using bigIntValue; ensure all references to context.source are replaced
with this guarded source variable so engines without the reviver context
(Hermes/React Native) fall back safely.
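A sketch of the guarded reviver, under the assumption that falling back to `String(value)` is acceptable when the engine passes no reviver context:

```typescript
function bigIntReviver(
  _key: string,
  value: unknown,
  context?: { source?: string }
): unknown {
  if (typeof value !== 'number') {
    return value; // no special handling for non-number values
  }
  // Engines without reviver context (e.g. Hermes) fall back to the
  // already-parsed numeric value's string form.
  const source =
    context && typeof context.source === 'string' ? context.source : String(value);
  try {
    const bigIntValue = BigInt(source);
    if (
      bigIntValue < BigInt(Number.MIN_SAFE_INTEGER) ||
      bigIntValue > BigInt(Number.MAX_SAFE_INTEGER)
    ) {
      // Only return a BigInt when the value is outside the safe range.
      return bigIntValue;
    }
    return value;
  } catch {
    // Non-integer sources (e.g. "3.5") make BigInt() throw; keep the number.
    return value;
  }
}
```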

In `@src/utils/storage.ts`:
- Around line 845-857: The branch is checking addressInfo.addressType for
'shielded-spend' but the persisted discriminator is 'shielded_spend', so
shielded addresses never update shieldedMaxIndexUsed; update the comparisons in
the branches that reference addressInfo.addressType (the blocks that set
shieldedMaxIndexUsed and legacyMaxIndexUsed) to use 'shielded_spend' instead of
'shielded-spend' and apply the same fix to the other occurrence that mirrors
this logic (the later block that also updates shieldedMaxIndexUsed /
legacyMaxIndexUsed) so shielded activity correctly updates the
shieldedMaxIndexUsed cursor.
- Around line 786-825: Make the shielded-output merge idempotent by detecting
and skipping already-appended decoded shielded outputs before pushing them and
before calling store.saveTx; e.g., in the block that calls
processShieldedOutputs (around processShieldedOutputs, transparentCount,
shieldedResults, tx.outputs, and store.saveTx) check for a unique identifier on
each decoded shielded output (commitment or ephemeral_pubkey or a new flag like
decoded_shielded_appended) and only push results that are not already present,
or set a per-tx marker (decoded_shielded_appended = true) after first merge and
skip merging on subsequent runs so tx.outputs and subsequent indexing remain
stable.
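The marker-based idempotency suggested above can be sketched as follows; `decoded_shielded_appended` is the hypothetical per-tx flag named in the comment:

```typescript
interface Tx {
  outputs: string[];
  decoded_shielded_appended?: boolean;
}

// Merging decoded shielded outputs is safe to call repeatedly: after the
// first merge the marker short-circuits, so indexes stay stable.
function mergeShielded(tx: Tx, shieldedResults: string[]): void {
  if (tx.decoded_shielded_appended) {
    return; // already merged on a previous history-processing run
  }
  tx.outputs.push(...shieldedResults);
  tx.decoded_shielded_appended = true;
}
```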

---

Nitpick comments:
In `@__tests__/integration/helpers/wallet.helper.ts`:
- Line 337: The call to hWallet.storage.processHistory currently normalizes
falsy pin values using "hWallet.pinCode || undefined" which incorrectly converts
empty-string PINs to undefined; change this to "hWallet.pinCode ?? undefined" so
only undefined/null are normalized while preserving other falsy values like ''.
Update the invocation in the hWallet.storage.processHistory call to use the
nullish coalescing operator (hWallet.pinCode ?? undefined) and run tests to
verify behavior.
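The difference between `||` and `??` here is exactly the empty-string case, as a minimal sketch shows:

```typescript
// `??` only replaces null/undefined, so an empty-string PIN survives;
// `||` would have collapsed '' to undefined as well.
function normalizePin(pinCode: string | null | undefined): string | undefined {
  return pinCode ?? undefined;
}
```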

In `@__tests__/integration/shielded_transactions.test.ts`:
- Around line 33-36: The test suppresses type safety by casting `constants` to
`any` to override `TIMEOUT`; add a typed setter in the constants module (e.g.,
export function setTestTimeout(ms: number)) and change the test to call
`setTestTimeout(30000)` instead of mutating `constants as any`. Update the
constants file to keep `TIMEOUT` read-only by default and have `setTestTimeout`
modify the internal value only for tests; reference the existing `constants` and
`TIMEOUT` symbols when implementing and calling the new `setTestTimeout` API.

In `@__tests__/wallet/walletServiceStorageProxy.test.ts`:
- Line 20: Add a test in the walletServiceStorageProxy.test.ts suite that covers
the shielded-output passthrough branch by mocking isShieldedOutputEntry to
return true (replace or augment the existing mock: isShieldedOutputEntry:
jest.fn().mockReturnValue(true)), call the same function under test that
processes outputs, and assert that shielded outputs are returned/preserved
unchanged (exactly matching the input shielded output object) to validate the
shielded branch logic.

In `@src/new/sendTransaction.ts`:
- Around line 374-379: The phantom output removal uses reference-equality via
phantomOutputs.has(out) inside sendTransaction (variables: phantomOutputs,
outputs) and relies on shuffle() and spread preserving object references; make
this invariant explicit by either adding a defensive comment above that block
mentioning the reliance on object identity or, more robustly, mark phantom
outputs when created (e.g., set a _phantom boolean on the output objects) and
change the removal to check that marker (or use a unique id field) so future
cloning/shallow-copy refactors cannot break phantom filtering.
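The marker-based alternative can be sketched as below; the `_phantom` flag is a hypothetical field name, shown because it survives cloning and shallow copies where reference identity does not:

```typescript
interface Output {
  address: string;
  value: number;
  _phantom?: boolean; // set when the output is created as a phantom
}

// Filtering by the marker is robust to shuffle(), spreads, and clones,
// unlike a Set keyed on object identity.
function removePhantoms(outputs: Output[]): Output[] {
  return outputs.filter(out => !out._phantom);
}
```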
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 901fbe48-378b-4631-bc99-5c48af3d172f

📥 Commits

Reviewing files that changed from the base of the PR and between 0ccc910 and 7e72bf1.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (46)
  • __tests__/headers/shielded_outputs.test.ts
  • __tests__/integration/configuration/docker-compose.yml
  • __tests__/integration/configuration/privnet.yml
  • __tests__/integration/hathorwallet_facade.test.ts
  • __tests__/integration/helpers/wallet.helper.ts
  • __tests__/integration/shielded_transactions.test.ts
  • __tests__/models/address.test.ts
  • __tests__/models/network.test.ts
  • __tests__/models/shielded_output.test.ts
  • __tests__/models/transaction.test.ts
  • __tests__/shielded/creation.test.ts
  • __tests__/shielded/processing.test.ts
  • __tests__/storage/storage.test.ts
  • __tests__/wallet/walletServiceStorageProxy.test.ts
  • package.json
  • setupTests-integration.js
  • src/constants.ts
  • src/headers/parser.ts
  • src/headers/shielded_outputs.ts
  • src/headers/types.ts
  • src/lib.ts
  • src/models/address.ts
  • src/models/network.ts
  • src/models/shielded_output.ts
  • src/models/transaction.ts
  • src/new/sendTransaction.ts
  • src/new/types.ts
  • src/new/wallet.ts
  • src/schemas.ts
  • src/shielded/creation.ts
  • src/shielded/index.ts
  • src/shielded/processing.ts
  • src/shielded/provider.ts
  • src/shielded/types.ts
  • src/storage/memory_store.ts
  • src/storage/storage.ts
  • src/sync/stream.ts
  • src/types.ts
  • src/utils/address.ts
  • src/utils/bigint.ts
  • src/utils/shieldedAddress.ts
  • src/utils/storage.ts
  • src/utils/transaction.ts
  • src/utils/wallet.ts
  • src/wallet/types.ts
  • src/wallet/walletServiceStorageProxy.ts
💤 Files with no reviewable changes (1)
  • setupTests-integration.js

Comment thread __tests__/headers/shielded_outputs.test.ts Outdated
Comment on lines +240 to +241
- "3100:3000"
- "3101:3001"

@coderabbitai coderabbitai bot Apr 15, 2026


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for hardcoded references to old ports 3000/3001 in test files
echo "=== Searching for localhost:3000 or localhost:3001 references ==="
rg -n "localhost:300[01]|127.0.0.1:300[01]" --type ts --type js

echo ""
echo "=== Checking service-facade.helper.ts for port definitions ==="
fd -t f "service-facade.helper.ts" --exec cat {}

Repository: HathorNetwork/hathor-wallet-lib

Length of output: 8039


Update test helpers to use the new host ports 3100 and 3101.

The docker-compose port mapping changed from 3000:3000/3001:3001 to 3100:3000/3101:3001, but the test helpers still hardcode the old ports:

  • __tests__/integration/helpers/service-facade.helper.ts:37-38 — config.setWalletServiceBaseUrl('http://localhost:3000/dev/') and config.setWalletServiceBaseWsUrl('ws://localhost:3001/') must be updated to use 3100 and 3101
  • __tests__/integration/adapters/types.ts:102 — comment references localhost:3000 and needs updating

Tests will fail connecting to the services without these updates.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/configuration/docker-compose.yml` around lines 240 -
241, Update the hardcoded localhost ports to match the docker-compose change:
replace usages of config.setWalletServiceBaseUrl('http://localhost:3000/dev/')
with port 3100 and config.setWalletServiceBaseWsUrl('ws://localhost:3001/') with
port 3101 (i.e., 'http://localhost:3100/dev/' and 'ws://localhost:3101/'), and
update the explanatory comment that mentions "localhost:3000" to reference the
new host port 3100 so tests and adapters point to the correct service endpoints.

Member Author

There's a comment by myself about changes in this file. This will be addressed later


@pedroferreira1, understood! I'll leave this open for now. Would you like me to create a follow-up GitHub issue to track the update of the hardcoded ports in __tests__/integration/helpers/service-facade.helper.ts (lines 37–38) and __tests__/integration/adapters/types.ts (line 102) from 3000/3001 to 3100/3101?

Comment thread __tests__/models/shielded_output.test.ts Outdated
Comment thread __tests__/models/transaction.test.ts Outdated
Comment thread __tests__/shielded/creation.test.ts Outdated
Comment thread src/new/wallet.ts
Comment thread src/shielded/types.ts
Comment on lines +20 to +31
export interface IShieldedOutput {
mode: ShieldedOutputMode;
commitment: string; // hex, 33 bytes
range_proof: string; // hex, variable (~675 bytes)
script: string; // hex, output script (P2PKH/P2SH)
token_data: number; // token index (AmountShielded only)
ephemeral_pubkey: string; // hex, 33 bytes
decoded: IShieldedOutputDecoded;
// FullShielded only:
asset_commitment?: string; // hex, 33 bytes
surjection_proof?: string; // hex, variable
}

@coderabbitai coderabbitai bot Apr 15, 2026


⚠️ Potential issue | 🟠 Major

Model IShieldedOutput as a discriminated union by mode.

Line 25 makes token_data mandatory even though the comment says it exists only for AMOUNT_SHIELDED, while Lines 29-30 make the full-shielded fields optional. That means a valid FULLY_SHIELDED output cannot be represented without inventing token_data, and malformed full-shielded payloads still type-check. Please split this into two variants keyed by mode so each protocol mode requires the right fields.

Proposed refactor
-export interface IShieldedOutput {
-  mode: ShieldedOutputMode;
+interface IShieldedOutputBase {
   commitment: string; // hex, 33 bytes
   range_proof: string; // hex, variable (~675 bytes)
   script: string; // hex, output script (P2PKH/P2SH)
-  token_data: number; // token index (AmountShielded only)
   ephemeral_pubkey: string; // hex, 33 bytes
   decoded: IShieldedOutputDecoded;
-  // FullShielded only:
-  asset_commitment?: string; // hex, 33 bytes
-  surjection_proof?: string; // hex, variable
 }
+
+export type IShieldedOutput =
+  | (IShieldedOutputBase & {
+      mode: ShieldedOutputMode.AMOUNT_SHIELDED;
+      token_data: number;
+    })
+  | (IShieldedOutputBase & {
+      mode: ShieldedOutputMode.FULLY_SHIELDED;
+      asset_commitment: string;
+      surjection_proof: string;
+    });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shielded/types.ts` around lines 20 - 31, IShieldedOutput should be a
discriminated union on the existing mode field: create a shared base (common
fields like mode: ShieldedOutputMode, commitment, range_proof, script,
ephemeral_pubkey, decoded) and then two concrete interfaces (e.g.,
AmountShieldedOutput with mode: 'AMOUNT_SHIELDED' and a required token_data:
number, and FullyShieldedOutput with mode: 'FULLY_SHIELDED' and required
asset_commitment and surjection_proof strings). Replace the current
IShieldedOutput with a union type (AmountShieldedOutput | FullyShieldedOutput)
so token_data is only present for AMOUNT_SHIELDED and
asset_commitment/surjection_proof are required for FULLY_SHIELDED; keep
IShieldedOutputDecoded and ShieldedOutputMode names unchanged.

Member Author

Valid design improvement but out of scope. The current shape works correctly


@pedroferreira1 Understood, thanks for the context! Would you like me to open a follow-up GitHub issue to track this improvement for a future PR?


🧠 Learnings used
Learnt from: tuliomir
Repo: HathorNetwork/hathor-wallet-lib PR: 1015
File: src/new/wallet.ts:3076-3081
Timestamp: 2026-03-03T23:42:09.591Z
Learning: In `src/new/wallet.ts` (HathorNetwork/hathor-wallet-lib), the methods `createNanoContractTransaction` and `createNanoContractCreateTokenTransaction` use non-null assertions (`data.blueprintId!`, `data.ncId!`, `data.args!`) on optional fields of `CreateNanoTxData` (defined in `src/new/types.ts`) without prior runtime validation. If these fields are `undefined` or `null`, malformed transactions will be created silently. A fix was deliberately deferred (not in scope of typings-only PRs). Always flag this missing validation in future reviews of these methods until it is resolved.

Learnt from: tuliomir
Repo: HathorNetwork/hathor-wallet-lib PR: 1036
File: src/utils/tokens.ts:960-960
Timestamp: 2026-03-18T20:29:28.330Z
Learning: In HathorNetwork/hathor-wallet-lib, avoid using null to indicate optional/unprovided parameters in TypeScript. Use undefined and the ? optional syntax (e.g., data?: string[]). Null should be reserved for explicit semantic meaning. This applies repo-wide as part of the TS migration; flag any occurrence of a parameter type like T | null for optional parameters in reviews and prefer data?: T or data: T | undefined with appropriate guards when needed.

Member Author

No need

Comment thread src/sync/stream.ts Outdated
Comment thread src/utils/transaction.ts Outdated
Comment thread src/utils/wallet.ts
  - Fix missing scanMainKey re-encryption when changing PIN
  - Remove unused imports: VertexHeaderId, intToBytes, crypto,
    IDataShieldedOutput, ShieldedOutputMode
  - Remove unused callCount variable in creation.test.ts
  - Replace inline require('bitcore-lib') with ES import in processing.test.ts

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  - Convert shouldProcessHistory from defaulted to optional parameter in
    syncHistory, apiSyncHistory, xpubStreamSyncHistory, manualStreamSyncHistory,
    and streamSyncHistory to comply with default-param-last rule
  - Replace as any with as ShieldedOutput[] in transaction tests

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@pedroferreira1
Member Author

@coderabbitai I addressed all your comments in the last 3 commits or by replying to them. Please resolve all of them and do a full review of the new code again

@coderabbitai

coderabbitai bot commented Apr 15, 2026

@pedroferreira1 Sure! I'll resolve all existing comments and kick off a fresh full review of the latest changes right away.

✅ Actions performed

Full review triggered.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

♻️ Duplicate comments (10)
src/utils/bigint.ts (1)

29-49: ⚠️ Potential issue | 🟠 Major

Guard missing reviver context before BigInt(context.source)

context?.source on Line 47 avoids one property access in the catch condition, but Line 36 still dereferences context.source unconditionally. In runtimes that don’t pass reviver context, this throws and breaks numeric JSON parsing.

💡 Suggested fix
-  bigIntReviver(_key: string, value: any, context: { source: string }): any {
+  bigIntReviver(_key: string, value: any, context?: { source: string }): any {
     if (typeof value !== 'number') {
       // No special handling needed for non-number values.
       return value;
     }

+    // Some runtimes (e.g. Hermes) may not provide reviver context.
+    if (!context?.source) {
+      if (Number.isInteger(value) && !Number.isSafeInteger(value)) {
+        throw new Error('JSONBigInt.parse cannot safely recover an unsafe integer without reviver context');
+      }
+      return value;
+    }
+
     try {
       const bigIntValue = BigInt(context.source);
       if (bigIntValue < Number.MIN_SAFE_INTEGER || bigIntValue > Number.MAX_SAFE_INTEGER) {
         // We only return the value as a BigInt if it's in the unsafe range.
         return bigIntValue;
       }
Does JSON.parse reviver receive a third `context` argument in Node.js v22, and is that same argument supported in Hermes (React Native)?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/bigint.ts` around lines 29 - 49, bigIntReviver currently
dereferences context.source unconditionally when calling BigInt(context.source)
which can throw if reviver context isn't provided; update bigIntReviver to first
guard that context and context.source exist and are the expected type (e.g.,
typeof context?.source === 'string') before calling BigInt(context.source), and
if the context is missing fall back to treating value as a Number (return value)
so parsing doesn't break; also normalize the catch condition to use the same
safe context access (context?.source) rather than assuming context exists.
src/utils/transaction.ts (1)

667-695: ⚠️ Potential issue | 🟠 Major

Add validation for FULLY_SHIELDED-specific fields.

The validation only checks common fields. A FULLY_SHIELDED output missing assetCommitment or surjectionProof will pass validation here but fail during serialization or processing.

🛠️ Suggested fix
         const shieldedModels = txData.shieldedOutputs.map(so => {
           if (!so.commitment || !so.rangeProof || !so.script || !so.ephemeralPubkey) {
             throw new Error(
               'Shielded output missing required crypto fields (commitment, rangeProof, script, ephemeralPubkey)'
             );
           }
+          if (
+            so.mode === ShieldedOutputMode.FULLY_SHIELDED &&
+            (!so.assetCommitment || !so.surjectionProof)
+          ) {
+            throw new Error(
+              'Fully shielded output missing required crypto fields (assetCommitment, surjectionProof)'
+            );
+          }
           const tokenData = this.getTokenDataFromOutput(

Note: You'll need to import ShieldedOutputMode from '../shielded/types'.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/transaction.ts` around lines 667 - 695, Shielded output validation
currently only checks common fields in txData.shieldedOutputs before
constructing a ShieldedOutput, so FULLY_SHIELDED outputs missing assetCommitment
or surjectionProof can slip through; update the validation in the
txData.shieldedOutputs mapping (the block that throws when !so.commitment ||
!so.rangeProof || !so.script || !so.ephemeralPubkey) to also check if so.mode
=== ShieldedOutputMode.FULLY_SHIELDED then require so.assetCommitment and
so.surjectionProof, throwing a clear Error if missing; import ShieldedOutputMode
from ../shielded/types and keep the rest of the ShieldedOutput construction
unchanged.
src/utils/wallet.ts (2)

710-729: ⚠️ Potential issue | 🟠 Major

Apply same multisig guard here.

Similar to generateAccessDataFromXpriv, shielded key derivation should be conditional on wallet type to avoid persisting single-sig keys for multisig wallets.

🔒 Suggested fix
-    // Derive shielded scan (account 1') and spend (account 2') keys.
-    // Separate from legacy (account 0') so scan key only grants view access.
-    const scanAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
-    const scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
-    const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
-    const spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+    // Derive shielded scan (account 1') and spend (account 2') keys for P2PKH wallets only.
+    // Separate from legacy (account 0') so scan key only grants view access.
+    let scanXpriv: HDPrivateKey | undefined;
+    let spendXpriv: HDPrivateKey | undefined;
+    if (walletType === WalletType.P2PKH) {
+      const scanAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
+      scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
+      const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
+      spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+    }

     return {
       walletType,
       multisigData,
       xpubkey: xpriv.xpubkey,
       mainKey: encryptedMainKey,
       acctPathKey: encryptedAcctPathKey,
       authKey: encryptedAuthPathKey,
       words: encryptedWords,
       walletFlags: 0,
-      scanXpubkey: scanXpriv.xpubkey,
-      scanMainKey: encryptData(scanXpriv.xprivkey, pin),
-      spendXpubkey: spendXpriv.xpubkey,
-      spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+      ...(scanXpriv && {
+        scanXpubkey: scanXpriv.xpubkey,
+        scanMainKey: encryptData(scanXpriv.xprivkey, pin),
+      }),
+      ...(spendXpriv && {
+        spendXpubkey: spendXpriv.xpubkey,
+        spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+      }),
     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/wallet.ts` around lines 710 - 729, The shielded scan/spend
derivation and storing of scanXpubkey/scanMainKey and spendXpubkey/spendMainKey
should be guarded the same way as in generateAccessDataFromXpriv: check
walletType or multisigData and skip deriving/including these single-sig keys for
multisig wallets. Modify the block using scanAcctXpriv, scanXpriv,
spendAcctXpriv, spendXpriv so you only call deriveNonCompliantChild and
encryptData and only add scanXpubkey/scanMainKey and spendXpubkey/spendMainKey
to the returned object when the wallet is not multisig (e.g., walletType !==
'multisig' or !multisigData).

624-637: ⚠️ Potential issue | 🟠 Major

Guard shielded key derivation against multisig wallets.

This derives single-sig shielded keys (m/44'/280'/1'/0 and m/44'/280'/2'/0) even for multisig wallets. These paths are unrelated to the wallet's m/45' multisig account and could break the signing model if used.

🔒 Suggested fix
     // Derive shielded scan and spend keys if root key is available.
     // Scan (account 1') and spend (account 2') use separate derivation paths from legacy (account 0')
     // so the scan key only grants view access, not spending authority over legacy funds.
-    if (argXpriv.depth === 0) {
+    if (argXpriv.depth === 0 && walletType === WalletType.P2PKH) {
       const scanAcctXpriv = argXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
src/models/shielded_output.ts (1)

133-138: ⚠️ Potential issue | 🟠 Major

Inconsistent validation: serializeSighash() still zero-pads invalid ephemeral pubkeys.

serialize() (lines 102-107) now correctly throws on invalid ephemeralPubkey, but serializeSighash() still silently replaces it with 33 zero bytes. This inconsistency could cause the sighash to be computed over invalid data instead of failing early.

Suggested fix for consistency
     // Always include ephemeral pubkey in sighash
-    arr.push(
-      this.ephemeralPubkey && this.ephemeralPubkey.length === EPHEMERAL_PUBKEY_SIZE
-        ? this.ephemeralPubkey
-        : Buffer.alloc(EPHEMERAL_PUBKEY_SIZE)
-    );
+    if (!this.ephemeralPubkey || this.ephemeralPubkey.length !== EPHEMERAL_PUBKEY_SIZE) {
+      throw new Error(
+        `Invalid ephemeral pubkey for sighash: expected ${EPHEMERAL_PUBKEY_SIZE} bytes, ` +
+          `got ${this.ephemeralPubkey?.length ?? 0}`
+      );
+    }
+    arr.push(this.ephemeralPubkey);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/models/shielded_output.ts` around lines 133 - 138, serializeSighash()
currently zero-pads an invalid ephemeralPubkey instead of failing like
serialize(), causing inconsistent behavior; update serializeSighash() to
validate this.ephemeralPubkey against EPHEMERAL_PUBKEY_SIZE and throw an error
if missing/incorrect length (same validation logic used in serialize()), and
then push the validated ephemeralPubkey into the arr rather than
Buffer.alloc(...).
src/shielded/processing.ts (1)

117-147: ⚠️ Potential issue | 🟡 Minor

Add explicit validation for asset_commitment in FullShielded branch.

The asset_commitment field is optional in IShieldedOutput (defined in src/shielded/types.ts), but line 120 uses a non-null assertion without runtime validation. While the comment claims this is a "protocol invariant," defensive validation is needed against malformed fullnode responses. The ShieldedOutput serialization model (src/models/shielded_output.ts:91–93) shows the pattern already used elsewhere: explicit guards that throw errors when required fields are missing for FullShielded mode. Apply the same approach here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shielded/processing.ts` around lines 117 - 147, In the FullShielded
branch (check using isFullShielded and shieldedOutput), don't use non-null
assertion on shieldedOutput.asset_commitment; instead validate it at runtime and
throw a clear error if missing (mirroring the pattern in
src/models/shielded_output.ts). Before converting to Buffer and calling
cryptoProvider.rewindFullShieldedOutput, assert that
shieldedOutput.asset_commitment is defined and a valid hex string, otherwise
throw an Error indicating the missing/invalid asset_commitment for FullShielded
outputs; keep the subsequent logic (deriveTag/createAssetCommitment cross-check
and storage.logger.warn) unchanged.
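A dependency-free sketch of the guard pattern the prompt describes — the function name and error wording are illustrative, not the library's API:

```typescript
// Validate an optional hex-encoded field before use, throwing instead of
// silently accepting a missing or malformed value (illustrative helper).
function requireHexField(value: string | undefined, name: string): string {
  if (value === undefined || value.length % 2 !== 0 || !/^[0-9a-fA-F]+$/.test(value)) {
    throw new Error(`Missing or invalid ${name} for FullShielded output`);
  }
  return value;
}
```

With a guard like this in place, the subsequent Buffer conversion and rewind call only ever see a well-formed commitment.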
src/utils/storage.ts (2)

845-857: ⚠️ Potential issue | 🟠 Major

Use the persisted shielded_spend discriminator consistently.

These branches look for 'shielded-spend', but the persisted discriminator is 'shielded_spend', so spend-chain addresses fall through to the legacy branch and shieldedMaxIndexUsed never advances. That leaves shielded wallet cursors stale.

🛠️ Suggested fix
-    if (addressInfo.addressType === 'shielded-spend') {
+    if (addressInfo.addressType === 'shielded_spend') {

Apply the same replacement to both occurrences.

Also applies to: 951-963

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/storage.ts` around lines 845 - 857, The address-type check is using
the wrong discriminator 'shielded-spend', causing shielded spend addresses to be
treated as legacy; update the checks that compare addressInfo.addressType to use
the persisted discriminator 'shielded_spend' (replace 'shielded-spend' with
'shielded_spend') in the blocks that update shieldedMaxIndexUsed (and the other
identical occurrence later that updates legacyMaxIndexUsed), ensuring both
occurrences consistently use 'shielded_spend' so shieldedMaxIndexUsed advances
correctly.

786-825: ⚠️ Potential issue | 🔴 Critical

Make shielded-output merging idempotent before saving the tx.

processHistory() reprocesses stored transactions. This block appends decrypted shielded outputs into tx.outputs and persists the mutated tx, so the next reprocess appends the same decoded outputs again and double-counts balances/UTXOs.

🛠️ Suggested fix
-      const transparentCount = tx.outputs.length;
+      const transparentOutputs = tx.outputs.filter(
+        output => output.type !== 'shielded' && !transactionUtils.isShieldedOutputEntry(output)
+      );

       const shieldedResults = await processShieldedOutputs(
         storage,
         tx,
         storage.shieldedCryptoProvider,
         pinCode
       );
-      for (const result of shieldedResults) {
+      tx.outputs = transparentOutputs;
+      for (const [idx, result] of shieldedResults.entries()) {
         const walletTokenUid =
           result.tokenUid === NATIVE_TOKEN_UID_HEX ? NATIVE_TOKEN_UID : result.tokenUid;
-        const so = tx.shielded_outputs[result.index - transparentCount];
+        const so = tx.shielded_outputs[idx];
+        if (tx.outputs.some(output => output.type === 'shielded' && output.commitment === so?.commitment)) {
+          continue;
+        }

         // Append decoded shielded output to tx.outputs so the main loop processes it
         // alongside transparent outputs (UTXO creation, balance, metadata).
         tx.outputs.push({
@@
-      if (shieldedResults.length > 0) {
+      if (tx.outputs.length !== transparentOutputs.length) {
         await store.saveTx(tx);
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/storage.ts` around lines 786 - 825, Avoid double-appending
decrypted shielded outputs by making the merge idempotent: before pushing each
result from processShieldedOutputs into tx.outputs, check whether an equivalent
shielded output already exists (match on unique identifiers such as commitment,
ephemeral_pubkey or a combination like commitment+ephemeral_pubkey or
token+value) and only push if not found; only call store.saveTx(tx) when at
least one new output was appended (use a boolean flag like didAppendNew) so
reprocessing via processHistory() won’t duplicate UTXOs/balances.
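The dedupe step can be sketched independently of the storage types — the `type` and `commitment` fields are illustrative stand-ins for the real output shape:

```typescript
// Illustrative output shape; the real IHistoryTx output types carry more fields.
interface OutputLike {
  type?: string;
  commitment?: string;
}

// Append a decoded shielded output only if no equivalent entry exists yet.
// The boolean result can drive the "only saveTx when something changed" flag.
function appendIfNew(outputs: OutputLike[], decoded: OutputLike): boolean {
  const exists = outputs.some(
    o => o.type === 'shielded' && o.commitment === decoded.commitment
  );
  if (!exists) outputs.push(decoded);
  return !exists;
}
```

Running this twice with the same commitment appends once and reports `false` the second time, which is exactly the idempotence processHistory() needs.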
src/storage/storage.ts (1)

381-410: ⚠️ Potential issue | 🔴 Critical

Preserve original output indexes when extracting shielded entries.

tx.outputs = transparentOutputs compacts the array, so every transparent output after the first shielded one gets a new index. The rest of the wallet uses array position as the on-chain output index, so spends and UTXO bookkeeping can point at the wrong output after this rewrite.

🛠️ Suggested fix
     if (!tx.shielded_outputs) {
       const shieldedEntries: IHistoryShieldedOutput[] = [];
-      const transparentOutputs: IHistoryTx['outputs'] = [];
       for (const output of tx.outputs) {
         if (transactionUtils.isShieldedOutputEntry(output)) {
           shieldedEntries.push({
             mode: output.asset_commitment ? 2 : 1,
             commitment: output.commitment,
@@
             surjection_proof: output.surjection_proof
               ? Buffer.from(output.surjection_proof, 'base64').toString('hex')
               : undefined,
           });
-        } else {
-          transparentOutputs.push(output);
         }
       }
       if (shieldedEntries.length > 0) {
         // eslint-disable-next-line no-param-reassign
         tx.shielded_outputs = shieldedEntries;
-        // eslint-disable-next-line no-param-reassign
-        tx.outputs = transparentOutputs;
       }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/storage/storage.ts` around lines 381 - 410, When extracting shielded
entries in the block that uses transactionUtils.isShieldedOutputEntry, preserve
the original output indexes instead of compacting tx.outputs: record each
output's original index (e.g., output_index) when pushing to shielded_outputs
and build transparentOutputs with placeholders (null or the original entry) at
the same positions so array length and index mapping remain unchanged; then
assign tx.shielded_outputs = shieldedEntries (including their original indexes)
and replace tx.outputs with the placeholder-preserving transparentOutputs so
downstream code that relies on array positions still points to the correct
on-chain output indexes.
src/new/wallet.ts (1)

1543-1568: ⚠️ Potential issue | 🔴 Critical

Do not renumber outputs while normalizing websocket shielded entries.

newTx.outputs = transparentOutputs compacts the array before the tx reaches storage/metadata processing. Any transparent output that comes after a shielded one gets a different index, which breaks spent-output lookups and UTXO bookkeeping.

🛠️ Suggested fix
     if (!newTx.shielded_outputs) {
       const shieldedEntries: IHistoryShieldedOutput[] = [];
-      const transparentOutputs: IHistoryOutput[] = [];
       for (const output of newTx.outputs) {
         if (transactionUtils.isShieldedOutputEntry(output)) {
           shieldedEntries.push({
             mode: output.asset_commitment ? 2 : 1, // FullShielded=2, AmountShielded=1
             commitment: output.commitment, // already hex
@@
             surjection_proof: output.surjection_proof
               ? Buffer.from(output.surjection_proof, 'base64').toString('hex')
               : undefined,
           });
-        } else {
-          transparentOutputs.push(output);
         }
       }
       if (shieldedEntries.length > 0) {
         newTx.shielded_outputs = shieldedEntries;
-        newTx.outputs = transparentOutputs;
       }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/new/wallet.ts` around lines 1543 - 1568, The current normalization
removes shielded outputs from newTx.outputs, changing indices and breaking UTXO
lookups; instead, when iterating outputs in the block that builds
shieldedEntries (using transactionUtils.isShieldedOutputEntry) preserve the
original outputs array and stop doing newTx.outputs = transparentOutputs. Add
the original output index to each IHistoryShieldedOutput entry (e.g.,
output_index = i) while leaving newTx.outputs intact (or if you must place a
placeholder, replace the shielded entry with a null/marker but keep array
length), and keep all existing field transformations (range_proof, script,
surjection_proof decoding) the same. Ensure newTx.shielded_outputs is set to the
shieldedEntries with indices but do not compact or reorder newTx.outputs.
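The index-preserving extraction can be sketched as follows — the `Entry` shape and `output_index` field are illustrative stand-ins for the real history types:

```typescript
// Illustrative entry; a predicate like isShieldedOutputEntry() plays the
// role of the `shielded` flag in the real code.
interface Entry {
  shielded: boolean;
  id: string;
}

// Collect shielded entries with their original array positions, leaving the
// outputs array itself untouched so on-chain output indexes stay valid.
function extractShielded(outputs: Entry[]): { entry: Entry; output_index: number }[] {
  const shieldedEntries: { entry: Entry; output_index: number }[] = [];
  outputs.forEach((entry, i) => {
    if (entry.shielded) shieldedEntries.push({ entry, output_index: i });
  });
  return shieldedEntries;
}
```

Because nothing is filtered out of `outputs`, a transparent output at position 2 keeps index 2 even when position 1 holds a shielded entry.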
🧹 Nitpick comments (5)
__tests__/wallet/walletServiceStorageProxy.test.ts (1)

18-21: Add one positive test for the shielded-output branch.

This global mock now forces every output through the transparent path, so the new convertFullNodeToHistoryTx() shielded pass-through is never exercised in this suite. Please add a getTx() case that overrides transactionUtils.isShieldedOutputEntry to return true and asserts that the shielded entry is preserved as-is.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/wallet/walletServiceStorageProxy.test.ts` around lines 18 - 21, The
global mock of transaction utilities forces the transparent branch and prevents
testing the shielded branch of convertFullNodeToHistoryTx; add a test for getTx
that temporarily overrides transactionUtils.isShieldedOutputEntry to return true
(use jest.spyOn or mockImplementation on isShieldedOutputEntry), invoke
convertFullNodeToHistoryTx/getTx path for a sample full-node transaction
containing a shielded output, and assert that the returned history transaction
preserves the shielded output entry unchanged (verify the shielded entry
object/reference/content is preserved).
__tests__/models/address.test.ts (1)

76-86: Consider using valid compressed pubkey prefixes for consistency.

Unlike the test at lines 61-64 which properly sets the first byte to valid compressed pubkey prefixes (0x02/0x03), this test uses buffers filled entirely with 0xaa/0xbb. While encoding/decoding doesn't validate the format, using consistent valid-looking test data improves test clarity.

🧹 Suggested improvement
 test('Shielded address getScanPubkey and getSpendPubkey', () => {
   const testnetNetwork = new Network('testnet');
   const scanPubkey = Buffer.alloc(33, 0xaa);
+  scanPubkey[0] = 0x02; // Valid compressed prefix
   const spendPubkey = Buffer.alloc(33, 0xbb);
+  spendPubkey[0] = 0x03; // Valid compressed prefix
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/models/address.test.ts` around lines 76 - 86, The test uses buffers
filled with 0xaa/0xbb which are not valid compressed pubkey prefixes; update the
test that constructs scanPubkey and spendPubkey before calling
encodeShieldedAddress so their first byte(s) are valid compressed pubkey
prefixes (e.g., 0x02 or 0x03) while leaving the remaining 32 bytes as-is, then
run the existing assertions on Address.getScanPubkey() and
Address.getSpendPubkey() to keep behavior the same but with clearer, consistent
test data referencing encodeShieldedAddress, Address, getScanPubkey, and
getSpendPubkey.
src/new/sendTransaction.ts (2)

374-379: Phantom output removal relies on reference equality — add a comment for clarity.

The removal of phantom outputs using Set.has() relies on JavaScript's reference equality for objects. This works because the same object references are preserved through shuffle() and spread operations. Consider adding a brief comment explaining this assumption for future maintainers.

Optional: Add clarifying comment
     // Remove phantom outputs (shielded) from the final outputs list.
-    // This relies on reference equality — Set.has() matches the same object instances
-    // created above. The spread operators and shuffle preserve object references.
+    // IMPORTANT: This relies on JavaScript reference equality — Set.has() matches the same
+    // object instances created above. Both lodash shuffle() and array spread operators
+    // preserve object references (they don't deep-clone), so this approach is safe.
     if (phantomOutputs.size > 0) {
       outputs = outputs.filter(out => !phantomOutputs.has(out));
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/new/sendTransaction.ts` around lines 374 - 379, Add a short clarifying
comment where phantom outputs are filtered (the block using phantomOutputs.size
and outputs.filter) explaining that Set.has() relies on JavaScript object
reference equality and that this is safe here because the same object references
are preserved by the earlier shuffle() and spread operations; reference the
phantomOutputs Set, outputs array, and shuffle() to make the assumption explicit
for future maintainers.
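The reference-equality assumption is easy to verify in isolation — `Set.has()` matches object identity, and array spreads copy references, not objects:

```typescript
// Two distinct objects; only one is registered as a phantom output.
const phantom = { value: 5 };
const regular = { value: 7 };
const phantomOutputs = new Set([phantom]);

// Spreading an array preserves element references (no cloning),
// so the Set still recognizes the phantom instance afterwards.
const outputs = [...[regular, phantom]];
const filtered = outputs.filter(out => !phantomOutputs.has(out));
```

A structurally equal but distinct object (`{ value: 5 }`) would not be matched, which is why the approach only works as long as no step clones the output objects.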

398-413: Consider handling missing UTXO gracefully.

The loop extracts blinding factors from shielded inputs, but if getUtxo() returns null for an input, it's silently skipped. This is likely intentional (transparent inputs won't have UTXO entries with blinding factors), but consider adding a debug log when a shielded input's UTXO is unexpectedly missing.

Optional: Add debug logging for missing shielded UTXOs
       for (const inp of allInputs) {
         const utxo = await this.storage.getUtxo({
           txId: inp.txId,
           index: inp.index,
         });
-        if (utxo?.shielded && utxo.blindingFactor) {
+        if (utxo?.shielded) {
+          if (!utxo.blindingFactor) {
+            this.storage.logger?.debug?.(
+              'Shielded UTXO missing blindingFactor:',
+              inp.txId,
+              inp.index
+            );
+            continue;
+          }
           blindedInputsArr.push({
             value: utxo.value,
             vbf: Buffer.from(utxo.blindingFactor, 'hex'),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/new/sendTransaction.ts` around lines 398 - 413, Loop over allInputs that
builds blindedInputsArr currently skips missing UTXO entries silently; update
the code around the call to this.storage.getUtxo in sendTransaction so that when
getUtxo returns null/undefined you emit a debug-level log including the input
identifier (use inp.txId and inp.index) and any context (e.g., whether the input
was expected to be shielded) before continuing, while preserving existing
behavior of pushing blinding factors when utxo?.shielded and utxo.blindingFactor
are present; reference blindedInputsArr, allInputs and this.storage.getUtxo to
locate the change.
__tests__/integration/shielded_transactions.test.ts (1)

608-611: TODO comment indicates known issue with custom token shielded outputs.

The TODO explains that custom-token shielded outputs have a balance equation issue where the fullnode sees a melt. This is good documentation of a known limitation.

Would you like me to open an issue to track the custom-token shielded output balance equation fix?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/shielded_transactions.test.ts` around lines 608 - 611,
The TODO notes that sendTransaction.ts's phantom output trick causes custom
token inputs to exceed transparent outputs once phantom outputs are removed,
producing "tokens melted" errors; update the token balance calculation in
sendTransaction.ts (the UTXO selection and the function that computes token
balance for shielded outputs) to include shielded output amounts so shielded
outputs reduce the required transparent token input, and ensure the logic that
detects melts (token balance check) considers both transparent and shielded
output amounts; locate the token-selection/validation routines and modify them
to add shielded output token amounts into the token balance equation and adjust
phantom output handling accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@__tests__/shielded/processing.test.ts`:
- Around line 257-316: The test never exercises the asset-commitment mismatch
because storage.getScanXPrivKey is mocked to reject; change
storage.getScanXPrivKey to resolve to the same valid xpriv used in earlier tests
so deriveScanPrivkeyForAddress/key derivation succeeds, ensure
makeMockProvider().createAssetCommitment returns a different/mismatched Buffer
than provider.rewindFullShieldedOutput's token/commitment so the cross-check in
processShieldedOutputs fails, and keep provider.rewindFullShieldedOutput mocked
to be called (so you can assert it was invoked and that the output is ultimately
skipped due to the mismatched commitment).

In `@src/new/wallet.ts`:
- Around line 1692-1708: The shielded branch inside the outputs.map
(sendOutputs) drops o.timelock while the transparent branch preserves it; update
the object returned in the shielded branch (the mapping that uses Address,
getSpendAddress, getScanPubkey, and sets shieldedMode) to include timelock:
o.timelock so shielded recipients receive the same timelock semantics as
transparent ones.
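A minimal sketch of the corrected mapping, with illustrative field names standing in for the wallet's real output types:

```typescript
// Illustrative send-output spec; the real type carries more fields.
interface OutSpec {
  address: string;
  value: number;
  timelock?: number;
  shielded?: boolean;
}

// Both branches carry the optional timelock through, so shielded
// recipients get the same timelock semantics as transparent ones.
function toSendOutput(o: OutSpec) {
  if (o.shielded) {
    return { address: o.address, value: o.value, shieldedMode: 1, timelock: o.timelock };
  }
  return { address: o.address, value: o.value, timelock: o.timelock };
}
```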

In `@src/schemas.ts`:
- Around line 122-126: IHistoryTxSchema currently allows unvalidated
shielded_outputs through due to .passthrough(); update IHistoryTxSchema to
explicitly validate a shielded_outputs field as z.array(IHistoryOutputSchema)
(using the existing IHistoryOutputSchema which unions TransparentOutputSchema,
AmountShieldedOutputSchema, FullShieldedOutputSchema), and remove or replace
.passthrough() with a stricter object schema (or .strict()) so malformed
shielded_outputs are rejected at parse time; apply the same change to the other
schema block referenced (lines 174-203) to ensure both schemas validate
shielded_outputs the same way.

In `@src/shielded/provider.ts`:
- Around line 20-23: The CI failure is due to ESLint flagging the dynamic
require of '@hathor/ct-crypto-node' as unresolved in
createDefaultShieldedCryptoProvider; fix by adding the import/no-unresolved rule
to the existing eslint-disable-next-line so the
require('@hathor/ct-crypto-node') line is allowed (e.g. merge/add
import/no-unresolved to the existing comment that currently disables
`@typescript-eslint/no-var-requires` and global-require), leaving the dynamic
require of ct unchanged.

In `@src/sync/stream.ts`:
- Around line 566-578: Wrap the queue processing logic in processQueue() (the
block that calls deriveShieldedAddressFromStorage, storage.isAddressMine, and
storage.saveAddress) inside a try/finally so isProcessingQueue is always reset;
additionally catch errors from deriveShieldedAddressFromStorage and saveAddress
and call this.abortStream() (or the stream-abort path used by shutdown()) before
rethrowing or returning to avoid leaving shutdown() waiting—in short, ensure
processQueue uses try { ... /* existing work including
deriveShieldedAddressFromStorage, storage.isAddressMine, storage.saveAddress */
} catch (err) { this.abortStream(); throw err; } finally {
this.isProcessingQueue = false; }.
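The described try/catch/finally shape, shown synchronously with a hypothetical `work`/`abort` pair standing in for the stream internals:

```typescript
// Synchronous sketch of the pattern: the processing flag is cleared on both
// success and failure, and the abort hook runs before the error propagates
// so a waiting shutdown() is never left hanging.
function processQueueSafely(
  state: { isProcessingQueue: boolean },
  work: () => void,
  abort: () => void
): void {
  state.isProcessingQueue = true;
  try {
    work();
  } catch (err) {
    abort(); // unblock shutdown() before rethrowing
    throw err;
  } finally {
    state.isProcessingQueue = false;
  }
}
```

The real processQueue() is async, but the same shape applies with `await` inside the try block.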

In `@src/utils/shieldedAddress.ts`:
- Around line 55-58: The JSDoc for shielded derivation is out of date: update
the param descriptions for scanXpubkey and spendXpubkey in
src/utils/shieldedAddress.ts (the doc block above the
derive/deriveShieldedAddress logic) to reference the new shielded account paths
(use m/44'/280'/1'/0 for the scan chain and m/44'/280'/2'/0 for the spend chain)
instead of the old .../0'/0 and .../1'/0 paths so the documented paths match the
code changes.

In `@src/utils/storage.ts`:
- Around line 808-813: The new code replaces the full shielded metadata with
only { address }, dropping fields like timelock and making timelocked outputs
spendable; change the tx.outputs push to preserve the original decoded shielded
metadata (e.g., use result.decoded if present or merge it with the address) so
that fields like timelock remain available for isOutputLocked() to check; update
the decoded property in the tx.outputs object (in the same block that pushes
shielded outputs) to use result.decoded or a shallow merge of result.decoded and
{ address: result.address } rather than overwriting with only { address }.
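The shallow-merge approach can be sketched with illustrative types — `Decoded` here is a stand-in for the real decoded-output shape:

```typescript
// Illustrative decoded metadata; the real shape has more fields.
interface Decoded {
  timelock?: number;
  address?: string;
}

// Keep every decoded field (e.g. timelock) and only overlay the address,
// instead of replacing the whole object with { address }.
function mergeDecoded(decoded: Decoded | undefined, address: string): Decoded {
  return { ...decoded, address };
}
```

Spreading `undefined` is a no-op, so the merge is safe even when no decoded metadata exists.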

---

Duplicate comments:
In `@src/models/shielded_output.ts`:
- Around line 133-138: serializeSighash() currently zero-pads an invalid
ephemeralPubkey instead of failing like serialize(), causing inconsistent
behavior; update serializeSighash() to validate this.ephemeralPubkey against
EPHEMERAL_PUBKEY_SIZE and throw an error if missing/incorrect length (same
validation logic used in serialize()), and then push the validated
ephemeralPubkey into the arr rather than Buffer.alloc(...).

In `@src/new/wallet.ts`:
- Around line 1543-1568: The current normalization removes shielded outputs from
newTx.outputs, changing indices and breaking UTXO lookups; instead, when
iterating outputs in the block that builds shieldedEntries (using
transactionUtils.isShieldedOutputEntry) preserve the original outputs array and
stop doing newTx.outputs = transparentOutputs. Add the original output index to
each IHistoryShieldedOutput entry (e.g., output_index = i) while leaving
newTx.outputs intact (or if you must place a placeholder, replace the shielded
entry with a null/marker but keep array length), and keep all existing field
transformations (range_proof, script, surjection_proof decoding) the same.
Ensure newTx.shielded_outputs is set to the shieldedEntries with indices but do
not compact or reorder newTx.outputs.

In `@src/shielded/processing.ts`:
- Around line 117-147: In the FullShielded branch (check using isFullShielded
and shieldedOutput), don't use non-null assertion on
shieldedOutput.asset_commitment; instead validate it at runtime and throw a
clear error if missing (mirroring the pattern in src/models/shielded_output.ts).
Before converting to Buffer and calling cryptoProvider.rewindFullShieldedOutput,
assert that shieldedOutput.asset_commitment is defined and a valid hex string,
otherwise throw an Error indicating the missing/invalid asset_commitment for
FullShielded outputs; keep the subsequent logic (deriveTag/createAssetCommitment
cross-check and storage.logger.warn) unchanged.

In `@src/storage/storage.ts`:
- Around line 381-410: When extracting shielded entries in the block that uses
transactionUtils.isShieldedOutputEntry, preserve the original output indexes
instead of compacting tx.outputs: record each output's original index (e.g.,
output_index) when pushing to shielded_outputs and build transparentOutputs with
placeholders (null or the original entry) at the same positions so array length
and index mapping remain unchanged; then assign tx.shielded_outputs =
shieldedEntries (including their original indexes) and replace tx.outputs with
the placeholder-preserving transparentOutputs so downstream code that relies on
array positions still points to the correct on-chain output indexes.

In `@src/utils/bigint.ts`:
- Around line 29-49: bigIntReviver currently dereferences context.source
unconditionally when calling BigInt(context.source) which can throw if reviver
context isn't provided; update bigIntReviver to first guard that context and
context.source exist and are the expected type (e.g., typeof context?.source ===
'string') before calling BigInt(context.source), and if the context is missing
fall back to treating value as a Number (return value) so parsing doesn't break;
also normalize the catch condition to use the same safe context access
(context?.source) rather than assuming context exists.

In `@src/utils/storage.ts`:
- Around line 845-857: The address-type check is using the wrong discriminator
'shielded-spend', causing shielded spend addresses to be treated as legacy;
update the checks that compare addressInfo.addressType to use the persisted
discriminator 'shielded_spend' (replace 'shielded-spend' with 'shielded_spend')
in the blocks that update shieldedMaxIndexUsed (and the other identical
occurrence later that updates legacyMaxIndexUsed), ensuring both occurrences
consistently use 'shielded_spend' so shieldedMaxIndexUsed advances correctly.
- Around line 786-825: Avoid double-appending decrypted shielded outputs by
making the merge idempotent: before pushing each result from
processShieldedOutputs into tx.outputs, check whether an equivalent shielded
output already exists (match on unique identifiers such as commitment,
ephemeral_pubkey or a combination like commitment+ephemeral_pubkey or
token+value) and only push if not found; only call store.saveTx(tx) when at
least one new output was appended (use a boolean flag like didAppendNew) so
reprocessing via processHistory() won’t duplicate UTXOs/balances.

In `@src/utils/transaction.ts`:
- Around line 667-695: Shielded output validation currently only checks common
fields in txData.shieldedOutputs before constructing a ShieldedOutput, so
FULLY_SHIELDED outputs missing assetCommitment or surjectionProof can slip
through; update the validation in the txData.shieldedOutputs mapping (the block
that throws when !so.commitment || !so.rangeProof || !so.script ||
!so.ephemeralPubkey) to also check if so.mode ===
ShieldedOutputMode.FULLY_SHIELDED then require so.assetCommitment and
so.surjectionProof, throwing a clear Error if missing; import ShieldedOutputMode
from ../shielded/types and keep the rest of the ShieldedOutput construction
unchanged.

In `@src/utils/wallet.ts`:
- Around line 710-729: The shielded scan/spend derivation and storing of
scanXpubkey/scanMainKey and spendXpubkey/spendMainKey should be guarded the same
way as in generateAccessDataFromXpriv: check walletType or multisigData and skip
deriving/including these single-sig keys for multisig wallets. Modify the block
using scanAcctXpriv, scanXpriv, spendAcctXpriv, spendXpriv so you only call
deriveNonCompliantChild and encryptData and only add scanXpubkey/scanMainKey and
spendXpubkey/spendMainKey to the returned object when the wallet is not multisig
(e.g., walletType !== 'multisig' or !multisigData).

---

Nitpick comments:
In `@__tests__/integration/shielded_transactions.test.ts`:
- Around line 608-611: The TODO notes that sendTransaction.ts's phantom output
trick causes custom token inputs to exceed transparent outputs once phantom
outputs are removed, producing "tokens melted" errors; update the token balance
calculation in sendTransaction.ts (the UTXO selection and the function that
computes token balance for shielded outputs) to include shielded output amounts
so shielded outputs reduce the required transparent token input, and ensure the
logic that detects melts (token balance check) considers both transparent and
shielded output amounts; locate the token-selection/validation routines and
modify them to add shielded output token amounts into the token balance equation
and adjust phantom output handling accordingly.

In `@__tests__/models/address.test.ts`:
- Around line 76-86: The test uses buffers filled with 0xaa/0xbb which are not
valid compressed pubkey prefixes; update the test that constructs scanPubkey and
spendPubkey before calling encodeShieldedAddress so their first byte(s) are
valid compressed pubkey prefixes (e.g., 0x02 or 0x03) while leaving the
remaining 32 bytes as-is, then run the existing assertions on
Address.getScanPubkey() and Address.getSpendPubkey() to keep behavior the same
but with clearer, consistent test data referencing encodeShieldedAddress,
Address, getScanPubkey, and getSpendPubkey.

In `@__tests__/wallet/walletServiceStorageProxy.test.ts`:
- Around line 18-21: The global mock of transaction utilities forces the
transparent branch and prevents testing the shielded branch of
convertFullNodeToHistoryTx; add a test for getTx that temporarily overrides
transactionUtils.isShieldedOutputEntry to return true (use jest.spyOn or
mockImplementation on isShieldedOutputEntry), invoke
convertFullNodeToHistoryTx/getTx path for a sample full-node transaction
containing a shielded output, and assert that the returned history transaction
preserves the shielded output entry unchanged (verify the shielded entry
object/reference/content is preserved).

In `@src/new/sendTransaction.ts`:
- Around line 374-379: Add a short clarifying comment where phantom outputs are
filtered (the block using phantomOutputs.size and outputs.filter) explaining
that Set.has() relies on JavaScript object reference equality and that this is
safe here because the same object references are preserved by the earlier
shuffle() and spread operations; reference the phantomOutputs Set, outputs
array, and shuffle() to make the assumption explicit for future maintainers.
- Around lines 398-413: The loop over allInputs that builds blindedInputsArr currently skips missing UTXO entries silently. Update the code around the call to this.storage.getUtxo in sendTransaction so that when getUtxo returns null/undefined, you emit a debug-level log including the input identifier (inp.txId and inp.index) and any useful context (e.g., whether the input was expected to be shielded) before continuing. Preserve the existing behavior of pushing blinding factors when utxo?.shielded and utxo.blindingFactor are present; reference blindedInputsArr, allInputs, and this.storage.getUtxo to locate the change.
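The first point above, that `Set.has()` on objects is reference-based, can be demonstrated directly. This is a self-contained illustration, not code from sendTransaction.ts; `phantomOutputs` and `outputs` mirror the names in the comment.

```typescript
// Set membership for objects is by reference, not structural equality.
const phantom = { value: 1n };
const phantomOutputs = new Set([phantom]);

// shuffle() and spread copy the *array*, not its elements, so the same
// object references flow through and Set.has() still matches them.
const outputs = [...[phantom, { value: 1n }]];
const kept = outputs.filter(o => !phantomOutputs.has(o));
```

Only the structurally-equal-but-distinct second object survives the filter, which is exactly the property the phantom-output removal relies on.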
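The second point, logging instead of silently skipping missing UTXOs, might look like the sketch below. All names here (`Input`, `collectBlindingFactors`, the injected `getUtxo` and `debug` callbacks) are hypothetical simplifications of the real `this.storage.getUtxo` loop.

```typescript
type Input = { txId: string; index: number };
type UtxoRecord = { shielded?: boolean; blindingFactor?: string };

function collectBlindingFactors(
  inputs: Input[],
  getUtxo: (inp: Input) => UtxoRecord | null,
  debug: (msg: string) => void
): string[] {
  const blindedInputsArr: string[] = [];
  for (const inp of inputs) {
    const utxo = getUtxo(inp);
    if (!utxo) {
      // Debug-level breadcrumb so a missing entry is traceable later.
      debug(`UTXO not found for input ${inp.txId}:${inp.index}`);
      continue;
    }
    // Existing behavior preserved: only shielded UTXOs with a recovered
    // blinding factor contribute to the balancing computation.
    if (utxo.shielded && utxo.blindingFactor) {
      blindedInputsArr.push(utxo.blindingFactor);
    }
  }
  return blindedInputsArr;
}
```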
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a77918d4-10ec-4e70-9921-b98ba72c8b96

📥 Commits

Reviewing files that changed from the base of the PR and between 0ccc910 and c406dba.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (46)
  • __tests__/headers/shielded_outputs.test.ts
  • __tests__/integration/configuration/docker-compose.yml
  • __tests__/integration/configuration/privnet.yml
  • __tests__/integration/hathorwallet_facade.test.ts
  • __tests__/integration/helpers/wallet.helper.ts
  • __tests__/integration/shielded_transactions.test.ts
  • __tests__/models/address.test.ts
  • __tests__/models/network.test.ts
  • __tests__/models/shielded_output.test.ts
  • __tests__/models/transaction.test.ts
  • __tests__/shielded/creation.test.ts
  • __tests__/shielded/processing.test.ts
  • __tests__/storage/storage.test.ts
  • __tests__/wallet/walletServiceStorageProxy.test.ts
  • package.json
  • setupTests-integration.js
  • src/constants.ts
  • src/headers/parser.ts
  • src/headers/shielded_outputs.ts
  • src/headers/types.ts
  • src/lib.ts
  • src/models/address.ts
  • src/models/network.ts
  • src/models/shielded_output.ts
  • src/models/transaction.ts
  • src/new/sendTransaction.ts
  • src/new/types.ts
  • src/new/wallet.ts
  • src/schemas.ts
  • src/shielded/creation.ts
  • src/shielded/index.ts
  • src/shielded/processing.ts
  • src/shielded/provider.ts
  • src/shielded/types.ts
  • src/storage/memory_store.ts
  • src/storage/storage.ts
  • src/sync/stream.ts
  • src/types.ts
  • src/utils/address.ts
  • src/utils/bigint.ts
  • src/utils/shieldedAddress.ts
  • src/utils/storage.ts
  • src/utils/transaction.ts
  • src/utils/wallet.ts
  • src/wallet/types.ts
  • src/wallet/walletServiceStorageProxy.ts
💤 Files with no reviewable changes (1)
  • setupTests-integration.js

Comment thread __tests__/shielded/processing.test.ts
Comment thread src/new/wallet.ts
Comment thread src/schemas.ts
Comment thread src/shielded/provider.ts
Comment thread src/sync/stream.ts Outdated
Comment on lines +566 to +578
// Generate shielded address pair at the same index (if keys are available)
const shieldedResult = await deriveShieldedAddressFromStorage(
addr.bip32AddressIndex,
this.storage
);
if (shieldedResult) {
if (!(await this.storage.isAddressMine(shieldedResult.shieldedAddress.base58))) {
await this.storage.saveAddress(shieldedResult.shieldedAddress);
}
if (!(await this.storage.isAddressMine(shieldedResult.spendAddress.base58))) {
await this.storage.saveAddress(shieldedResult.spendAddress);
}
}

⚠️ Potential issue | 🟠 Major

Guard shielded address derivation so the queue can't get stuck.

These awaited calls run inside processQueue() without local error handling. If deriveShieldedAddressFromStorage() or either saveAddress() rejects, the method exits before isProcessingQueue is reset, and shutdown() will sit in the timeout loop. Please wrap queue processing in a try/finally and abort the stream on derivation/save failures instead of letting them escape here.

Possible fix
   async processQueue() {
     if (this.isProcessingQueue) {
       return;
     }
     this.isProcessingQueue = true;
-    while (true) {
-      if (this.signal.aborted) {
-        // Abort processing the queue
-        break;
-      }
-      const item = this.itemQueue.dequeue();
-      this.ack();
-      if (!item) {
-        // Queue is empty
-        this.stats.queueEmpty();
-        break;
-      }
-      this.lastProcSeq = item.seq;
-      if (isStreamItemAddress(item)) {
-        const addr = item.address;
-        const alreadyExists = await this.storage.isAddressMine(addr.base58);
-        if (!alreadyExists) {
-          await this.storage.saveAddress(addr);
-        }
-        // Generate shielded address pair at the same index (if keys are available)
-        const shieldedResult = await deriveShieldedAddressFromStorage(
-          addr.bip32AddressIndex,
-          this.storage
-        );
-        if (shieldedResult) {
-          if (!(await this.storage.isAddressMine(shieldedResult.shieldedAddress.base58))) {
-            await this.storage.saveAddress(shieldedResult.shieldedAddress);
-          }
-          if (!(await this.storage.isAddressMine(shieldedResult.spendAddress.base58))) {
-            await this.storage.saveAddress(shieldedResult.spendAddress);
-          }
-        }
-      } else if (isStreamItemVertex(item)) {
-        await this.storage.addTx(item.vertex);
-      }
-      this.stats.proc();
-      await new Promise<void>(resolve => {
-        queueMicrotask(resolve);
-      });
-    }
-
-    this.isProcessingQueue = false;
+    try {
+      while (true) {
+        if (this.signal.aborted) {
+          break;
+        }
+        const item = this.itemQueue.dequeue();
+        this.ack();
+        if (!item) {
+          this.stats.queueEmpty();
+          break;
+        }
+        this.lastProcSeq = item.seq;
+        if (isStreamItemAddress(item)) {
+          const addr = item.address;
+          const alreadyExists = await this.storage.isAddressMine(addr.base58);
+          if (!alreadyExists) {
+            await this.storage.saveAddress(addr);
+          }
+          try {
+            const shieldedResult = await deriveShieldedAddressFromStorage(
+              addr.bip32AddressIndex,
+              this.storage
+            );
+            if (shieldedResult) {
+              if (!(await this.storage.isAddressMine(shieldedResult.shieldedAddress.base58))) {
+                await this.storage.saveAddress(shieldedResult.shieldedAddress);
+              }
+              if (!(await this.storage.isAddressMine(shieldedResult.spendAddress.base58))) {
+                await this.storage.saveAddress(shieldedResult.spendAddress);
+              }
+            }
+          } catch (err) {
+            this.abortWithError(`Failed to derive shielded addresses: ${String(err)}`);
+            break;
+          }
+        } else if (isStreamItemVertex(item)) {
+          await this.storage.addTx(item.vertex);
+        }
+        this.stats.proc();
+        await new Promise<void>(resolve => {
+          queueMicrotask(resolve);
+        });
+      }
+    } finally {
+      this.isProcessingQueue = false;
+    }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/sync/stream.ts` around lines 566 - 578: Wrap the queue-processing logic in processQueue() (the block that calls deriveShieldedAddressFromStorage, storage.isAddressMine, and storage.saveAddress) in a try/finally so isProcessingQueue is always reset. Additionally, catch errors from deriveShieldedAddressFromStorage and saveAddress and call this.abortStream() (or whatever stream-abort path shutdown() uses) before rethrowing or returning, so shutdown() is not left waiting. In short, processQueue should read: try { /* existing work, including deriveShieldedAddressFromStorage, storage.isAddressMine, storage.saveAddress */ } catch (err) { this.abortStream(); throw err; } finally { this.isProcessingQueue = false; }.

Comment thread src/utils/shieldedAddress.ts Outdated
Comment thread src/utils/storage.ts Outdated
  - Replace hardcoded HTR_UID with imported NATIVE_TOKEN_UID in creation.ts
  - Add upfront input validation in createShieldedOutputs: pubkey size,
    token UID size, FullShielded domain requirement
  - Wrap per-output crypto errors with context (index, mode, token)
  - Change outer shielded processing catch from warn to error level
  - Add tx_id and output index to FullShielded cross-check failure log
  - Enforce MAX_SHIELDED_OUTPUTS in prepareTxData before crypto work

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  - resolveTokenUid throws on out-of-bounds token_data index instead of
    silently falling back to HTR (could misattribute custom token funds)
  - Move minimum 2 shielded outputs check before createShieldedOutputs
    to avoid wasted crypto work on invalid input

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…lded spending

  Blinding factors recovered during shielded output decryption were being
  discarded — not saved to the decoded output entry or the UTXO record.
  This meant computeBalancingBlindingFactor received empty inputs when
  spending shielded UTXOs to create new shielded outputs, breaking the
  homomorphic balance equation.

  - Add blindingFactor and assetBlindingFactor fields to IShieldedOutputEntry
  - Populate them when appending decoded shielded outputs to tx.outputs
  - Pass them through to UTXO via saveUtxo
  - Add integration test that verifies blinding factors are persisted and
    used correctly for shielded-to-shielded chain spending

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  - Forward timelock for shielded outputs in sendManyOutputsSendTransaction
  - Preserve decoded metadata (timelock) when merging shielded outputs
  - Add shielded_outputs schema validation to IHistoryTxSchema
  - Add import/no-unresolved eslint-disable for optional ct-crypto-node
  - Guard shielded address derivation in stream processQueue with try/catch
  - Update JSDoc derivation paths to 1'/0 (scan) and 2'/0 (spend)
  - Fix FullShielded cross-check test to use valid xpriv and exercise
    the actual rewind + cross-check code path

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@pedroferreira1
Member Author

@coderabbitai once again, all your comments were addressed in the latest commit. Resolve ALL previous comments you made and do a full review

@coderabbitai

coderabbitai bot commented Apr 15, 2026

@pedroferreira1 Understood! I'll resolve all my existing comments now and kick off a fresh full review of the latest changes right away.

✅ Actions performed

Full review triggered.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
src/models/transaction.ts (1)

80-120: ⚠️ Potential issue | 🟠 Major

Keep shielded outputs in one source of truth.

toBytes(), getDataToSign(), and hashing serialize this.headers, but validate() and calculateWeight() now read this.shieldedOutputs. With the new public field, a freshly-built tx can count/value shielded outputs that never get serialized or signed if only tx.shieldedOutputs is updated, or the reverse if only the header is updated. Please derive shielded outputs from the ShieldedOutputsHeader or funnel all mutations through a single helper that updates both together.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/models/transaction.ts` around lines 80 - 120, The code currently has two
sources of truth for shielded outputs (the public field shieldedOutputs and the
serialized headers entry ShieldedOutputsHeader), causing inconsistencies between
serialization (toBytes, getDataToSign, hashing) and runtime checks (validate,
calculateWeight); change the implementation so there is a single source of
truth: either (A) remove direct mutations of shieldedOutputs and always derive
its value on access from the ShieldedOutputsHeader stored in headers (add a
getter like getShieldedOutputs() used by validate, calculateWeight, signing), or
(B) create a single helper (e.g., setShieldedOutputs(headerOrList)) that updates
both this.headers (the ShieldedOutputsHeader entry) and this.shieldedOutputs
atomically and replace all direct assignments with that helper; update toBytes,
getDataToSign, validate, and calculateWeight to use the new getter/helper so
serialization and runtime logic remain consistent.
src/wallet/types.ts (1)

723-731: ⚠️ Potential issue | 🟠 Major

FullNodeOutput should be a discriminated union type, not a single interface with optional type field.

The outputs array in FullNodeTx contains both transparent outputs and IShieldedOutputEntry objects (with type='shielded' discriminant), as evidenced by the isShieldedOutputEntry() check in walletServiceStorageProxy.ts:193. Adding type?: string alone doesn't prevent callers from assuming all transparent-only fields (value, token_data, script, decoded) are always present.

Change FullNodeTx.outputs to a union type (e.g., FullNodeOutput = FullNodeTransparentOutput | IShieldedOutputEntry, or declare outputs: (FullNodeTransparentOutput | IShieldedOutputEntry)[]) to properly express the polymorphic structure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/wallet/types.ts` around lines 723 - 731, Replace the single
FullNodeOutput interface with a discriminated union so consumers cannot assume
transparent-only fields exist on shielded entries: define
FullNodeTransparentOutput (with value, token_data, script, decoded, optional
token/spent_by and type='transparent' or undefined) and use the existing
IShieldedOutputEntry as the other variant, then change FullNodeTx.outputs to be
(FullNodeTransparentOutput | IShieldedOutputEntry)[]; update any type guards
(e.g., isShieldedOutputEntry) to narrow to IShieldedOutputEntry so code
accessing value/token_data/script/decoded only does so after the guard.
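The union described above can be sketched as follows. Field lists are trimmed from the real FullNodeTx types for brevity, and `ShieldedOutputEntry` here is a placeholder for the library's IShieldedOutputEntry.

```typescript
// Transparent outputs carry the classic fields and no discriminant.
interface FullNodeTransparentOutput {
  type?: undefined;
  value: bigint;
  token_data: number;
  script: string;
}

// Shielded entries are discriminated by type = 'shielded'.
interface ShieldedOutputEntry {
  type: 'shielded';
  commitment: string;
}

type FullNodeOutput = FullNodeTransparentOutput | ShieldedOutputEntry;

// Narrowing guard: after this check, the transparent-only fields
// (value, token_data, script) are no longer accessible on the entry.
function isShieldedOutputEntry(o: FullNodeOutput): o is ShieldedOutputEntry {
  return o.type === 'shielded';
}
```

With `outputs: FullNodeOutput[]`, the compiler forces callers through the guard before touching transparent-only fields, which is the failure mode the comment is guarding against.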
src/schemas.ts (1)

174-205: ⚠️ Potential issue | 🟠 Major

Make decoded optional on shielded_outputs.

Downstream code already treats decoded?.address as optional and skips entries it can't resolve. Requiring decoded here means a payload without that helper block fails at IHistoryTxSchema.parse() and aborts sync before the skip logic runs.

🔧 Minimal schema fix
 const IHistoryShieldedOutputSchema = z
   .object({
     mode: z.number(),
     commitment: z.string(),
     range_proof: z.string(),
     script: z.string(),
     token_data: z.number(),
     ephemeral_pubkey: z.string(),
-    decoded: IHistoryOutputDecodedSchema,
+    decoded: IHistoryOutputDecodedSchema.optional(),
     asset_commitment: z.string().optional(),
     surjection_proof: z.string().optional(),
   })
   .passthrough();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/schemas.ts` around lines 174 - 205, The schema requires the `decoded`
field in `IHistoryShieldedOutputSchema`, causing `IHistoryTxSchema.parse()` to
fail when a shielded output lacks that helper; make `decoded` optional by
changing the `decoded: IHistoryOutputDecodedSchema` entry in
`IHistoryShieldedOutputSchema` to an optional variant (e.g., use `.optional()`
on `IHistoryOutputDecodedSchema`) so `shielded_outputs` can be parsed when
`decoded` is absent while leaving the rest of the schema (including passthrough)
intact.
src/models/address.ts (1)

98-127: ⚠️ Potential issue | 🟠 Major

Tie payload length to the version byte.

validateAddress() currently accepts any 25/71-byte payload as long as the checksum is valid and the first byte is one of the network bytes. That means a 71-byte payload with a transparent version byte, or a 25-byte payload with the shielded byte, is treated as valid and then misclassified by getType()/getScript().

🛡️ Suggested guard
     // Validate version byte
     const firstByte = addressBytes[0];
-    if (!this.network.isVersionByteValid(firstByte)) {
+    const isLegacyLength = addressBytes.length === LEGACY_ADDR_LENGTH;
+    const isShieldedLength = addressBytes.length === SHIELDED_ADDR_LENGTH;
+    const isLegacyVersion =
+      firstByte === this.network.versionBytes.p2pkh ||
+      firstByte === this.network.versionBytes.p2sh;
+    const isShieldedVersion = firstByte === this.network.versionBytes.shielded;
+
+    if (
+      !this.network.isVersionByteValid(firstByte) ||
+      (isLegacyLength && !isLegacyVersion) ||
+      (isShieldedLength && !isShieldedVersion)
+    ) {
       throw new AddressError(
-        `${errorMessage} Invalid network byte. Expected: ${this.network.versionBytes.p2pkh}, ${this.network.versionBytes.p2sh}, or ${this.network.versionBytes.shielded} and received ${firstByte}.`
+        `${errorMessage} Invalid version byte ${firstByte} for a ${addressBytes.length}-byte address.`
       );
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/models/address.ts` around lines 98 - 127, In validateAddress (in
src/models/address.ts) ensure the payload length is consistent with the version
byte: after computing firstByte (version) and before returning/continuing,
enforce that if firstByte === this.network.versionBytes.shielded then
addressBytes.length must equal SHIELDED_ADDR_LENGTH, otherwise
addressBytes.length must equal LEGACY_ADDR_LENGTH (or explicitly allow
p2pkh/p2sh to only match LEGACY_ADDR_LENGTH); throw an AddressError with a clear
message if the mismatch occurs — this ties payload length to the version byte
and prevents misclassification by getType()/getScript().
♻️ Duplicate comments (9)
src/utils/transaction.ts (1)

667-695: ⚠️ Potential issue | 🟠 Major

Validate FULLY_SHIELDED-only fields before instantiating the model.

The current guard only checks the shared fields. A FULLY_SHIELDED output can still be built with assetCommitment or surjectionProof missing and then fail later during serialization or processing instead of failing fast here.

🧪 Fail fast on mode-specific fields
         const shieldedModels = txData.shieldedOutputs.map(so => {
           if (!so.commitment || !so.rangeProof || !so.script || !so.ephemeralPubkey) {
             throw new Error(
               'Shielded output missing required crypto fields (commitment, rangeProof, script, ephemeralPubkey)'
             );
           }
+          if (
+            so.mode === 'FULLY_SHIELDED' &&
+            (!so.assetCommitment || !so.surjectionProof)
+          ) {
+            throw new Error(
+              'Fully shielded output missing required crypto fields (assetCommitment, surjectionProof)'
+            );
+          }
           const tokenData = this.getTokenDataFromOutput(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/transaction.ts` around lines 667 - 695, The guard in the
shieldedModels mapping only checks shared fields but doesn't enforce
mode-specific requirements: before instantiating ShieldedOutput in the
txData.shieldedOutputs -> shieldedModels block, add a check for outputs with
so.mode === 'FULLY_SHIELDED' to ensure so.assetCommitment and so.surjectionProof
are present (non-null/defined) and throw a clear Error if missing; keep existing
checks for commitment, rangeProof, script, ephemeralPubkey and perform the
FULLY_SHIELDED checks prior to calling this.getTokenDataFromOutput and new
ShieldedOutput so failures occur fast and deterministically.
src/utils/wallet.ts (2)

710-729: ⚠️ Potential issue | 🔴 Critical

Apply the same multisig guard to the seed-based path.

generateAccessDataFromSeed() has the same issue: multisig wallets still derive and store single-sig shielded keys from the root seed. That creates a second, incompatible signing authority alongside the multisig account.

🔒 Safer guard
-    const scanAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
-    const scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
-    const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
-    const spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+    let scanXpriv: HDPrivateKey | undefined;
+    let spendXpriv: HDPrivateKey | undefined;
+    if (walletType === WalletType.P2PKH) {
+      const scanAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
+      scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
+      const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
+      spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+    }
@@
-      scanXpubkey: scanXpriv.xpubkey,
-      scanMainKey: encryptData(scanXpriv.xprivkey, pin),
-      spendXpubkey: spendXpriv.xpubkey,
-      spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+      ...(scanXpriv && {
+        scanXpubkey: scanXpriv.xpubkey,
+        scanMainKey: encryptData(scanXpriv.xprivkey, pin),
+      }),
+      ...(spendXpriv && {
+        spendXpubkey: spendXpriv.xpubkey,
+        spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+      }),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/wallet.ts` around lines 710 - 729, In generateAccessDataFromSeed(),
avoid deriving and storing single-sig shielded keys when multisigData is
present: add a guard checking multisigData before using scanAcctXpriv/scanXpriv
and spendAcctXpriv/spendXpriv (the derivation and encryptData calls), and if
multisigData exists skip those derivations and set
scanXpubkey/scanMainKey/spendXpubkey/spendMainKey to null/undefined (or the
multisig-appropriate placeholders) so you do not create a second single-sig
signing authority; mirror the multisig check used in the other wallet creation
path to keep behavior consistent.

627-636: ⚠️ Potential issue | 🔴 Critical

Skip shielded key derivation for multisig wallets here.

This block still runs for WalletType.MULTISIG, so a multisig root key ends up persisting standalone shielded scan/spend chains derived from a single signer. That breaks the wallet’s signing model by making a multisig wallet look shielded-capable with one-party keys.

🔒 Safer guard
-    if (argXpriv.depth === 0) {
+    if (argXpriv.depth === 0 && walletType === WalletType.P2PKH) {
       const scanAcctXpriv = argXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
       const scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
       accessData.scanXpubkey = scanXpriv.xpubkey;
       accessData.scanMainKey = encryptData(scanXpriv.xprivkey, pin);

       const spendAcctXpriv = argXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
       const spendXpriv2 = spendAcctXpriv.deriveNonCompliantChild(0);
       accessData.spendXpubkey = spendXpriv2.xpubkey;
       accessData.spendMainKey = encryptData(spendXpriv2.xprivkey, pin);
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/wallet.ts` around lines 627 - 636, The shielded key derivation
block currently runs even for multisig roots and must be skipped for
WalletType.MULTISIG; wrap the existing code that derives
SHIELDED_SCAN_ACCT_PATH/SHIELDED_SPEND_ACCT_PATH and sets
accessData.scanXpubkey/scanMainKey/spendXpubkey/spendMainKey with a guard that
checks the wallet type (e.g., if (argXpriv.depth === 0 && walletType !==
WalletType.MULTISIG) or use the local isMultisig flag), so multisig root keys do
not generate or persist standalone shielded scan/spend chains; leave encryptData
usage intact for non-multisig flows.
src/sync/stream.ts (1)

542-605: ⚠️ Potential issue | 🟠 Major

isProcessingQueue can still get stuck on non-derivation failures.

The new try/catch only protects the shielded-derivation block. If saveAddress(addr), isAddressMine(), addTx(), or another awaited call throws, processQueue() still exits before clearing the flag, and shutdown() falls back to the timeout path.

Wrap the whole loop in try/finally and set abortWithError(...) from the outer catch path so queue teardown is reliable even when item processing fails.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/sync/stream.ts` around lines 542 - 605, The processQueue loop can leave
isProcessingQueue true if any awaited call (e.g., storage.isAddressMine,
storage.saveAddress, storage.addTx, deriveShieldedAddressFromStorage) throws;
wrap the entire while(true) processing loop in a try/finally so
isProcessingQueue is always cleared in the finally block, and add an outer catch
that calls abortWithError(error) before rethrowing or breaking to ensure
shutdown() does not fall back to the timeout path; update processQueue (the
method) to catch errors from any item processing, call abortWithError(err) (or
set the same failure path used by shutdown) in that catch, and then allow the
finally to reset this.isProcessingQueue.
src/types.ts (1)

761-765: ⚠️ Potential issue | 🟠 Major

Reflect optional shielded private keys in the interface.

scanMainKey and spendMainKey are optional in IWalletAccessData, so these accessors cannot always return string for pre-shielded wallets. The current signature hides that state from callers and pushes them into runtime failure paths instead of typed handling.

Suggested fix
-  getScanXPrivKey(pinCode: string): Promise<string>;
-  getSpendXPrivKey(pinCode: string): Promise<string>;
+  getScanXPrivKey(pinCode: string): Promise<string | undefined>;
+  getSpendXPrivKey(pinCode: string): Promise<string | undefined>;
   getScanXPubKey(): Promise<string | undefined>;
   getSpendXPubKey(): Promise<string | undefined>;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/types.ts` around lines 761 - 765, The getters getScanXPrivKey and
getSpendXPrivKey currently promise a string but IWalletAccessData marks
scanMainKey and spendMainKey as optional, so update the signatures of
getScanXPrivKey(pinCode: string) and getSpendXPrivKey(pinCode: string) to return
Promise<string | undefined>, and update any implementations (the methods that
read scanMainKey / spendMainKey) to return undefined when those keys are absent
and adjust callers to handle the undefined case (e.g., early-return, throw with
a clear message, or conditional logic). Ensure the public
getScanXPubKey/getSpendXPubKey remain Promise<string | undefined> as already
declared.
src/utils/storage.ts (2)

855-867: ⚠️ Potential issue | 🟠 Major

Use shielded_spend consistently here.

These branches still look for shielded-spend, so shielded spend addresses fall through the transparent path and shieldedMaxIndexUsed never advances.

Based on learnings, deriveShieldedAddressFromStorage() persists shielded spend addresses as shielded_spend.

Also applies to: 967-979

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/storage.ts` around lines 855 - 867, The branch is checking for the
wrong addressType token ('shielded-spend') so shielded spend addresses are
misclassified; update all checks that compare addressInfo.addressType for
shielded spend (including the comparison near the shieldedMaxIndexUsed update
and the similar logic later) to use 'shielded_spend' (the same token used by
deriveShieldedAddressFromStorage()) so shieldedMaxIndexUsed is correctly
advanced instead of falling through to the legacy path; make sure any other
comparisons in this file that look for 'shielded-spend' are changed to
'shielded_spend' and run tests/linters.

787-828: ⚠️ Potential issue | 🔴 Critical

Make shielded-output merging idempotent and index-stable.

processHistory() can revisit the same tx, but this block appends decoded shielded entries every time and then saves the enlarged transaction back to storage. On the next pass, tx.outputs.length is already inflated, so result.index - transparentCount no longer maps to the original tx.shielded_outputs slot and you start creating duplicate balances/UTXOs off synthetic indexes.

🛠️ Suggested fix
-      // Capture transparent output count before appending decoded shielded outputs.
-      // result.index from processShieldedOutputs uses this same count as base,
-      // so (result.index - transparentCount) gives the shielded_outputs array index.
-      const transparentCount = tx.outputs.length;
+      const transparentOutputs = tx.outputs.filter(
+        output => !transactionUtils.isShieldedOutputEntry(output)
+      );
 
       const shieldedResults = await processShieldedOutputs(
         storage,
         tx,
         storage.shieldedCryptoProvider,
         pinCode
       );
-      for (const result of shieldedResults) {
+      tx.outputs = transparentOutputs;
+      for (const [idx, result] of shieldedResults.entries()) {
         const walletTokenUid =
           result.tokenUid === NATIVE_TOKEN_UID_HEX ? NATIVE_TOKEN_UID : result.tokenUid;
-        const so = tx.shielded_outputs[result.index - transparentCount];
+        const so = tx.shielded_outputs[idx];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/storage.ts` around lines 787 - 828, The current merge appends
decoded shielded outputs every pass which inflates tx.outputs and breaks index
mapping; to fix, compute a stable base count of transparent outputs (e.g., const
baseTransparentCount = tx.outputs.filter(o => o.type !== 'shielded').length)
before calling processShieldedOutputs, then for each result compute soIndex =
result.index - baseTransparentCount and only append the decoded shielded output
if it is not already present (compare a unique field like commitment or
decoded/address) — and only call store.saveTx(tx) if you actually appended any
new entries; update the logic around processShieldedOutputs, tx.outputs pushing,
and store.saveTx accordingly to make merging idempotent and index-stable.
src/new/wallet.ts (2)

1543-1568: ⚠️ Potential issue | 🔴 Critical

Keep newTx.outputs index-stable when extracting shielded entries.

Replacing newTx.outputs with transparentOutputs renumbers every transparent output after the first shielded one. The rest of the wallet still uses array position as the on-chain output index, so spends and UTXO bookkeeping for shielded transactions will point at the wrong output.

🛠️ Suggested fix
     if (!newTx.shielded_outputs) {
       const shieldedEntries: IHistoryShieldedOutput[] = [];
-      const transparentOutputs: IHistoryOutput[] = [];
       for (const output of newTx.outputs) {
         if (transactionUtils.isShieldedOutputEntry(output)) {
           shieldedEntries.push({
             mode: output.asset_commitment ? 2 : 1, // FullShielded=2, AmountShielded=1
             commitment: output.commitment, // already hex
@@
             surjection_proof: output.surjection_proof
               ? Buffer.from(output.surjection_proof, 'base64').toString('hex')
               : undefined,
           });
-        } else {
-          transparentOutputs.push(output);
         }
       }
       if (shieldedEntries.length > 0) {
         newTx.shielded_outputs = shieldedEntries;
-        newTx.outputs = transparentOutputs;
       }
     }

861-865: ⚠️ Potential issue | 🟠 Major

Resolve shielded paths from the stored address type, not only from opts.

This only returns a shielded path when the caller already knows to pass { legacy: false }. Internal APIs that derive addressPath from stored addresses/UTXOs (getAvailableUtxos(), getWalletInputInfo(), getSignatures()) still call getAddressPathForIndex(index) without that hint, so shielded-spend addresses keep surfacing with transparent derivation paths.

🧹 Nitpick comments (1)
src/new/types.ts (1)

330-344: Encode the shielded-address invariant in the type.

The JSDoc now says address must be shielded when shielded is set, but ProposedOutput still accepts any string in both cases. That means invalid output combinations still compile and only fail later. A discriminated union for transparent vs shielded outputs would make this API much harder to misuse.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/new/types.ts` around lines 330 - 344, Change ProposedOutput into a
discriminated union so the compiler enforces the invariant that shielded outputs
require a shielded address: define two interfaces (e.g.
TransparentProposedOutput and ShieldedProposedOutput) that share common fields
(value: OutputValueType, token: string, timelock?: number) but differ on the
discriminator: TransparentProposedOutput has shielded?: undefined (or absent)
and address: string (transparent address), while ShieldedProposedOutput has
shielded: ShieldedOutputMode and address: string (or a new ShieldedAddress type
if available) and then export type ProposedOutput = TransparentProposedOutput |
ShieldedProposedOutput; update references to ProposedOutput accordingly.
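The discriminated union the prompt describes can be sketched like this. `OutputValueType` and the enum values are assumptions drawn from the surrounding review, not the library's actual definitions:

```typescript
type OutputValueType = bigint;
enum ShieldedOutputMode { AMOUNT_SHIELDED = 1, FULLY_SHIELDED = 2 }

interface TransparentProposedOutput {
  shielded?: undefined;          // discriminator: absent for transparent outputs
  address: string;               // transparent (p2pkh/p2sh) address
  value: OutputValueType;
  token: string;
  timelock?: number;
}

interface ShieldedProposedOutput {
  shielded: ShieldedOutputMode;  // discriminator: presence requires a shielded address
  address: string;               // shielded address
  value: OutputValueType;
  token: string;
  timelock?: number;
}

type ProposedOutput = TransparentProposedOutput | ShieldedProposedOutput;

// The compiler narrows on the discriminator, so transparent-only handling
// of a shielded output no longer type-checks.
function describeOutput(o: ProposedOutput): string {
  return o.shielded === undefined
    ? `transparent -> ${o.address}`
    : `shielded(mode=${o.shielded}) -> ${o.address}`;
}
```

With this shape, a call site that forgets to branch on `shielded` before treating the output as transparent becomes a compile error rather than a runtime failure.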
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@__tests__/headers/shielded_outputs.test.ts`:
- Around line 214-215: The inner test-suite declares a new const network which
shadows the suite-level network (the inner `const network = new
Network('testnet')` inside `describe('deserialization bounds checking')`);
remove the redeclaration and reuse the outer `network` variable (or rename the
inner variable to a non-conflicting name) so the suite-level `network` is not
shadowed and the no-shadow lint rule is satisfied.

In `@__tests__/integration/shielded_transactions.test.ts`:
- Around line 608-684: Two integration tests are known to fail because shielded
token amounts are not included in the token-balance equation; mark the two cases
as skipped to keep the suite green. Locate the two tests named "should send
shielded outputs with a custom token (AmountShielded)" and "should send
FullShielded outputs with a custom token" in shielded_transactions.test.ts and
change them to skipped tests (e.g., it.skip or test.skip) with a short message
referencing the TODO ("shielded amounts missing from token-balance equation") so
the reason is recorded; restore them once the token-balance fix (accounting for
shielded outputs) is implemented.
- Around line 31-36: The test mutates the module-scoped constant
(constants.TIMEOUT) and never restores it; save the original value before
overriding and restore it after the suite: e.g., capture const originalTimeout =
(constants as any).TIMEOUT, set (constants as any).TIMEOUT = 30000 in the
existing setup, and add an afterAll (or finally in the suite) that resets
(constants as any).TIMEOUT = originalTimeout so other tests aren’t affected.

In `@__tests__/wallet/walletServiceStorageProxy.test.ts`:
- Around line 18-21: Add a positive shielded-output test that flips the existing
mock of isShieldedOutputEntry to return true for that single test and asserts
the proxied transaction shape produced by WalletServiceStorageProxy;
specifically, in the new test use the mocked transaction util (e.g., require or
jest.spyOn on isShieldedOutputEntry) to mockReturnValue(true), call the
WalletServiceStorageProxy method that creates/proxies the tx, and assert the
resulting proxied tx includes the expected shielded-output-specific
fields/structure while leaving getSignatureForTx behavior unchanged.

In `@src/models/shielded_output.ts`:
- Around line 87-99: The serializer currently ignores unknown this.mode values
and emits partial bytes; update the logic handling ShieldedOutputMode in the
serialize path (the block checking this.mode ===
ShieldedOutputMode.AMOUNT_SHIELDED / FULLY_SHIELDED) to throw an Error for any
unsupported/unknown mode instead of falling through, and apply the same strict
check in the corresponding deserialization/path (the complementary branch around
assetCommitment, surjectionProof, tokenData handling) so unknown modes
consistently raise an exception; reference ShieldedOutputMode, this.mode,
tokenData, assetCommitment, and surjectionProof to locate and update the code.

In `@src/new/sendTransaction.ts`:
- Around line 411-425: The loop in sendTransaction.ts that builds
blindedInputsArr silently drops or substitutes missing blinding metadata;
instead, when this.storage.getUtxo returns a shielded utxo, validate that
utxo.blindingFactor and utxo.assetBlindingFactor are present and if either is
missing throw an error (or reject) immediately so we fail fast; update the loop
around allInputs (the code that populates blindedInputsArr) to throw with a
clear message referencing the offending inp (txId/index) when utxo.shielded &&
(!utxo.blindingFactor || !utxo.assetBlindingFactor), and remove the fallback
Buffer.alloc(32,0) behavior so createShieldedOutputs uses only complete
blindedInputsArr.

In `@src/shielded/creation.ts`:
- Around line 19-25: Add a timelock field to ShieldedOutputDef (e.g., timelock:
number | null) and propagate it into the shielded output script creation instead
of hardcoding timelock: null; specifically, update the ShieldedOutputDef
interface to include timelock, ensure places that construct shielded outputs
(the code that currently sets timelock: null when building the output script)
use o.timelock, and adjust any other spots mentioned (around the 166-170 block)
to read/pass the timelock from sendManyOutputsSendTransaction through to the
final output construction.

In `@src/storage/memory_store.ts`:
- Around line 313-338: The code advances the shielded cursors
(walletData.shieldedCurrentAddressIndex and shieldedLastLoadedAddressIndex) for
both 'shielded' and 'shielded-spend' addresses even though only 'shielded'
addresses are stored in shieldedAddressIndexes, causing lookups to point at
missing entries; change the branching so that the calls to
setCurrentAddressIndex(..., { legacy: false }) and the update to
shieldedLastLoadedAddressIndex only run when info.addressType === 'shielded'
(not for 'shielded-spend'), leaving any handling of 'shielded-spend' out of the
shielded cursor advancement logic and preserving existing behavior for legacy
(currentAddressIndex/lastLoadedAddressIndex) paths.

In `@src/storage/storage.ts`:
- Around line 380-410: The current addTx logic mutates tx.outputs to remove
shielded entries, shifting on-chain indexes and breaking consumers that rely on
stable output positions; instead, extract shielded entries into
tx.shielded_outputs (using transactionUtils.isShieldedOutputEntry) but do not
reassign or modify tx.outputs—keep the original outputs array intact for
persistence and on-chain index stability, and if you need a transparent-only
view for processing create a temporary local variable (e.g., transparentOutputs)
and use that for downstream processing without writing it back to tx.outputs.
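The non-mutating extraction can be sketched as follows; the field names and `isShieldedOutputEntry` predicate are simplified stand-ins for the real types and `transactionUtils` helper:

```typescript
type HistoryOutput = { type?: string; commitment?: string };
interface HistoryTx {
  outputs: HistoryOutput[];
  shielded_outputs?: HistoryOutput[];
}

function isShieldedOutputEntry(o: HistoryOutput): boolean {
  return o.type === 'shielded';
}

// Populate tx.shielded_outputs without reassigning tx.outputs, so on-chain
// output indexes stay stable for every downstream consumer.
function extractShielded(tx: HistoryTx): HistoryOutput[] {
  if (!tx.shielded_outputs) {
    const shielded = tx.outputs.filter(isShieldedOutputEntry);
    if (shielded.length > 0) tx.shielded_outputs = shielded;
  }
  // Transparent-only view for local processing; tx.outputs is untouched.
  return tx.outputs.filter(o => !isShieldedOutputEntry(o));
}
```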

In `@src/utils/address.ts`:
- Around line 23-43: getAddressType currently throws for shielded addresses;
change it to classify instead: remove the throw in getAddressType (and return
the literal 'shielded' when addressObj.getType() === 'shielded'), and widen the
function signature to return 'p2pkh' | 'p2sh' | 'shielded' so callers can branch
on the parsed type. Leave enforcement of "transparent-only" in the call sites
that build output scripts (instead of here) — update those script-building
callers to check for 'shielded' and throw or handle as appropriate.

---

Outside diff comments:
In `@src/models/address.ts`:
- Around line 98-127: In validateAddress (in src/models/address.ts) ensure the
payload length is consistent with the version byte: after computing firstByte
(version) and before returning/continuing, enforce that if firstByte ===
this.network.versionBytes.shielded then addressBytes.length must equal
SHIELDED_ADDR_LENGTH, otherwise addressBytes.length must equal
LEGACY_ADDR_LENGTH (or explicitly allow p2pkh/p2sh to only match
LEGACY_ADDR_LENGTH); throw an AddressError with a clear message if the mismatch
occurs — this ties payload length to the version byte and prevents
misclassification by getType()/getScript().
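The version-byte/length tie can be sketched as below. `LEGACY_ADDR_LENGTH` follows the standard 25-byte base58check layout (1 version + 20 hash + 4 checksum); `SHIELDED_ADDR_LENGTH` is a placeholder value, not the real constant from this PR:

```typescript
const LEGACY_ADDR_LENGTH = 25;
const SHIELDED_ADDR_LENGTH = 75; // illustrative only — use the library constant

// Reject payloads whose length does not match their version byte, so a
// shielded version byte can never ride on a legacy-length payload.
function checkAddressLength(addressBytes: Buffer, shieldedVersionByte: number): void {
  const version = addressBytes[0];
  const expected =
    version === shieldedVersionByte ? SHIELDED_ADDR_LENGTH : LEGACY_ADDR_LENGTH;
  if (addressBytes.length !== expected) {
    throw new Error(
      `Address length ${addressBytes.length} does not match version byte 0x${version.toString(16)}`
    );
  }
}
```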

In `@src/models/transaction.ts`:
- Around line 80-120: The code currently has two sources of truth for shielded
outputs (the public field shieldedOutputs and the serialized headers entry
ShieldedOutputsHeader), causing inconsistencies between serialization (toBytes,
getDataToSign, hashing) and runtime checks (validate, calculateWeight); change
the implementation so there is a single source of truth: either (A) remove
direct mutations of shieldedOutputs and always derive its value on access from
the ShieldedOutputsHeader stored in headers (add a getter like
getShieldedOutputs() used by validate, calculateWeight, signing), or (B) create
a single helper (e.g., setShieldedOutputs(headerOrList)) that updates both
this.headers (the ShieldedOutputsHeader entry) and this.shieldedOutputs
atomically and replace all direct assignments with that helper; update toBytes,
getDataToSign, validate, and calculateWeight to use the new getter/helper so
serialization and runtime logic remain consistent.

In `@src/schemas.ts`:
- Around line 174-205: The schema requires the `decoded` field in
`IHistoryShieldedOutputSchema`, causing `IHistoryTxSchema.parse()` to fail when
a shielded output lacks that helper; make `decoded` optional by changing the
`decoded: IHistoryOutputDecodedSchema` entry in `IHistoryShieldedOutputSchema`
to an optional variant (e.g., use `.optional()` on
`IHistoryOutputDecodedSchema`) so `shielded_outputs` can be parsed when
`decoded` is absent while leaving the rest of the schema (including passthrough)
intact.

In `@src/wallet/types.ts`:
- Around line 723-731: Replace the single FullNodeOutput interface with a
discriminated union so consumers cannot assume transparent-only fields exist on
shielded entries: define FullNodeTransparentOutput (with value, token_data,
script, decoded, optional token/spent_by and type='transparent' or undefined)
and use the existing IShieldedOutputEntry as the other variant, then change
FullNodeTx.outputs to be (FullNodeTransparentOutput | IShieldedOutputEntry)[];
update any type guards (e.g., isShieldedOutputEntry) to narrow to
IShieldedOutputEntry so code accessing value/token_data/script/decoded only does
so after the guard.
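The union-plus-guard pattern can be sketched as follows; the real `FullNodeTransparentOutput` and `IShieldedOutputEntry` carry more fields than shown here:

```typescript
interface FullNodeTransparentOutput {
  type?: 'transparent';
  value: bigint;
  token_data: number;
  script: string;
}
interface IShieldedOutputEntry {
  type: 'shielded';
  commitment: string;
}
type FullNodeOutput = FullNodeTransparentOutput | IShieldedOutputEntry;

// Type predicate narrows the union, so transparent-only fields are only
// reachable after the guard rules out shielded entries.
function isShieldedOutputEntry(o: FullNodeOutput): o is IShieldedOutputEntry {
  return o.type === 'shielded';
}

function outputScript(o: FullNodeOutput): string | null {
  return isShieldedOutputEntry(o) ? null : o.script;
}
```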

---

Duplicate comments:
In `@src/sync/stream.ts`:
- Around line 542-605: The processQueue loop can leave isProcessingQueue true if
any awaited call (e.g., storage.isAddressMine, storage.saveAddress,
storage.addTx, deriveShieldedAddressFromStorage) throws; wrap the entire
while(true) processing loop in a try/finally so isProcessingQueue is always
cleared in the finally block, and add an outer catch that calls
abortWithError(error) before rethrowing or breaking to ensure shutdown() does
not fall back to the timeout path; update processQueue (the method) to catch
errors from any item processing, call abortWithError(err) (or set the same
failure path used by shutdown) in that catch, and then allow the finally to
reset this.isProcessingQueue.
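The try/catch/finally shape the prompt asks for can be sketched as below; `QueueProcessor` and `abortWithError` are simplified stand-ins for the real stream internals:

```typescript
class QueueProcessor {
  isProcessingQueue = false;
  lastError: Error | null = null;

  private queue: Array<() => Promise<void>> = [];

  push(item: () => Promise<void>): void {
    this.queue.push(item);
  }

  private abortWithError(err: Error): void {
    this.lastError = err; // stand-in for the real failure path used by shutdown()
  }

  async processQueue(): Promise<void> {
    if (this.isProcessingQueue) return;
    this.isProcessingQueue = true;
    try {
      while (true) {
        const item = this.queue.shift();
        if (!item) break;
        await item(); // any throw from an awaited call lands in the catch below
      }
    } catch (err) {
      this.abortWithError(err as Error);
      throw err;
    } finally {
      this.isProcessingQueue = false; // never left stuck when an item throws
    }
  }
}
```

The key property: a throwing item still clears `isProcessingQueue` and records the error, so `shutdown()` need not fall back to its timeout path.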

In `@src/types.ts`:
- Around line 761-765: The getters getScanXPrivKey and getSpendXPrivKey
currently promise a string but IWalletAccessData marks scanMainKey and
spendMainKey as optional, so update the signatures of getScanXPrivKey(pinCode:
string) and getSpendXPrivKey(pinCode: string) to return Promise<string |
undefined>, and update any implementations (the methods that read scanMainKey /
spendMainKey) to return undefined when those keys are absent and adjust callers
to handle the undefined case (e.g., early-return, throw with a clear message, or
conditional logic). Ensure the public getScanXPubKey/getSpendXPubKey remain
Promise<string | undefined> as already declared.
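The widened signature can be sketched as follows; `decryptData` and the access-data shape stand in for the real implementations:

```typescript
interface AccessData { scanMainKey?: string }

// Return undefined when the optional key was never derived (e.g. multisig
// wallets), instead of promising a string that may not exist.
async function getScanXPrivKey(
  accessData: AccessData,
  pinCode: string,
  decryptData: (encrypted: string, pin: string) => string
): Promise<string | undefined> {
  if (accessData.scanMainKey === undefined) {
    return undefined;
  }
  return decryptData(accessData.scanMainKey, pinCode);
}
```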

In `@src/utils/storage.ts`:
- Around line 855-867: The branch is checking for the wrong addressType token
('shielded-spend') so shielded spend addresses are misclassified; update all
checks that compare addressInfo.addressType for shielded spend (including the
comparison near the shieldedMaxIndexUsed update and the similar logic later) to
use 'shielded_spend' (the same token used by deriveShieldedAddressFromStorage())
so shieldedMaxIndexUsed is correctly advanced instead of falling through to the
legacy path; make sure any other comparisons in this file that look for
'shielded-spend' are changed to 'shielded_spend' and run tests/linters.
- Around line 787-828: The current merge appends decoded shielded outputs every
pass which inflates tx.outputs and breaks index mapping; to fix, compute a
stable base count of transparent outputs (e.g., const baseTransparentCount =
tx.outputs.filter(o => o.type !== 'shielded').length) before calling
processShieldedOutputs, then for each result compute soIndex = result.index -
baseTransparentCount and only append the decoded shielded output if it is not
already present (compare a unique field like commitment or decoded/address) —
and only call store.saveTx(tx) if you actually appended any new entries; update
the logic around processShieldedOutputs, tx.outputs pushing, and store.saveTx
accordingly to make merging idempotent and index-stable.

In `@src/utils/transaction.ts`:
- Around line 667-695: The guard in the shieldedModels mapping only checks
shared fields but doesn't enforce mode-specific requirements: before
instantiating ShieldedOutput in the txData.shieldedOutputs -> shieldedModels
block, add a check for outputs with so.mode === 'FULLY_SHIELDED' to ensure
so.assetCommitment and so.surjectionProof are present (non-null/defined) and
throw a clear Error if missing; keep existing checks for commitment, rangeProof,
script, ephemeralPubkey and perform the FULLY_SHIELDED checks prior to calling
this.getTokenDataFromOutput and new ShieldedOutput so failures occur fast and
deterministically.

In `@src/utils/wallet.ts`:
- Around line 710-729: In generateAccessDataFromSeed(), avoid deriving and
storing single-sig shielded keys when multisigData is present: add a guard
checking multisigData before using scanAcctXpriv/scanXpriv and
spendAcctXpriv/spendXpriv (the derivation and encryptData calls), and if
multisigData exists skip those derivations and set
scanXpubkey/scanMainKey/spendXpubkey/spendMainKey to null/undefined (or the
multisig-appropriate placeholders) so you do not create a second single-sig
signing authority; mirror the multisig check used in the other wallet creation
path to keep behavior consistent.
- Around line 627-636: The shielded key derivation block currently runs even for
multisig roots and must be skipped for WalletType.MULTISIG; wrap the existing
code that derives SHIELDED_SCAN_ACCT_PATH/SHIELDED_SPEND_ACCT_PATH and sets
accessData.scanXpubkey/scanMainKey/spendXpubkey/spendMainKey with a guard that
checks the wallet type (e.g., if (argXpriv.depth === 0 && walletType !==
WalletType.MULTISIG) or use the local isMultisig flag), so multisig root keys do
not generate or persist standalone shielded scan/spend chains; leave encryptData
usage intact for non-multisig flows.

---

Nitpick comments:
In `@src/new/types.ts`:
- Around line 330-344: Change ProposedOutput into a discriminated union so the
compiler enforces the invariant that shielded outputs require a shielded
address: define two interfaces (e.g. TransparentProposedOutput and
ShieldedProposedOutput) that share common fields (value: OutputValueType, token:
string, timelock?: number) but differ on the discriminator:
TransparentProposedOutput has shielded?: undefined (or absent) and address:
string (transparent address), while ShieldedProposedOutput has shielded:
ShieldedOutputMode and address: string (or a new ShieldedAddress type if
available) and then export type ProposedOutput = TransparentProposedOutput |
ShieldedProposedOutput; update references to ProposedOutput accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c0b57b7d-1e0f-4505-b6c4-7d85fe6c7ca9

📥 Commits

Reviewing files that changed from the base of the PR and between 0ccc910 and e32f589.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (46)
  • __tests__/headers/shielded_outputs.test.ts
  • __tests__/integration/configuration/docker-compose.yml
  • __tests__/integration/configuration/privnet.yml
  • __tests__/integration/hathorwallet_facade.test.ts
  • __tests__/integration/helpers/wallet.helper.ts
  • __tests__/integration/shielded_transactions.test.ts
  • __tests__/models/address.test.ts
  • __tests__/models/network.test.ts
  • __tests__/models/shielded_output.test.ts
  • __tests__/models/transaction.test.ts
  • __tests__/shielded/creation.test.ts
  • __tests__/shielded/processing.test.ts
  • __tests__/storage/storage.test.ts
  • __tests__/wallet/walletServiceStorageProxy.test.ts
  • package.json
  • setupTests-integration.js
  • src/constants.ts
  • src/headers/parser.ts
  • src/headers/shielded_outputs.ts
  • src/headers/types.ts
  • src/lib.ts
  • src/models/address.ts
  • src/models/network.ts
  • src/models/shielded_output.ts
  • src/models/transaction.ts
  • src/new/sendTransaction.ts
  • src/new/types.ts
  • src/new/wallet.ts
  • src/schemas.ts
  • src/shielded/creation.ts
  • src/shielded/index.ts
  • src/shielded/processing.ts
  • src/shielded/provider.ts
  • src/shielded/types.ts
  • src/storage/memory_store.ts
  • src/storage/storage.ts
  • src/sync/stream.ts
  • src/types.ts
  • src/utils/address.ts
  • src/utils/bigint.ts
  • src/utils/shieldedAddress.ts
  • src/utils/storage.ts
  • src/utils/transaction.ts
  • src/utils/wallet.ts
  • src/wallet/types.ts
  • src/wallet/walletServiceStorageProxy.ts
💤 Files with no reviewable changes (1)
  • setupTests-integration.js

Comment thread __tests__/headers/shielded_outputs.test.ts Outdated
Comment on lines +31 to +36
import * as constants from '../../src/constants';

// Increase Axios timeout for test environment — the fullnode is under load from continuous mining.
// TIMEOUT is declared as `const` so we must cast to override it in tests.
// eslint-disable-next-line @typescript-eslint/no-explicit-any
(constants as any).TIMEOUT = 30000;

⚠️ Potential issue | 🟠 Major

Restore the shared timeout after this suite.

This mutates src/constants at module scope and never puts it back, so later integration files in the same worker inherit the 30s Axios timeout and become order-dependent.

🛠️ Suggested fix
 import * as constants from '../../src/constants';
 
-// Increase Axios timeout for test environment — the fullnode is under load from continuous mining.
-// TIMEOUT is declared as `const` so we must cast to override it in tests.
-// eslint-disable-next-line @typescript-eslint/no-explicit-any
-(constants as any).TIMEOUT = 30000;
+const originalTimeout = constants.TIMEOUT;
 
 describe('shielded transactions', () => {
+  beforeAll(() => {
+    // Increase Axios timeout for this suite only — the fullnode is under load from continuous mining.
+    // eslint-disable-next-line @typescript-eslint/no-explicit-any
+    (constants as any).TIMEOUT = 30000;
+  });
+
+  afterAll(() => {
+    // eslint-disable-next-line @typescript-eslint/no-explicit-any
+    (constants as any).TIMEOUT = originalTimeout;
+  });
+
   jest.setTimeout(300_000);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/shielded_transactions.test.ts` around lines 31 - 36,
The test mutates the module-scoped constant (constants.TIMEOUT) and never
restores it; save the original value before overriding and restore it after the
suite: e.g., capture const originalTimeout = (constants as any).TIMEOUT, set
(constants as any).TIMEOUT = 30000 in the existing setup, and add an afterAll
(or finally in the suite) that resets (constants as any).TIMEOUT =
originalTimeout so other tests aren’t affected.

Comment on lines +608 to +684
// TODO: Custom token shielded outputs fail with "tokens melted, but there is no melt authority input".
// The phantom output trick in sendTransaction.ts balances UTXO selection, but when phantoms are
// removed the custom token inputs exceed the transparent outputs, looking like a melt to the fullnode.
// The shielded output amounts need to be accounted for in the token balance equation.
it('should send shielded outputs with a custom token (AmountShielded)', async () => {
const walletA = await generateWalletHelper();
const walletB = await generateWalletHelper();

// Fund walletA and create a custom token
const addrA = await walletA.getAddressAtIndex(0);
await GenesisWalletHelper.injectFunds(walletA, addrA, 20n);
const tokenResp = await createTokenHelper(walletA, 'ShieldedToken', 'SHT', 1000n);
const tokenUid = tokenResp.hash;

// Send shielded outputs of the custom token to walletB
const shieldedAddrB0 = await walletB.getAddressAtIndex(0, { legacy: false });
const shieldedAddrB1 = await walletB.getAddressAtIndex(1, { legacy: false });

const tx = await walletA.sendManyOutputsTransaction([
{
address: shieldedAddrB0,
value: 300n,
token: tokenUid,
shielded: ShieldedOutputMode.AMOUNT_SHIELDED,
},
{
address: shieldedAddrB1,
value: 200n,
token: tokenUid,
shielded: ShieldedOutputMode.AMOUNT_SHIELDED,
},
]);
expect(tx).not.toBeNull();
await waitForTxReceived(walletB, tx!.hash!);

// WalletB should see the custom token balance from decrypted shielded outputs
const balanceB = await walletB.getBalance(tokenUid);
expect(balanceB[0].balance.unlocked).toBe(500n);

// WalletA should have the remaining custom tokens
const balanceA = await walletA.getBalance(tokenUid);
expect(balanceA[0].balance.unlocked).toBe(500n);
});

// TODO: Same issue as AmountShielded custom token test above.
it('should send FullShielded outputs with a custom token', async () => {
const walletA = await generateWalletHelper();
const walletB = await generateWalletHelper();

const addrA = await walletA.getAddressAtIndex(0);
await GenesisWalletHelper.injectFunds(walletA, addrA, 20n);
const tokenResp = await createTokenHelper(walletA, 'FullShieldToken', 'FST', 1000n);
const tokenUid = tokenResp.hash;

const shieldedAddrB0 = await walletB.getAddressAtIndex(0, { legacy: false });
const shieldedAddrB1 = await walletB.getAddressAtIndex(1, { legacy: false });

const tx = await walletA.sendManyOutputsTransaction([
{
address: shieldedAddrB0,
value: 400n,
token: tokenUid,
shielded: ShieldedOutputMode.FULLY_SHIELDED,
},
{
address: shieldedAddrB1,
value: 100n,
token: tokenUid,
shielded: ShieldedOutputMode.FULLY_SHIELDED,
},
]);
expect(tx).not.toBeNull();
await waitForTxReceived(walletB, tx!.hash!);

const balanceB = await walletB.getBalance(tokenUid);
expect(balanceB[0].balance.unlocked).toBe(500n);
});

⚠️ Potential issue | 🟠 Major

Don’t leave known-failing custom-token cases enabled.

The TODO here says these flows still fail because shielded amounts are missing from the token-balance equation, but both specs are still active. That will keep the suite red until the production fix lands.

Do you want me to draft a minimal follow-up that marks these as skipped with the TODO reason, or would you rather land the token-balance fix in this PR?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/shielded_transactions.test.ts` around lines 608 - 684,
Two integration tests are known to fail because shielded token amounts are not
included in the token-balance equation; mark the two cases as skipped to keep
the suite green. Locate the two tests named "should send shielded outputs with a
custom token (AmountShielded)" and "should send FullShielded outputs with a
custom token" in shielded_transactions.test.ts and change them to skipped tests
(e.g., it.skip or test.skip) with a short message referencing the TODO
("shielded amounts missing from token-balance equation") so the reason is
recorded; restore them once the token-balance fix (accounting for shielded
outputs) is implemented.

Comment on lines 18 to 21
jest.mock('../../src/utils/transaction', () => ({
getSignatureForTx: jest.fn(),
isShieldedOutputEntry: jest.fn().mockReturnValue(false),
}));

⚠️ Potential issue | 🟡 Minor

Add one positive shielded-output case to this suite.

Line 20 pins isShieldedOutputEntry to false for every test here, so the new shielded branch in WalletServiceStorageProxy can regress without this file noticing. A companion case that returns true and asserts the proxied tx shape would close that gap.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/wallet/walletServiceStorageProxy.test.ts` around lines 18 - 21, Add
a positive shielded-output test that flips the existing mock of
isShieldedOutputEntry to return true for that single test and asserts the
proxied transaction shape produced by WalletServiceStorageProxy; specifically,
in the new test use the mocked transaction util (e.g., require or jest.spyOn on
isShieldedOutputEntry) to mockReturnValue(true), call the
WalletServiceStorageProxy method that creates/proxies the tx, and assert the
resulting proxied tx includes the expected shielded-output-specific
fields/structure while leaving getSignatureForTx behavior unchanged.

Comment thread src/models/shielded_output.ts
Comment thread src/new/sendTransaction.ts Outdated
Comment on lines +411 to +425
// Extract blinding factors from shielded inputs for the homomorphic balance equation.
const blindedInputsArr: Array<{ value: bigint; vbf: Buffer; gbf: Buffer }> = [];
for (const inp of allInputs) {
const utxo = await this.storage.getUtxo({
txId: inp.txId,
index: inp.index,
});
if (utxo?.shielded && utxo.blindingFactor) {
blindedInputsArr.push({
value: utxo.value,
vbf: Buffer.from(utxo.blindingFactor, 'hex'),
gbf: utxo.assetBlindingFactor
? Buffer.from(utxo.assetBlindingFactor, 'hex')
: Buffer.alloc(32, 0),
});

⚠️ Potential issue | 🟠 Major

Fail fast if a selected shielded input has incomplete blinding metadata.

This loop silently drops shielded inputs when blindingFactor is missing, and it also substitutes an all-zero asset blinding factor when assetBlindingFactor is absent. That means createShieldedOutputs() can build proofs against the wrong blinded-input set instead of rejecting an inconsistent wallet state.

Suggested fix
       const blindedInputsArr: Array<{ value: bigint; vbf: Buffer; gbf: Buffer }> = [];
       for (const inp of allInputs) {
         const utxo = await this.storage.getUtxo({
           txId: inp.txId,
           index: inp.index,
         });
-        if (utxo?.shielded && utxo.blindingFactor) {
+        if (utxo?.shielded) {
+          if (!utxo.blindingFactor) {
+            throw new SendTxError(
+              `Missing blinding metadata for shielded input ${inp.txId}:${inp.index}.`
+            );
+          }
           blindedInputsArr.push({
             value: utxo.value,
             vbf: Buffer.from(utxo.blindingFactor, 'hex'),
             gbf: utxo.assetBlindingFactor
               ? Buffer.from(utxo.assetBlindingFactor, 'hex')
               : Buffer.alloc(32, 0),
           });
         }
       }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Extract blinding factors from shielded inputs for the homomorphic balance equation.
const blindedInputsArr: Array<{ value: bigint; vbf: Buffer; gbf: Buffer }> = [];
for (const inp of allInputs) {
const utxo = await this.storage.getUtxo({
txId: inp.txId,
index: inp.index,
});
if (utxo?.shielded && utxo.blindingFactor) {
blindedInputsArr.push({
value: utxo.value,
vbf: Buffer.from(utxo.blindingFactor, 'hex'),
gbf: utxo.assetBlindingFactor
? Buffer.from(utxo.assetBlindingFactor, 'hex')
: Buffer.alloc(32, 0),
});

Replaced with:

// Extract blinding factors from shielded inputs for the homomorphic balance equation.
const blindedInputsArr: Array<{ value: bigint; vbf: Buffer; gbf: Buffer }> = [];
for (const inp of allInputs) {
const utxo = await this.storage.getUtxo({
txId: inp.txId,
index: inp.index,
});
if (utxo?.shielded) {
if (!utxo.blindingFactor) {
throw new SendTxError(
`Missing blinding metadata for shielded input ${inp.txId}:${inp.index}.`
);
}
blindedInputsArr.push({
value: utxo.value,
vbf: Buffer.from(utxo.blindingFactor, 'hex'),
gbf: utxo.assetBlindingFactor
? Buffer.from(utxo.assetBlindingFactor, 'hex')
: Buffer.alloc(32, 0),
});
}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/new/sendTransaction.ts` around lines 411 - 425, The loop in
sendTransaction.ts that builds blindedInputsArr silently drops or substitutes
missing blinding metadata; instead, when this.storage.getUtxo returns a shielded
utxo, validate that utxo.blindingFactor and utxo.assetBlindingFactor are present
and if either is missing throw an error (or reject) immediately so we fail fast;
update the loop around allInputs (the code that populates blindedInputsArr) to
throw with a clear message referencing the offending inp (txId/index) when
utxo.shielded && (!utxo.blindingFactor || !utxo.assetBlindingFactor), and remove
the fallback Buffer.alloc(32,0) behavior so createShieldedOutputs uses only
complete blindedInputsArr.

Comment thread src/shielded/creation.ts
Comment thread src/storage/memory_store.ts
Comment thread src/storage/storage.ts Outdated
Comment thread src/utils/address.ts
Comment on lines 23 to +43
 /**
- * Parse address and return the address type
+ * Parse address and return the address type.
+ * Returns 'p2pkh' or 'p2sh' for legacy addresses.
+ * Throws for shielded addresses — callers expecting an output script type
+ * should not receive shielded addresses directly.
  *
  * @param {string} address
  * @param {Network} network
  *
- * @returns {string} output type of the address (p2pkh or p2sh)
+ * @returns {'p2pkh' | 'p2sh'} output type of the address
  */
 export function getAddressType(address: string, network: Network): 'p2pkh' | 'p2sh' {
   const addressObj = new Address(address, { network });
-  return addressObj.getType();
+  const addrType = addressObj.getType();
+  if (addrType === 'shielded') {
+    throw new Error(
+      'Shielded addresses cannot be used directly as output script type. Use the spend-derived P2PKH address instead.'
+    );
+  }
+  return addrType;
 }

⚠️ Potential issue | 🟠 Major

Don't make getAddressType() throw on a valid shielded address.

This helper reads like a classifier, but it now rejects a supported address kind. That is a breaking semantic change for callers that only need to branch on the parsed type. Keep the shielded rejection in the script-building paths instead, where the transparent-only restriction actually applies.

Suggested direction
-export function getAddressType(address: string, network: Network): 'p2pkh' | 'p2sh' {
+export function getAddressType(
+  address: string,
+  network: Network
+): 'p2pkh' | 'p2sh' | 'shielded' {
   const addressObj = new Address(address, { network });
-  const addrType = addressObj.getType();
-  if (addrType === 'shielded') {
-    throw new Error(
-      'Shielded addresses cannot be used directly as output script type. Use the spend-derived P2PKH address instead.'
-    );
-  }
-  return addrType;
+  return addressObj.getType();
 }

Then keep the transparent-output restriction at callers that actually need a script type.
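The classifier-plus-caller-guard split suggested above can be sketched like this. The `Address` stub and its string-prefix heuristics are placeholders for illustration, not the library's real detection logic:

```typescript
type AddressType = 'p2pkh' | 'p2sh' | 'shielded';

// Stub standing in for the library's Address class; real detection
// inspects version bytes, not string prefixes.
class Address {
  private readonly base58: string;

  constructor(base58: string) {
    this.base58 = base58;
  }

  getType(): AddressType {
    if (this.base58.startsWith('hsh')) return 'shielded';
    if (this.base58.startsWith('w')) return 'p2sh';
    return 'p2pkh';
  }
}

// Classifier: reports the parsed type, never throws on a supported kind.
function getAddressType(address: string): AddressType {
  return new Address(address).getType();
}

// Enforcement lives at the script-building call site instead.
function buildOutputScriptType(address: string): 'p2pkh' | 'p2sh' {
  const type = getAddressType(address);
  if (type === 'shielded') {
    throw new Error('Shielded addresses cannot be used as an output script type');
  }
  return type;
}
```

Callers that only branch on the parsed type keep working, while script builders still reject shielded addresses.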

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/address.ts` around lines 23 - 43, getAddressType currently throws
for shielded addresses; change it to classify instead: remove the throw in
getAddressType (and return the literal 'shielded' when addressObj.getType() ===
'shielded'), and widen the function signature to return 'p2pkh' | 'p2sh' |
'shielded' so callers can branch on the parsed type. Leave enforcement of
"transparent-only" in the call sites that build output scripts (instead of here)
— update those script-building callers to check for 'shielded' and throw or
handle as appropriate.

…ge cases

  - Test selectUtxos shielded filter (true/false/undefined) in MemoryStore
  - Test getSignatureForTx uses spend key chain for shielded-spend addresses
  - Test deriveShieldedAddressFromStorage returns null without shielded keys
  - Test read-only wallet gracefully handles shielded outputs

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…rectness

  Security & validation:
  - Fix wrong signing path comment (account 2', not 1')
  - Zero private key buffers after ECDH use (defense-in-depth)
  - Validate recovered shielded value is positive after rewind
  - Throw on missing blindingFactor for shielded inputs
  - Throw on unknown mode in serialize/serializeSighash
  - Guard resolveTokenUid against FullShielded misuse

  Deduplication & code quality:
  - Extract normalizeShieldedOutputs shared utility
  - Replace magic numbers with ShieldedOutputMode enum
  - Move ZERO_TWEAK to shared constants
  - Use typed parameter for isShieldedOutputEntry

  Correctness:
  - Restore missing saveAddress for on-demand legacy derivation
  - Propagate timelock through shielded output creation
  - Only advance shielded cursor for 'shielded' type, not 'shielded-spend'
  - Clarify WS subscription strategy comment
  - Remove shadowed network variable in tests

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@pedroferreira1
Member Author

@coderabbitai once again. All your comments were addressed already. Resolve everything and run a full review

@coderabbitai

coderabbitai bot commented Apr 15, 2026

@pedroferreira1 Understood! I'll resolve all my existing comments now and kick off a fresh full review of the latest changes right away.

✅ Actions performed

Full review triggered.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
__tests__/integration/helpers/wallet.helper.ts (1)

327-338: ⚠️ Potential issue | 🟠 Major

processHistory is invoked too late to prevent the timeout path.

The current flow waits for processingStatus === FINISHED first, but the timeout failures indicate history may need processing before that state is reached. Line 337 won’t run in that failure path.

🩹 Suggested fix
-  while (!storageTx || storageTx.processingStatus !== TxHistoryProcessingStatus.FINISHED) {
+  while (!storageTx || storageTx.processingStatus !== TxHistoryProcessingStatus.FINISHED) {
     if (timeoutReached) {
       break;
     }

+    if (storageTx && storageTx.processingStatus !== TxHistoryProcessingStatus.FINISHED) {
+      await hWallet.storage.processHistory(hWallet.pinCode ?? undefined);
+    }
+
     // Tx not found, wait 1s before trying again
     await delay(1000);
     storageTx = (await hWallet.getTx(txId)) as IHistoryTx;
   }
@@
-    await hWallet.storage.processHistory(hWallet.pinCode || undefined);
+    await hWallet.storage.processHistory(hWallet.pinCode ?? undefined);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/helpers/wallet.helper.ts` around lines 327 - 338, The
timeout happens because processHistory is only called after waiting for
processingStatus === FINISHED, so when the timeout path is taken line invoking
hWallet.storage.processHistory(...) never runs; change the flow so that after
marking inputs spent you always invoke and await
hWallet.storage.processHistory(hWallet.pinCode || undefined) (i.e., call
processHistory immediately after updateInputsSpentBy(hWallet, storageTx)
whenever storageTx.is_voided === false) rather than waiting for processingStatus
to reach FINISHED, ensuring history is processed in the timeout path as well.
♻️ Duplicate comments (12)
src/utils/bigint.ts (1)

29-48: ⚠️ Potential issue | 🟠 Major

Optional chaining here doesn’t prevent missing-context runtime failures.

Line 47 now uses context?.source, but Line 36 still does BigInt(context.source). In runtimes where the reviver context arg is absent, this throws before the SyntaxError branch, so parsing can fail for numeric fields.

Suggested fix
-  bigIntReviver(_key: string, value: any, context: { source: string }): any {
+  bigIntReviver(_key: string, value: any, context?: { source: string }): any {
     if (typeof value !== 'number') {
       // No special handling needed for non-number values.
       return value;
     }
+
+    // Some runtimes do not provide the reviver context argument.
+    if (!context?.source) {
+      if (Number.isInteger(value) && !Number.isSafeInteger(value)) {
+        throw new Error(
+          'JSONBigInt.parse cannot safely recover an unsafe integer when reviver context is unavailable'
+        );
+      }
+      return value;
+    }
 
     try {
       const bigIntValue = BigInt(context.source);
@@
-        (e.message === `Cannot convert ${context?.source} to a BigInt` ||
+        (e.message === `Cannot convert ${context.source} to a BigInt` ||
           e.message === `invalid BigInt syntax`)
Does JSON.parse reviver provide a third `context` argument (with `source`) in Node.js 22, V8, and Hermes/React Native runtimes?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/bigint.ts` around lines 29 - 48, bigIntReviver currently calls
BigInt(context.source) without ensuring context or context.source exists, which
can throw before the catch; change bigIntReviver to first determine a safe input
string for BigInt by checking if context and context.source are present (e.g.,
typeof context?.source === 'string') and use that, otherwise fall back to
String(value) or another safe representation, then call BigInt on that input;
also update the catch branch to compare error messages against generic BigInt
errors without assuming context.source is available (remove context?.source
usage in error message comparisons) so the reviver is safe when the third
argument is missing.
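A context-tolerant reviver along the lines of the suggested fix might look like this; the Node >= 21 `context` argument and the fallback behavior are assumptions to verify against the target runtimes (see the open question above):

```typescript
// Reviver that works whether or not the runtime passes the third
// `context` argument (from the JSON parse-with-source proposal).
function bigIntReviver(_key: string, value: unknown, context?: { source: string }): unknown {
  if (typeof value !== 'number') return value;
  if (!context?.source) {
    // No raw source text available: only safe integers can be trusted as-is.
    if (Number.isInteger(value) && !Number.isSafeInteger(value)) {
      throw new Error('cannot recover unsafe integer without reviver context');
    }
    return value;
  }
  try {
    return BigInt(context.source);
  } catch {
    return value; // e.g. "1.5" is not valid BigInt syntax
  }
}
```

On runtimes with parse-with-source support it would be wired up as `JSON.parse(text, bigIntReviver as any)`; elsewhere it degrades to plain numbers and fails loudly only on unsafe integers.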
__tests__/integration/hathorwallet_facade.test.ts (1)

70-70: ⚠️ Potential issue | 🟡 Minor

Avoid nonce > 0 in mining assertion (flaky).

Line 70 reintroduces a brittle check; nonce can be 0 in valid mined transactions.

✅ Suggested assertion update
-    expect(tx.nonce).toBeGreaterThan(0);
+    expect(tx.nonce).toEqual(expect.any(Number));
+    expect(tx.parents.length).toBeGreaterThan(0);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/hathorwallet_facade.test.ts` at line 70, The test
currently asserts a mined transaction nonce with
expect(tx.nonce).toBeGreaterThan(0), which is flaky because nonce may be 0;
update the assertion for tx.nonce in
__tests__/integration/hathorwallet_facade.test.ts to allow zero (e.g., assert it
is a number and >= 0 or use toBeGreaterThanOrEqual(0)) so the test only fails
for negative or non-numeric nonces; adjust any surrounding comments to reflect
the new expectation.
package.json (1)

17-17: ⚠️ Potential issue | 🟠 Major

Release-blocker: avoid file: runtime dependency in published package.

Line 17 points to a local sibling path, which breaks installs outside this exact checkout layout. Keep this only for local integration branches; before release, replace with a published semver (or an internal registry tarball URL).

🔧 Suggested change
-    "@hathor/ct-crypto-node": "file:../hathor-core/hathor-ct-crypto/npm",
+    "@hathor/ct-crypto-node": "<published-version>",
#!/bin/bash
set -euo pipefail

echo "Runtime dependencies that use file: references"
jq -r '.dependencies // {} | to_entries[] | select(.value | startswith("file:")) | "\(.key)=\(.value)"' package.json

echo
echo 'Where `@hathor/ct-crypto-node` is imported (to confirm runtime need):'
rg -nP "['\"]@hathor/ct-crypto-node['\"]"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@package.json` at line 17, The package.json currently declares a runtime
dependency using a local file reference "@hathor/ct-crypto-node":
"file:../hathor-core/hathor-ct-crypto/npm"; replace that file: reference with a
published semver (or an internal-registry/tarball URL) suitable for releases,
update any lockfile accordingly, and verify no other dependencies use "file:" by
searching for the string and confirming imports of "@hathor/ct-crypto-node" (to
ensure it's a runtime dependency) before publishing.
__tests__/wallet/walletServiceStorageProxy.test.ts (1)

18-21: ⚠️ Potential issue | 🟡 Minor

Add one positive isShieldedOutputEntry path test.

Line 20 locks the helper to false for the whole suite, so shielded-output conversion behavior can regress without detection. Please add a companion case that flips this mock to true and asserts the proxied tx output shape.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/wallet/walletServiceStorageProxy.test.ts` around lines 18 - 21, The
suite globally mocks isShieldedOutputEntry to always return false, so add a new
positive test in __tests__/wallet/walletServiceStorageProxy.test.ts that flips
the mock to return true for that case (e.g., using (isShieldedOutputEntry as
jest.Mock).mockReturnValueOnce(true) or
jest.requireMock(...).isShieldedOutputEntry.mockReturnValueOnce(true)), call the
same proxy logic under test (the wallet storage proxy invocation used in other
tests) and assert the proxied transaction outputs match the shielded-output
shape expected (verify fields/structure that differ when isShieldedOutputEntry
returns true).
src/utils/wallet.ts (2)

710-729: ⚠️ Potential issue | 🟠 Major

Same multisig concern applies here.

generateAccessDataFromSeed unconditionally derives shielded keys for all wallet types. Apply the same walletType === WalletType.P2PKH guard to avoid storing single-sig shielded keys on multisig wallets.

Proposed fix
-    // Derive shielded scan (account 1') and spend (account 2') keys.
-    // Separate from legacy (account 0') so scan key only grants view access.
-    const scanAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
-    const scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
-    const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
-    const spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+    // Derive shielded scan (account 1') and spend (account 2') keys only for P2PKH wallets.
+    let scanXpriv: HDPrivateKey | undefined;
+    let spendXpriv: HDPrivateKey | undefined;
+    if (walletType === WalletType.P2PKH) {
+      const scanAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
+      scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
+      const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
+      spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+    }

     return {
       walletType,
       multisigData,
       xpubkey: xpriv.xpubkey,
       mainKey: encryptedMainKey,
       acctPathKey: encryptedAcctPathKey,
       authKey: encryptedAuthPathKey,
       words: encryptedWords,
       walletFlags: 0,
-      scanXpubkey: scanXpriv.xpubkey,
-      scanMainKey: encryptData(scanXpriv.xprivkey, pin),
-      spendXpubkey: spendXpriv.xpubkey,
-      spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+      ...(scanXpriv && {
+        scanXpubkey: scanXpriv.xpubkey,
+        scanMainKey: encryptData(scanXpriv.xprivkey, pin),
+      }),
+      ...(spendXpriv && {
+        spendXpubkey: spendXpriv.xpubkey,
+        spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+      }),
     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/wallet.ts` around lines 710 - 729, The code in
generateAccessDataFromSeed always derives and stores shielded keys (scanXpriv,
spendXpriv, scanMainKey, spendMainKey) regardless of wallet type; change it to
only derive/encrypt/populate scanXpubkey/scanMainKey and
spendXpubkey/spendMainKey when walletType === WalletType.P2PKH (i.e.,
single-sig). Wrap the existing derivation and the return fields scanXpubkey,
scanMainKey, spendXpubkey, spendMainKey behind that guard and ensure for
non-P2PKH (multisig) those properties are omitted or set to null/undefined
consistently so multisig wallets do not store single-sig shielded keys.

624-637: ⚠️ Potential issue | 🟠 Major

Do not derive single-sig shielded keys for multisig wallets.

The condition if (argXpriv.depth === 0) derives shielded keys regardless of wallet type. For WalletType.MULTISIG, this stores single-sig shielded keys (derived from m/44'/280'/1'/0 and m/44'/280'/2'/0) that one signer controls alone, which is inconsistent with the wallet's multisig signing model.

Proposed fix
-    if (argXpriv.depth === 0) {
+    if (argXpriv.depth === 0 && walletType === WalletType.P2PKH) {
       const scanAcctXpriv = argXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
       const scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
       accessData.scanXpubkey = scanXpriv.xpubkey;
       accessData.scanMainKey = encryptData(scanXpriv.xprivkey, pin);

       const spendAcctXpriv = argXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
       const spendXpriv2 = spendAcctXpriv.deriveNonCompliantChild(0);
       accessData.spendXpubkey = spendXpriv2.xpubkey;
       accessData.spendMainKey = encryptData(spendXpriv2.xprivkey, pin);
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/wallet.ts` around lines 624 - 637, The code currently derives and
stores single-sig shielded keys from argXpriv (using deriveNonCompliantChild
with SHIELDED_SCAN_ACCT_PATH and SHIELDED_SPEND_ACCT_PATH, then encryptData)
regardless of wallet type; change the condition to skip this entire derivation
block when the wallet is a multisig (check the wallet type variable, e.g.,
wallet.type or walletType !== WalletType.MULTISIG) so you do not set
accessData.scanXpubkey, accessData.scanMainKey, accessData.spendXpubkey, or
accessData.spendMainKey for WalletType.MULTISIG; keep the existing behavior for
non-multisig wallets and leave all deriveNonCompliantChild/encryptData calls
unchanged inside the guarded block.
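The conditional-spread idiom used in the suggested fix only adds a property when its guard value is truthy; spreading `undefined` contributes nothing. A minimal standalone sketch (`buildAccessData` and the field values are illustrative, not the real key-derivation code):

```typescript
interface AccessData {
  walletType: 'p2pkh' | 'multisig';
  scanXpubkey?: string;
}

function buildAccessData(walletType: 'p2pkh' | 'multisig'): AccessData {
  // Only single-sig wallets derive a scan key in this sketch.
  const scanXpubkey = walletType === 'p2pkh' ? 'xpub-scan-demo' : undefined;
  return {
    walletType,
    // Spread of `undefined` is a no-op, so the key is omitted entirely
    // rather than stored as an explicit undefined value.
    ...(scanXpubkey && { scanXpubkey }),
  };
}
```

Omitting the key (instead of setting it to `undefined`) keeps serialized access data free of shielded-key fields on multisig wallets.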
src/shielded/processing.ts (1)

130-145: ⚠️ Potential issue | 🟡 Minor

Missing runtime validation for asset_commitment on FullShielded outputs.

The code at line 133 uses a non-null assertion (shieldedOutput.asset_commitment!) but relies only on the mode check. Since asset_commitment is optional in the type definition, a malformed fullnode response with mode === FULLY_SHIELDED but missing asset_commitment would cause a runtime error when calling Buffer.from(undefined, 'hex').

Suggested fix
       if (isFullShielded) {
         // FullShielded: rewind recovers token UID and asset blinding factor
-        // asset_commitment is guaranteed for FullShielded outputs (protocol invariant)
-        const assetCommitment = Buffer.from(shieldedOutput.asset_commitment!, 'hex');
+        // Validate asset_commitment exists (required for FullShielded)
+        if (!shieldedOutput.asset_commitment) {
+          storage.logger.warn(
+            `FullShielded output missing asset_commitment for tx ${tx.tx_id} ` +
+              `output ${transparentCount + idx} — skipping`
+          );
+          continue;
+        }
+        const assetCommitment = Buffer.from(shieldedOutput.asset_commitment, 'hex');
         const result = await cryptoProvider.rewindFullShieldedOutput(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shielded/processing.ts` around lines 130 - 145, Check and validate that
shieldedOutput.asset_commitment is present before using the non-null assertion
in the isFullShielded branch: in the block where you call
cryptoProvider.rewindFullShieldedOutput (inside the isFullShielded condition),
guard against a missing or falsy shieldedOutput.asset_commitment and throw or
return a clear error (or skip processing) if it's undefined/invalid, rather than
calling Buffer.from(shieldedOutput.asset_commitment!, 'hex'); this ensures
recoveredAbf/recoveredTokenUid logic only runs when asset_commitment is a valid
hex string.
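A guard along these lines avoids the `Buffer.from(undefined, 'hex')` crash; the mode constant and response shape below are assumptions for illustration, not the fullnode's actual schema:

```typescript
// Hypothetical slice of the fullnode shielded-output response.
interface ShieldedOutputLike {
  mode: number;
  asset_commitment?: string;
}

const FULLY_SHIELDED = 2; // assumed mode value, for illustration only

function readAssetCommitment(out: ShieldedOutputLike): Buffer | null {
  if (out.mode !== FULLY_SHIELDED) return null;
  if (!out.asset_commitment || !/^[0-9a-f]+$/i.test(out.asset_commitment)) {
    // Malformed response: signal "skip this output" instead of crashing.
    return null;
  }
  return Buffer.from(out.asset_commitment, 'hex');
}
```

The caller can then log a warning and `continue` when `null` comes back, matching the skip-and-warn behavior in the suggested fix.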
src/utils/transaction.ts (1)

708-738: ⚠️ Potential issue | 🟠 Major

Validate FULLY_SHIELDED-specific fields before building the model.

The validation at lines 710-714 only checks common crypto fields. A FULLY_SHIELDED entry with missing assetCommitment or surjectionProof will still construct a ShieldedOutput and fail later during serialization or processing.

🧪 Suggested fix
         const shieldedModels = txData.shieldedOutputs.map(so => {
           if (!so.commitment || !so.rangeProof || !so.script || !so.ephemeralPubkey) {
             throw new Error(
               'Shielded output missing required crypto fields (commitment, rangeProof, script, ephemeralPubkey)'
             );
           }
+          if (
+            so.mode === ShieldedOutputMode.FULLY_SHIELDED &&
+            (!so.assetCommitment || !so.surjectionProof)
+          ) {
+            throw new Error(
+              'Fully shielded output missing required crypto fields (assetCommitment, surjectionProof)'
+            );
+          }
           const tokenData = this.getTokenDataFromOutput(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/transaction.ts` around lines 708 - 738, The shielded output mapping
in the transaction builder currently only validates common crypto fields before
constructing a ShieldedOutput; update the validation inside the
txData.shieldedOutputs.map callback (where ShieldedOutput is constructed) to
also require assetCommitment and surjectionProof when so.mode indicates
FULLY_SHIELDED (or the equivalent constant), throwing a clear Error if those
fields are missing, so the getTokenDataFromOutput/ShieldedOutput construction
won't proceed with incomplete data.
__tests__/integration/shielded_transactions.test.ts (2)

33-36: ⚠️ Potential issue | 🟠 Major

Restore the shared timeout after this suite.

This mutates src/constants at module scope and never restores it, so later integration files in the same worker inherit the 30s Axios timeout and become order-dependent.

🛠️ Suggested fix
 import * as constants from '../../src/constants';
 
-// Increase Axios timeout for test environment — the fullnode is under load from continuous mining.
-// TIMEOUT is declared as `const` so we must cast to override it in tests.
-// eslint-disable-next-line @typescript-eslint/no-explicit-any
-(constants as any).TIMEOUT = 30000;
+const originalTimeout = constants.TIMEOUT;
 
 describe('shielded transactions', () => {
+  beforeAll(() => {
+    // Increase Axios timeout for this suite only — the fullnode is under load from continuous mining.
+    // eslint-disable-next-line @typescript-eslint/no-explicit-any
+    (constants as any).TIMEOUT = 30000;
+  });
+
+  afterAll(() => {
+    // eslint-disable-next-line @typescript-eslint/no-explicit-any
+    (constants as any).TIMEOUT = originalTimeout;
+  });
+
   jest.setTimeout(300_000);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/shielded_transactions.test.ts` around lines 33 - 36,
You mutate the module-scoped constant TIMEOUT on the imported constants object
and never restore it, causing test order dependence; update the test to save the
original value of constants.TIMEOUT before setting (e.g., const originalTimeout
= (constants as any).TIMEOUT), set (constants as any).TIMEOUT = 30000 for this
suite, and restore it in an afterAll/afterEach hook (or a finally block) by
reassigning (constants as any).TIMEOUT = originalTimeout so subsequent tests see
the original timeout.

608-650: ⚠️ Potential issue | 🟡 Minor

Mark custom token shielded tests as skipped until the balance equation is fixed.

The phantom output mechanism creates transparent outputs for UTXO selection but removes them before finalizing the transaction. This causes custom token inputs to exceed transparent outputs, triggering a spurious "melt" error when the fullnode validates the homomorphic balance equation. The shielded output amounts must be integrated into the token balance calculation. Until this is resolved, mark both tests at lines 608-650 and 653-684 with it.skip() and the existing TODO to prevent CI failures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/shielded_transactions.test.ts` around lines 608 - 650,
Mark the two failing tests that exercise custom token shielded outputs as
skipped: change the test declarations for "should send shielded outputs with a
custom token (AmountShielded)" and the companion "should send shielded outputs
with a custom token (ValueShielded)" to use it.skip(...) instead of it(...),
preserving the existing TODO comment about phantom outputs and the
balance-equation bug so CI won't run them until the token balance calculation
(including shielded output amounts) is fixed.
src/types.ts (1)

761-765: ⚠️ Potential issue | 🟠 Major

The shielded xpriv accessor signatures still hide the legacy-wallet absence case.

The docstring says these methods return undefined when the wallet predates shielded support, but getScanXPrivKey() and getSpendXPrivKey() still promise string. That makes the interface lie to callers and defeats the compatibility guard you just documented.

💡 Suggested signature fix
   // Shielded key methods (return undefined if wallet was created before shielded feature)
-  getScanXPrivKey(pinCode: string): Promise<string>;
-  getSpendXPrivKey(pinCode: string): Promise<string>;
+  getScanXPrivKey(pinCode: string): Promise<string | undefined>;
+  getSpendXPrivKey(pinCode: string): Promise<string | undefined>;
   getScanXPubKey(): Promise<string | undefined>;
   getSpendXPubKey(): Promise<string | undefined>;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/types.ts` around lines 761 - 765, Update the shielded xpriv accessor
signatures so they match the documented behavior: change
getScanXPrivKey(pinCode: string) and getSpendXPrivKey(pinCode: string) to return
Promise<string | undefined> (like getScanXPubKey/getSpendXPubKey) so callers can
handle the legacy-wallet (undefined) case.
src/new/wallet.ts (1)

861-866: ⚠️ Potential issue | 🟠 Major

getAddressPathForIndex() is still ambiguous for shielded spend addresses.

Once both transparent and shielded-spend chains exist at the same index, index alone is no longer enough to reconstruct the derivation path. This method defaults to the legacy path unless callers remember to pass opts, but existing helpers in this file still call it with only the index, so shielded UTXOs/input metadata can still expose m/.../0/... instead of m/.../2/....

Please make path resolution address-aware (e.g. derive from addressType/address info) and update the remaining call sites to use that source of truth.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/new/wallet.ts` around lines 861 - 866, getAddressPathForIndex currently
treats index alone as ambiguous for shielded spend addresses; change its
signature to accept an address-aware discriminator (e.g. add an addressType or
address string parameter to IAddressChainOptions) and use that to choose between
the legacy path and the shielded spend path (referencing getAddressPathForIndex,
IAddressChainOptions, and SHIELDED_SPEND_ACCT_PATH). Update all helper call
sites in this file that call getAddressPathForIndex(index) to pass the derived
addressType/address info instead of relying on the default, and ensure the
function returns the correct m/.../2/... shielded spend path when addressType
indicates shielded spend.
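An address-aware resolver could look like the sketch below; the path layout (account 0' for legacy, account 2' for shielded spend, coin type 280') follows the accounts named in this review, and the option shape is hypothetical:

```typescript
type AddressKind = 'legacy' | 'shielded-spend';

// Resolve a BIP-44-style path from index plus an address-kind
// discriminator, instead of assuming the legacy chain.
function getAddressPathForIndex(index: number, kind: AddressKind = 'legacy'): string {
  const account = kind === 'shielded-spend' ? "2'" : "0'";
  return `m/44'/280'/${account}/0/${index}`;
}

console.log(getAddressPathForIndex(5));                   // m/44'/280'/0'/0/5
console.log(getAddressPathForIndex(5, 'shielded-spend')); // m/44'/280'/2'/0/5
```

Call sites would derive `kind` from stored address info (e.g. an `addressType` field) rather than passing it manually, so the same index can never map to the wrong chain.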
🧹 Nitpick comments (1)
__tests__/integration/shielded_transactions.test.ts (1)

1191-1230: Test name doesn't match behavior.

The test claims to verify "read-only (xpub-only) wallet" behavior, but walletB is created with a seed and DEFAULT_PIN_CODE, making it a full wallet capable of decryption. The comment at line 1226-1228 acknowledges this. Either:

  1. Rename the test to reflect what it actually tests (e.g., "should process shielded outputs on receiving wallet"), or
  2. Actually create an xpub-only wallet (no seed/pinCode) to test graceful degradation when decryption is impossible.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/integration/shielded_transactions.test.ts` around lines 1191 -
1230, The test currently creates a full wallet (walletB) with a seed so it can
decrypt shielded outputs; update the test to actually create an
xpub-only/read-only wallet or rename the test. If you choose to make walletB
read-only, call generateWalletHelper in xpub-only/read-only mode (use the
precalculated wallet data from
precalculationHelpers.test.getPrecalculatedWallet() and pass only
preCalculatedAddresses/xpub or the helper's xpubOnly flag instead of
seed/pinCode) so walletB cannot decrypt, then adjust the expected unlocked
balance from 50n to 0n when asserting via walletB.getBalance(NATIVE_TOKEN_UID).
Alternatively, rename the test to reflect that it currently verifies processing
on a full receiving wallet (e.g., "should process shielded outputs on receiving
wallet") and keep the existing walletB creation and expectation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@__tests__/utils/transaction.test.ts`:
- Around line 743-747: The test is pre-deriving child 0 which masks derivation
bugs; instead mock storage.getSpendXPrivKey to return the root spend xpriv (use
spendXpriv.xprivkey) rather than spendXpriv.deriveNonCompliantChild(0).xprivkey,
and change the test's bip32AddressIndex from 0 to a non-zero value (e.g., 5) so
the code path actually performs derivation; update any other similar
mocks/assertions in the block (lines referencing storage.getSpendXPrivKey,
spendXpriv.deriveNonCompliantChild, and bip32AddressIndex) to follow the same
pattern.

In `@src/new/wallet.ts`:
- Around line 3054-3058: fullTx may contain raw shielded outputs (from
getTxById) which are left un-hydrated by the current map and then cast to
IHistoryTx and passed to getTxBalance; getTxBalance expects normalized/decrypted
history-shaped outputs. Fix by normalizing shielded outputs before balance
calculation: when transactionUtils.isShieldedOutputEntry(output) is true,
convert that output into the wallet's internal history shape (the same
normalization/decryption used elsewhere) instead of leaving it raw, or
alternatively skip casting fullTx.tx to IHistoryTx and compute the balance for
shielded entries via a separate shielded-aware path; ensure references to
fullTx, transactionUtils.isShieldedOutputEntry, hydrateWithTokenUid, getTxById
and getTxBalance are updated so getTxBalance only receives normalized
IHistoryTx-shaped data.

In `@src/shielded/provider.ts`:
- Around line 29-47: The createAmountShieldedOutput implementation is missing
TypeScript parameter type annotations required by IShieldedCryptoProvider;
update the signature of createAmountShieldedOutput to explicitly annotate
parameters (value: bigint, recipientPubkey: Buffer, tokenUid: Buffer,
valueBlindingFactor: Buffer) while keeping the existing ICreatedShieldedOutput
return type so the method matches the interface and passes BigInt safely to the
native addon.

In `@src/types.ts`:
- Around line 262-273: The IHistoryInput interface currently requires value,
token_data, script, decoded, and token even though the comment says they may be
absent for shielded inputs; update IHistoryInput so those five properties are
optional (value?, token_data?, script?, decoded?, token?) while keeping tx_id
and index required, and then run/typecheck to find and guard any sites that
assume those fields are always present (e.g., usages of IHistoryInput.value or
.decoded).
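With those five fields optional, call sites need a guard before reading them; a minimal sketch (the interface slice and the summing helper are illustrative):

```typescript
// Hypothetical slice of IHistoryInput with the shielded-optional fields.
interface IHistoryInput {
  tx_id: string;
  index: number;
  // May be absent for shielded inputs:
  value?: bigint;
  token_data?: number;
  script?: string;
  token?: string;
}

function sumTransparentInputs(inputs: IHistoryInput[]): bigint {
  let total = BigInt(0);
  for (const inp of inputs) {
    if (inp.value === undefined) continue; // shielded input: amount is hidden
    total += inp.value;
  }
  return total;
}
```

Making the fields optional forces the compiler to surface every site that currently assumes `value` or `decoded` is present, which is exactly the audit the prompt asks for.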

---

Outside diff comments:
In `@__tests__/integration/helpers/wallet.helper.ts`:
- Around line 327-338: The timeout happens because processHistory is only called
after waiting for processingStatus === FINISHED, so when the timeout path is
taken the line invoking hWallet.storage.processHistory(...) never runs; change the
flow so that after marking inputs spent you always invoke and await
hWallet.storage.processHistory(hWallet.pinCode || undefined) (i.e., call
processHistory immediately after updateInputsSpentBy(hWallet, storageTx)
whenever storageTx.is_voided === false) rather than waiting for processingStatus
to reach FINISHED, ensuring history is processed in the timeout path as well.

---

Duplicate comments:
In `@__tests__/integration/hathorwallet_facade.test.ts`:
- Line 70: The test currently asserts a mined transaction nonce with
expect(tx.nonce).toBeGreaterThan(0), which is flaky because nonce may be 0;
update the assertion for tx.nonce in
__tests__/integration/hathorwallet_facade.test.ts to allow zero (e.g., assert it
is a number and >= 0 or use toBeGreaterThanOrEqual(0)) so the test only fails
for negative or non-numeric nonces; adjust any surrounding comments to reflect
the new expectation.

In `@__tests__/integration/shielded_transactions.test.ts`:
- Around line 33-36: You mutate the module-scoped constant TIMEOUT on the
imported constants object and never restore it, causing test order dependence;
update the test to save the original value of constants.TIMEOUT before setting
(e.g., const originalTimeout = (constants as any).TIMEOUT), set (constants as
any).TIMEOUT = 30000 for this suite, and restore it in an afterAll/afterEach
hook (or a finally block) by reassigning (constants as any).TIMEOUT =
originalTimeout so subsequent tests see the original timeout.
- Around line 608-650: Mark the two failing tests that exercise custom token
shielded outputs as skipped: change the test declarations for "should send
shielded outputs with a custom token (AmountShielded)" and the companion "should
send shielded outputs with a custom token (ValueShielded)" to use it.skip(...)
instead of it(...), preserving the existing TODO comment about phantom outputs
and the balance-equation bug so CI won't run them until the token balance
calculation (including shielded output amounts) is fixed.
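The save/restore pattern asked for in the TIMEOUT comment above can be sketched generically; `constants` here is a local stand-in for the imported module object, and `withSuiteTimeout` is an illustrative name:

```typescript
// Stand-in for the imported constants module whose TIMEOUT the suite mutates.
const constants: { TIMEOUT: number } = { TIMEOUT: 5000 };

// Save the original value, apply the override, and restore it even when the
// suite throws, so subsequent tests never observe the mutated value.
function withSuiteTimeout<T>(ms: number, suite: () => T): T {
  const originalTimeout = constants.TIMEOUT;
  constants.TIMEOUT = ms;
  try {
    return suite();
  } finally {
    constants.TIMEOUT = originalTimeout;
  }
}
```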

In `@__tests__/wallet/walletServiceStorageProxy.test.ts`:
- Around line 18-21: The suite globally mocks isShieldedOutputEntry to always
return false, so add a new positive test in
__tests__/wallet/walletServiceStorageProxy.test.ts that flips the mock to return
true for that case (e.g., using (isShieldedOutputEntry as
jest.Mock).mockReturnValueOnce(true) or
jest.requireMock(...).isShieldedOutputEntry.mockReturnValueOnce(true)), call the
same proxy logic under test (the wallet storage proxy invocation used in other
tests) and assert the proxied transaction outputs match the shielded-output
shape expected (verify fields/structure that differ when isShieldedOutputEntry
returns true).

In `@package.json`:
- Line 17: The package.json currently declares a runtime dependency using a
local file reference "@hathor/ct-crypto-node":
"file:../hathor-core/hathor-ct-crypto/npm"; replace that file: reference with a
published semver (or an internal-registry/tarball URL) suitable for releases,
update any lockfile accordingly, and verify no other dependencies use "file:" by
searching for the string and confirming imports of "@hathor/ct-crypto-node" (to
ensure it's a runtime dependency) before publishing.

In `@src/new/wallet.ts`:
- Around line 861-866: getAddressPathForIndex currently treats index alone as
ambiguous for shielded spend addresses; change its signature to accept an
address-aware discriminator (e.g. add an addressType or address string parameter
to IAddressChainOptions) and use that to choose between the legacy path and the
shielded spend path (referencing getAddressPathForIndex, IAddressChainOptions,
and SHIELDED_SPEND_ACCT_PATH). Update all helper call sites in this file that
call getAddressPathForIndex(index) to pass the derived addressType/address info
instead of relying on the default, and ensure the function returns the correct
m/.../2/... shielded spend path when addressType indicates shielded spend.
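One way the discriminator could look; the derivation path strings below are placeholders for illustration, not the library's actual `SHIELDED_SPEND_ACCT_PATH` value:

```typescript
// Illustrative sketch: an address-type discriminator selects the account
// path. Both path constants are placeholders, not Hathor's real values.
type AddressType = 'legacy' | 'shieldedSpend';

const LEGACY_ACCT_PATH = "m/44'/280'/0'"; // placeholder
const SHIELDED_SPEND_ACCT_PATH = "m/44'/280'/2'"; // placeholder

function getAddressPathForIndex(index: number, addressType: AddressType = 'legacy'): string {
  const base = addressType === 'shieldedSpend' ? SHIELDED_SPEND_ACCT_PATH : LEGACY_ACCT_PATH;
  return `${base}/0/${index}`;
}
```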

In `@src/shielded/processing.ts`:
- Around line 130-145: Check and validate that shieldedOutput.asset_commitment
is present before using the non-null assertion in the isFullShielded branch: in
the block where you call cryptoProvider.rewindFullShieldedOutput (inside the
isFullShielded condition), guard against a missing or falsy
shieldedOutput.asset_commitment and throw or return a clear error (or skip
processing) if it's undefined/invalid, rather than calling
Buffer.from(shieldedOutput.asset_commitment!, 'hex'); this ensures
recoveredAbf/recoveredTokenUid logic only runs when asset_commitment is a valid
hex string.
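A sketch of the guard that would replace the non-null assertion; `ShieldedOutputLike` and `assetCommitmentBuffer` are minimal stand-ins for the real types, not the library's API:

```typescript
// Validate that asset_commitment is a well-formed hex string before calling
// Buffer.from, instead of relying on a non-null assertion.
interface ShieldedOutputLike {
  asset_commitment?: string;
}

function assetCommitmentBuffer(shieldedOutput: ShieldedOutputLike): Buffer {
  const hex = shieldedOutput.asset_commitment;
  if (!hex || hex.length % 2 !== 0 || !/^[0-9a-fA-F]+$/.test(hex)) {
    throw new Error('FullShielded output is missing a valid asset_commitment');
  }
  return Buffer.from(hex, 'hex');
}
```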

In `@src/types.ts`:
- Around line 761-765: Update the shielded xpriv accessor signatures so they
match the documented behavior: change getScanXPrivKey(pinCode: string) and
getSpendXPrivKey(pinCode: string) to return Promise<string | undefined> (like
getScanXPubKey/getSpendXPubKey) so callers can handle the legacy-wallet
(undefined) case.
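A sketch of the widened signatures plus a caller handling the legacy-wallet case; `ShieldedKeyAccess` and `requireScanKey` are illustrative names, not the real interface:

```typescript
// Widened accessor signatures: undefined signals a legacy wallet with no
// shielded keys, matching the xpub accessors. Illustrative names only.
interface ShieldedKeyAccess {
  getScanXPrivKey(pinCode: string): Promise<string | undefined>;
  getSpendXPrivKey(pinCode: string): Promise<string | undefined>;
}

async function requireScanKey(store: ShieldedKeyAccess, pinCode: string): Promise<string> {
  const key = await store.getScanXPrivKey(pinCode);
  if (key === undefined) {
    throw new Error('Legacy wallet: shielded scan key is not available');
  }
  return key;
}
```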

In `@src/utils/bigint.ts`:
- Around line 29-48: bigIntReviver currently calls BigInt(context.source)
without ensuring context or context.source exists, which can throw before the
catch; change bigIntReviver to first determine a safe input string for BigInt by
checking if context and context.source are present (e.g., typeof context?.source
=== 'string') and use that, otherwise fall back to String(value) or another safe
representation, then call BigInt on that input; also update the catch branch to
compare error messages against generic BigInt errors without assuming
context.source is available (remove context?.source usage in error message
comparisons) so the reviver is safe when the third argument is missing.
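A hedged sketch of a tolerant reviver along these lines; `context.source` comes from the JSON.parse source-access feature in recent V8/Node, and older runtimes invoke revivers without the third argument:

```typescript
// Illustrative reviver: pick a safe input for BigInt first, so a missing
// third argument (older runtimes) cannot throw before the try/catch.
function bigIntReviver(key: string, value: unknown, context?: { source?: string }): unknown {
  if (typeof value !== 'number' || Number.isSafeInteger(value)) {
    return value; // only large/imprecise numbers need BigInt handling
  }
  const input = typeof context?.source === 'string' ? context.source : String(value);
  try {
    return BigInt(input);
  } catch {
    return value; // not an integer literal (e.g. a float): keep the number
  }
}
```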

In `@src/utils/transaction.ts`:
- Around line 708-738: The shielded output mapping in the transaction builder
currently only validates common crypto fields before constructing a
ShieldedOutput; update the validation inside the txData.shieldedOutputs.map
callback (where ShieldedOutput is constructed) to also require assetCommitment
and surjectionProof when so.mode indicates FULLY_SHIELDED (or the equivalent
constant), throwing a clear Error if those fields are missing, so the
getTokenDataFromOutput/ShieldedOutput construction won't proceed with incomplete
data.
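A sketch of the stricter per-mode validation inside the map callback; the enum values and field names below mirror the review text but are illustrative, not the library's definitions:

```typescript
// Require the FullyShielded-only fields before construction may proceed.
// Enum values and the interface shape are illustrative placeholders.
enum ShieldedMode { AmountShielded = 1, FullyShielded = 2 }

interface ShieldedOutputData {
  mode: ShieldedMode;
  valueCommitment?: string;
  assetCommitment?: string;
  surjectionProof?: string;
}

function validateShieldedOutput(so: ShieldedOutputData): void {
  if (!so.valueCommitment) {
    throw new Error('Shielded output missing valueCommitment');
  }
  if (so.mode === ShieldedMode.FullyShielded) {
    if (!so.assetCommitment) throw new Error('FullyShielded output missing assetCommitment');
    if (!so.surjectionProof) throw new Error('FullyShielded output missing surjectionProof');
  }
}
```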

In `@src/utils/wallet.ts`:
- Around line 710-729: The code in generateAccessDataFromSeed always derives and
stores shielded keys (scanXpriv, spendXpriv, scanMainKey, spendMainKey)
regardless of wallet type; change it to only derive/encrypt/populate
scanXpubkey/scanMainKey and spendXpubkey/spendMainKey when walletType ===
WalletType.P2PKH (i.e., single-sig). Wrap the existing derivation and the return
fields scanXpubkey, scanMainKey, spendXpubkey, spendMainKey behind that guard
and ensure for non-P2PKH (multisig) those properties are omitted or set to
null/undefined consistently so multisig wallets do not store single-sig shielded
keys.
- Around line 624-637: The code currently derives and stores single-sig shielded
keys from argXpriv (using deriveNonCompliantChild with SHIELDED_SCAN_ACCT_PATH
and SHIELDED_SPEND_ACCT_PATH, then encryptData) regardless of wallet type;
change the condition to skip this entire derivation block when the wallet is a
multisig (check the wallet type variable, e.g., wallet.type or walletType !==
WalletType.MULTISIG) so you do not set accessData.scanXpubkey,
accessData.scanMainKey, accessData.spendXpubkey, or accessData.spendMainKey for
WalletType.MULTISIG; keep the existing behavior for non-multisig wallets and
leave all deriveNonCompliantChild/encryptData calls unchanged inside the guarded
block.
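Both comments reduce to the same gate; a sketch of it, with `maybeDeriveShieldedKeys` and the key shape invented for illustration:

```typescript
// Gate shielded-key derivation on wallet type so multisig wallets never
// store single-sig shielded keys. Names here are illustrative stand-ins.
enum WalletType { P2PKH = 'p2pkh', MULTISIG = 'multisig' }

interface ShieldedKeys { scanXpubkey: string; spendXpubkey: string }

function maybeDeriveShieldedKeys(
  walletType: WalletType,
  derive: () => ShieldedKeys // stands in for the real derivation + encryption
): ShieldedKeys | undefined {
  if (walletType !== WalletType.P2PKH) {
    return undefined; // omit shielded keys entirely for multisig wallets
  }
  return derive();
}
```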

---

Nitpick comments:
In `@__tests__/integration/shielded_transactions.test.ts`:
- Around line 1191-1230: The test currently creates a full wallet (walletB) with
a seed so it can decrypt shielded outputs; update the test to actually create an
xpub-only/read-only wallet or rename the test. If you choose to make walletB
read-only, call generateWalletHelper in xpub-only/read-only mode (use the
precalculated wallet data from
precalculationHelpers.test.getPrecalculatedWallet() and pass only
preCalculatedAddresses/xpub or the helper's xpubOnly flag instead of
seed/pinCode) so walletB cannot decrypt, then adjust the expected unlocked
balance from 50n to 0n when asserting via walletB.getBalance(NATIVE_TOKEN_UID).
Alternatively, rename the test to reflect that it currently verifies processing
on a full receiving wallet (e.g., "should process shielded outputs on receiving
wallet") and keep the existing walletB creation and expectation.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: cc88ebf5-ee9f-4341-a5be-eae6f9b29a42

📥 Commits

Reviewing files that changed from the base of the PR and between 0ccc910 and fd83b54.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (49)
  • __tests__/headers/shielded_outputs.test.ts
  • __tests__/integration/configuration/docker-compose.yml
  • __tests__/integration/configuration/privnet.yml
  • __tests__/integration/hathorwallet_facade.test.ts
  • __tests__/integration/helpers/wallet.helper.ts
  • __tests__/integration/shielded_transactions.test.ts
  • __tests__/models/address.test.ts
  • __tests__/models/network.test.ts
  • __tests__/models/shielded_output.test.ts
  • __tests__/models/transaction.test.ts
  • __tests__/shielded/creation.test.ts
  • __tests__/shielded/processing.test.ts
  • __tests__/storage/memory_store.test.ts
  • __tests__/storage/storage.test.ts
  • __tests__/utils/address.test.ts
  • __tests__/utils/transaction.test.ts
  • __tests__/wallet/walletServiceStorageProxy.test.ts
  • package.json
  • setupTests-integration.js
  • src/constants.ts
  • src/headers/parser.ts
  • src/headers/shielded_outputs.ts
  • src/headers/types.ts
  • src/lib.ts
  • src/models/address.ts
  • src/models/network.ts
  • src/models/shielded_output.ts
  • src/models/transaction.ts
  • src/new/sendTransaction.ts
  • src/new/types.ts
  • src/new/wallet.ts
  • src/schemas.ts
  • src/shielded/creation.ts
  • src/shielded/index.ts
  • src/shielded/processing.ts
  • src/shielded/provider.ts
  • src/shielded/types.ts
  • src/storage/memory_store.ts
  • src/storage/storage.ts
  • src/sync/stream.ts
  • src/types.ts
  • src/utils/address.ts
  • src/utils/bigint.ts
  • src/utils/shieldedAddress.ts
  • src/utils/storage.ts
  • src/utils/transaction.ts
  • src/utils/wallet.ts
  • src/wallet/types.ts
  • src/wallet/walletServiceStorageProxy.ts
💤 Files with no reviewable changes (1)
  • setupTests-integration.js

Comment on lines +743 to +747
jest.spyOn(storage, 'getMainXPrivKey').mockResolvedValue(legacyXpriv.xprivkey);
jest
.spyOn(storage, 'getSpendXPrivKey')
.mockResolvedValue(spendXpriv.deriveNonCompliantChild(0).xprivkey);


⚠️ Potential issue | 🟡 Minor

Strengthen this test to avoid false positives on key derivation.

Line 746 pre-derives child 0 and the test also uses bip32AddressIndex: 0, which can pass even if the production path derivation is wrong. Prefer mocking the root spend xpriv and asserting with a non-zero index.

🔧 Suggested test hardening
-  jest
-    .spyOn(storage, 'getSpendXPrivKey')
-    .mockResolvedValue(spendXpriv.deriveNonCompliantChild(0).xprivkey);
+  jest.spyOn(storage, 'getSpendXPrivKey').mockResolvedValue(spendXpriv.xprivkey);

@@
-        bip32AddressIndex: 0,
+        bip32AddressIndex: 7,

@@
-  expect(sigData.inputSignatures[0].addressIndex).toBe(0);
+  expect(sigData.inputSignatures[0].addressIndex).toBe(7);

Also applies to: 753-777

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@__tests__/utils/transaction.test.ts` around lines 743 - 747, The test is
pre-deriving child 0 which masks derivation bugs; instead mock
storage.getSpendXPrivKey to return the root spend xpriv (use
spendXpriv.xprivkey) rather than spendXpriv.deriveNonCompliantChild(0).xprivkey,
and change the test's bip32AddressIndex from 0 to a non-zero value (e.g., 5) so
the code path actually performs derivation; update any other similar
mocks/assertions in the block (lines referencing storage.getSpendXPrivKey,
spendXpriv.deriveNonCompliantChild, and bip32AddressIndex) to follow the same
pattern.

Comment thread src/new/wallet.ts
Comment thread src/shielded/provider.ts
Comment thread src/types.ts
Replace file: symlink with the published npm package.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@codecov

codecov bot commented Apr 15, 2026

Codecov Report

❌ Patch coverage is 68.03519% with 218 lines in your changes missing coverage. Please review.
✅ Project coverage is 66.46%. Comparing base (0ccc910) to head (658692a).
⚠️ Report is 1 commits behind head on master.

Files with missing lines Patch % Lines
src/utils/storage.ts 44.66% 54 Missing and 3 partials ⚠️
src/new/sendTransaction.ts 26.00% 36 Missing and 1 partial ⚠️
src/new/wallet.ts 23.91% 34 Missing and 1 partial ⚠️
src/shielded/provider.ts 10.00% 18 Missing ⚠️
src/utils/transaction.ts 66.66% 12 Missing ⚠️
src/storage/storage.ts 50.00% 10 Missing ⚠️
src/shielded/creation.ts 86.66% 3 Missing and 5 partials ⚠️
src/models/address.ts 70.83% 7 Missing ⚠️
src/storage/memory_store.ts 89.06% 6 Missing and 1 partial ⚠️
src/models/shielded_output.ts 89.58% 4 Missing and 1 partial ⚠️
... and 7 more

❌ Your patch check has failed because the patch coverage (68.03%) is below the target coverage (70.00%). You can increase the patch coverage or adjust the target coverage.
❌ Your project check has failed because the head coverage (66.46%) is below the target coverage (75.00%). You can increase the head coverage or adjust the target coverage.

❗ There is a different number of reports uploaded between BASE (0ccc910) and HEAD (658692a). Click for more details.

HEAD has 5 fewer uploads than BASE
Flag BASE (0ccc910) HEAD (658692a)
6 1
Additional details and impacted files
@@             Coverage Diff             @@
##           master    #1059       +/-   ##
===========================================
- Coverage   87.93%   66.46%   -21.47%     
===========================================
  Files         114      120        +6     
  Lines        8910     9489      +579     
  Branches     2030     2193      +163     
===========================================
- Hits         7835     6307     -1528     
- Misses       1047     3142     +2095     
- Partials       28       40       +12     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.


  - Make IHistoryInput fields optional for shielded inputs that lack
    value/token/decoded (spending hidden commitments)
  - Add shielded-input guards in balance calculation loops across
    storage.ts, transaction.ts, wallet.ts, and memory_store.ts
  - Normalize shielded outputs in getTxById before getTxBalance call
  - Add explicit type annotations to all provider.ts method parameters
  - Use non-zero bip32AddressIndex in spend-key derivation test to
    avoid false positives

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@pedroferreira1
Member Author

@coderabbitai the last 2 commits address all 4 of your comments. Please resolve everything and run a full review again

@coderabbitai

coderabbitai bot commented Apr 15, 2026

@pedroferreira1 Understood! I'll resolve all my existing comments now and kick off a fresh full review of the latest changes right away.

✅ Actions performed

Full review triggered.

  Shielded transaction tests create many wallets and perform crypto
  operations that take significantly longer than the previous 50-minute
  limit allowed.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  proof domain, idempotent decryption, and address iteration

  - Fix surjection proof domain for FullShielded inputs: use asset_commitment
    (blinded generator) instead of ZERO_TWEAK, matching fullnode verification
  - Fix idempotent shielded decryption: skip re-decryption in processNewTx
    when outputs were already decoded (prevents duplicate append on reprocess)
  - Fix processSingleTx bounds check to account for shielded outputs
  - Fix saveAddress: exclude shielded-spend from legacy cursor tracking (#19)
  - Fix addressIter/addressCount: filter shielded addresses from legacy iteration
  - Fix pinCode || to ?? for nullish coalescing (#26)
  - Fix getCurrentAddress error message with chain type and index (#28)
  - Add FullShielded-to-FullShielded chain integration test
  - Increase CI integration test timeout to 120 minutes
  - Document FullShielded fee constraint (TODO_FIX_30)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Labels

None yet

Projects

Status: In Progress (WIP)

Development

Successfully merging this pull request may close these issues.

1 participant