feat: add limit option for error/warning display in publish command #2634
JivusAyrus merged 10 commits into main from
Conversation
Router-nonroot image scan passed ✅ No security vulnerabilities found in image:
Codecov Report ❌ Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
## main #2634 +/- ##
===========================================
- Coverage 62.90% 39.90% -23.00%
===========================================
Files 244 969 +725
Lines 25826 121820 +95994
Branches 0 5469 +5469
===========================================
+ Hits 16245 48609 +32364
- Misses 8209 71517 +63308
- Partials 1372 1694 +322
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the corresponding setting. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
Walkthrough
Adds a numeric limit option for error/warning display in the publish command.
Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cli/src/commands/subgraph/commands/publish.ts (1)
110-144: ⚠️ Potential issue | 🟠 Major: Make `--limit` require an integer value.

With `[number]`, Commander.js treats this as an optional option-argument that can accept no value and become `true`. Since `Number(true)` evaluates to `1`, `--limit` without an argument silently defaults to `1` instead of the provided default of `50`. Additionally, the current validation accepts decimal numbers like `1.5` even though the value is sent as `int32`. Change to `<number>` to require a value, and add `!Number.isInteger(limit)` to reject non-integers.

Suggested fix

```diff
 command.option(
-  '-l, --limit [number]',
+  '-l, --limit <number>',
   'The maximum number of composition errors, warnings, and deployment errors to display.',
-  '50'
+  '50',
 );
@@
-if (Number.isNaN(limit) || limit <= 0 || limit > limitMaxValue) {
+if (!Number.isInteger(limit) || limit <= 0 || limit > limitMaxValue) {
   program.error(
     pc.red(`The limit must be a valid number between 1 and ${limitMaxValue}. Received: '${options.limit}'`),
   );
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/commands/subgraph/commands/publish.ts` around lines 110 - 144, Change the Commander option definition for limit from an optional argument to a required numeric argument by replacing '[number]' with '<number>' in the command.option call that defines '-l, --limit'; then, in the publish command action where you parse and validate the limit (the const limit = Number(options.limit) block), add an additional check rejecting non-integer values by including !Number.isInteger(limit) alongside the existing NaN/range checks so decimals like 1.5 or a bare `--limit` (which currently becomes true) are rejected.
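The coercion pitfall above is easy to verify in isolation. A minimal sketch of the suggested validation, independent of Commander (`limitMaxValue` is named in the review comment; its value here, the function name, and the parsing path are assumptions for illustration):

```typescript
// Sketch of the integer validation the review suggests.
// limitMaxValue's actual value is not given in the source; 1000 is assumed.
const limitMaxValue = 1000;

function parseLimit(raw: string | boolean): number {
  // With '[number]', a bare `--limit` makes Commander pass `true`, and
  // Number(true) === 1 — silently overriding the intended default of 50.
  // Only switching to '<number>' prevents that; this function additionally
  // rejects non-integers such as '1.5'.
  const limit = Number(raw);
  if (!Number.isInteger(limit) || limit <= 0 || limit > limitMaxValue) {
    throw new Error(
      `The limit must be a valid integer between 1 and ${limitMaxValue}. Received: '${raw}'`,
    );
  }
  return limit;
}
```

Note that `parseLimit(true)` still returns `1`, which is exactly why the fix also changes the option signature rather than relying on validation alone.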
🧹 Nitpick comments (1)
cli/test/publish-schema.test.ts (1)
25-50: Add one test that exercises `--limit` itself.

This helper never appends `--limit`, so the new cases only verify how the command formats an already-truncated response. A regression in CLI parsing/validation or request wiring for the new flag would still pass here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/test/publish-schema.test.ts` around lines 25 - 50, The test helper runPublish never appends the new --limit flag, so tests that intend to exercise the CLI's limit parsing only validate handling of truncated responses; update runPublish to accept an optional limit (e.g., add failOnCompositionError?: boolean; failOnAdmissionWebhookError?: boolean; suppressWarnings?: boolean; limit?: number) and when opts.limit is provided push both '--limit' and String(opts.limit) onto the args array before creating the mock client and invoking program.parseAsync; reference the runPublish function and the args array so tests can pass limit to exercise real CLI flag wiring.
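A minimal sketch of how the helper could wire the flag through, assuming the argument-array shape the prompt describes (the subcommand path, subgraph name, and `--schema` flag are illustrative, not taken from the actual test file):

```typescript
// Hypothetical options bag for the runPublish test helper; `limit` is the new field.
interface RunPublishOpts {
  suppressWarnings?: boolean;
  limit?: number;
}

// Builds the argv that would be handed to program.parseAsync.
function buildPublishArgs(opts: RunPublishOpts = {}): string[] {
  const args = ['subgraph', 'publish', 'products', '--schema', 'schema.graphql'];
  if (opts.suppressWarnings) {
    args.push('--suppress-warnings');
  }
  if (opts.limit !== undefined) {
    // Passing the flag through argv exercises real CLI flag wiring,
    // not just formatting of an already-truncated response.
    args.push('--limit', String(opts.limit));
  }
  return args;
}
```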
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/src/commands/subgraph/commands/publish.ts`:
- Around line 225-230: The truncation warning is using raw response lengths
(resp.compositionErrors.length, etc.) which can misreport items that were
suppressed/not actually rendered; update the three calls to
printTruncationWarning (the one guarded by options.failOnCompositionError and
the other two around lines ~262-266 and ~306-310) to pass a computed
displayedCounts object instead of raw resp counts—compute displayedCounts from
the same branches/flags that control actual output (e.g., respect
--suppress-warnings, whichTable/printing branches, and the program.error paths
that short-circuit printing) so the numbers reflect only items that were
actually rendered, then call printTruncationWarning(displayedCounts, {...}) in
each place.
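One way to read the `displayedCounts` suggestion above: derive the counts from the same flags that gate printing, instead of from the raw response. A hedged sketch (the field names mirror the comment; the response type and function name are assumptions):

```typescript
// Assumed shape of the publish response, based on field names in the review.
interface PublishResponse {
  compositionErrors: unknown[];
  compositionWarnings: unknown[];
  deploymentErrors: unknown[];
}

// Counts only the items that were actually rendered, e.g. respecting
// --suppress-warnings, so the truncation warning cannot misreport.
function computeDisplayedCounts(resp: PublishResponse, suppressWarnings: boolean) {
  return {
    compositionErrors: resp.compositionErrors.length,
    compositionWarnings: suppressWarnings ? 0 : resp.compositionWarnings.length,
    deploymentErrors: resp.deploymentErrors.length,
  };
}
```

The real implementation would also need to account for the `program.error` paths that short-circuit printing, as the comment notes.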
---
Outside diff comments:
In `@cli/src/commands/subgraph/commands/publish.ts`:
- Around line 110-144: Change the Commander option definition for limit from an
optional argument to a required numeric argument by replacing '[number]' with
'<number>' in the command.option call that defines '-l, --limit'; then, in the
publish command action where you parse and validate the limit (the const limit =
Number(options.limit) block), add an additional check rejecting non-integer
values by including !Number.isInteger(limit) alongside the existing NaN/range
checks so decimals like 1.5 or a bare `--limit` (which currently becomes true)
are rejected.
---
Nitpick comments:
In `@cli/test/publish-schema.test.ts`:
- Around line 25-50: The test helper runPublish never appends the new --limit
flag, so tests that intend to exercise the CLI's limit parsing only validate
handling of truncated responses; update runPublish to accept an optional limit
(e.g., add failOnCompositionError?: boolean; failOnAdmissionWebhookError?:
boolean; suppressWarnings?: boolean; limit?: number) and when opts.limit is
provided push both '--limit' and String(opts.limit) onto the args array before
creating the mock client and invoking program.parseAsync; reference the
runPublish function and the args array so tests can pass limit to exercise real
CLI flag wiring.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1652a1f6-9f34-44f1-9aca-62cf975156ab
⛔ Files ignored due to path filters (1)
connect-go/gen/proto/wg/cosmo/platform/v1/platform.pb.go is excluded by !**/*.pb.go, !**/gen/**
📒 Files selected for processing (5)
- cli/src/commands/subgraph/commands/publish.ts
- cli/test/publish-schema.test.ts
- connect/src/wg/cosmo/platform/v1/platform_pb.ts
- controlplane/src/core/bufservices/subgraph/publishFederatedSubgraph.ts
- proto/wg/cosmo/platform/v1/platform.proto
🧹 Nitpick comments (1)
controlplane/test/subgraph/publish-subgraph.test.ts (1)
1369-1371: Prefer using Vitest's `toBeGreaterThanOrEqual` matcher for clearer assertions and better failure messages.

The current boolean comparison pattern works but produces less informative failure messages when tests fail.
♻️ Suggested improvement
```diff
 // When limit is 1, only 1 error should be returned but counts reflects the total
 expect(publishResp.compositionErrors.length).toBe(1);
 // The total count should be equal to or greater than what's returned
-expect(publishResp.counts!.compositionErrors! >= publishResp.compositionErrors.length).toBe(true);
+expect(publishResp.counts!.compositionErrors!).toBeGreaterThanOrEqual(publishResp.compositionErrors.length);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controlplane/test/subgraph/publish-subgraph.test.ts` around lines 1369 - 1371, Replace the boolean comparison assertion with Vitest's clearer matcher: update the assertion that currently checks "expect(publishResp.counts!.compositionErrors! >= publishResp.compositionErrors.length).toBe(true)" to use toBeGreaterThanOrEqual so it reads as an explicit numeric comparison between publishResp.counts!.compositionErrors! and publishResp.compositionErrors.length; keep the existing first assertion for publishResp.compositionErrors.length and only change the second to use expect(publishResp.counts!.compositionErrors!).toBeGreaterThanOrEqual(publishResp.compositionErrors.length) to improve failure messages.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@controlplane/test/subgraph/publish-subgraph.test.ts`:
- Around line 1369-1371: Replace the boolean comparison assertion with Vitest's
clearer matcher: update the assertion that currently checks
"expect(publishResp.counts!.compositionErrors! >=
publishResp.compositionErrors.length).toBe(true)" to use toBeGreaterThanOrEqual
so it reads as an explicit numeric comparison between
publishResp.counts!.compositionErrors! and publishResp.compositionErrors.length;
keep the existing first assertion for publishResp.compositionErrors.length and
only change the second to use
expect(publishResp.counts!.compositionErrors!).toBeGreaterThanOrEqual(publishResp.compositionErrors.length)
to improve failure messages.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: af8e4232-8737-402d-a4eb-614d44dfddf6
📒 Files selected for processing (2)
- cli/src/commands/subgraph/commands/publish.ts
- controlplane/test/subgraph/publish-subgraph.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- cli/src/commands/subgraph/commands/publish.ts
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/src/commands/subgraph/commands/publish.ts`:
- Around line 110-114: The --limit option is defined as an optional value
('[number]') which allows boolean/float values; change the option signature to
required-value ('-l, --limit <number>') and ensure the parsed value is converted
to a Number and validated with Number.isInteger and the existing min/max checks
(the option is declared via command.option and later validated), rejecting
non-integers (e.g., fractional "3.5") and preserving the default 50 when absent;
update the parsing/validation logic that currently only checks isNaN/range to
also assert Number.isInteger(parsedLimit) and throw or exit with a clear error
if it fails.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5061365e-c7cc-4e1f-8a4c-e636e3323847
⛔ Files ignored due to path filters (1)
connect-go/gen/proto/wg/cosmo/platform/v1/platform.pb.go is excluded by !**/*.pb.go, !**/gen/**
📒 Files selected for processing (3)
- cli/src/commands/subgraph/commands/publish.ts
- connect/src/wg/cosmo/platform/v1/platform_pb.ts
- proto/wg/cosmo/platform/v1/platform.proto
🚧 Files skipped from review as they are similar to previous changes (1)
- proto/wg/cosmo/platform/v1/platform.proto
|
We should probably document the |
Actionable comments posted: 1
🧹 Nitpick comments (1)
controlplane/src/core/bufservices/subgraph/publishFederatedSubgraph.ts (1)
757-767: Consider the counts consistency in the composition error path.

When composition fails, `counts.deploymentErrors` reflects `deploymentErrors.length` from `subgraphRepo.update()`, but the response returns `deploymentErrors: []`. This is fine for backward compatibility, but clients may find it unexpected that `counts.deploymentErrors` could be non-zero while the returned `deploymentErrors` array is empty.

If this is intentional (showing total counts regardless of what's displayed), consider documenting this behavior. If counts should only reflect what's returned, then `deploymentErrors: 0` would be more consistent here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controlplane/src/core/bufservices/subgraph/publishFederatedSubgraph.ts` around lines 757 - 767, When returning early on composition failure in publishFederatedSubgraph (the branch checking compositionErrors.length > 0), ensure counts and the returned deploymentErrors are consistent: either set counts.deploymentErrors = 0 before the return so it matches the returned deploymentErrors: [] (preferred), or return the actual deploymentErrors array from subgraphRepo.update() instead of an empty array; update the code that computes or overrides counts (the variable named counts) in this branch to reflect the chosen behavior.
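The preferred option in the comment (zeroing `counts.deploymentErrors` on the early return) could look roughly like this; the response shape and function name are assumptions based on the field names in the review, not the actual handler code:

```typescript
// Assumed counts shape, named after the fields discussed in the review.
interface Counts {
  compositionErrors: number;
  deploymentErrors: number;
}

// On composition failure, keep counts consistent with the empty array returned.
function buildCompositionFailureResponse(compositionErrors: string[], counts: Counts) {
  return {
    compositionErrors,
    deploymentErrors: [] as string[],
    // Zeroed so clients never see a non-zero count alongside an empty array.
    counts: { ...counts, deploymentErrors: 0 },
  };
}
```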
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@controlplane/test/subgraph/publish-subgraph.test.ts`:
- Around line 1311-1371: The test currently doesn't verify that the limit
actually truncated results because the fixture might only produce one
composition error; update the assertion after the publishFederatedSubgraph call
(where publishResp is created) to ensure the total reported count is strictly
greater than the number of returned errors (e.g., assert
publishResp.counts!.compositionErrors! > publishResp.compositionErrors.length)
so the test fails if limit had no truncation effect; alternatively, if you
prefer to guarantee multiple errors from the fixture, expand the schema passed
to client.publishFederatedSubgraph for subgraphName2 to include multiple
conflicting definitions so publishResp.counts!.compositionErrors! > 1 and then
assert counts is greater than the returned compositionErrors length.
---
Nitpick comments:
In `@controlplane/src/core/bufservices/subgraph/publishFederatedSubgraph.ts`:
- Around line 757-767: When returning early on composition failure in
publishFederatedSubgraph (the branch checking compositionErrors.length > 0),
ensure counts and the returned deploymentErrors are consistent: either set
counts.deploymentErrors = 0 before the return so it matches the returned
deploymentErrors: [] (preferred), or return the actual deploymentErrors array
from subgraphRepo.update() instead of an empty array; update the code that
computes or overrides counts (the variable named counts) in this branch to
reflect the chosen behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1da8d43e-5fb7-4b4e-8a4d-f07e35b0319c
📒 Files selected for processing (2)
- controlplane/src/core/bufservices/subgraph/publishFederatedSubgraph.ts
- controlplane/test/subgraph/publish-subgraph.test.ts
🧹 Nitpick comments (3)
cli/test/publish-schema.test.ts (2)
250-270: Test could also verify no warning truncation message is shown.

The test verifies the warnings table isn't displayed when `--suppress-warnings` is set, but it doesn't explicitly assert that the truncation warning doesn't mention composition warnings. Consider adding an assertion to confirm no "composition warnings" appears in any truncation message.

Suggested additional assertion

```diff
 const warningTableCalls = logSpy.mock.calls.filter(
   ([arg]) => typeof arg === 'string' && arg.includes('warnings were produced'),
 );
 expect(warningTableCalls).toHaveLength(0);
+
+// Verify truncation warning doesn't mention composition warnings when suppressed
+const truncationCalls = logSpy.mock.calls.filter(
+  ([arg]) => typeof arg === 'string' && arg.includes('truncated'),
+);
+expect(truncationCalls).toHaveLength(0);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/test/publish-schema.test.ts` around lines 250 - 270, The test 'does not show warnings truncation when suppressWarnings is set' currently only asserts no "warnings were produced" truncation calls; update the test (the call to runPublish and subsequent checks using logSpy and warningTableCalls) to also assert that no truncation message mentions "composition warnings" (e.g., filter logSpy.mock.calls for strings containing "composition warnings" or similar truncation phrasing and expect length 0) so the test explicitly verifies suppression hides composition warning truncation text.
86-101: Consider enabling `restoreMocks: true` in Vitest config.

The test setup spies on `console.log`, `process.stderr.write`, and `process.exit` but doesn't explicitly restore them in `afterEach`. While this may work if Vitest is configured to auto-restore mocks, explicitly adding `restoreMocks: true` to `vite.config.ts` (as suggested in a prior review) would be more robust and eliminate the need for manual restoration across all test files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/test/publish-schema.test.ts` around lines 86 - 101, The tests in publish-schema.test.ts call vi.spyOn(console, 'log'), vi.spyOn(process.stderr, 'write') and vi.spyOn(process, 'exit') but rely on global restoration; enable automatic mock restoration by setting restoreMocks: true in the Vitest configuration (test.restoreMocks = true) so vi.spyOn mocks are reset between tests; after adding this setting you can drop any manual restore logic tied to these spies in tests (they will be auto-restored), ensuring consistent cleanup for tests that use vi.spyOn.

cli/vite.config.ts (1)
11-11: Verify that `restoreMocks` is the intended scope.

`restoreMocks` is broader than just clearing leaked mock calls: Vitest runs `.mockRestore()` before each test, which also restores original spy implementations/descriptors. If this package only needs per-test call isolation, `clearMocks` is the less disruptive setting. (v3.vitest.dev)

Optional narrower config

```diff
- restoreMocks: true,
+ clearMocks: true,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/vite.config.ts` at line 11, The config currently sets restoreMocks: true which restores original implementations before each test; if you only need per-test call isolation swap it to clearMocks: true in the Vite/Vitest config (replace the restoreMocks property with clearMocks: true in the exported config). If you actually need restoring of spies/implementations keep restoreMocks but add a comment explaining why; reference the restoreMocks / clearMocks keys in vite.config.ts to make the change.
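For reference, the narrower setting is a one-line change in the `test` block. A sketch of how `cli/vite.config.ts` might look (the rest of the file's contents are assumed; this is a config fragment, not the actual file):

```typescript
// cli/vite.config.ts — sketch only; surrounding options omitted.
export default {
  test: {
    // clearMocks resets mock state and call history before each test;
    // restoreMocks would additionally run .mockRestore(), reinstating the
    // original implementations of anything created with vi.spyOn.
    clearMocks: true,
  },
};
```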
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@cli/test/publish-schema.test.ts`:
- Around line 250-270: The test 'does not show warnings truncation when
suppressWarnings is set' currently only asserts no "warnings were produced"
truncation calls; update the test (the call to runPublish and subsequent checks
using logSpy and warningTableCalls) to also assert that no truncation message
mentions "composition warnings" (e.g., filter logSpy.mock.calls for strings
containing "composition warnings" or similar truncation phrasing and expect
length 0) so the test explicitly verifies suppression hides composition warning
truncation text.
- Around line 86-101: The tests in publish-schema.test.ts call vi.spyOn(console,
'log'), vi.spyOn(process.stderr, 'write') and vi.spyOn(process, 'exit') but rely
on global restoration; enable automatic mock restoration by setting
restoreMocks: true in the Vitest configuration (test.restoreMocks = true) so
vi.spyOn mocks are reset between tests; after adding this setting you can drop
any manual restore logic tied to these spies in tests (they will be
auto-restored), ensuring consistent cleanup for tests that use vi.spyOn.
In `@cli/vite.config.ts`:
- Line 11: The config currently sets restoreMocks: true which restores original
implementations before each test; if you only need per-test call isolation swap
it to clearMocks: true in the Vite/Vitest config (replace the restoreMocks
property with clearMocks: true in the exported config). If you actually need
restoring of spies/implementations keep restoreMocks but add a comment
explaining why; reference the restoreMocks / clearMocks keys in vite.config.ts
to make the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1f62b58a-e4bb-4ea2-8477-1b57fe671c7f
📒 Files selected for processing (4)
- cli/src/commands/subgraph/commands/publish.ts
- cli/test/check-schema.test.ts
- cli/test/publish-schema.test.ts
- cli/vite.config.ts
💤 Files with no reviewable changes (1)
- cli/test/check-schema.test.ts
Summary by CodeRabbit
New Features
Tests
Checklist
COMPLETES ENG-9168