Skip to content

feat: track failed rules in AnalysisResult#64

Merged
let-sunny merged 3 commits intomainfrom
feat/track-failed-rules
Mar 25, 2026
Merged

feat: track failed rules in AnalysisResult#64
let-sunny merged 3 commits intomainfrom
feat/track-failed-rules

Conversation

@let-sunny
Copy link
Copy Markdown
Owner

@let-sunny let-sunny commented Mar 25, 2026

Summary

  • Add RuleFailure type and failedRules: RuleFailure[] to AnalysisResult — rule execution errors are now collected instead of silently swallowed via console.error
  • Include failedRules in buildResultJson output when non-empty, so CLI --json and MCP consumers can detect incomplete analysis
  • Export RuleFailure type from browser entry point for web app / plugin consumers

Changes

File Change
rule-engine.ts Add RuleFailure interface, collect failures in traverseAndCheck, return in AnalysisResult
scoring.ts buildResultJson includes failedRules when array is non-empty
browser.ts Export RuleFailure type
rule-engine.test.ts Updated error resilience test to verify failedRules tracking; added empty case test
scoring.test.ts Updated mock to include failedRules: []
analysis-agent.test.ts Updated mock to include failedRules: []

Test plan

  • pnpm lint — type check passes
  • pnpm test:run — all 427 tests pass (28 files)
  • pnpm build — production build succeeds
  • Error resilience test verifies failedRules contains failure details (ruleId, nodeName, nodeId, error message)
  • Normal analysis returns failedRules: []

Closes #49

🤖 Generated with Claude Code

Summary by CodeRabbit

Release Notes

  • New Features

    • Rule failures are now captured and returned in analysis results instead of only being logged to console, enabling programmatic error handling and reporting.
    • New RuleFailure type exported for API consumers, providing structured access to failed rule details.
  • Bug Fixes

    • Analysis output now conditionally includes failedRules data when rule errors occur, improving error visibility and result completeness.

…wing

Rule execution errors were caught and logged to console.error but never
surfaced to consumers. This meant users could get incomplete analysis
without knowing it — a silent failure that undermines trust in results.

Add `failedRules: RuleFailure[]` to AnalysisResult, collected during
traversal. Included in JSON output (buildResultJson) when non-empty.
Exported RuleFailure type via browser entry point.

Closes #49

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai bot commented Mar 25, 2026

Warning

Rate limit exceeded

@let-sunny has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 4 minutes and 13 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: a0320e3e-2951-4ce9-9165-0f153480f1e0

📥 Commits

Reviewing files that changed from the base of the PR and between 58242ff and bea45d1.

📒 Files selected for processing (1)
  • .coderabbit.yaml
📝 Walkthrough

Walkthrough

The changes implement rule execution failure tracking within the analysis engine. A new RuleFailure interface captures details of failed rules (ruleId, nodeName, nodeId, error), the AnalysisResult structure gains a failedRules array field, and error handling logic now records failures instead of solely logging them. The feature propagates through type exports, test helpers, and JSON serialization.

Changes

Cohort / File(s) Summary
Rule Engine Core Implementation
src/core/engine/rule-engine.ts
Introduced new exported RuleFailure interface; extended AnalysisResult with failedRules: RuleFailure[] field; replaced console.error logging with error capture logic that records rule failures into the array during traversal.
Result Serialization
src/core/engine/scoring.ts
Updated buildResultJson to conditionally include failedRules in the returned JSON object only when it contains entries.
Type Exports
src/browser.ts
Added RuleFailure type to the re-exported type list from ./core/engine/rule-engine.js.
Rule Engine Tests
src/core/engine/rule-engine.test.ts
Updated error resilience test to validate failedRules array population instead of console error logging; added test case verifying empty failedRules when no rules fail.
Test Helpers
src/core/engine/scoring.test.ts, src/agents/analysis-agent.test.ts
Updated makeResult and createMockResult test helpers to include failedRules: [] in constructed AnalysisResult objects.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Possibly related PRs

  • PR #58 — Modifies rule-engine error-resilience behavior and its tests; directly aligns with this PR's replacement of console error logging with failure tracking.

Poem

🐰 A rabbit's ode to better debugging
Rules that stumble, now we see,
Caught and tracked so carefully,
No more silent errors creeping—
Failures logged, transparency beeping! 📋✨

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (2 warnings)

Check name Status Explanation Resolution
Linked Issues check ⚠️ Warning The PR partially implements issue #49 objectives: it successfully tracks rule failures in AnalysisResult with ruleId, error message, nodeName, and nodeId. However, it omits surface failures in reports/UI warning and telemetry/events for rule execution failures. Complete the implementation by adding UI warning display in HTML reports when rules fail and adding telemetry events (e.g., RULE_EXECUTION_FAILED) to monitor rule execution failures in production.
Docstring Coverage ⚠️ Warning Docstring coverage is 33.33% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'feat: track failed rules in AnalysisResult' accurately and concisely describes the primary change—adding failure tracking to capture rule execution errors in the AnalysisResult structure.
Out of Scope Changes check ✅ Passed All changes are directly related to rule failure tracking: rule-engine updates to capture failures, AnalysisResult type extension, type exports, scoring JSON output, and test updates. No unrelated changes detected.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch feat/track-failed-rules

Comment @coderabbitai help to get the list of available commands and usage tips.

let-sunny and others added 2 commits March 25, 2026 20:49
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
TypeScript strict mode types serve as documentation; 80% default
was triggering false warnings on well-typed code.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@let-sunny let-sunny merged commit f8a395a into main Mar 25, 2026
3 checks passed
@let-sunny let-sunny deleted the feat/track-failed-rules branch March 25, 2026 11:58
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

feat: 룰 실행 실패 추적 및 리포트에 포함

1 participant