
feat: adjust rule scores based on AI implementation interview (#200) #203

Merged: let-sunny merged 8 commits into main from feat/200-interview-score-adjustments on Mar 30, 2026

Conversation

@let-sunny (Owner) commented Mar 30, 2026

Closes #200

Summary

  • Applies the 4 rule score/severity adjustments derived from the AI implementation interview (interview with ai #200)
  • non-semantic-name: -1 → -4, suggestion → risk (non-semantic names block element purpose identification)
  • irregular-spacing: -2 → -5, missing-info → risk (irregular spacing actively causes errors)
  • variant-structure-mismatch: -4 → -6 (MO/PC variant matching cost)
  • missing-interaction-state: -3 → -5 (wrong guess > omission)
  • Category "minor" → "semantic" (name contradicted elevated severity)
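Taken together, the four adjustments could be sketched as below. The TypeScript shape and the `ADJUSTED_DEFAULTS` name are invented for illustration (the real entries live in src/core/rules/rule-config.ts and may be shaped differently); only the numeric values and severities stated above come from the PR.

```typescript
// Illustrative sketch only: the real rule-config.ts shape may differ.
type Severity = "suggestion" | "missing-info" | "risk" | "blocking";

interface RuleAdjustment {
  score: number;       // new default score
  severity?: Severity; // new severity, where it changed
}

// Values from the PR summary; unchanged severities are omitted.
const ADJUSTED_DEFAULTS: Record<string, RuleAdjustment> = {
  "non-semantic-name": { score: -4, severity: "risk" },   // was -1 / suggestion
  "irregular-spacing": { score: -5, severity: "risk" },   // was -2 / missing-info
  "variant-structure-mismatch": { score: -6 },            // was -4
  "missing-interaction-state": { score: -5 },             // was -3
};
```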

What's NOT in this PR

  • /calibrate-loop run to validate score-change impact (deferred to a follow-up)

Test plan

  • pnpm test:run — 673/673 passed
  • CUSTOMIZATION.md sync test passed
  • Full codebase grep for "minor" remnants — clean

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Configuration

    • Increased severity and adjusted scores for several rules (variant-structure-mismatch, irregular-spacing, missing-interaction-state, non-semantic-name) and reclassified naming-related rules from "minor" to "semantic".
  • Documentation

    • Updated customization guide, examples, README, and skill/docs to reflect new severities, scores, and the "Semantic" grouping.
  • Contracts / UI

    • Replaced "minor" with "semantic" in category schema, labels, descriptions, and related comments.
  • Tests

    • Updated test fixtures and expectations to use "semantic" instead of "minor".
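The "minor" to "semantic" rename described above might look roughly like this in the category contract. The exact schema in src/core/contracts/category.ts is not reproduced on this page, so the type name and label strings here are assumptions; the six category keys are taken from the PR's test expectations.

```typescript
// Hypothetical sketch of the rename in src/core/contracts/category.ts.
export type RuleCategory =
  | "pixel-critical"
  | "responsive-critical"
  | "code-quality"
  | "token-management"
  | "interaction"
  | "semantic"; // previously "minor"

// Label strings are assumed for illustration.
export const CATEGORY_LABELS: Record<RuleCategory, string> = {
  "pixel-critical": "Pixel Critical",
  "responsive-critical": "Responsive Critical",
  "code-quality": "Code Quality",
  "token-management": "Token Management",
  interaction: "Interaction",
  semantic: "Semantic", // previously "Minor"
};
```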

@coderabbitai bot (Contributor) commented Mar 30, 2026

Warning

Rate limit exceeded

@let-sunny has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 3 minutes and 29 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 3 minutes and 29 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 2d366bfb-2e6f-4783-bf48-2d8516698936

📥 Commits

Reviewing files that changed from the base of the PR and between 67e289a and 0d7c189.

📒 Files selected for processing (1)
  • src/core/engine/scoring.ts
📝 Walkthrough

This PR renames the "minor" rule category to "semantic" and updates category mappings, rule metadata, default scores, and severities across configs, tests, docs, and UI constants.

Changes

Cohort / File(s) | Summary

  • Core Category & Config (src/core/contracts/category.ts, src/core/rules/rule-config.ts, src/core/ui-constants.ts)
    Replaced "minor" with "semantic" in category schema/labels/descriptions and RULE_ID_CATEGORY; adjusted default severities/scores for several rules (e.g., variant-structure-mismatch -4 → -6, irregular-spacing missing-info → risk and -2 → -5, missing-interaction-state -3 → -5, non-semantic-name suggestion → risk and -1 → -4).
  • Rule Definitions (src/core/rules/naming/index.ts)
    Three naming rules (non-semantic-name, non-standard-naming, inconsistent-naming-convention) reclassified to category "semantic".
  • Rule Configs & Docs (docs/CUSTOMIZATION.md, README.md, .claude/agents/calibration/converter.md, .claude/skills/canicode/SKILL.md, CLAUDE.md, src/core/contracts/rule.ts)
    Documentation, examples, and comment strings updated to replace "Minor" with "Semantic" and reflect new defaults and labels; the designer prompt example list now includes semantic.
  • Tests & Fixtures (src/core/engine/scoring.test.ts, src/core/engine/integration.test.ts, src/core/report-html/render.test.ts, src/core/rules/naming/*.test.ts, src/agents/orchestrator.test.ts, src/cli/commands/internal/rule-discovery.test.ts)
    Test fixtures and assertions updated to use category: "semantic" and to expect semantic keys/tabs where "minor" was previously used.
  • Comments / Docs in Code (src/core/engine/scoring.ts)
    Inline example comments updated to use semantic -4 examples (no functional scoring changes).

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~35 minutes


Poem

🐰 I hopped through docs and rule arrays,
Switched "minor" labels for semantic days.
Scores nudged deeper, names realigned,
A rabbit cheers for tidy design. 🥕✨

🚥 Pre-merge checks: ✅ 2 passed

  • Description Check: ✅ Passed (check skipped; CodeRabbit's high-level summary is enabled)
  • Title Check: ✅ Passed (the title accurately summarizes the main change, adjusting rule scores based on interview findings, and directly reflects the primary objective of the PR)


@let-sunny marked this pull request as ready for review March 30, 2026 14:22
@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
docs/CUSTOMIZATION.md (1)

148-150: 🧹 Nitpick | 🔵 Trivial

Example config now shows a less-strict override for non-semantic-name.

The "Strict" example sets "non-semantic-name": { "score": -3, "severity": "risk" }. With the new default of -4, this example now demonstrates a relaxation (from -4 to -3) rather than the previous stricter override (from -1 to -3).

This may be intentional to show that overrides work in both directions, but it contradicts the "Strict (for production-ready designs)" label. Consider updating the example score to something stricter than -4 (e.g., -6) to maintain the "strict" semantics, or add a comment clarifying the intent.

📝 Suggested fix to maintain "strict" semantics
 {
   "rules": {
     "no-auto-layout": { "score": -15 },
     "raw-value": { "score": -5, "severity": "risk" },
-    "non-semantic-name": { "score": -3, "severity": "risk" }
+    "non-semantic-name": { "score": -6, "severity": "blocking" }
   }
 }
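A user override such as the "Strict" example presumably shallow-merges over the shipped defaults on a per-rule basis, which is why an override can relax as well as tighten a rule. The `mergeRules` helper below is a hypothetical sketch of that behavior under that assumption, not the project's actual implementation.

```typescript
// Hypothetical merge sketch; the names mergeRules and DEFAULTS are invented.
interface RuleSettings {
  score: number;
  severity: string;
}

const DEFAULTS: Record<string, RuleSettings> = {
  // New default from this PR.
  "non-semantic-name": { score: -4, severity: "risk" },
};

// A "strict" user override, tightening the rule beyond the default.
const userConfig: Record<string, Partial<RuleSettings>> = {
  "non-semantic-name": { score: -6, severity: "blocking" },
};

function mergeRules(
  defaults: Record<string, RuleSettings>,
  overrides: Record<string, Partial<RuleSettings>>,
): Record<string, RuleSettings> {
  const merged: Record<string, RuleSettings> = { ...defaults };
  for (const [ruleId, override] of Object.entries(overrides)) {
    // Per-rule shallow merge: override fields win over default fields.
    merged[ruleId] = { ...merged[ruleId], ...override };
  }
  return merged;
}

const effective = mergeRules(DEFAULTS, userConfig);
```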
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/CUSTOMIZATION.md` around lines 148 - 150, The example in
docs/CUSTOMIZATION.md currently shows "non-semantic-name": { "score": -3,
"severity": "risk" } which is less strict than the new default (-4) and
therefore contradicts the "Strict" label; update the example so the override is
stricter than the default (e.g., change the score for "non-semantic-name" to -6)
or alternatively add a short inline comment after the "non-semantic-name" entry
clarifying that this example intentionally relaxes or tightens rules; edit the
example block containing "raw-value" and "non-semantic-name" to reflect the
chosen fix.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/core/rules/rule-config.ts`:
- Around line 144-148: The rule "non-semantic-name" now has severity: "risk" but
remains listed under the Minor category in RULE_ID_CATEGORY; either move
"non-semantic-name" to a higher-impact category in RULE_ID_CATEGORY to match its
elevated severity, or add a brief comment/docstring near RULE_ID_CATEGORY
explaining why this rule intentionally resides in Minor despite risk severity
(reference the RULE_ID_CATEGORY structure and the "non-semantic-name" entry to
locate the change).

---

Outside diff comments:
In `@docs/CUSTOMIZATION.md`:
- Around line 148-150: The example in docs/CUSTOMIZATION.md currently shows
"non-semantic-name": { "score": -3, "severity": "risk" } which is less strict
than the new default (-4) and therefore contradicts the "Strict" label; update
the example so the override is stricter than the default (e.g., change the score
for "non-semantic-name" to -6) or alternatively add a short inline comment after
the "non-semantic-name" entry clarifying that this example intentionally relaxes
or tightens rules; edit the example block containing "raw-value" and
"non-semantic-name" to reflect the chosen fix.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: f51188ba-f3d1-446e-a767-c20f55510d93

📥 Commits

Reviewing files that changed from the base of the PR and between 3c82e96 and 7d82214.

📒 Files selected for processing (3)
  • .claude/settings.json
  • docs/CUSTOMIZATION.md
  • src/core/rules/rule-config.ts

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/core/rules/rule-config.ts`:
- Line 145: Remove the inline calibration rationale appended to the severity
line (the trailing comment after severity: "risk") in rule-config's
auto-calibrated block; leave the code line comment-free (e.g., keep only
severity: "risk",) and move the explanatory rationale into the PR description or
calibration evidence artifacts instead, and ensure no other inline rationale
comments are added to the auto-calibrated entries such as the rule-config.ts
severity declarations.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 509f4881-7608-423b-840b-f470f59d60f5

📥 Commits

Reviewing files that changed from the base of the PR and between 7d82214 and 54a77a7.

📒 Files selected for processing (2)
  • docs/CUSTOMIZATION.md
  • src/core/rules/rule-config.ts

  "non-semantic-name": {
-   severity: "suggestion",
-   score: -1,
+   severity: "risk", // category is "minor" (ΔV < 2%) but severity elevated per interview (#200): non-semantic names block element purpose identification
@coderabbitai bot (Contributor) commented:

🧹 Nitpick | 🔵 Trivial

Avoid inline calibration rationale comments in this auto-calibrated config file.

Line 145 adds rationale text that is likely to create churn in a file frequently modified by calibration automation; keep rationale in PR/evidence artifacts instead.

♻️ Proposed cleanup
-    severity: "risk", // category is "minor" (ΔV < 2%) but severity elevated per interview (`#200`): non-semantic names block element purpose identification
+    severity: "risk",

Based on learnings: "src/core/rules/rule-config.ts is automatically adjusted by a nightly calibration pipeline... Do NOT suggest adding inline comments to this file for calibration rationale".

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- severity: "risk", // category is "minor" (ΔV < 2%) but severity elevated per interview (#200): non-semantic names block element purpose identification
+ severity: "risk",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/rules/rule-config.ts` at line 145, Remove the inline calibration
rationale appended to the severity line (the trailing comment after severity:
"risk") in rule-config's auto-calibrated block; leave the code line comment-free
(e.g., keep only severity: "risk",) and move the explanatory rationale into the
PR description or calibration evidence artifacts instead, and ensure no other
inline rationale comments are added to the auto-calibrated entries such as the
rule-config.ts severity declarations.

let-sunny and others added 4 commits March 30, 2026 23:45
4 rules adjusted per real implementation experience:
- non-semantic-name: -1 → -4, suggestion → risk (causes actual impl errors)
- irregular-spacing: -2 → -5, missing-info → risk (actively causes errors)
- variant-structure-mismatch: -4 → -6 (MO/PC variant matching cost)
- missing-interaction-state: -3 → -5 (wrong guess > omission)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
"minor" implies low importance, contradicts non-semantic-name's elevated
severity (risk) and score (-4). "semantic" accurately describes the 3 naming
rules and allows future expansion.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@let-sunny force-pushed the feat/200-interview-score-adjustments branch from 3f9467c to 2d63a19 on March 30, 2026 14:46
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/core/engine/scoring.test.ts (1)

340-349: ⚠️ Potential issue | 🟡 Minor

formatScoreSummary “all categories” assertion is incomplete.

This test block still omits interaction, so it does not fully validate the category list it describes.

Proposed test fix
-  it("includes all 5 categories", () => {
+  it("includes all categories", () => {
     const scores = calculateScores(makeResult([]));
     const summary = formatScoreSummary(scores);

     expect(summary).toContain("pixel-critical:");
     expect(summary).toContain("responsive-critical:");
     expect(summary).toContain("code-quality:");
     expect(summary).toContain("token-management:");
+    expect(summary).toContain("interaction:");
     expect(summary).toContain("semantic:");
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/engine/scoring.test.ts` around lines 340 - 349, The test in
scoring.test.ts calls calculateScores(makeResult([])) and checks
formatScoreSummary but misses the "interaction" category; update the "includes
all 5 categories" test (which references calculateScores, formatScoreSummary,
and makeResult) to also assert that the summary contains "interaction:" so all
expected categories are validated.
src/core/engine/scoring.ts (1)

88-97: ⚠️ Potential issue | 🟡 Minor

Update the evidence comment to use semantic terminology.

The evidence list still says minor while the active category key is now semantic, which makes the rationale block internally inconsistent.

As per coding guidelines: **/*.{ts,js} changes should keep behavior and documentation in sync.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/engine/scoring.ts` around lines 88 - 97, The evidence comment in the
category weights block is inconsistent: change the "minor" label to the active
category key "semantic" so the rationale matches the code; update the list item
text (the line that reads "- minor: ΔV <2%, negligible code difference") to "-
semantic: ΔV <2%, negligible code difference" in src/core/engine/scoring.ts so
documentation and category keys (e.g., "semantic") stay in sync with the scoring
configuration.
♻️ Duplicate comments (2)
src/core/rules/rule-config.ts (2)

145-146: 🛠️ Refactor suggestion | 🟠 Major

Remove the inline rationale comment from this auto-calibrated config entry.

Keeping calibration rationale inline here tends to create churn in a file that is frequently automation-updated.

Proposed cleanup
-    severity: "risk", // category is "semantic" (ΔV < 2%) but severity elevated per interview (`#200`): non-semantic names block element purpose identification
+    severity: "risk",

Based on learnings: src/core/rules/rule-config.ts is automatically adjusted by a nightly calibration pipeline, so inline calibration rationale comments should be avoided.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/rules/rule-config.ts` around lines 145 - 146, Remove the inline
rationale comment that follows the severity property in the auto-calibrated
config entry—leave the properties (severity: "risk", score: -4) intact but
delete the trailing comment text ("// category is "semantic" (ΔV < 2%) but
severity elevated per interview (`#200`): non-semantic names block element purpose
identification") so the automated calibration file no longer contains manual
rationale comments; target the object containing the severity and score keys
(the auto-calibrated rule config entry) and ensure no other comments or code are
changed.

97-101: ⚠️ Potential issue | 🔴 Critical

Remove calibration rationale comments and ensure Arbitrator decision evidence is added before merge.

Inline comments explaining calibration decisions (lines 143, 147–148) must be removed from this file. Per project guidelines, calibration evidence must be tracked in PR descriptions, commit messages, and data/calibration-evidence.json—not as code comments, as this file is frequently auto-modified by the calibration pipeline.

Additionally, data/calibration-evidence.json is currently empty. Per the Arbitrator review requirement, all final score/severity adjustments in rule-config.ts must have explicit Arbitrator decision evidence before merge. This applies to the changes at lines 99–103, 119–126, 129–133, and 146–150.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/rules/rule-config.ts` around lines 97 - 101, Remove any inline
calibration rationale comments from this file (notably the comment near the
"variant-structure-mismatch" rule entry) and do not leave calibration
explanations in code; instead, add the required Arbitrator decision evidence for
every final score/severity adjustment (including the adjustments made to
"variant-structure-mismatch" and the other adjusted rule entries) into
data/calibration-evidence.json, update the PR description and commit message to
reference those exact evidence entries, and ensure the Arbitrator name, date,
decision, and a brief rationale are included in each evidence record so the
change can be merged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/core/report-html/render.test.ts`:
- Around line 389-390: Rename the misleading variable minorPos to a name that
reflects the semantic tab key (e.g., semanticPos or semanticTabPos) where
html.indexOf('data-tab="semantic"') is used; update any references (including
the related assertion/comparison with interactionPos) so the code reads clearly
that the index is for the semantic tab instead of "minor".

---

Outside diff comments:
In `@src/core/engine/scoring.test.ts`:
- Around line 340-349: The test in scoring.test.ts calls
calculateScores(makeResult([])) and checks formatScoreSummary but misses the
"interaction" category; update the "includes all 5 categories" test (which
references calculateScores, formatScoreSummary, and makeResult) to also assert
that the summary contains "interaction:" so all expected categories are
validated.

In `@src/core/engine/scoring.ts`:
- Around line 88-97: The evidence comment in the category weights block is
inconsistent: change the "minor" label to the active category key "semantic" so
the rationale matches the code; update the list item text (the line that reads
"- minor: ΔV <2%, negligible code difference") to "- semantic: ΔV <2%,
negligible code difference" in src/core/engine/scoring.ts so documentation and
category keys (e.g., "semantic") stay in sync with the scoring configuration.

---

Duplicate comments:
In `@src/core/rules/rule-config.ts`:
- Around line 145-146: Remove the inline rationale comment that follows the
severity property in the auto-calibrated config entry—leave the properties
(severity: "risk", score: -4) intact but delete the trailing comment text ("//
category is "semantic" (ΔV < 2%) but severity elevated per interview (`#200`):
non-semantic names block element purpose identification") so the automated
calibration file no longer contains manual rationale comments; target the object
containing the severity and score keys (the auto-calibrated rule config entry)
and ensure no other comments or code are changed.
- Around line 97-101: Remove any inline calibration rationale comments from this
file (notably the comment near the "variant-structure-mismatch" rule entry) and
do not leave calibration explanations in code; instead, add the required
Arbitrator decision evidence for every final score/severity adjustment
(including the adjustments made to "variant-structure-mismatch" and the other
adjusted rule entries) into data/calibration-evidence.json, update the PR
description and commit message to reference those exact evidence entries, and
ensure the Arbitrator name, date, decision, and a brief rationale are included
in each evidence record so the change can be merged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: d7202a1a-5473-4ea7-b731-19b4a6363fbf

📥 Commits

Reviewing files that changed from the base of the PR and between 54a77a7 and 3f9467c.

📒 Files selected for processing (16)
  • .claude/agents/calibration/converter.md
  • .claude/agents/rule-discovery/designer.md
  • docs/CUSTOMIZATION.md
  • src/agents/orchestrator.test.ts
  • src/cli/commands/internal/rule-discovery.test.ts
  • src/core/contracts/category.ts
  • src/core/engine/integration.test.ts
  • src/core/engine/scoring.test.ts
  • src/core/engine/scoring.ts
  • src/core/report-html/render.test.ts
  • src/core/rules/naming/inconsistent-naming-convention.test.ts
  • src/core/rules/naming/index.ts
  • src/core/rules/naming/non-semantic-name.test.ts
  • src/core/rules/naming/non-standard-naming.test.ts
  • src/core/rules/rule-config.ts
  • src/core/ui-constants.ts

let-sunny and others added 2 commits March 30, 2026 23:52
- scoring.ts: "minor" → "semantic" in comment
- scoring.test.ts: add missing "interaction" category assertion
- render.test.ts: rename minorPos → semanticPos

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coderabbitai bot (Contributor) left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/core/engine/scoring.ts (1)

59-65: ⚠️ Potential issue | 🟡 Minor

Update stale score value in the scoring rationale comment.

The example still states non-semantic-name is -1, but this PR’s calibration objective moves it to -4. Keeping this stale value will mislead future calibration/debug reads.

✏️ Proposed comment fix
- * Category importance is already encoded in rule scores (pixel-critical -10
- * vs semantic -1), so per-category weighting is unnecessary.
+ * Category importance is already encoded in rule scores (pixel-critical -10
+ * vs semantic -4), so per-category weighting is unnecessary.

As per coding guidelines: If the code change introduces behavior that contradicts existing documentation (README.md, CLAUDE.md, JSDoc comments), flag it and suggest updating the relevant documentation to stay in sync.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/engine/scoring.ts` around lines 59 - 65, Update the stale comment in
src/core/engine/scoring.ts so the example reflects the new calibrated score for
the rule name "non-semantic-name" (change its listed score from -1 to -4) — keep
the rest of the rationale intact (e.g., "no-auto-layout" (score: -10,
depthWeight: 1.5) at root contributes 15 to density) and ensure the comment
refers to the current calibration objective that moved "non-semantic-name" to -4
so documentation and code remain in sync.
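The rationale quoted in this prompt ("no-auto-layout" at score -10 with depthWeight 1.5 contributing 15 to density) is simple arithmetic: |score| * depthWeight. A minimal sketch of that calculation follows; the function name is invented, since scoring.ts itself is not reproduced on this page.

```typescript
// Hypothetical helper; the actual scoring.ts implementation is not shown in this PR.
function violationContribution(score: number, depthWeight: number): number {
  // A violation's density contribution: absolute score scaled by depth weight.
  return Math.abs(score) * depthWeight;
}

// "no-auto-layout" (score: -10, depthWeight: 1.5) at root contributes 15.
const rootContribution = violationContribution(-10, 1.5);
```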
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@src/core/engine/scoring.ts`:
- Around line 59-65: Update the stale comment in src/core/engine/scoring.ts so
the example reflects the new calibrated score for the rule name
"non-semantic-name" (change its listed score from -1 to -4) — keep the rest of
the rationale intact (e.g., "no-auto-layout" (score: -10, depthWeight: 1.5) at
root contributes 15 to density) and ensure the comment refers to the current
calibration objective that moved "non-semantic-name" to -4 so documentation and
code remain in sync.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 071d7660-8576-4fa6-93f7-d8b3f12eef62

📥 Commits

Reviewing files that changed from the base of the PR and between 3f9467c and 67e289a.

📒 Files selected for processing (20)
  • .claude/agents/calibration/converter.md
  • .claude/agents/rule-discovery/designer.md
  • .claude/skills/canicode/SKILL.md
  • CLAUDE.md
  • README.md
  • docs/CUSTOMIZATION.md
  • src/agents/orchestrator.test.ts
  • src/cli/commands/internal/rule-discovery.test.ts
  • src/core/contracts/category.ts
  • src/core/contracts/rule.ts
  • src/core/engine/integration.test.ts
  • src/core/engine/scoring.test.ts
  • src/core/engine/scoring.ts
  • src/core/report-html/render.test.ts
  • src/core/rules/naming/inconsistent-naming-convention.test.ts
  • src/core/rules/naming/index.ts
  • src/core/rules/naming/non-semantic-name.test.ts
  • src/core/rules/naming/non-standard-naming.test.ts
  • src/core/rules/rule-config.ts
  • src/core/ui-constants.ts

@let-sunny merged commit 532df3c into main on Mar 30, 2026 (2 checks passed)
@let-sunny deleted the feat/200-interview-score-adjustments branch March 30, 2026 15:02

Development

Successfully merging this pull request may close these issues.

interview with ai
