Mirror: Fix Kilo Quota OpenRouter error handling and retry flow (#5847)#29

Merged
jeremylongshore merged 4 commits into main from mirror/PR-5847
Feb 15, 2026

Conversation

@jeremylongshore
Owner

Mirror of upstream PR #5847

Verdict: APPROVE | Tier 5

Review: kilocode-5847-review.md

@coderabbitai

coderabbitai Bot commented Feb 15, 2026

Warning

Rate limit exceeded

@jeremylongshore has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 29 minutes and 22 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.


@gemini-code-assist

Summary of Changes

Hello @jeremylongshore, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the robustness and user experience of the application by enhancing error handling for API providers and refining how slash commands are processed within AI tool responses. It also introduces dynamic updates for Model Context Protocol (MCP) server capabilities, ensuring the application stays synchronized with backend changes.

Highlights

  • Enhanced API Error Handling: Improved the handling of streaming API errors, particularly for quota and rate limit issues from providers like OpenRouter, by introducing more robust error parsing and status code resolution.
  • Slash Command Processing in Tool Results: Fixed a bug where slash commands embedded within tool_result blocks were ignored, ensuring commands like /newtask are now correctly processed and transformed.
  • Dynamic MCP Capability Refresh: Implemented dynamic refreshing of Model Context Protocol (MCP) server capabilities (tools, resources, prompts) in response to server-side notifications, improving the application's responsiveness to backend changes.
  • Refined MCP Notification Logging: Adjusted MCP notification handling to prevent unnecessary user-facing pop-ups for internal notifications, logging them silently instead.
Changelog
  • .changeset/free-toes-hammer.md
    • Added a changeset entry documenting the fix for processing slash commands in tool_result blocks.
  • .changeset/slow-pillows-tap.md
    • Added a changeset entry documenting the fix for Kilo Quota and OpenRouter error handling.
  • src/api/providers/__tests__/moonshot.spec.ts
    • Added two new test cases for streaming error handling, covering normalized stream errors and nested provider errors in Moonshot API.
  • src/api/providers/openai-compatible.ts
    • Integrated enhanced error handling for streaming API responses from OpenAI-compatible providers, wrapping stream processing in a try-catch block and handling error parts.
  • src/api/providers/utils/__tests__/error-handler.spec.ts
    • Expanded error handling utility tests to cover status code preservation from various error structures and payload extraction from response bodies.
  • src/api/providers/utils/error-handler.ts
    • Refactored and enhanced the error handling utility with new helper functions (e.g., extractErrorPayload, resolveErrorStatus) to parse diverse error payloads and determine status codes more reliably.
  • src/core/mentions/__tests__/processKiloUserContentMentions.spec.ts
    • Added a new regression test suite to validate the correct processing of slash commands within tool_result blocks.
  • src/core/mentions/processKiloUserContentMentions.ts
    • Updated to correctly process slash commands embedded in tool_result blocks by introducing a shared processTextContent helper function.
  • src/services/mcp/McpHub.ts
    • Modified the MCP Hub to pass a RefreshCapabilitiesCallback to the NotificationService, enabling dynamic refreshing of server capabilities upon notification.
  • src/services/mcp/kilocode/NotificationService.ts
    • Extended the MCP Notification Service to listen for ResourceListChanged, ToolListChanged, and PromptListChanged notifications, triggering capability refreshes, and improved fallback notification logging to console.
  • src/services/mcp/kilocode/__tests__/NotificationService.spec.ts
    • Added comprehensive tests for the MCP Notification Service, covering logging messages, dynamic capability refresh callbacks, and silent fallback logging.
Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/auto-triage.yml
Activity
  • This pull request is a mirror of an upstream PR, Fix Kilo Quota | OpenRouter error handling and retry flow (Kilo-Org/kilocode#5847), from the Kilo-Org/kilocode repository.
  • The pull request has been reviewed and received an 'APPROVE' verdict with a Tier 5 rating.
  • A detailed review document, 'kilocode-5847-review.md', is linked in the pull request description.
  • Several internal .beads files were added, likely reflecting local development environment setup or generated artifacts during the PR's creation.

@qodo-code-review

Review Summary by Qodo

Mirror: Fix Kilo Quota OpenRouter error handling and retry flow with enhanced MCP notifications

🐞 Bug fix ✨ Enhancement 🧪 Tests

Walkthroughs

Description
• Fixed slash command processing in tool_result blocks: extended parseKiloSlashCommands to handle both string and array content formats in tool responses, addressing a regression where slash commands were only processed in text blocks
• Enhanced error handling for streaming operations: added robust error payload extraction with multiple fallback locations (metadata, responseBody, cause, nested errors) and improved status code resolution, including AWS metadata support
• Improved MCP notification handling: added capability refresh handlers for resource, tool, and prompt list changes, with automatic server capability updates when MCP servers notify of changes
• Expanded test coverage: added comprehensive test suites for NotificationService, slash command processing in tool responses, and error handler status extraction
• Added automated PR/issue triaging workflow: implemented a GitHub Actions workflow for automatic labeling and categorization of issues and pull requests
Diagram
flowchart LR
  A["Tool Result Blocks"] -->|"parseKiloSlashCommands"| B["Processed Slash Commands"]
  C["Stream Errors"] -->|"handleProviderError"| D["Normalized Errors"]
  E["MCP Notifications"] -->|"RefreshCapabilitiesCallback"| F["Updated Server Capabilities"]
  G["Error Payloads"] -->|"extractErrorPayload"| H["Resolved Error Details"]

File Changes

1. src/services/mcp/kilocode/__tests__/NotificationService.spec.ts 🧪 Tests +346/-0

Add comprehensive NotificationService test suite

• Added comprehensive test suite for NotificationService with 346 lines of test coverage
• Tests verify handler registration for all notification types (logging, resources, tools, prompts)
• Tests validate logging message display at different severity levels (info, warning, error,
 critical)
• Tests ensure refresh callbacks are invoked for capability changes and errors are properly logged

src/services/mcp/kilocode/__tests__/NotificationService.spec.ts


2. src/core/mentions/__tests__/processKiloUserContentMentions.spec.ts 🧪 Tests +191/-0

Add regression tests for slash commands in tool responses

• Added regression test suite for slash command processing in tool_result blocks
• Tests verify that parseKiloSlashCommands is called for tool responses containing user messages
• Tests validate both string and array content formats in tool_result blocks
• Tests ensure slash commands are properly transformed and processed in tool responses

src/core/mentions/__tests__/processKiloUserContentMentions.spec.ts


3. src/api/providers/utils/error-handler.ts Error handling +112/-8

Enhance error handling with robust payload extraction

• Added helper functions safeJsonParse, getFirstNonEmptyString, extractErrorPayload, and
 resolveErrorStatus for robust error parsing
• Enhanced error message extraction to check multiple payload locations (metadata, responseBody,
 cause, nested error objects)
• Improved status code resolution to check multiple fields including AWS metadata httpStatusCode
• Fixed handling of generic "No output generated" messages by falling back to nested provider error
 details

src/api/providers/utils/error-handler.ts
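
The helper names above (safeJsonParse, getFirstNonEmptyString, extractErrorPayload, resolveErrorStatus) come from the PR summary; the sketch below only illustrates how such fallback-based extraction might fit together, not the PR's actual implementation (field names beyond those listed in the summary are assumptions):

```typescript
// Sketch of the error-handler helpers named in the PR. The real signatures
// in src/api/providers/utils/error-handler.ts may differ.
type ErrorPayload = { message?: string; details?: unknown; code?: number }

// Parse a string as JSON, returning undefined instead of throwing.
function safeJsonParse(text: unknown): Record<string, any> | undefined {
	if (typeof text !== "string") return undefined
	try {
		const parsed = JSON.parse(text)
		return typeof parsed === "object" && parsed !== null ? parsed : undefined
	} catch {
		return undefined
	}
}

// Return the first argument that is a non-empty string.
function getFirstNonEmptyString(...candidates: unknown[]): string | undefined {
	for (const c of candidates) {
		if (typeof c === "string" && c.trim().length > 0) return c
	}
	return undefined
}

// Walk the fallback locations the PR description lists: the error itself,
// a JSON responseBody, and the error's cause.
function extractErrorPayload(err: Record<string, any>): ErrorPayload | undefined {
	const candidates = [err.error, safeJsonParse(err.responseBody)?.error, err.cause?.error]
	for (const c of candidates) {
		if (c && typeof c === "object") return c as ErrorPayload
	}
	return undefined
}

// Resolve a numeric HTTP status from common field names, falling back to
// AWS-style metadata (httpStatusCode) as the PR describes.
function resolveErrorStatus(err: Record<string, any>): number | undefined {
	const candidates = [err.status, err.statusCode, err.$metadata?.httpStatusCode]
	return candidates.find((c): c is number => typeof c === "number")
}
```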


4. src/core/mentions/processKiloUserContentMentions.ts 🐞 Bug fix +51/-38

Process slash commands in tool_result blocks

• Added processTextContent helper function to consolidate text processing logic for both
 parseMentions and parseKiloSlashCommands
• Extended slash command processing to tool_result blocks (both string and array content formats)
• Refactored text block processing to use the new helper function, reducing code duplication
• Added proper handling of needsRulesFileCheck flag for tool_result blocks

src/core/mentions/processKiloUserContentMentions.ts
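
As context for the change above, the tool_result routing it describes can be sketched as follows. This is a hypothetical illustration: the block shapes mimic Anthropic-style content, and processTextContent here is a trivial stand-in for the PR's parseMentions + parseKiloSlashCommands pipeline.

```typescript
// Hypothetical sketch: route tool_result content (string or array form)
// through the same text processor used for plain text blocks.
type TextBlock = { type: "text"; text: string }
type ToolResultBlock = { type: "tool_result"; content: string | Array<TextBlock | { type: "image" }> }
type ContentBlock = TextBlock | ToolResultBlock

async function processTextContent(text: string): Promise<string> {
	// Stand-in for parseMentions + parseKiloSlashCommands from the PR.
	return text.replace(/^\/newtask\b/, "<task transformed>")
}

async function processBlocks(blocks: ContentBlock[]): Promise<ContentBlock[]> {
	return Promise.all(
		blocks.map(async (block): Promise<ContentBlock> => {
			if (block.type === "text") {
				return { ...block, text: await processTextContent(block.text) }
			}
			if (typeof block.content === "string") {
				// String content: process the whole string.
				return { ...block, content: await processTextContent(block.content) }
			}
			// Array content: process only the nested text blocks.
			const content = await Promise.all(
				block.content.map(async (inner) =>
					inner.type === "text" ? { ...inner, text: await processTextContent(inner.text) } : inner,
				),
			)
			return { ...block, content }
		}),
	)
}
```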


5. src/api/providers/openai-compatible.ts Error handling +57/-46

Add error handling for streaming operations

• Added import of handleProviderError for consistent error handling
• Wrapped stream processing in try-catch block to catch and normalize stream errors
• Added detection of error parts in fullStream and throw them as provider errors
• Enhanced error handling to surface nested provider errors when usage promise rejects

src/api/providers/openai-compatible.ts
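
The pattern described above can be sketched as below. This is an assumption-laden illustration: handleProviderError is a stand-in for the PR's normalizer, and the stream part shape is modeled loosely on AI SDK fullStream parts, not taken from the repository.

```typescript
// Sketch: wrap stream consumption in try/catch and surface embedded
// error parts as normalized provider errors, as the PR describes.
type StreamPart = { type: "text-delta"; text: string } | { type: "error"; error: unknown }

class ProviderStreamError extends Error {
	constructor(message: string, public status?: number) {
		super(message)
	}
}

// Stand-in for the PR's handleProviderError normalizer.
function handleProviderError(raw: unknown, provider: string): ProviderStreamError {
	if (raw instanceof ProviderStreamError) return raw
	const message = raw instanceof Error ? raw.message : String(raw)
	return new ProviderStreamError(`${provider} stream error: ${message}`)
}

async function* consumeStream(fullStream: AsyncIterable<StreamPart>, provider: string): AsyncGenerator<string> {
	try {
		for await (const part of fullStream) {
			if (part.type === "error") {
				// Detect error parts in the stream and throw them as provider errors.
				throw handleProviderError(part.error, provider)
			}
			yield part.text
		}
	} catch (err) {
		// Normalize anything thrown mid-stream (already-normalized errors pass through).
		throw handleProviderError(err, provider)
	}
}
```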


6. src/services/mcp/kilocode/NotificationService.ts ✨ Enhancement +56/-4

Add capability refresh handlers for MCP notifications

• Added support for ResourceListChangedNotificationSchema, ToolListChangedNotificationSchema,
 and PromptListChangedNotificationSchema
• Introduced RefreshCapabilitiesCallback type for handling capability refresh on list changes
• Added handlers for resource, tool, and prompt list changes that invoke refresh callback silently
• Changed fallback notification handler to log unhandled notifications instead of showing user
 notifications

src/services/mcp/kilocode/NotificationService.ts


7. src/api/providers/__tests__/moonshot.spec.ts 🧪 Tests +67/-0

Add stream error handling tests for Moonshot provider

• Added test for throwing normalized stream errors from fullStream error parts with status and
 errorDetails
• Added test for surfacing nested provider errors when usage promise rejects with generic message
• Tests verify proper error message extraction and status code preservation in error scenarios

src/api/providers/__tests__/moonshot.spec.ts


8. src/api/providers/utils/__tests__/error-handler.spec.ts 🧪 Tests +40/-0

Add error handler tests for status extraction

• Added test for preserving status code from Error with statusCode field
• Added test for extracting message and details from responseBody payload structure
• Added test for falling back to AWS metadata httpStatusCode when status fields are missing

src/api/providers/utils/__tests__/error-handler.spec.ts


9. src/services/mcp/McpHub.ts ✨ Enhancement +6/-1

Connect capability refresh callback to MCP notifications

• Modified kiloNotificationService.connect call to pass refresh callback function
• Callback invokes fetchAvailableServerCapabilities and notifyWebviewOfServerChanges when server
 capabilities change
• Enables automatic capability refresh when MCP servers notify of tool, resource, or prompt list
 changes

src/services/mcp/McpHub.ts
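
The wiring described above can be sketched as a minimal callback handshake. All names here are stand-ins for the extension's real classes; the refresh body substitutes for fetchAvailableServerCapabilities plus notifyWebviewOfServerChanges.

```typescript
// Hypothetical sketch of passing a RefreshCapabilitiesCallback from the hub
// into the notification service, as the McpHub change describes.
type RefreshCapabilitiesCallback = (serverName: string) => Promise<void>

class NotificationService {
	private onRefresh?: RefreshCapabilitiesCallback

	connect(serverName: string, onRefreshCapabilities?: RefreshCapabilitiesCallback): void {
		this.onRefresh = onRefreshCapabilities
	}

	// A list-changed notification handler would call this to trigger a refresh.
	async notifyListChanged(serverName: string): Promise<void> {
		await this.onRefresh?.(serverName)
	}
}

class McpHub {
	refreshed: string[] = []
	readonly notifications = new NotificationService()

	constructor() {
		// Stand-in for fetchAvailableServerCapabilities + notifyWebviewOfServerChanges.
		this.notifications.connect("example-server", async (name) => {
			this.refreshed.push(name)
		})
	}
}
```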


10. .changeset/free-toes-hammer.md 📝 Documentation +14/-0

Add changeset for tool_result slash command fix

• Added changeset documenting fix for slash command processing in tool_result blocks
• Describes the bug where parseKiloSlashCommands was only called for text blocks
• Notes the regression test ensures proper processing of slash commands in tool responses

.changeset/free-toes-hammer.md


11. .changeset/slow-pillows-tap.md 📝 Documentation +5/-0

Add changeset for error handling fix

• Added changeset for Kilo Quota and OpenRouter error handling improvements

.changeset/slow-pillows-tap.md


12. .beads/beads.db-shm Miscellaneous +0/-0

Update beads database shared memory

• Binary database shared memory file updated with new content

.beads/beads.db-shm


13. .beads/sync-state.json ⚙️ Configuration changes +7/-0

Add beads sync state tracking file

• Added sync state tracking file with failure information
• Records last failure timestamp, failure count, and backoff timing
• Indicates manual sync is needed due to git pull failure

.beads/sync-state.json


14. .beads/beads.left.meta.json ⚙️ Configuration changes +1/-0

Add beads left metadata file

• Added metadata file tracking beads version 0.49.1 and commit hash

.beads/beads.left.meta.json


15. .beads/metadata.json ⚙️ Configuration changes +4/-0

Add beads metadata configuration

• Added beads metadata configuration file specifying database and export file names

.beads/metadata.json


16. .beads/.local_version ⚙️ Configuration changes +1/-0

Add beads local version file

• Added local version file tracking beads version 0.49.1

.beads/.local_version


17. .beads/last-touched ⚙️ Configuration changes +1/-0

Add beads last-touched tracking file

• Added file tracking last touched beads component identifier

.beads/last-touched


18. .beads/daemon.pid ⚙️ Configuration changes +1/-0

Add beads daemon process ID file

• Added daemon process ID file for beads service

.beads/daemon.pid


19. .github/workflows/auto-triage.yml ⚙️ Configuration changes +126/-0

Add GitHub Actions workflow for automated issue and PR triaging

• Added new GitHub Actions workflow for automatic issue and PR triaging
• Triggers on newly opened issues and pull requests (excluding bot-created items)
• Integrates with Kilo Code CLI to analyze content and apply appropriate labels
• Includes comprehensive label categories (component, type, platform, provider, accessibility)
• Implements safety measures including body sanitization and command allowlist restrictions

.github/workflows/auto-triage.yml


20. .beads/beads.db-wal Additional files +0/-0

...

.beads/beads.db-wal


21. .beads/beads.left.jsonl Additional files +0/-0

...

.beads/beads.left.jsonl




@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces several valuable improvements. The core changes focus on enhancing error handling for API providers, fixing an issue with slash command processing, and improving MCP notification handling. The error handling refactoring is particularly well-done, centralizing logic and making it more robust for various provider responses. The fix for slash commands in tool_result blocks is also a great addition, and the refactoring to a helper function improves code clarity. The changes to MCP notifications, especially silencing unhandled ones, will improve the user experience. I've included one suggestion to reduce some code duplication in the notification service. Overall, these are high-quality changes with good test coverage.

Comment on lines +55 to +83
// Handle resource list changes - refresh capabilities silently
client.setNotificationHandler(ResourceListChangedNotificationSchema, async () => {
	console.log(`MCP ${name}: resources list changed, refreshing capabilities`)
	try {
		await onRefreshCapabilities?.(name)
	} catch (error) {
		console.error(`MCP ${name}: failed to refresh capabilities after resource list change:`, error)
	}
})

// Handle tool list changes - refresh capabilities silently
client.setNotificationHandler(ToolListChangedNotificationSchema, async () => {
	console.log(`MCP ${name}: tools list changed, refreshing capabilities`)
	try {
		await onRefreshCapabilities?.(name)
	} catch (error) {
		console.error(`MCP ${name}: failed to refresh capabilities after tool list change:`, error)
	}
})

// Handle prompt list changes - refresh capabilities silently
client.setNotificationHandler(PromptListChangedNotificationSchema, async () => {
	console.log(`MCP ${name}: prompts list changed, refreshing capabilities`)
	try {
		await onRefreshCapabilities?.(name)
	} catch (error) {
		console.error(`MCP ${name}: failed to refresh capabilities after prompt list change:`, error)
	}
})


medium

The notification handlers for resource, tool, and prompt list changes are very similar. To reduce code duplication and improve maintainability, you could extract this logic into a helper function.

const createRefreshHandler = (listType: string) => async () => {
	console.log(`MCP ${name}: ${listType} list changed, refreshing capabilities`)
	try {
		await onRefreshCapabilities?.(name)
	} catch (error) {
		console.error(`MCP ${name}: failed to refresh capabilities after ${listType} list change:`, error)
	}
}

// Handle resource list changes - refresh capabilities silently
client.setNotificationHandler(ResourceListChangedNotificationSchema, createRefreshHandler("resources"))

// Handle tool list changes - refresh capabilities silently
client.setNotificationHandler(ToolListChangedNotificationSchema, createRefreshHandler("tools"))

// Handle prompt list changes - refresh capabilities silently
client.setNotificationHandler(PromptListChangedNotificationSchema, createRefreshHandler("prompts"))

@jeremylongshore jeremylongshore merged commit 231f9f2 into main Feb 15, 2026
11 of 15 checks passed
@github-actions

Failed to generate code suggestions for PR

@qodo-code-review

Code Review by Qodo

🐞 Bugs (3) 📘 Rule violations (2) 📎 Requirement gaps (0)


Action required

1. Unmarked edits in processKiloUserContentMentions 📘 Rule violation ⛯ Reliability
Description
Changes were made in upstream-shared src/ code without kilocode_change markers, increasing the
risk of merge conflicts during upstream syncs. This makes it harder to identify Kilo-specific deltas
when mirroring with Roo Code.
Code

src/core/mentions/processKiloUserContentMentions.ts[R45-72]

+	// Helper function to process text through parseMentions and parseKiloSlashCommands
+	// Returns the processed text and whether a kilorules check is needed
+	const processTextContent = async (
+		text: string,
+		localWorkflowToggles: ClineRulesToggles,
+		globalWorkflowToggles: ClineRulesToggles,
+	): Promise<{ processedText: string; needsRulesFileCheck: boolean }> => {
+		const parsedText = await parseMentions(
+			text,
+			cwd,
+			urlContentFetcher,
+			fileContextTracker,
+			rooIgnoreController,
+			showRooIgnoredFiles,
+			includeDiagnosticMessages,
+			maxDiagnosticMessages,
+			maxReadFileLine,
+		)
+
+		// when parsing slash commands, we still want to allow the user to provide their desired context
+		const { processedText, needsRulesFileCheck: needsCheck } = await parseKiloSlashCommands(
+			parsedText.text,
+			localWorkflowToggles,
+			globalWorkflowToggles,
+		)
+
+		return { processedText, needsRulesFileCheck: needsCheck }
+	}
Evidence
PR Compliance ID 7 requires kilocode_change markers for modifications in upstream-shared core
extension code under src/. The newly added helper processTextContent and related changes in
src/core/mentions/processKiloUserContentMentions.ts are not wrapped/annotated with
kilocode_change markers.

AGENTS.md
src/core/mentions/processKiloUserContentMentions.ts[45-72]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Edits in upstream-shared `src/` code were introduced without `kilocode_change` markers, which violates the upstream-sync annotation requirement and can increase merge conflicts.

## Issue Context
`src/core/mentions/processKiloUserContentMentions.ts` contains newly added helper logic and refactors that are Kilo-specific but not annotated.

## Fix Focus Areas
- src/core/mentions/processKiloUserContentMentions.ts[45-72]
- src/core/mentions/processKiloUserContentMentions.ts[92-150]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Local .beads artifacts committed 🐞 Bug ⛨ Security
Description
The PR adds multiple .beads/* runtime/state files (PID, sync backoff state, SQLite shm) that are
machine-specific and not appropriate for source control, risking privacy leaks, repo bloat, and
constant churn/merge conflicts.
Code

.beads/sync-state.json[R1-7]

+{
+  "last_failure": "2026-02-14T20:42:41.363720889-06:00",
+  "failure_count": 7,
+  "backoff_until": "2026-02-14T21:12:41.36372132-06:00",
+  "needs_manual_sync": true,
+  "failure_reason": "git pull failed: exit status 1\nfatal: couldn't find remote ref review/PR-5667\n"
+}
Evidence
.beads/sync-state.json and .beads/daemon.pid are clearly ephemeral runtime state (timestamps,
failure reasons, process IDs). Additionally, .beads/daemon.lock contains an absolute local
filesystem path and PIDs, demonstrating that committing .beads can leak machine/user-specific
information. The root .gitignore does not ignore .beads, so these files will keep getting
tracked/modified.

.beads/sync-state.json[1-7]
.beads/daemon.pid[1-1]
.beads/daemon.lock[1-7]
.gitignore[1-80]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The PR includes `.beads/*` runtime artifacts (PID, SQLite shm/WAL state, sync failure/backoff state). These are machine-specific, can leak local paths/PIDs, and will create ongoing repo churn and bloat.

### Issue Context
`.beads/daemon.lock` includes an absolute local path (`/home/...`) and PIDs, and other `.beads` files encode transient operational state.

### Fix Focus Areas
- .gitignore[1-80]
- .beads/sync-state.json[1-7]
- .beads/daemon.pid[1-1]
- .beads/beads.db-shm[1-5]

### Suggested approach
1. Remove `.beads/*` runtime files from git (delete or `git rm --cached` as appropriate).
2. Add `.beads/` (or at least `.beads/beads.db*`, `.beads/*.pid`, `.beads/*state*.json`, `.beads/*.lock`) to the root `.gitignore`.
3. If a specific export (e.g. `issues.jsonl`) is intentionally tracked, use negation rules (e.g. ignore `.beads/**` then `!.beads/issues.jsonl`).




Remediation recommended

3. NotificationService logs raw payloads 📘 Rule violation ⛨ Security
Description
New logging statements output whole notification objects and use unstructured
console.log/console.error, which can leak sensitive content carried in MCP notifications and
makes logs harder to audit. This violates secure logging expectations for avoiding sensitive data in
logs and keeping logs structured.
Code

src/services/mcp/kilocode/NotificationService.ts[R55-89]

+		// Handle resource list changes - refresh capabilities silently
+		client.setNotificationHandler(ResourceListChangedNotificationSchema, async () => {
+			console.log(`MCP ${name}: resources list changed, refreshing capabilities`)
+			try {
+				await onRefreshCapabilities?.(name)
+			} catch (error) {
+				console.error(`MCP ${name}: failed to refresh capabilities after resource list change:`, error)
+			}
+		})
+
+		// Handle tool list changes - refresh capabilities silently
+		client.setNotificationHandler(ToolListChangedNotificationSchema, async () => {
+			console.log(`MCP ${name}: tools list changed, refreshing capabilities`)
+			try {
+				await onRefreshCapabilities?.(name)
+			} catch (error) {
+				console.error(`MCP ${name}: failed to refresh capabilities after tool list change:`, error)
+			}
+		})
+
+		// Handle prompt list changes - refresh capabilities silently
+		client.setNotificationHandler(PromptListChangedNotificationSchema, async () => {
+			console.log(`MCP ${name}: prompts list changed, refreshing capabilities`)
+			try {
+				await onRefreshCapabilities?.(name)
+			} catch (error) {
+				console.error(`MCP ${name}: failed to refresh capabilities after prompt list change:`, error)
+			}
+		})
+
+		// Fallback for any other unhandled notifications - log silently, don't notify user
+		// This prevents raw JSON-RPC messages from being displayed as VS Code notifications
		client.fallbackNotificationHandler = async (notification) => {
-			vscode.window.showInformationMessage(`MCP ${name}: ${JSON.stringify(notification)}`)
+			console.log(`MCP ${name}: unhandled notification`, notification)
		}
Evidence
PR Compliance ID 5 requires avoiding sensitive data in logs and encourages structured logging; the
new code logs entire notification objects (potentially containing tool/resource/prompt data) and
uses unstructured console logging.

Rule 5: Generic: Secure Logging Practices
src/services/mcp/kilocode/NotificationService.ts[55-83]
src/services/mcp/kilocode/NotificationService.ts[85-89]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`NotificationService` logs full MCP notification objects and uses unstructured console logging, which may leak sensitive information and reduces auditability.

## Issue Context
MCP notifications can include prompts/resources/tool payloads; logging them verbatim can inadvertently capture user or model content.

## Fix Focus Areas
- src/services/mcp/kilocode/NotificationService.ts[55-83]
- src/services/mcp/kilocode/NotificationService.ts[85-89]



4. errorDetails may break retries 🐞 Bug ⛯ Reliability
Description
handleProviderError can attach errorDetails from payload fields with an unconstrained type, but
retry/backoff code calls .find(...) on error.errorDetails, which will throw if errorDetails is
not an array.
Code

src/api/providers/utils/error-handler.ts[R73-77]

+	return {
+		message,
+		errorDetails: nestedError?.details ?? root.errorDetails ?? root.details,
+		status,
+	}
Evidence
The error handler explicitly sources errorDetails from root.details/nestedError.details as
unknown without normalization. The retry/backoff path assumes error.errorDetails is an array and
calls .find(...) when status is 429; if errorDetails is anything other than an array, this will
throw and can break retry handling.

src/api/providers/utils/error-handler.ts[73-77]
src/api/providers/utils/error-handler.ts[174-183]
src/core/task/Task.ts[4868-4873]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`handleProviderError` may set `wrapped.errorDetails` to a non-array value (it is sourced from `root.details`/`nestedError.details` as `unknown`). Retry logic in `Task.backoffAndAnnounce` calls `.find(...)` on `error.errorDetails`, which can throw `TypeError: errorDetails.find is not a function`.

### Issue Context
This affects retry/backoff behavior for 429s (and potentially other status-aware logic), where robustness is important.

### Fix Focus Areas
- src/api/providers/utils/error-handler.ts[73-77]
- src/api/providers/utils/error-handler.ts[174-183]
- src/core/task/Task.ts[4868-4873]

### Suggested approach
Option A (preferred): normalize in `handleProviderError`:
- If `errorDetails` is an array, keep it.
- If it is a non-null object/string, wrap into a single-element array or drop it.

Option B (additional hardening): update `Task.backoffAndAnnounce` to guard:
- `const details = Array.isArray(error?.errorDetails) ? error.errorDetails : []`
- then call `details.find(...)`.
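
Option A above can be sketched as a small normalizer. This is a hypothetical helper, not the PR's code; the array element shape is an assumption.

```typescript
// Hypothetical normalizer implementing Option A: coerce an unknown
// `details` value into the array shape the 429 retry path expects.
type ErrorDetail = Record<string, unknown>

function normalizeErrorDetails(details: unknown): ErrorDetail[] | undefined {
	if (Array.isArray(details)) return details as ErrorDetail[]
	if (typeof details === "string") return [{ message: details }]
	if (details && typeof details === "object") return [details as ErrorDetail]
	return undefined // drop numbers, booleans, null, undefined
}
```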



5. Auto-triage execute too broad 🐞 Bug ⛨ Security
Description
The new auto-triage workflow enables LLM command execution with the allowed patterns `gh issue edit` and `gh pr edit`, but the CLI allowlist matches by prefix/word boundary, so a prompt-injected agent could run any `gh issue edit ...` or `gh pr edit ...` variant with write permissions.
Code

.github/workflows/auto-triage.yml[R40-44]

+          KILO_AUTO_APPROVAL_ENABLED: "true"
+          KILO_AUTO_APPROVAL_EXECUTE_ENABLED: "true"
+          KILO_AUTO_APPROVAL_EXECUTE_ALLOWED: "gh issue edit,gh pr edit"
+          KILO_AUTO_APPROVAL_EXECUTE_DENIED: "gh issue close,gh issue delete,gh issue transfer,gh issue lock,gh issue unlock,gh pr close,gh pr merge,gh repo,gh auth,gh secret,gh variable,rm,sudo,curl,wget,bash,sh,python,node,npm,npx"
+          KILO_TELEMETRY: "false"
Evidence
The workflow turns on auto-approval execution and allows gh issue edit and gh pr edit. In the
CLI, command matching is hierarchical prefix-based (not an exact subcommand+flag allowlist), so
allowing gh issue edit effectively allows any edit operation under that prefix. The workflow also
embeds untrusted issue/PR body content into the prompt, which is a classic prompt-injection surface
(even with some sanitization).

.github/workflows/auto-triage.yml[40-44]
cli/src/services/approvalDecision.ts[28-56]
.github/workflows/auto-triage.yml[63-74]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The workflow enables auto-execution and allows `gh issue edit` / `gh pr edit` via a prefix-based allowlist. Because the CLI matcher is hierarchical prefix-based, this can permit broader edits than intended if the prompt is injected.

### Issue Context
Even with `SAFE_BODY` sanitization and prompt instructions, untrusted content is still fed into the model; defense-in-depth suggests constraining what can be executed.

### Fix Focus Areas
- .github/workflows/auto-triage.yml[40-44]
- .github/workflows/auto-triage.yml[63-74]
- cli/src/services/approvalDecision.ts[28-56]

### Suggested approach
1. Disable execute mode in this workflow (`KILO_AUTO_APPROVAL_EXECUTE_ENABLED=false`).
2. Change the prompt so the model outputs only labels as data (e.g., JSON array of exact label strings).
3. Add a small shell/node step that:
  - parses the model output,
  - filters labels against the allowed label set,
  - applies them using `gh issue edit <n> --add-label ...` (and nothing else).
4. Optionally gate execution for PRs from forks / missing secrets to avoid noisy failures.
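
The parse-and-filter step (item 3 above) could be sketched as follows; the `ALLOWED_LABELS` set, function names, and command assembly are illustrative assumptions, not the actual workflow code:

```typescript
// Illustrative allowlist; the real label set would come from the repo's triage config.
const ALLOWED_LABELS = new Set(["bug", "enhancement", "documentation", "question"])

// Parse the model's output as a JSON array of label strings and keep only
// exact matches against the allowlist. Malformed output yields [].
function filterModelLabels(modelOutput: string): string[] {
	let parsed: unknown
	try {
		parsed = JSON.parse(modelOutput)
	} catch {
		return []
	}
	if (!Array.isArray(parsed)) return []
	return parsed.filter((l): l is string => typeof l === "string" && ALLOWED_LABELS.has(l))
}

// The only command the step would ever assemble; the model itself executes nothing.
function buildEditCommand(issueNumber: number, labels: string[]): string[] | null {
	if (labels.length === 0) return null
	return ["gh", "issue", "edit", String(issueNumber), ...labels.flatMap((l) => ["--add-label", l])]
}
```

With this shape, a prompt-injected response like `["bug", "rm -rf /"]` degrades to applying the single allowed label, and an entirely non-JSON response results in no command at all.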

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



Comment on lines +45 to +72
// Helper function to process text through parseMentions and parseKiloSlashCommands
// Returns the processed text and whether a kilorules check is needed
const processTextContent = async (
	text: string,
	localWorkflowToggles: ClineRulesToggles,
	globalWorkflowToggles: ClineRulesToggles,
): Promise<{ processedText: string; needsRulesFileCheck: boolean }> => {
	const parsedText = await parseMentions(
		text,
		cwd,
		urlContentFetcher,
		fileContextTracker,
		rooIgnoreController,
		showRooIgnoredFiles,
		includeDiagnosticMessages,
		maxDiagnosticMessages,
		maxReadFileLine,
	)

	// when parsing slash commands, we still want to allow the user to provide their desired context
	const { processedText, needsRulesFileCheck: needsCheck } = await parseKiloSlashCommands(
		parsedText.text,
		localWorkflowToggles,
		globalWorkflowToggles,
	)

	return { processedText, needsRulesFileCheck: needsCheck }
}
Action required

1. Unmarked edits in processKiloUserContentMentions 📘 Rule violation ⛯ Reliability

Changes were made in upstream-shared src/ code without kilocode_change markers, increasing the
risk of merge conflicts during upstream syncs. This makes it harder to identify Kilo-specific deltas
when mirroring with Roo Code.
Agent Prompt
## Issue description
Edits in upstream-shared `src/` code were introduced without `kilocode_change` markers, which violates the upstream-sync annotation requirement and can increase merge conflicts.

## Issue Context
`src/core/mentions/processKiloUserContentMentions.ts` contains newly added helper logic and refactors that are Kilo-specific but not annotated.

## Fix Focus Areas
- src/core/mentions/processKiloUserContentMentions.ts[45-72]
- src/core/mentions/processKiloUserContentMentions.ts[92-150]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

Comment thread .beads/sync-state.json
Comment on lines +1 to +7
{
  "last_failure": "2026-02-14T20:42:41.363720889-06:00",
  "failure_count": 7,
  "backoff_until": "2026-02-14T21:12:41.36372132-06:00",
  "needs_manual_sync": true,
  "failure_reason": "git pull failed: exit status 1\nfatal: couldn't find remote ref review/PR-5667\n"
}
Action required

2. Local .beads artifacts committed 🐞 Bug ⛨ Security

The PR adds multiple .beads/* runtime/state files (PID, sync backoff state, SQLite shm) that are
machine-specific and not appropriate for source control, risking privacy leaks, repo bloat, and
constant churn/merge conflicts.
Agent Prompt
### Issue description
The PR includes `.beads/*` runtime artifacts (PID, SQLite shm/WAL state, sync failure/backoff state). These are machine-specific, can leak local paths/PIDs, and will create ongoing repo churn and bloat.

### Issue Context
`.beads/daemon.lock` includes an absolute local path (`/home/...`) and PIDs, and other `.beads` files encode transient operational state.

### Fix Focus Areas
- .gitignore[1-80]
- .beads/sync-state.json[1-7]
- .beads/daemon.pid[1-1]
- .beads/beads.db-shm[1-5]

### Suggested approach
1. Remove `.beads/*` runtime files from git (delete or `git rm --cached` as appropriate).
2. Add `.beads/` (or at least `.beads/beads.db*`, `.beads/*.pid`, `.beads/*state*.json`, `.beads/*.lock`) to the root `.gitignore`.
3. If a specific export (e.g. `issues.jsonl`) is intentionally tracked, use negation rules (e.g. ignore `.beads/**` then `!.beads/issues.jsonl`).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

@jeremylongshore jeremylongshore deleted the mirror/PR-5847 branch February 15, 2026 03:12
@jeremylongshore
Owner Author

Review: kilocode Kilo-Org#5847

Fix Kilo Quota | OpenRouter error handling and retry flow by @Neonsy

Methodology

Checklist

| Check | Result | Notes |
| --- | --- | --- |
| Correctness | PASS | Fixes real user-reported error where quota exhaustion showed generic "No output generated" |
| Conventions | PASS | Uses kilocode_change markers, follows error handler patterns |
| Changeset | PASS | Patch changeset included |
| Tests | PASS | 107 new test lines across moonshot + error-handler specs |
| i18n | N/A | Error messages are provider-originated, not user-facing i18n keys |
| Types | PASS | No type changes needed |
| Security | PASS | No security surface changes |
| Scope | PASS | Focused on stream error escalation and error payload normalization |

Findings

1. Title is misleading -- this fixes Moonshot/AI SDK stream errors, not OpenRouter/Kilo Quota (severity: gray)

The PR title says "Fix Kilo Quota | OpenRouter error handling" but the PR description and actual changes fix Moonshot stream error handling via the AI SDK path. The author notes this explicitly: "This PR focuses on the Moonshot/AI-SDK stream error path and shared error normalization hardening, not the older OpenRouter/Kilo quota branch." The title likely reflects the original branch intent before scope refinement.

2. Stream error escalation is the right fix (severity: gray)

The core problem: AI SDK's fullStream can emit part.type === "error" events that were silently dropped because the stream processing loop had no handler for them. Then await result.usage would reject with a generic "No output generated" message. The fix throws immediately on error stream parts, surfacing the real provider error:

if (part.type === "error") {
    const streamError =
        (part as { error?: unknown }).error ??
        new Error(String((part as { message?: unknown }).message ?? "Unknown stream error"))
    throw streamError
}

This is correct -- error stream parts are terminal and should halt processing.

3. Error handler hardening is thorough (severity: gray)

The handleProviderError function now:

  • Extracts status from statusCode, status, and AWS $metadata.httpStatusCode
  • Digs into responseBody, data, and cause for nested error payloads
  • Replaces generic "No output generated" messages with actual provider messages when available
  • Preserves structured errorDetails for retry/backoff logic

The extractErrorPayload utility handles both string and object payloads with safe JSON parsing. The getFirstNonEmptyString helper prioritizes nested error messages over root-level ones.
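
As a rough sketch of what such helpers look like (simplified reconstructions under the behavior described above, not the actual error-handler.ts implementations):

```typescript
// Simplified priority-chain helper: return the first candidate that is a
// non-empty string, else undefined.
function getFirstNonEmptyString(...candidates: unknown[]): string | undefined {
	for (const c of candidates) {
		if (typeof c === "string" && c.trim().length > 0) return c
	}
	return undefined
}

// Simplified payload extraction: accept either a JSON string or a plain
// object, and never throw on malformed JSON.
function extractErrorPayload(raw: unknown): Record<string, unknown> | undefined {
	if (typeof raw === "string") {
		try {
			const parsed: unknown = JSON.parse(raw)
			return typeof parsed === "object" && parsed !== null
				? (parsed as Record<string, unknown>)
				: undefined
		} catch {
			return undefined
		}
	}
	if (typeof raw === "object" && raw !== null) return raw as Record<string, unknown>
	return undefined
}
```

The key property is that both helpers are total: garbage input degrades to `undefined` rather than an exception, so the error handler itself can never become a new error source.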

4. Try/catch wrapping preserves tool stream behavior (severity: gray)

The entire stream processing loop and await result.usage are wrapped in a single try/catch that calls handleProviderError. Importantly, the tool-input accumulation/flush logic remains unchanged within the try block. This is a clean refactor that adds error handling without modifying the happy path.

5. The NO_OUTPUT_GENERATED_MESSAGE constant could be fragile (severity: gray)

The error handler checks if the message includes the exact string "No output generated. Check the stream for errors." to decide whether to prefer nested provider messages. If the AI SDK changes this message, the fallback would stop working. However, the consequence of this check failing is only that users would see the generic message -- the same behavior as before this PR. So this is a progressive enhancement, not a regression risk.

CI Status

| Check | Result |
| --- | --- |
| compile | PASS |
| test-extension (ubuntu) | PASS |
| test-extension (windows) | PASS |
| test-webview (ubuntu) | PASS |
| test-webview (windows) | PASS |
| test-cli | PASS |
| test-jetbrains | PASS |
| build-cli | PASS |
| check-translations | PASS |

Code Snippets

Error payload extraction priority chain:

const nestedMessage = responsePayload.message ?? nestedPayload.message
let msg = metadataRaw || nestedMessage || error.message || ""

// AI SDK can emit generic "No output generated" while nested payload has the real failure.
if (msg.includes(NO_OUTPUT_GENERATED_MESSAGE) && nestedMessage) {
    msg = nestedMessage
}

Before/after user experience (from Discord user @smetanokr):

// Before: "No output generated. Check the stream for errors."
// After:  "403 - moonshot streaming error: You've reached your usage limit..."

Verdict

APPROVE -- This is a well-targeted fix for a real user-reported issue. The stream error escalation correctly handles AI SDK error parts that were being silently dropped. The error handler hardening is thorough and preserves backwards compatibility. All CI checks pass. A Discord user confirmed the fix produces actionable error messages instead of generic ones. The misleading title is cosmetic and does not affect the code quality.

@jeremylongshore
Owner Author

Review Journal: kilocode Kilo-Org#5847

PR: #5847 | Title: Fix Kilo Quota | OpenRouter error handling and retry flow | Author: @Neonsy | Category: feature | Tier: 5 | Size: 335 lines, 5 files


Summary

Fixes silent error dropping in AI SDK stream processing that caused Moonshot quota errors to show as generic "No output generated" messages. Hardens the shared error handler to extract nested provider error payloads. Real user confirmed the fix via before/after screenshots. Approve.

First Impressions

Title is misleading (mentions OpenRouter/Kilo Quota) but the actual changes fix Moonshot stream errors and harden the shared error handler. The PR description is honest about the scope shift. The author explicitly notes this is not the OpenRouter quota fix.

What I Looked At

  • src/api/providers/openai-compatible.ts -- stream processing loop refactor with error handling
  • src/api/providers/utils/error-handler.ts -- substantial hardening (new helper functions, nested payload extraction)
  • src/api/providers/__tests__/moonshot.spec.ts -- regression tests for stream error parts and usage rejection
  • src/api/providers/utils/__tests__/error-handler.spec.ts -- tests for status preservation and payload extraction
  • Cross-referenced existing handleProviderError function on main
  • Reviewed Discord user screenshots confirming the fix

Analysis

The root cause chain:

  1. Moonshot uses OpenAICompatibleHandler which uses AI SDK's streamText
  2. When the provider returns a quota error, AI SDK emits a part.type === "error" event in fullStream
  3. The stream processing loop had no handler for error parts -- they were dropped
  4. await result.usage then rejects with generic "No output generated. Check the stream for errors."
  5. Task does not handle chunk.type === "error" in its stream switch either

The fix addresses both layers:

  • Stream level: Throws immediately on error parts, surfacing the real error
  • Error handler level: Digs into responseBody, data, cause to extract nested provider messages and status codes

The error handler additions are substantial (80 new lines of helper code) but well-structured:

  • extractErrorPayload() normalizes both string and object payloads
  • resolveErrorStatus() checks status, statusCode, and $metadata.httpStatusCode
  • getFirstNonEmptyString() is a clean priority-chain helper
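
A minimal sketch of the status-resolution order attributed to `resolveErrorStatus()` above (a simplified reconstruction using the field names listed in the review, not the actual implementation):

```typescript
// Resolve an HTTP status from the shapes different SDKs use:
// plain `status`, `statusCode`, or AWS-style `$metadata.httpStatusCode`.
function resolveErrorStatus(error: unknown): number | undefined {
	if (typeof error !== "object" || error === null) return undefined
	const e = error as Record<string, unknown>
	for (const candidate of [e.status, e.statusCode]) {
		if (typeof candidate === "number") return candidate
	}
	const metadata = e.$metadata
	if (typeof metadata === "object" && metadata !== null) {
		const code = (metadata as Record<string, unknown>).httpStatusCode
		if (typeof code === "number") return code
	}
	return undefined
}
```

Returning `undefined` rather than a default status keeps the retry/backoff logic honest: only a genuinely resolved 429 should trigger rate-limit handling.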

Verification

  • All CI checks pass
  • Discord user @smetanokr provided before/after screenshots showing:
    • Before: "No output generated. Check the stream for errors."
    • After: "403 - moonshot streaming error: You've reached your usage limit for this billing cycle..."
  • Error handler tests cover statusCode preservation, responseBody extraction, and AWS metadata fallback

Lessons Learned

  • AI SDK error stream parts (part.type === "error") are terminal events that must be handled explicitly -- they are not informational
  • Error normalization needs to dig multiple levels deep since different SDKs nest error details differently
  • The NO_OUTPUT_GENERATED_MESSAGE check is a practical workaround for AI SDK's generic error messages

Review methodology: AI PR Review Case Studies | Reviewed with GWI + Claude Code
