Conversation

@rayvoidx rayvoidx commented Jan 6, 2026

[FEATURE]: add self-reflection mechanism to improve code reliability #7047

Feature Description

To mitigate common AI coding errors such as overwrite conflicts and type mismatches, I propose adding a "Self-Reflection" mechanism to the main system prompt (anthropic.txt).

Problem

The current agent sometimes executes tool calls impulsively without verifying the context, leading to:

  • Overwrite Conflicts: Editing an outdated version of a file.
  • Type Errors: Breaking existing TypeScript definitions.
  • Over-engineering: Implementing complex solutions when simple ones exist.

Proposed Solution

Add a Critical Thinking & Self-Correction section to the system prompt that forces the agent to ask itself three critical questions before executing the Edit or Write tools (a sketch of the proposed text follows the list):

  1. "Did I read the latest version of the file?"
  2. "Does this change break existing types or tests?"
  3. "Is there a simpler way to do this?"

Expected Benefit

  • Reliability: Should reduce buggy code generation caused by acting on stale or unverified context.
  • Safety: Prevents accidental data loss from overwriting newer file versions.
  • Quality: Encourages simple, idiomatic solutions over over-engineered ones.

I have implemented this change in my fork and verified it locally. A PR will follow shortly.


github-actions bot commented Jan 6, 2026

The following comment was made by an LLM and may be inaccurate:

Duplicate PR Analysis

Based on my search, I found one potentially related PR (excluding the current PR #7037):

Related PR:

  • #5808 - feat: optimize gemini system prompt for autonomy and engineering excellence
    • Reason: This PR addresses system prompt optimization for better LLM performance, which is directly related to your current PR's focus on optimizing the system prompt for better reasoning.

Additionally, there are other LLM-related PRs that may be contextually relevant:

  • #4710 - feat: support ZAI token metadata, trigger compaction on idle sessions, add GLM system prompt - adds system prompt configurations for GLM
  • #5422 - feat(provider): add provider-specific cache configuration system - focuses on provider optimization (token usage reduction)

Recommendation: Check PR #5808 most closely to ensure your optimization approach doesn't duplicate existing work or conflict with Gemini's specific optimizations.
