
Conversation

@Skyline-23 Skyline-23 commented Jan 23, 2026

What does this PR do?

How did you verify your code works?

Copilot AI review requested due to automatic review settings January 23, 2026 09:53
@github-actions
Contributor

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add Fixes #<number> or Closes #<number> to this PR description

See CONTRIBUTING.md for details.

@github-actions
Contributor

The following comment was made by an LLM; it may be inaccurate:

Potential Duplicate/Related PRs Found:

  1. PR #10123 - "feat(session): add custom compaction thresholds along with prevention of continuous compaction"

    • Related to token overflow handling and compaction triggering mechanisms
  2. PR #8810 - "feat(opencode): add context overflow prevention with configurable thresholds"

    • Addresses context overflow prevention with thresholds
  3. PR #6562 - "fix(session): prevent context overflow by adding safety margin to compaction check"

    • Fixes overflow prevention in compaction logic
  4. PR #9656 - "feat(opencode): trigger compaction earlier and add multi-file read"

    • Related to compaction triggering conditions
  5. PR #7104 - "feat(compaction): Hybrid compaction pipeline with deterministic extraction + LLM"

    • Core compaction pipeline implementation

These PRs all relate to token management, compaction triggering, and overflow prevention. PR #10123 seems most closely related as it deals with custom compaction thresholds and token overflow. Recommend reviewing these to ensure PR #10215's token accumulation logic integrates properly with existing compaction mechanisms.

Contributor

Copilot AI left a comment


Pull request overview

This pull request fixes token accumulation in the OpenCode session processor to properly track cumulative token usage across multi-step loops and enable correct overflow detection for compaction triggers.

Changes:

  • Accumulate step-level token usage instead of overwriting per-step values in the assistant message
  • Use cumulative tokens for overflow checks so that compaction triggers correctly in multi-step scenarios (see the sketch below)
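
To make the two changes concrete, here is a minimal TypeScript sketch of the accumulate-then-check pattern described above. The type names (`TokenUsage`, `AssistantMessage`), the helper names, and the 0.9 threshold are illustrative assumptions, not OpenCode's actual types or defaults:

```typescript
// Hypothetical shapes standing in for OpenCode's session/message types.
interface TokenUsage {
  input: number
  output: number
  reasoning: number
}

interface AssistantMessage {
  tokens: TokenUsage
}

// Accumulate each step's usage into the assistant message instead of
// overwriting it, so the running total survives the multi-step loop.
function addStepUsage(message: AssistantMessage, step: TokenUsage): void {
  message.tokens.input += step.input
  message.tokens.output += step.output
  message.tokens.reasoning += step.reasoning
}

// The overflow check compares the cumulative total (not just the last
// step's usage) against a fraction of the model's context window.
function shouldCompact(message: AssistantMessage, contextLimit: number, threshold = 0.9): boolean {
  const total = message.tokens.input + message.tokens.output + message.tokens.reasoning
  return total > contextLimit * threshold
}
```

The design point is that `shouldCompact` sees the running total for the whole loop, so a session that only overflows across several steps still triggers compaction rather than slipping past a per-step check.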




Development

Successfully merging this pull request may close these issues.

Codex model response latency increases significantly as conversation grows

1 participant