feat(retry): add configurable delay and retry limits #4704
dannycreations wants to merge 6 commits into Kilo-Org:main
Conversation
🦋 Changeset detected. Latest commit: 389a4ec. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
Thanks @dannycreations! Due to the merge from upstream there are some merge conflicts. I will resolve those and review soon.
@kevinvandijk Done pushing the new code; already tested, all working.
@marius-kilocode Done: implemented fixes for every reported problem. I also removed the other strategies and made "constant" the default, since users can now change the retry delay themselves. I didn't touch any CLI code; it falls back to the defaults.
- Add `alwaysApproveResubmit`, `requestDelaySeconds`, and `requestRetryMax` to global settings and state.
- Update `Task` logic to respect maximum retry limits and use configurable delays instead of hardcoded exponential backoff.
- Integrate retry configuration sliders and toggles into the Auto-Approve settings UI.
- Add comprehensive unit tests for auto-retry logic, delay calculations, and state constraints.
- Update localization files for all supported languages with new retry-related labels and descriptions.
Code Review Summary

Status: 6 Issues Found (from prior reviews) | Recommendation: Address before merge

Issue Details: CRITICAL and WARNING findings are in the inline comments below.
Other Observations (not in diff): No additional issues found outside the diff.
Files Reviewed: 30 files
Co-authored-by: kiloconnect[bot] <240665456+kiloconnect[bot]@users.noreply.github.com>
```ts
	startTask: false,
})

const delay = (task as any).backoffAndAnnounce(1, new Error("test"))
```
WARNING: Test has no assertions — this test calls backoffAndAnnounce but never awaits the result and contains zero expect() calls. It will always pass regardless of actual behavior.
The comment on line 79 acknowledges this limitation. Consider either:
- Properly mocking `delay` and awaiting the result to verify the delay value
- Removing this test until it can be made meaningful (a test with no assertions gives false confidence in coverage)
```ts
	requestRetryMax: 2
}

const shouldRetry = (attempt: number) =>
```
WARNING: Tests re-implement retry logic locally instead of testing the actual Task class — the shouldRetry function defined here duplicates the condition from Task.ts rather than exercising the real code path. If the logic in Task.ts diverges from this local copy, these tests will still pass while the actual behavior is broken.
Consider testing the real Task methods (e.g., verifying that attemptApiRequest or the streaming loop actually stops retrying when requestRetryMax is exceeded) rather than testing a standalone reimplementation of the condition.
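A minimal, self-contained sketch of the pattern the reviewer suggests: drive the real retry loop and assert on observable behavior (how many attempts were made), rather than re-deriving the condition inside the test. Since the actual `Task` wiring is not shown here, `runWithRetries` is an illustrative stand-in for that loop; all names below are assumptions.

```typescript
// Stand-in for the real retry loop in Task.ts (illustrative only):
// keep calling `request` until it succeeds or the retry limit is hit.
// retryMax === 0 means unlimited retries, matching the reviewed condition.
async function runWithRetries<T>(request: () => Promise<T>, retryMax: number): Promise<T> {
	let attempt = 0
	for (;;) {
		try {
			return await request()
		} catch (err) {
			attempt++
			// Same shape as the condition under review; the test below
			// exercises this code path instead of duplicating it locally.
			if (!(retryMax === 0 || attempt < retryMax)) throw err
		}
	}
}

// The test observes behavior (call count) instead of copying the condition:
async function testStopsAtRetryMax(): Promise<number> {
	let calls = 0
	try {
		await runWithRetries(async () => {
			calls++
			throw new Error("boom")
		}, 2)
	} catch {
		// expected: limit reached, error propagated
	}
	return calls
}
```

If the condition in `Task.ts` later diverges, a test written this way fails, whereas a local reimplementation would keep passing.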
```ts
	stateForBackoff?.autoApprovalEnabled &&
	stateForBackoff?.alwaysApproveResubmit && // kilocode_change
	(retryMax === 0 || (currentItem.retryAttempt ?? 0) < retryMax)
) {
```
CRITICAL: The retryMax check here only gates the backoff delay — it does NOT gate the actual retry. Lines 3619–3627 (stack.push + continue) execute unconditionally outside this if block, meaning when retryMax is reached, the backoff is skipped but the retry still happens. This creates an infinite retry loop without any delay.
The stack.push + continue block should be moved inside this if, with an else branch that either breaks or prompts the user (similar to the empty-assistant-response path at line 3965 which correctly uses if/else).
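A sketch of the suggested fix, under the assumption that the backoff and the re-queue must be gated by one shared check. `decideRetry` is a hypothetical helper naming the decision the `if`/`else` should encode; `retryMax`, `retryAttempt`, and the zero-means-unlimited convention come from the diff above.

```typescript
// Hypothetical helper: a single decision governs both the backoff and the
// retry, so reaching retryMax can never fall through into a delay-free loop.
type RetryDecision = "retry" | "abort"

function decideRetry(retryAttempt: number, retryMax: number, autoRetry: boolean): RetryDecision {
	const underLimit = retryMax === 0 || retryAttempt < retryMax // 0 = unlimited
	return autoRetry && underLimit ? "retry" : "abort"
}

// Caller sketch (mirrors the suggested if/else shape):
//   if (decideRetry(attempt, retryMax, autoRetry) === "retry") {
//       await backoffAndAnnounce(attempt, error)
//       stack.push({ ...currentItem, retryAttempt: attempt + 1 })
//       continue
//   } else {
//       // break, or prompt the user as the empty-assistant-response path does
//   }
```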
```diff
-	// Shared exponential backoff for retries (first-chunk and mid-stream)
+	// Shared backoff for retries (first-chunk and mid-stream)
	private async backoffAndAnnounce(retryAttempt: number, error: any): Promise<void> {
```
WARNING: The retryAttempt parameter is now unused in this method body. It was previously used for the exponential backoff calculation (Math.pow(2, retryAttempt)), but since the delay is now constant, nothing references it. Consider removing it to avoid confusion.
Also, the comment on line 4925 ("Show countdown timer with exponential backoff") and the error message on line 4938 ("Exponential backoff failed") are stale — the method no longer uses exponential backoff.
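A sketch of the cleanup this comment asks for, assuming the delay is now a constant derived from `requestDelaySeconds`: the unused `retryAttempt` parameter is dropped and the stale "exponential" wording is corrected. `TaskSketch` is a simplified stand-in, not the real `Task`; returning the computed delay is only for illustration.

```typescript
// Simplified stand-in for Task (illustrative): constant-delay backoff with
// the unused retryAttempt parameter removed and comments updated to match.
class TaskSketch {
	constructor(private requestDelaySeconds: number) {}

	// Shared constant-delay backoff for retries (first-chunk and mid-stream)
	async backoffAndAnnounce(error: unknown): Promise<number> {
		// Show countdown timer with a constant delay (no longer exponential)
		const delayMs = this.requestDelaySeconds * 1000
		await new Promise((resolve) => setTimeout(resolve, delayMs))
		return delayMs
	}
}
```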
```ts
const delay = (task as any).backoffAndAnnounce(1, new Error("test"))
// We can't easily await this because it has a loop with delay()
// but we can check the internal logic if we expose it or mock delay better
})
```
WARNING: No-op test — this test creates a Task and calls backoffAndAnnounce but never awaits the returned promise and has no assertions. The comment acknowledges this limitation but the test still passes vacuously, giving false confidence in coverage.
Consider either:
- Awaiting the promise and asserting on the mocked `delay` calls
- Removing this test until it can be properly implemented
- At minimum, adding `await` and an assertion on the mock
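A sketch of that minimal fix, assuming `delay` can be replaced with a recording stub: await the call, then assert on what was recorded. The `backoffAndAnnounce` here is a stand-in mirroring the constant-delay behavior in the diff; a real test would inject the stub through the suite's existing mocking setup.

```typescript
// Recording stub in place of the real delay(): records every requested wait.
const recordedDelays: number[] = []
const delay = async (ms: number): Promise<void> => {
	recordedDelays.push(ms)
}

// Stand-in for the Task under test (illustrative): constant delay from config.
const task = {
	requestDelaySeconds: 5,
	async backoffAndAnnounce(_retryAttempt: number, _error: unknown): Promise<void> {
		await delay(this.requestDelaySeconds * 1000)
	},
}

// The await and assertion the original test was missing:
await task.backoffAndAnnounce(1, new Error("test"))
if (recordedDelays[0] !== 5000) {
	throw new Error("expected a 5000ms delay, got " + recordedDelays[0])
}
```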
Implementation
Screenshots