t1251: Fix frequent stale-evaluating recovery — heartbeat, configurable timeout, PR fast-path #1952
marcusquinn merged 1 commit into main from
Conversation
…, and PR fast-path (t1251)

Root causes identified (73% stale rate):

1. `evaluate_with_ai()` had a hardcoded 60s timeout — too short under load, causing silent timeouts that left tasks stuck in the 'evaluating' state for the 120s grace period.
2. No heartbeat during AI eval — Phase 0.7 couldn't distinguish 'eval in progress' from 'eval died', triggering unnecessary recovery for active evaluations.
3. Phase 1 called AI eval even when the PR was already in the DB (t1245 early-persist), wasting 60-90s on an AI call whose answer was already known.

Fixes:

- `evaluate_with_ai()`: configurable timeout via `SUPERVISOR_EVAL_TIMEOUT` (default 90s).
- `evaluate_with_ai()`: heartbeat — updates `updated_at` at eval start so the Phase 0.7 grace window doesn't fire while evaluation is actively running.
- `evaluate_with_ai()`: logs eval start for better stale diagnosis.
- `_diagnose_stale_root_cause()`: detects `eval_in_progress_heartbeat_Ns` when `updated_at` is within the eval_timeout+30s window.
- Phase 0.7 + Phase 1c: skip recovery when `root_cause=eval_in_progress_heartbeat_*`.
- Phase 1: skip AI eval when the PR is already in the DB (t1251 fast-path, saves 60-90s).
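The timeout, heartbeat, and fast-path fixes above could be sketched roughly as below. This is an illustrative sketch only — the function signature, the `task`/`db` dict shapes, and the `pr_url`/`updated_at` field names are assumptions, not the repository's actual API.

```python
import os
import time

DEFAULT_EVAL_TIMEOUT = 90  # seconds, per the new default


def get_eval_timeout() -> int:
    """Read SUPERVISOR_EVAL_TIMEOUT from the environment, defaulting to 90s."""
    try:
        return int(os.environ.get("SUPERVISOR_EVAL_TIMEOUT", DEFAULT_EVAL_TIMEOUT))
    except ValueError:
        return DEFAULT_EVAL_TIMEOUT


def evaluate_with_ai(task: dict, db: dict, ai_eval) -> str:
    """Evaluate a task, with the t1251 fast-path and start-of-eval heartbeat."""
    row = db[task["id"]]
    # t1251 fast-path: if a PR was already persisted (t1245 early-persist),
    # skip the 60-90s AI call whose answer is already known.
    if row.get("pr_url"):
        return row["pr_url"]
    timeout = get_eval_timeout()
    # Heartbeat: touch updated_at at eval start so the Phase 0.7 grace
    # window sees a fresh timestamp and does not recover an active eval.
    row["updated_at"] = time.time()
    # Log eval start for better stale diagnosis.
    print(f"[eval] task {task['id']}: starting AI eval (timeout={timeout}s)")
    return ai_eval(task, timeout)
```

The heartbeat write is deliberately done before the AI call, so even a call that runs to its full timeout stays inside the grace window.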
Summary of Changes

This pull request significantly improves the robustness and efficiency of the supervisor's task evaluation process by addressing the root causes of frequent "stale-evaluating" recoveries. It introduces a configurable timeout for AI evaluations, implements a heartbeat mechanism to prevent premature recovery of active evaluations, and optimizes the evaluation flow by skipping AI processing when a pull request has already been persisted. These changes aim to reduce false positives in stale task detection and streamline the overall task lifecycle.
Code Review
This pull request introduces several important fixes to address tasks getting stuck in an 'evaluating' state. The changes include making the evaluation timeout configurable, adding a heartbeat mechanism to signal that an evaluation is still active, and adding a fast-path to skip AI evaluation if a PR already exists. The changes are well-commented and align with the stated goals.



WIP - incremental commits