Automated sync from stranske/Workflows
Template hash: `28ebb03f8da4`
Changes synced from `sync-manifest.yml`
🤖 Keepalive Loop Status
PR #312 | Agent: Codex | Iteration 0/5

Current State

🔍 Failure Classification: error type `infrastructure`

Keepalive Work Log
Pull request overview
This PR syncs three workflow templates from the stranske/Workflows source repository. The main changes are a PR health scanner workflow, a LangChain dependency pin adjustment, and a refactor of the langchain_client.py import strategy.
Changes:
- Adds a new `agents-pr-health.yml` workflow that periodically scans open PRs for merge conflicts and failing checks, delegating to a reusable workflow.
- Downgrades `langchain-core` from `1.2.15` to `1.2.14` in the workflow-specific LLM dependency pins.
- Refactors `tools/langchain_client.py` to hoist provider imports to the top of `build_chat_client` and `build_chat_clients`, removing repeated lazy-import blocks and removing the `missing_provider_dependencies`/`get_provider_model_catalog` helpers (which still exist in `src/counter_risk/chat/providers/langchain_runtime.py`).
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `.github/workflows/agents-pr-health.yml` | New thin-wrapper workflow that schedules hourly PR health scans via the reusable upstream workflow |
| `tools/requirements-llm.txt` | Downgrades `langchain-core` pin from 1.2.15 → 1.2.14 for workflow LLM installs |
| `tools/langchain_client.py` | Removes duplicate lazy-import blocks and deleted helper functions; hoists `ChatOpenAI`/`ChatAnthropic` imports to function entry points |
```python
try:
    from langchain_openai import ChatOpenAI
except ImportError:
    return []
```
The same behavioral regression from `build_chat_client` applies here: `build_chat_clients` unconditionally imports `langchain_openai` at the top of the function and returns an empty list on `ImportError` (line 337), even when the caller explicitly selects `PROVIDER_ANTHROPIC`. If `langchain-openai` is not installed, Anthropic-only builds are silently broken.
Suggested change:
```diff
-    return []
+    ChatOpenAI = None  # noqa: N806
```
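The suggested pattern — binding `ChatOpenAI` to `None` instead of returning early — can be sketched as follows. The function body below is hypothetical (the real `build_chat_clients` signature and provider handling are not shown in this review); only the import-guard shape comes from the suggestion:

```python
def build_chat_clients(providers, model, api_key):
    # Suggested pattern: on ImportError, keep going instead of returning [],
    # so non-OpenAI providers can still produce clients.
    try:
        from langchain_openai import ChatOpenAI
    except ImportError:
        ChatOpenAI = None  # noqa: N806

    clients = []
    for provider in providers:
        if provider == "openai":
            if ChatOpenAI is None:
                # Skip OpenAI slots when langchain-openai is missing,
                # rather than failing the whole build.
                continue
            clients.append(ChatOpenAI(model=model, api_key=api_key))
        # ...other providers would lazy-import their own packages here.
    return clients
```

With this shape, a missing `langchain-openai` only affects OpenAI slots; other providers are unaffected.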
```python
try:
    from langchain_openai import ChatOpenAI
except ImportError:
    return None
```
The refactored `build_chat_client` function now unconditionally imports `langchain_openai` at the top of the function and returns `None` immediately on `ImportError` (line 217). This is a behavioral regression: in the previous code, when the Anthropic provider was explicitly selected, the function only imported `langchain_anthropic` and never needed `langchain_openai`. After this change, if `langchain-openai` is not installed, even a purely Anthropic-targeted call will silently fail by returning `None` instead of successfully building an Anthropic client.

The same regression affects `build_chat_clients`, which returns `[]` on `ImportError` for `langchain_openai` (line 337).

The fix is to move the `langchain_openai` import inside the code paths that actually need it (`PROVIDER_OPENAI`, `PROVIDER_GITHUB`, and the slot fallback for those providers), rather than making it a function-level gate for all providers.
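A minimal sketch of that fix, with each provider path lazy-importing only its own package. The constant values and the `build_chat_client` body below are assumptions for illustration — the review only names the function and the `PROVIDER_*` constants:

```python
# Hypothetical sketch: per-provider lazy imports instead of a
# function-level langchain_openai gate.
PROVIDER_OPENAI = "openai"
PROVIDER_ANTHROPIC = "anthropic"
PROVIDER_GITHUB = "github"


def build_chat_client(provider, model, api_key):
    if provider in (PROVIDER_OPENAI, PROVIDER_GITHUB):
        try:
            from langchain_openai import ChatOpenAI
        except ImportError:
            return None  # only OpenAI-backed paths need langchain-openai
        return ChatOpenAI(model=model, api_key=api_key)

    if provider == PROVIDER_ANTHROPIC:
        try:
            from langchain_anthropic import ChatAnthropic
        except ImportError:
            return None  # Anthropic path fails only if *its* package is missing
        return ChatAnthropic(model=model, api_key=api_key)

    return None  # unknown provider
```

With the imports scoped this way, an Anthropic-only call never touches `langchain_openai`, so a missing `langchain-openai` install no longer breaks Anthropic builds.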
Suggested change:
```diff
-    return None
+    ChatOpenAI = None  # noqa: N806
```
Sync Summary
Files Updated
Files Skipped
Review Checklist
Source: stranske/Workflows
Manifest: `.github/sync-manifest.yml`