Mirror: Fix Azure/OpenAI endpoints, reject AI Inference URLs (#5860) #42
jeremylongshore wants to merge 1 commit into main
Conversation
…odex and reject Azure AI Inference URLs

Normalize Responses base URLs for Azure/OpenAI fallback paths. Use Azure auth/query behavior (api-key + api-version) for fallback requests. Add openai-responses config validation and targeted tests.
Review Summary by Qodo

Fix Azure/OpenAI Responses endpoints and reject AI Inference URLs
Walkthrough

Description

• Fix Azure/OpenAI Responses endpoint handling with proper URL normalization
• Reject unsupported Azure AI Inference URLs (*.services.ai.azure.com)
• Add configuration validation for openai-responses provider
• Implement fallback auth using api-key for Azure and Bearer token for standard endpoints

Diagram

```mermaid
flowchart LR
    A["Azure/OpenAI Endpoints"] --> B["URL Normalization"]
    B --> C["Remove api-version params"]
    B --> D["Normalize /v1 paths"]
    E["Azure AI Inference URLs"] --> F["Reject with Error"]
    G["Fallback Request Handler"] --> H["Use api-key for Azure"]
    G --> I["Use Bearer token for standard"]
    J["Config Validation"] --> K["Require baseUrl for responses"]
```
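The fallback auth split in the last bullet can be sketched as a small helper. This is a hedged illustration only: the function name `buildFallbackHeaders` and the exact header set are assumptions, not the PR's actual code.

```typescript
// Hedged sketch of the fallback auth behavior described above.
// buildFallbackHeaders is a hypothetical name; the PR may set these headers inline.
function buildFallbackHeaders(apiKey: string, isAzureOpenAiEndpoint: boolean): Record<string, string> {
	const headers: Record<string, string> = { "Content-Type": "application/json" }
	if (isAzureOpenAiEndpoint) {
		// Azure OpenAI authenticates with the api-key header
		headers["api-key"] = apiKey
	} else {
		// Standard OpenAI-compatible endpoints use a Bearer token
		headers.Authorization = `Bearer ${apiKey}`
	}
	return headers
}
```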
File Changes

1. src/api/providers/openai-responses.ts
Code Review by Qodo
1. Unmarked Responses tests additions
```typescript
it("normalizes fallback URL without duplicating /v1", async () => {
	const handler = new OpenAiCompatibleResponsesHandler({
		openAiApiKey: "test-key",
		openAiBaseUrl: "https://api.example.com/v1",
		openAiModelId: "gpt-4o",
	} satisfies ApiHandlerOptions)

	const mockFetch = vi.fn().mockResolvedValue({
		ok: true,
		body: new ReadableStream({
			start(controller) {
				controller.enqueue(new TextEncoder().encode("data: [DONE]\n\n"))
				controller.close()
			},
		}),
	})
	global.fetch = mockFetch as any
	mockResponsesCreate.mockRejectedValue(new Error("SDK not available"))

	const stream = handler.createMessage(systemPrompt, messages)
	for await (const _chunk of stream) {
	}

	expect(mockFetch).toHaveBeenCalledWith(
		"https://api.example.com/v1/responses",
		expect.objectContaining({
			method: "POST",
		}),
	)
})

it("rejects Azure AI Inference endpoints for Responses API", async () => {
	const handler = new OpenAiCompatibleResponsesHandler({
		openAiApiKey: "test-key",
		openAiBaseUrl: "https://myresource.services.ai.azure.com/models",
		openAiModelId: "gpt-5.2-codex",
	} satisfies ApiHandlerOptions)

	const stream = handler.createMessage(systemPrompt, messages)

	await expect(async () => {
		for await (const _chunk of stream) {
		}
	}).rejects.toThrow("Azure AI Inference endpoints")

	await expect(handler.completePrompt("Test prompt")).rejects.toThrow("Azure AI Inference endpoints")
})
```
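The first test's expectation implies the fallback target is simply the normalized base URL with `/responses` appended. An illustrative derivation (not code from the PR):

```typescript
// Illustration of the expected fallback target: the normalized base URL
// plus "/responses", with no duplicated "/v1" segment.
const base = "https://api.example.com/v1"
const target = `${base.replace(/\/$/, "")}/responses`
console.log(target) // https://api.example.com/v1/responses
```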
1. Unmarked responses tests additions 📘 Rule violation ⛯ Reliability
New/modified tests in a shared upstream src/ test file were added without kilocode_change markers even though they validate newly introduced Kilo-specific behavior. This can cause avoidable conflicts during upstream syncs.
Agent Prompt
## Issue description
Kilo-specific edits in shared upstream files under `src/` must include `kilocode_change` markers. The PR adds new test cases for Responses/Azure endpoint normalization and rejection without markers.
## Issue Context
Other modified shared files in this PR already use `kilocode_change` markers, so leaving related tests unmarked is inconsistent and increases upstream sync friction.
## Fix Focus Areas
- src/api/providers/__tests__/openai-responses.spec.ts[105-152]
```typescript
private normalizeResponsesBaseUrl(baseUrl?: string): string {
	const defaultBaseUrl = this.isAzureOpenAiEndpoint
		? "https://api.openai.azure.com/openai/v1"
		: "https://api.openai.com/v1"

	if (!baseUrl) {
		return defaultBaseUrl
	}

	try {
		const parsed = new URL(baseUrl)
		parsed.search = ""
		parsed.hash = ""

		let pathname = parsed.pathname.replace(/\/+$/, "")
		pathname = pathname.replace(/\/(chat\/completions|completions|responses)$/, "")

		if (this.isAzureOpenAiEndpoint) {
			if (/\/openai\/deployments\/[^/]+$/.test(pathname)) {
				pathname = "/openai/v1"
			} else if (pathname === "" || pathname === "/") {
				pathname = "/openai/v1"
			} else if (pathname === "/v1") {
				pathname = "/openai/v1"
			} else if (pathname === "/openai") {
				pathname = "/openai/v1"
			} else if (pathname.endsWith("/openai")) {
				pathname = `${pathname}/v1`
			} else if (!pathname.endsWith("/v1")) {
				pathname = `${pathname}/v1`
			}
		} else {
			if (pathname === "" || pathname === "/") {
				pathname = "/v1"
			} else if (!pathname.endsWith("/v1")) {
				pathname = `${pathname}/v1`
			}
		}

		parsed.pathname = pathname
		return parsed.toString().replace(/\/$/, "")
	} catch {
		return defaultBaseUrl
	}
}
```
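Extracted as a standalone function for illustration (the class method above adapted to a free function, with the redundant `/openai` branch folded into the generic `/v1` append, which produces the same results), the normalization behaves like this:

```typescript
// Free-function adaptation of normalizeResponsesBaseUrl above, for illustration only.
function normalizeResponsesBaseUrl(baseUrl: string | undefined, isAzure: boolean): string {
	const defaultBaseUrl = isAzure ? "https://api.openai.azure.com/openai/v1" : "https://api.openai.com/v1"
	if (!baseUrl) return defaultBaseUrl
	try {
		const parsed = new URL(baseUrl)
		// Drop query (api-version) and fragment.
		parsed.search = ""
		parsed.hash = ""
		let pathname = parsed.pathname.replace(/\/+$/, "")
		// Strip operation suffixes users often paste in.
		pathname = pathname.replace(/\/(chat\/completions|completions|responses)$/, "")
		if (isAzure) {
			if (/\/openai\/deployments\/[^/]+$/.test(pathname)) pathname = "/openai/v1"
			else if (pathname === "" || pathname === "/" || pathname === "/v1" || pathname === "/openai")
				pathname = "/openai/v1"
			else if (!pathname.endsWith("/v1")) pathname = `${pathname}/v1`
		} else {
			if (pathname === "" || pathname === "/") pathname = "/v1"
			else if (!pathname.endsWith("/v1")) pathname = `${pathname}/v1`
		}
		parsed.pathname = pathname
		return parsed.toString().replace(/\/$/, "")
	} catch {
		return defaultBaseUrl
	}
}

// A deployment-style Azure URL collapses to /openai/v1 and drops api-version:
console.log(
	normalizeResponsesBaseUrl(
		"https://myresource.openai.azure.com/openai/deployments/my-deployment/chat/completions?api-version=2024-05-01-preview",
		true,
	),
) // https://myresource.openai.azure.com/openai/v1

// An already-normalized standard URL passes through unchanged:
console.log(normalizeResponsesBaseUrl("https://api.example.com/v1", false)) // https://api.example.com/v1
```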
2. Silent baseurl fallback 🐞 Bug ⛨ Security
If openAiBaseUrl is present but not a valid absolute URL, normalizeResponsesBaseUrl() silently returns a default host, so the fetch fallback may POST prompts to an unexpected endpoint instead of failing fast on misconfiguration. This is particularly risky in environments relying on explicit egress controls/allowed endpoints.
Agent Prompt
### Issue description
`normalizeResponsesBaseUrl()` silently falls back to a hardcoded default host when `openAiBaseUrl` cannot be parsed as a URL. This can cause the fetch fallback path to send requests to an unintended endpoint instead of surfacing a clear configuration error.
### Issue Context
The fallback target is created by `getResponsesFallbackTarget()`, which always uses `normalizeResponsesBaseUrl()` to construct the `fetch()` URL.
### Fix Focus Areas
- src/api/providers/openai-responses.ts[557-574]
- src/api/providers/openai-responses.ts[576-619]
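One possible remediation, sketched here under stated assumptions (the helper name `parseConfiguredBaseUrl` is hypothetical, not code from the PR): parse the configured URL eagerly and surface a configuration error instead of substituting a default host.

```typescript
// Hypothetical fail-fast variant: raise a clear configuration error rather than
// silently falling back to a hardcoded default host when the URL is invalid.
function parseConfiguredBaseUrl(baseUrl: string): URL {
	try {
		return new URL(baseUrl)
	} catch {
		throw new Error(
			`Invalid openAiBaseUrl: "${baseUrl}". Expected an absolute URL such as ` +
				"https://<resource>.openai.azure.com/openai/v1",
		)
	}
}
```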
@greptileai review

Merged into
Review: kilocode Kilo-Org#5860
Checklist
Findings

1. Azure AI Inference endpoint rejection is a good guard (severity: gray)

The PR adds an explicit check that rejects these endpoints:

```typescript
private assertSupportedResponsesEndpoint(): void {
	if (this.isAzureAiInferenceEndpoint) {
		throw new Error(
			"Azure AI Inference endpoints (*.services.ai.azure.com) are not supported by OpenAI Compatible (Responses)..."
		)
	}
}
```

This is called at three entry points.

2. URL normalization handles many edge cases (severity: gray)
This is necessarily complex because Azure users provide URLs in many different formats. The test coverage validates the important cases.

3. api-version parameter correctly omitted for /openai/v1 paths (severity: gray)

```typescript
private shouldAppendAzureApiVersion(url: URL): boolean {
	const pathname = url.pathname.replace(/\/+$/, "")
	return !pathname.includes("/openai/v1")
}
```

Azure's `/openai/v1` paths do not take an `api-version` query parameter.

4. Removed Azure AI Inference client construction from constructor (severity: yellow)

The original constructor had three branches: Azure AI Inference, Azure OpenAI, and standard OpenAI. The PR removes the Azure AI Inference branch entirely and converts it to a runtime rejection.

5. Fallback auth correctly separates Azure vs non-Azure (severity: gray)
```typescript
if (this.isAzureOpenAiEndpoint) {
	headers["api-key"] = apiKey
	// Conditionally add api-version
} else {
	headers.Authorization = `Bearer ${apiKey}`
}
```

Azure OpenAI uses the `api-key` header, while standard endpoints use `Authorization: Bearer`.

6. webview validation added for openai-responses (severity: gray)
```typescript
case "openai-responses":
	if (!apiConfiguration.openAiBaseUrl || !apiConfiguration.openAiApiKey || !apiConfiguration.openAiModelId) {
		return i18next.t("settings:validation.openAi")
	}
	break
```

This is a good addition.

7. 404 error message improvement for Azure users (severity: gray)

The 404 error message now includes Azure-specific guidance:
```typescript
errorMessage = this.isAzureOpenAiEndpoint
	? "...use a base URL like https://<resource>.openai.azure.com/openai/v1..."
	: "...The endpoint may not be available yet..."
if ((this.options.openAiBaseUrl || "").includes("/deployments/")) {
	errorMessage += " Do not use a /deployments/.../chat/completions URL as the base URL."
}
```

This directly addresses a common misconfiguration pattern.

CI Status
Code Snippets

URL normalization for Azure deployment paths:

```typescript
if (/\/openai\/deployments\/[^/]+$/.test(pathname)) {
	pathname = "/openai/v1"
} else if (pathname === "" || pathname === "/") {
	pathname = "/openai/v1"
} else if (pathname === "/v1") {
	pathname = "/openai/v1"
} else if (pathname === "/openai") {
	pathname = "/openai/v1"
}
```

Test validates correct normalization for a complex Azure URL:

```typescript
// Input:  https://myresource.openai.azure.com/openai/deployments/my-deployment/chat/completions?api-version=2024-05-01-preview
// Output: https://myresource.openai.azure.com/openai/v1/responses (no api-version)
```

Verdict

APPROVE -- This PR correctly fixes Azure URL handling in the OpenAI Responses provider. The URL normalization handles the many variations of Azure endpoint formats that users provide. The Azure AI Inference rejection fails fast with a clear message instead of silently sending requests to an unsupported endpoint. Auth header separation is correct. All CI checks pass and a Discord user confirmed the fix works with Microsoft Foundry deployments. The test coverage is targeted and validates the critical paths.
Review Journal: kilocode Kilo-Org#5860
Summary

Fixes Azure URL normalization in the OpenAI Responses provider and rejects unsupported Azure AI Inference endpoints. Handles the many URL format variations users provide for Azure deployments. Community user confirmed the fix works with Microsoft Foundry. Approve.

First Impressions

Azure endpoint handling is notoriously complex due to the multiple URL formats (deployment paths, cognitive services, v1 paths, etc.). The PR description clearly explains the endpoint classification and normalization approach. The test coverage at 153 lines covers the critical URL transformations.

What I Looked At
Analysis

The PR makes three categories of changes:

1. Endpoint classification and rejection

Two boolean flags are set in the constructor:
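The flag computation itself is not reproduced here. A hedged sketch of hostname-based classification consistent with the behavior the review describes (the function name and exact matching rules are assumptions):

```typescript
// Hedged sketch: classify an endpoint by hostname, as the review describes.
// The provider's real flags (isAzureOpenAiEndpoint, isAzureAiInferenceEndpoint)
// may be computed differently.
function classifyEndpoint(baseUrl: string): "azure-openai" | "azure-ai-inference" | "standard" {
	const host = new URL(baseUrl).hostname
	if (host.endsWith(".services.ai.azure.com")) return "azure-ai-inference" // rejected by the PR
	if (host.endsWith(".openai.azure.com")) return "azure-openai"
	return "standard"
}
```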
The Azure AI Inference path was previously handled by creating a client with a specific path override. The PR removes this in favor of an immediate rejection since the Responses API is not supported on these endpoints. This is the correct approach -- failing fast with a clear message is better than partially working.

2. URL normalization

3. Auth header separation

The fallback fetch path now correctly uses `api-key` for Azure endpoints and a `Bearer` token for standard endpoints.

Verification
Lessons Learned
Review methodology: AI PR Review Case Studies | Reviewed with GWI + Claude Code
Mirror of Kilo-Org#5860
This PR mirrors the upstream change for multi-AI review analysis.
Bot Review Tracker
Links