
fix(gemini): propagate safety filtering info for Vertex AI and Gemini responses#2590

Open
astralhpi wants to merge 1 commit into maximhq:main from watnee-ai:fix/gemini-vertex-safety-filtering

Conversation

@astralhpi

Summary

Gemini/Vertex AI safety filtering info (finishReason: SAFETY, promptFeedback, etc.) was silently dropped during response conversion — clients always saw finish_reason: "stop" instead of "content_filter".

Changes

  • Add Vertex AI-specific finish reasons (MODEL_ARMOR, IMAGE_PROHIBITED_CONTENT, IMAGE_RECITATION, IMAGE_OTHER, NO_IMAGE) to type definitions and mapping tables
  • Check promptFeedback before falling back to MalformedFunctionCall when candidates are empty (Chat non-stream + stream)
  • Map finishReason → StopReason in the Responses non-stream path (was completely missing)
  • Emit response.completed in Responses stream path even when no content was emitted before filtering
  • Forward finishMessage as error text in Responses non-stream path
  • NO_IMAGE maps to "stop" not "content_filter" — it's a generation failure, not safety
  • Streaming promptFeedback uses actual BlockReason with SAFETY fallback for unknown values
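
The mapping behavior listed above can be sketched as a small, self-contained Go snippet. The constant and map names mirror the PR description, but they are illustrative; the exact Bifrost definitions live in core/providers/gemini/types.go and utils.go:

```go
package main

import "fmt"

// FinishReason mirrors the provider-side finish reason strings.
type FinishReason string

// Illustrative subset of the Vertex AI-specific reasons added in this PR.
const (
	FinishReasonModelArmor             FinishReason = "MODEL_ARMOR"
	FinishReasonImageProhibitedContent FinishReason = "IMAGE_PROHIBITED_CONTENT"
	FinishReasonImageRecitation        FinishReason = "IMAGE_RECITATION"
	FinishReasonImageOther             FinishReason = "IMAGE_OTHER"
	FinishReasonNoImage                FinishReason = "NO_IMAGE"
)

// geminiFinishReasonToBifrost sketches the mapping table: safety-related
// reasons become "content_filter", while NO_IMAGE is a generation failure
// rather than a safety block and therefore maps to "stop".
var geminiFinishReasonToBifrost = map[FinishReason]string{
	FinishReasonModelArmor:             "content_filter",
	FinishReasonImageProhibitedContent: "content_filter",
	FinishReasonImageRecitation:        "content_filter",
	FinishReasonImageOther:             "content_filter",
	FinishReasonNoImage:                "stop",
}

func main() {
	fmt.Println(geminiFinishReasonToBifrost[FinishReasonModelArmor]) // content_filter
	fmt.Println(geminiFinishReasonToBifrost[FinishReasonNoImage])    // stop
}
```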

Verified against a real Gemini 2.5 Pro API response where content is nil (not an empty object).

571 lines changed in total; ~450 are tests and ~120 are production code.

Type of change

  • Bug fix
  • Feature
  • Refactor
  • Documentation
  • Chore/CI

Affected areas

  • Core (Go)
  • Transports (HTTP)
  • Providers/Integrations
  • Plugins
  • UI (Next.js)
  • Docs

How to test

cd core
go test ./providers/gemini/ -run "TestChatSafetyFilteringGap|TestRealAPISafetyResponse|TestResponsesSafetyFilteringGap|TestEmptyCandidatesRegression" -v

19 tests, including the existing PR #1018 regression suite.

Breaking changes

  • Yes
  • No

finish_reason changes from "stop" to "content_filter" for safety-filtered responses. Previous value was a bug.

Related issues

N/A

Security considerations

finishMessage / blockReasonMessage from the provider are now forwarded verbatim as error text in the Responses path.

Checklist

  • I read docs/contributing/README.md and followed the guidelines
  • I added/updated tests where appropriate
  • I updated documentation where needed
  • I verified builds succeed (Go and UI)
  • I verified the CI pipeline passes locally if applicable

@CLAassistant

CLAassistant commented Apr 9, 2026

CLA assistant check
All committers have signed the CLA.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 9, 2026

📝 Walkthrough

Summary by CodeRabbit

  • Bug Fixes

    • Improved safety content-filter detection and consistent finish-reason mapping for Vertex AI/Gemini responses, including cases with no response candidates
    • Ensured streams emit proper completion events for safety and certain non-image failure reasons
  • Tests

    • Added comprehensive tests covering safety filtering across Chat and Responses, streaming and non-streaming paths
  • Documentation

    • Updated changelog to reflect safety propagation fixes and restored the previous feature note

Walkthrough

Updated Gemini/Vertex safety handling: new finish-reason constants, mappings, and conversion logic now propagate prompt-block and Vertex-specific safety reasons into Chat and Responses outputs (streaming and non-streaming). Added tests and a changelog entry.

Changes

  • Type Definitions (core/providers/gemini/types.go): Added Vertex/Gemini-specific FinishReason constants: MODEL_ARMOR, IMAGE_PROHIBITED_CONTENT, IMAGE_RECITATION, IMAGE_OTHER, NO_IMAGE.
  • Finish Reason Mapping (core/providers/gemini/utils.go): Extended geminiFinishReasonToBifrost to map the new Vertex finish reasons to Bifrost "content_filter" (and NO_IMAGE → "stop").
  • Chat API Safety Handling (core/providers/gemini/chat.go): Detects prompt-blocked responses via empty Candidates and PromptFeedback.BlockReason; returns error responses and finishes streams with content_filter. isErrorFinishReason treats the new Vertex finish reasons as error/filter states.
  • Responses API Safety Handling (core/providers/gemini/responses.go): Handles prompt-blocked Responses (no candidates) by setting StopReason="content_filter" / Status="failed", emits assistant output when present, maps candidate finish reasons to stop statuses, and ensures streams close and emit response.completed even when no prior items were opened.
  • Test Coverage (core/providers/gemini/gemini_test.go): Added TestChatSafetyFilteringGap, TestRealAPISafetyResponse, and TestResponsesSafetyFilteringGap, validating safety propagation across streaming and non-streaming Chat and Responses paths.
  • Changelog (core/changelog.md): Updated the changelog entry to document propagation of safety filtering information for Vertex AI / Gemini and re-added the prior Azure passthrough feature entry.
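
The chat-path change summarized above can be illustrated with a minimal sketch. The types and helper below are hypothetical simplifications, not the actual Bifrost definitions:

```go
package main

import "fmt"

// Simplified stand-ins for the provider response types; the real
// definitions live in core/providers/gemini/types.go.
type PromptFeedback struct {
	BlockReason        string
	BlockReasonMessage string
}

type GeminiResponse struct {
	Candidates     []string // simplified: real candidates carry structured content
	PromptFeedback *PromptFeedback
}

// finishReasonFor sketches the fix: when candidates are empty, check
// promptFeedback for a safety block before falling back to the previous
// (malformed-function-call style) handling that reported "stop".
func finishReasonFor(r *GeminiResponse) string {
	if len(r.Candidates) == 0 {
		if r.PromptFeedback != nil && r.PromptFeedback.BlockReason != "" {
			return "content_filter" // prompt was blocked by safety filters
		}
		return "stop" // previous fallback behavior
	}
	return "stop"
}

func main() {
	blocked := &GeminiResponse{PromptFeedback: &PromptFeedback{BlockReason: "SAFETY"}}
	fmt.Println(finishReasonFor(blocked)) // content_filter
}
```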

Sequence Diagrams

sequenceDiagram
    participant Client
    participant ChatAPI as Chat API
    participant Gemini as Gemini Provider
    participant Conv as Converter

    Client->>ChatAPI: Send chat request
    ChatAPI->>Gemini: GenerateContent (sync)
    Gemini-->>ChatAPI: Response (Candidates, PromptFeedback)
    ChatAPI->>Conv: Convert to Bifrost chat response
    alt No candidates + PromptFeedback.BlockReason
        Conv-->>ChatAPI: createErrorResponse(..., "content_filter")
        ChatAPI-->>Client: finish_reason="content_filter"
    else Candidate finish reason is Vertex safety
        Conv-->>ChatAPI: Map finish reason -> "content_filter"
        ChatAPI-->>Client: finish_reason="content_filter"
    else Normal candidate(s)
        Conv-->>ChatAPI: Normal candidate conversion
        ChatAPI-->>Client: Normal response
    end
sequenceDiagram
    participant Client
    participant ResponsesAPI as Responses API
    participant Gemini as Gemini Provider
    participant Stream as Stream Manager
    participant Conv as Converter

    Client->>ResponsesAPI: Send request (stream)
    ResponsesAPI->>Gemini: GenerateContent (stream)
    Gemini-->>Stream: Emit chunk(s) with Candidates / PromptFeedback
    Stream->>Conv: Process chunk
    alt PromptFeedback.BlockReason && no candidates
        Conv-->>Stream: Derive finish reason (or default), call closeGeminiOpenItems with block message
        Stream-->>Client: Emit response.completed (StopReason="content_filter")
    else Candidate has safety finish reason or finishMessage
        Conv-->>Stream: Close items, emit assistant text (if present), mark StopReason appropriately
        Stream-->>Client: Emit error/stop then response.completed
    else Normal streaming chunks
        Stream-->>Client: Emit content chunks until normal completion
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 A nibble, a hop, a careful inspect,

When Gemini flags, we catch and reflect.
Finish reasons mapped, streams closed with care,
Safety delivered — handled everywhere. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 71.43%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title check ✅ Passed: the title directly and accurately summarizes the main change: fixing propagation of safety filtering information for Vertex AI and Gemini responses.
  • Description check ✅ Passed: the description covers all required sections: a clear summary of the problem and solution, detailed changes, correct bug-fix type selection, affected areas, testing instructions, breaking-change acknowledgment, and security considerations.


@greptile-apps
Contributor

greptile-apps Bot commented Apr 9, 2026

Confidence Score: 5/5

Safe to merge — all fixes are correct and the single remaining finding is a style-only simplification.

The bug fixes are sound: promptFeedback is checked before the MalformedFunctionCall fallback, the streaming else-branch prevents clients from hanging on NO_IMAGE/other terminal reasons, and the non-stream Responses path now maps finishReason to StopReason. The 19 tests cover the real-API nil-content shape. The only open finding is a P2 style nit (two identical else branches in the streaming block).

core/providers/gemini/responses.go — minor redundancy in the three-branch streaming block around lines 997–1020

Important Files Changed

  • core/providers/gemini/responses.go: Non-stream path gains finishReason-to-StopReason mapping and error output propagation; streaming path now always emits response.completed. The three-branch if/else if/else block has two identical branches that could be simplified.
  • core/providers/gemini/chat.go: Adds a promptFeedback check before the MalformedFunctionCall fallback in both stream and non-stream paths; isErrorFinishReason is extended with four Vertex reasons.
  • core/providers/gemini/gemini_test.go: Adds 19 tests covering prompt-blocked, safety-filtered, Vertex-specific, and streaming scenarios; test coverage is thorough.
  • core/providers/gemini/utils.go: Extends the geminiFinishReasonToBifrost map with the new Vertex constants; NO_IMAGE deliberately maps to "stop" rather than "content_filter".
  • core/providers/gemini/types.go: Adds five new Vertex AI-specific FinishReason constants; definitions are accurate and well-documented.

Reviews (2). Last reviewed commit: "[fix]: propagate safety filtering info f..."

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/providers/gemini/chat.go`:
- Around line 89-92: The current check in core/providers/gemini/chat.go uses
response.PromptFeedback != nil && response.PromptFeedback.BlockReason != "" to
mark prompt-blocked responses as "content_filter", but that misses cases where
PromptFeedback exists and Candidates is empty; change the condition(s) (both the
one around response handling and the later occurrence at lines ~306-309) to
detect prompt-blocking when response.PromptFeedback != nil AND
(len(response.Candidates) == 0 || response.PromptFeedback.BlockReason != ""),
then call createErrorResponse(response, "content_filter", false) as before so
empty-candidate prompt feedback is classified correctly.

In `@core/providers/gemini/responses.go`:
- Around line 1021-1026: Save the original provider reason string
(response.PromptFeedback.BlockReason) into a local variable (e.g.,
rawBlockReason), then keep the existing mapping logic that converts it to
blockFinishReason using geminiFinishReasonToBifrost and falls back to
FinishReasonSafety when unknown; when calling closeGeminiOpenItems (the function
that emits the visible error), pass the original rawBlockReason into the message
(or prefix the existing BlockReasonMessage with rawBlockReason) so the emitted
error text preserves the provider’s actual reason like "JAILBREAK" while still
using the mapped blockFinishReason for internal handling.
- Around line 996-1010: The pooled GeminiResponsesStreamState is not clearing
HasEmittedWebSearch, causing subsequent requests to skip
emitWebSearchFromGroundingMetadata; update GeminiResponsesStreamState.flush()
(or wherever the state is returned to the pool via pool.Put()) to reset
HasEmittedWebSearch = false (and ensure any other per-request flags are cleared)
before putting the state back so emitWebSearchFromGroundingMetadata will run
correctly for the next user.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ef236ea7-3780-400c-a71e-be82ba57452c

📥 Commits

Reviewing files that changed from the base of the PR and between a3fb2e4 and 43a6c95.

📒 Files selected for processing (6)
  • core/changelog.md
  • core/providers/gemini/chat.go
  • core/providers/gemini/gemini_test.go
  • core/providers/gemini/responses.go
  • core/providers/gemini/types.go
  • core/providers/gemini/utils.go

Comment on lines +89 to +92
// If promptFeedback is present, the prompt was blocked by safety filters
if response.PromptFeedback != nil && response.PromptFeedback.BlockReason != "" {
return createErrorResponse(response, "content_filter", false)
}

⚠️ Potential issue | 🟠 Major

Don’t require BlockReason to classify prompt-blocked empty responses.

When Candidates is empty, PromptFeedback itself already indicates prompt blocking. Requiring BlockReason != "" can still fall back to malformed-function-call handling and return "stop" instead of "content_filter".

Suggested fix
-		if response.PromptFeedback != nil && response.PromptFeedback.BlockReason != "" {
+		if response.PromptFeedback != nil {
 			return createErrorResponse(response, "content_filter", false)
 		}
-		if response.PromptFeedback != nil && response.PromptFeedback.BlockReason != "" {
+		if response.PromptFeedback != nil {
 			return createErrorResponse(response, "content_filter", true), nil, true
 		}

Also applies to: 306-309

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/gemini/chat.go` around lines 89 - 92, The current check in
core/providers/gemini/chat.go uses response.PromptFeedback != nil &&
response.PromptFeedback.BlockReason != "" to mark prompt-blocked responses as
"content_filter", but that misses cases where PromptFeedback exists and
Candidates is empty; change the condition(s) (both the one around response
handling and the later occurrence at lines ~306-309) to detect prompt-blocking
when response.PromptFeedback != nil AND (len(response.Candidates) == 0 ||
response.PromptFeedback.BlockReason != ""), then call
createErrorResponse(response, "content_filter", false) as before so
empty-candidate prompt feedback is classified correctly.

Comment on lines +996 to +1010
if candidate.FinishReason != "" {
if len(state.ItemIDs) > 0 {
// Close any open items normally
// Check for grounding metadata (web search results)
if candidate.GroundingMetadata != nil && !state.HasEmittedWebSearch {
webSearchResponses := emitWebSearchFromGroundingMetadata(
candidate.GroundingMetadata,
state,
sequenceNumber+len(responses),
)
responses = append(responses, webSearchResponses...)
}

closeResponses := closeGeminiOpenItems(state, candidate.GroundingMetadata, response.UsageMetadata, sequenceNumber+len(responses), candidate.FinishReason, candidate.FinishMessage)
responses = append(responses, closeResponses...)

⚠️ Potential issue | 🟠 Major

Reset HasEmittedWebSearch before reusing the pooled stream state.

This new guard depends on state.HasEmittedWebSearch, but GeminiResponsesStreamState.flush() still never clears that field. After one request emits grounding metadata, a reused pooled state can skip emitWebSearchFromGroundingMetadata() for the next request and silently drop the web-search events.

Suggested fix
func (state *GeminiResponsesStreamState) flush() {
	// Clear maps
	...
	state.HasEmittedCompleted = false
	state.HasEmittedInProgress = false
	state.TextItemClosed = false
	state.HasStartedText = false
	state.HasStartedToolCall = false
+	state.HasEmittedWebSearch = false
	state.TextBuffer.Reset()
}

As per coding guidelines, "Always reset ALL fields of pooled objects before calling pool.Put() to prevent stale data from leaking to subsequent users."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/gemini/responses.go` around lines 996 - 1010, The pooled
GeminiResponsesStreamState is not clearing HasEmittedWebSearch, causing
subsequent requests to skip emitWebSearchFromGroundingMetadata; update
GeminiResponsesStreamState.flush() (or wherever the state is returned to the
pool via pool.Put()) to reset HasEmittedWebSearch = false (and ensure any other
per-request flags are cleared) before putting the state back so
emitWebSearchFromGroundingMetadata will run correctly for the next user.
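
The pooled-state hazard described here is generic to sync.Pool reuse. A minimal illustration under assumed, simplified types (streamState stands in for GeminiResponsesStreamState; only the one flag under discussion is modeled):

```go
package main

import (
	"fmt"
	"sync"
)

// streamState stands in for GeminiResponsesStreamState; the field name
// mirrors the flag discussed above but the type is illustrative.
type streamState struct {
	HasEmittedWebSearch bool
}

var pool = sync.Pool{New: func() any { return &streamState{} }}

// flushAndPut resets per-request flags before returning the state to the
// pool, so the next request never observes stale data. Forgetting this
// reset is exactly the bug flagged in the review.
func flushAndPut(s *streamState) {
	s.HasEmittedWebSearch = false
	pool.Put(s)
}

func main() {
	s := pool.Get().(*streamState)
	s.HasEmittedWebSearch = true // request 1 emitted web-search events
	flushAndPut(s)

	s2 := pool.Get().(*streamState)
	fmt.Println(s2.HasEmittedWebSearch) // false: flag was reset before reuse
}
```

Whether Get returns the pooled object or a fresh one from New, the flag is false only because flushAndPut clears it; skipping the reset makes the reused object carry request 1's state.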

…nses

Gemini and Vertex AI return safety/content filtering information via
finishReason, promptFeedback, and finishMessage fields, but this information
was being lost across multiple conversion paths — causing content_filter
finish reasons to be reported as "stop" or dropped entirely.

Changes:
- Add Vertex AI-specific FinishReason constants (MODEL_ARMOR,
  IMAGE_PROHIBITED_CONTENT, IMAGE_RECITATION, IMAGE_OTHER, NO_IMAGE)
- Map Vertex safety finish reasons to "content_filter" in
  geminiFinishReasonToBifrost and isErrorFinishReason
- Handle promptFeedback in empty-candidates case for Chat Completions
  (non-stream and stream), preserving the actual block reason
- Add StopReason mapping and promptFeedback handling in Responses
  non-stream path (ToResponsesBifrostResponsesResponse)
- Fix Responses stream path to emit response.completed with
  content_filter when safety-filtered with no prior content
- Propagate finishMessage as error output text in Responses path

Refs:
- Vertex AI: https://docs.cloud.google.com/vertex-ai/generative-ai/docs/reference/rest/v1/GenerateContentResponse
- Gemini API: https://ai.google.dev/api/generate-content

Affected packages:
- core/providers/gemini/
@astralhpi force-pushed the fix/gemini-vertex-safety-filtering branch from 43a6c95 to 2493f01 on April 10, 2026 at 02:56
@greptile-apps
Contributor

greptile-apps Bot commented Apr 10, 2026



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
core/providers/gemini/chat.go (1)

507-520: ⚠️ Potential issue | 🟠 Major

Keep the existing non-safety terminal errors in isErrorFinishReason.

ToResponsesBifrostResponsesResponse and closeGeminiOpenItems both rely on this helper to mark a response as failed and surface finishMessage as error text. The new Vertex entries are fine, but MALFORMED, PERMISSION_DENIED, QUOTA_EXCEEDED, and OTHER are no longer classified as errors here, so those finish reasons now fall through as successful completions.

Suggested fix
 func isErrorFinishReason(reason FinishReason) bool {
 	return reason == FinishReasonSafety ||
+		reason == FinishReasonMalformed ||
+		reason == FinishReasonPermissionDenied ||
+		reason == FinishReasonQuotaExceeded ||
+		reason == FinishReasonOther ||
 		reason == FinishReasonRecitation ||
 		reason == FinishReasonMalformedFunctionCall ||
 		reason == FinishReasonBlocklist ||
 		reason == FinishReasonProhibitedContent ||
 		reason == FinishReasonSPII ||
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/gemini/chat.go` around lines 507 - 520, The helper
isErrorFinishReason erroneously omitted non-safety terminal error FinishReason
values (MALFORMED, PERMISSION_DENIED, QUOTA_EXCEEDED, OTHER), causing those
completions to be treated as successes; update the isErrorFinishReason function
to include FinishReasonMalformed, FinishReasonPermissionDenied,
FinishReasonQuotaExceeded and FinishReasonOther (or the exact enum names for
MALFORMED, PERMISSION_DENIED, QUOTA_EXCEEDED, OTHER used in your FinishReason
type) alongside the existing Vertex-specific entries so that
ToResponsesBifrostResponsesResponse and closeGeminiOpenItems will mark those
finish reasons as errors.
♻️ Duplicate comments (2)
core/providers/gemini/chat.go (1)

89-92: ⚠️ Potential issue | 🟠 Major

Drop the BlockReason != "" gate for empty-candidate prompt blocks.

Inside the len(response.Candidates) == 0 branch, PromptFeedback alone is enough to identify a blocked prompt. Keeping the extra BlockReason check still routes some blocked responses through the malformed-function-call fallback, so clients can keep seeing the wrong finish reason in both chat paths.

Suggested fix
-		if response.PromptFeedback != nil && response.PromptFeedback.BlockReason != "" {
+		if response.PromptFeedback != nil {
 			return createErrorResponse(response, "content_filter", false)
 		}
-		if response.PromptFeedback != nil && response.PromptFeedback.BlockReason != "" {
+		if response.PromptFeedback != nil {
 			return createErrorResponse(response, "content_filter", true), nil, true
 		}

Also applies to: 306-309

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/gemini/chat.go` around lines 89 - 92, The prompt-block
detection currently requires response.PromptFeedback != nil &&
response.PromptFeedback.BlockReason != "" which misses empty-candidate prompt
blocks; update both checks that guard createErrorResponse(response,
"content_filter", false) to use only response.PromptFeedback != nil (i.e., drop
the BlockReason != "" condition) so any non-nil PromptFeedback triggers the
content_filter error path in the chat handling code (look for the branches that
call createErrorResponse with "content_filter").
core/providers/gemini/responses.go (1)

996-1007: ⚠️ Potential issue | 🟠 Major

Reset HasEmittedWebSearch before reusing the pooled stream state.

This new guard depends on state.HasEmittedWebSearch, but GeminiResponsesStreamState.flush() still never clears that field. After one request emits grounding metadata, a reused pooled state can skip the web-search events for the next request and only emit the closing lifecycle events.

Suggested fix
 	state.HasEmittedCompleted = false
 	state.HasEmittedInProgress = false
 	state.TextItemClosed = false
 	state.HasStartedText = false
 	state.HasStartedToolCall = false
+	state.HasEmittedWebSearch = false
 	state.TextBuffer.Reset()

As per coding guidelines, "Always reset ALL fields of pooled objects before calling pool.Put() to prevent stale data from leaking to subsequent users."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/gemini/responses.go` around lines 996 - 1007, The pooled
stream state fails to clear HasEmittedWebSearch, causing subsequent requests to
skip emitting grounding metadata; update GeminiResponsesStreamState.flush() (or
the state reset/cleanup path invoked before pool.Put()) to set
state.HasEmittedWebSearch = false (and ensure any other lifecycle flags are
reset) so that emitWebSearchFromGroundingMetadata and the
candidate.GroundingMetadata guard behave correctly for each new request before
returning the state to the pool.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/providers/gemini/responses.go`:
- Around line 161-167: The current check only treats a
PromptFeedback-without-candidates as a content filter failure when
PromptFeedback.BlockReason is non-empty; update the logic in the non-stream
handler (the block that currently checks "if len(response.Candidates) == 0 &&
response.PromptFeedback != nil && response.PromptFeedback.BlockReason != \"\"")
to consider any non-nil response.PromptFeedback with zero candidates as a
content_filter failure even if BlockReason is blank: set bifrostResp.StopReason
to "content_filter", Status to "failed", and still populate errorText using
BlockReasonMessage when present (or a generic message when both fields are
empty). Apply the same change to the corresponding streaming branch (the handler
that watches response.in_progress/response.completed) so that a PromptFeedback
with no candidates triggers the failure and final completed emission rather than
leaving the stream hanging.

---

Outside diff comments:
In `@core/providers/gemini/chat.go`:
- Around line 507-520: The helper isErrorFinishReason erroneously omitted
non-safety terminal error FinishReason values (MALFORMED, PERMISSION_DENIED,
QUOTA_EXCEEDED, OTHER), causing those completions to be treated as successes;
update the isErrorFinishReason function to include FinishReasonMalformed,
FinishReasonPermissionDenied, FinishReasonQuotaExceeded and FinishReasonOther
(or the exact enum names for MALFORMED, PERMISSION_DENIED, QUOTA_EXCEEDED, OTHER
used in your FinishReason type) alongside the existing Vertex-specific entries
so that ToResponsesBifrostResponsesResponse and closeGeminiOpenItems will mark
those finish reasons as errors.

---

Duplicate comments:
In `@core/providers/gemini/chat.go`:
- Around line 89-92: The prompt-block detection currently requires
response.PromptFeedback != nil && response.PromptFeedback.BlockReason != ""
which misses empty-candidate prompt blocks; update both checks that guard
createErrorResponse(response, "content_filter", false) to use only
response.PromptFeedback != nil (i.e., drop the BlockReason != "" condition) so
any non-nil PromptFeedback triggers the content_filter error path in the chat
handling code (look for the branches that call createErrorResponse with
"content_filter").

In `@core/providers/gemini/responses.go`:
- Around line 996-1007: The pooled stream state fails to clear
HasEmittedWebSearch, causing subsequent requests to skip emitting grounding
metadata; update GeminiResponsesStreamState.flush() (or the state reset/cleanup
path invoked before pool.Put()) to set state.HasEmittedWebSearch = false (and
ensure any other lifecycle flags are reset) so that
emitWebSearchFromGroundingMetadata and the candidate.GroundingMetadata guard
behave correctly for each new request before returning the state to the pool.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f849ea2a-c8f7-45ca-a6c3-55c069eefe64

📥 Commits

Reviewing files that changed from the base of the PR and between 43a6c95 and 2493f01.

📒 Files selected for processing (6)
  • core/changelog.md
  • core/providers/gemini/chat.go
  • core/providers/gemini/gemini_test.go
  • core/providers/gemini/responses.go
  • core/providers/gemini/types.go
  • core/providers/gemini/utils.go
✅ Files skipped from review due to trivial changes (2)
  • core/changelog.md
  • core/providers/gemini/utils.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • core/providers/gemini/types.go
  • core/providers/gemini/gemini_test.go

Comment on lines +161 to +167
// Handle prompt blocked by safety filters (empty candidates with promptFeedback)
if len(response.Candidates) == 0 && response.PromptFeedback != nil && response.PromptFeedback.BlockReason != "" {
bifrostResp.StopReason = schemas.Ptr("content_filter")
bifrostResp.Status = schemas.Ptr("failed")
// Include block reason message as error output if available
if response.PromptFeedback.BlockReasonMessage != "" {
errorText := fmt.Sprintf("Error: %s - %s", response.PromptFeedback.BlockReason, response.PromptFeedback.BlockReasonMessage)

⚠️ Potential issue | 🟠 Major

Handle empty-candidate PromptFeedback even when BlockReason is blank.

These branches still require a non-empty BlockReason. When Gemini returns PromptFeedback with no candidates but omits the reason string, the non-stream path returns a normal response without failed/content_filter, and the stream path can stop after response.in_progress without ever emitting response.completed.

Suggested fix
-	if len(response.Candidates) == 0 && response.PromptFeedback != nil && response.PromptFeedback.BlockReason != "" {
+	if len(response.Candidates) == 0 && response.PromptFeedback != nil {
+		blockFinishReason := FinishReason(response.PromptFeedback.BlockReason)
+		if blockFinishReason == "" {
+			blockFinishReason = FinishReasonSafety
+		} else if _, ok := geminiFinishReasonToBifrost[blockFinishReason]; !ok {
+			blockFinishReason = FinishReasonSafety
+		}
 		bifrostResp.StopReason = schemas.Ptr("content_filter")
 		bifrostResp.Status = schemas.Ptr("failed")
 		// Include block reason message as error output if available
 		if response.PromptFeedback.BlockReasonMessage != "" {
-			errorText := fmt.Sprintf("Error: %s - %s", response.PromptFeedback.BlockReason, response.PromptFeedback.BlockReasonMessage)
+			reasonText := response.PromptFeedback.BlockReason
+			if reasonText == "" {
+				reasonText = string(blockFinishReason)
+			}
+			errorText := fmt.Sprintf("Error: %s - %s", reasonText, response.PromptFeedback.BlockReasonMessage)
 			bifrostResp.Output = []schemas.ResponsesMessage{
-	} else if response.PromptFeedback != nil && response.PromptFeedback.BlockReason != "" {
+	} else if response.PromptFeedback != nil {
 		// Prompt blocked — no candidates at all. Emit completed with content_filter.
 		// Use the actual BlockReason as the finish reason if it's a known safety reason,
 		// otherwise fall back to SAFETY so it still maps to content_filter.
 		blockFinishReason := FinishReason(response.PromptFeedback.BlockReason)
-		if _, ok := geminiFinishReasonToBifrost[blockFinishReason]; !ok {
+		if blockFinishReason == "" {
+			blockFinishReason = FinishReasonSafety
+		} else if _, ok := geminiFinishReasonToBifrost[blockFinishReason]; !ok {
 			blockFinishReason = FinishReasonSafety
 		}

Also applies to: 1022-1030

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/gemini/responses.go` around lines 161 - 167, The current check
only treats a PromptFeedback-without-candidates as a content filter failure when
PromptFeedback.BlockReason is non-empty; update the logic in the non-stream
handler (the block that currently checks "if len(response.Candidates) == 0 &&
response.PromptFeedback != nil && response.PromptFeedback.BlockReason != \"\"")
to consider any non-nil response.PromptFeedback with zero candidates as a
content_filter failure even if BlockReason is blank: set bifrostResp.StopReason
to "content_filter", Status to "failed", and still populate errorText using
BlockReasonMessage when present (or a generic message when both fields are
empty). Apply the same change to the corresponding streaming branch (the handler
that watches response.in_progress/response.completed) so that a PromptFeedback
with no candidates triggers the failure and final completed emission rather than
leaving the stream hanging.

