Merged
36 changes: 36 additions & 0 deletions mock-responses/vertexai/unary-success-implicit-caching.json
@@ -0,0 +1,36 @@
{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text": "Red Riding Hood is looking for **directions** in the forest."
          }
        ],
        "role": "model"
      },
      "finishReason": "STOP",
      "index": 0
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 12013,
    "candidatesTokenCount": 15,
    "totalTokenCount": 12101,

[Review comment, severity: medium]

There's an inconsistency in the token counts in usageMetadata. The totalTokenCount should typically be the sum of promptTokenCount and candidatesTokenCount.

Here, promptTokenCount is 12013 and candidatesTokenCount is 15. Their sum is 12128, but totalTokenCount is set to 12101.

To ensure the mock data is consistent and reliable for testing, please correct this value.

Suggested change:
-    "totalTokenCount": 12101,
+    "totalTokenCount": 12128,

    "cachedContentTokenCount": 11243,
    "promptTokensDetails": [
      {
        "modality": "TEXT",
        "tokenCount": 12013
      }
    ],
    "cacheTokensDetails": [
      {
        "modality": "TEXT",
        "tokenCount": 11243
      }
    ],
    "thoughtsTokenCount": 73
  },
  "modelVersion": "gemini-2.5-flash"
}
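The token arithmetic in the review comment can be checked directly. The sketch below copies the numbers from the mock above and assumes (as in recent Gemini API responses where thinking is enabled) that totalTokenCount also covers thoughtsTokenCount; under that accounting, the mock's 12101 is self-consistent, since prompt + candidate tokens alone come to 12028 and the thinking tokens close the gap:

```python
# Sanity-check the token accounting in the mock response above.
# Assumption: totalTokenCount = prompt + candidate + thinking tokens,
# which is how Gemini responses report usage when thinking is enabled.
usage = {
    "promptTokenCount": 12013,
    "candidatesTokenCount": 15,
    "totalTokenCount": 12101,
    "thoughtsTokenCount": 73,
}

# Sum without thinking tokens (the reviewer's premise): 12013 + 15.
without_thoughts = usage["promptTokenCount"] + usage["candidatesTokenCount"]

# Sum including thinking tokens: 12013 + 15 + 73.
with_thoughts = without_thoughts + usage["thoughtsTokenCount"]

print(without_thoughts)                            # 12028
print(with_thoughts)                               # 12101
print(with_thoughts == usage["totalTokenCount"])   # True
```

Note that this check only holds for the stated accounting assumption; if a consumer of the mock expects totalTokenCount to exclude thoughtsTokenCount, the file would indeed need adjusting.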