
Conversation

RulaKhaled (Member)

Adds support for Google GenAI manual instrumentation in `@sentry/cloudflare` and `@sentry/vercel-edge`. To instrument the Google GenAI client, wrap it with `Sentry.instrumentGoogleGenAIClient` and configure the recording options:

```js
import * as Sentry from '@sentry/cloudflare';
import { GoogleGenAI } from '@google/genai';

const genAI = new GoogleGenAI({ apiKey: 'your-api-key' });

// Wrap the client so Sentry creates spans for GenAI calls;
// recordInputs/recordOutputs control whether prompts and responses are captured.
const client = Sentry.instrumentGoogleGenAIClient(genAI, {
  recordInputs: true,
  recordOutputs: true,
});

// Use the wrapped client with the models API
const response = await client.models.generateContent({
  model: 'gemini-1.5-pro',
  contents: [{ role: 'user', parts: [{ text: 'Hello!' }] }],
});

// Or use the chat functionality
const chat = client.chats.create({ model: 'gemini-1.5-flash' });
const chatResponse = await chat.sendMessage({ message: 'Tell me a joke' });
```
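Since the same export is added to `@sentry/vercel-edge`, the wrapper can be used identically there. A minimal sketch, assuming a Next.js edge route handler and a `GOOGLE_API_KEY` environment variable (both illustrative, not part of this PR):

```js
import * as Sentry from '@sentry/vercel-edge';
import { GoogleGenAI } from '@google/genai';

// Illustrative: the Next.js edge runtime config and env var name are
// assumptions for this sketch, not part of this PR.
export const runtime = 'edge';

const genAI = new GoogleGenAI({ apiKey: process.env.GOOGLE_API_KEY });
const client = Sentry.instrumentGoogleGenAIClient(genAI, {
  recordInputs: true,
  recordOutputs: true,
});

export async function GET() {
  const result = await client.models.generateContent({
    model: 'gemini-1.5-flash',
    contents: [{ role: 'user', parts: [{ text: 'Hello from the edge!' }] }],
  });
  // result.text is the convenience accessor on GenerateContentResponse.
  return new Response(result.text ?? '', { status: 200 });
}
```

As in the Cloudflare example, `recordInputs` and `recordOutputs` opt into capturing prompt and response content on the emitted spans.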

Contributor

node-overhead report 🧳

Note: This is a synthetic benchmark with a minimal Express app and does not necessarily reflect the real-world performance impact in an application.

| Scenario | Requests/s | % of Baseline | Prev. Requests/s | Change % |
| --- | --- | --- | --- | --- |
| GET Baseline | 9,026 | - | 8,714 | +4% |
| GET With Sentry | 1,424 | 16% | 1,412 | +1% |
| GET With Sentry (error only) | 6,107 | 68% | 6,096 | +0% |
| POST Baseline | 1,202 | - | 1,205 | -0% |
| POST With Sentry | 518 | 43% | 537 | -4% |
| POST With Sentry (error only) | 1,064 | 89% | 1,042 | +2% |
| MYSQL Baseline | 3,344 | - | 3,297 | +1% |
| MYSQL With Sentry | 491 | 15% | 437 | +12% |
| MYSQL With Sentry (error only) | 2,708 | 81% | 2,709 | -0% |

View base workflow run
