
[BUG] Requesting support for function tools in the evaluation script #219

Status: Open
rahulmukherji1 opened this issue Aug 21, 2024 · 0 comments
Assignee: kmaphoenix
Labels: bug, Evaluations, Generative Features

@rahulmukherji1

Expected Behavior

I'm trying to use the evals script to evaluate the performance of an agent that uses client-side function tools. This does not appear to be supported at the moment.

Current Behavior

The script throws errors because the API expects an acknowledgement (a tool call result) for each client-side function call before the conversational turn can complete, and the evaluation script never sends one.
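
For reference, this is roughly the handshake the script would need to implement. A minimal sketch, assuming the `google-cloud-dialogflow-cx` v3beta1 Python client and its `ToolCall`/`ToolCallResult` types; `run_local_tool`, the placeholder output, and the session handling are hypothetical and only illustrate the acknowledgement loop:

```python
from google.cloud import dialogflowcx_v3beta1 as cx
from google.protobuf import struct_pb2


def run_local_tool(tool_call: cx.ToolCall) -> struct_pb2.Struct:
    """Hypothetical stub: dispatch to the local function and return its output."""
    result = struct_pb2.Struct()
    result.update({"status": "ok"})  # placeholder output for illustration
    return result


def detect_intent_with_tool_ack(client: cx.SessionsClient,
                                session: str,
                                user_text: str) -> cx.QueryResult:
    """Send one text turn, then acknowledge any client-side tool call."""
    response = client.detect_intent(
        request=cx.DetectIntentRequest(
            session=session,
            query_input=cx.QueryInput(
                text=cx.TextInput(text=user_text),
                language_code="en",
            ),
        )
    )
    # If the agent invoked a client-side tool, the turn is not complete:
    # the API waits for a ToolCallResult before producing the final reply.
    # (A real implementation would repeat this until no tool calls remain.)
    for message in response.query_result.response_messages:
        if "tool_call" in message:
            response = client.detect_intent(
                request=cx.DetectIntentRequest(
                    session=session,
                    query_input=cx.QueryInput(
                        tool_call_result=cx.ToolCallResult(
                            tool=message.tool_call.tool,
                            action=message.tool_call.action,
                            output_parameters=run_local_tool(message.tool_call),
                        ),
                        language_code="en",
                    ),
                )
            )
    return response.query_result
```

The evaluation script currently stops after the first `detect_intent` call, so it never reaches the acknowledgement step above, which is where the errors come from.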

Context (Environment)

I'm working on an agent that's meant to be deployed as a Prebuilt Agent in the Vertex AI Console. Evaluations are necessary to test the agent's quality.

rahulmukherji1 added the bug label on Aug 21, 2024
kmaphoenix self-assigned this on Aug 23, 2024
kmaphoenix added the Generative Features and Evaluations labels on Aug 23, 2024