
[Bench][gsm8k] add openai-compatible backend#19231

Open
vladnosiv wants to merge 9 commits into sgl-project:main from vladnosiv:gsm8k-openai-compatible

Conversation

@vladnosiv
Contributor

Motivation

This PR adds support for running benchmark/*/bench_other.py against an OpenAI-compatible API endpoint.

The main motivation is enabling accuracy testing on top of NVIDIA Dynamo deployments, where only an OpenAI-compatible endpoint is available. Without this, existing bench_other.py scripts cannot be used directly for those environments.

Accuracy Tests

Ran GSM8K benchmark via OpenAI-compatible endpoint:

python3 benchmark/gsm8k/bench_other.py \
      --num-questions 500 \
      --num-shots 48 \
      --parallel 100 \
      --backend openai-compatible \
      --openai-base-url http://deepseek-v32-frontend.dynamo:8000/v1 \
      --openai-model deepseek-ai/DeepSeek-V3.2

Result:

Accuracy: 0.956
Invalid: 0.000
Latency: 53.243 s

Signed-off-by: Vladislav Nosivskoy <vladnosiv@gmail.com>
@github-actions github-actions bot added the documentation Improvements or additions to documentation label Feb 24, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello @vladnosiv, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the benchmarking capabilities by introducing support for OpenAI-compatible API endpoints. This allows users to evaluate models deployed behind such interfaces, like NVIDIA Dynamo, using existing benchmark scripts. The changes involve adding new configuration options and a dedicated API interaction layer, making the benchmarking process more versatile and adaptable to various deployment environments.

Highlights

  • OpenAI-compatible Backend Support: Added a new backend option, 'openai-compatible', to the GSM8K benchmark script, enabling testing against OpenAI-compatible API endpoints.
  • New Command-Line Arguments: Introduced --openai-base-url, --openai-model, --openai-api-key, and --top-p arguments to configure the OpenAI-compatible backend.
  • API Integration Function: Implemented a dedicated call_generate_openai_compatible function to handle requests and responses for OpenAI-compatible chat completion APIs.
  • Documentation Update: Updated the README.md with detailed instructions and example commands for using the new OpenAI-compatible benchmarking feature.


Changelog
  • benchmark/gsm8k/README.md
    • Added example usage for benchmarking with an OpenAI-compatible endpoint.
    • Included details on --openai-base-url, --openai-model, --openai-api-key, --num-questions, --num-shots, --parallel, and --top-p.
    • Noted that --openai-api-key can be omitted if not required.
  • benchmark/gsm8k/bench_other.py
    • Introduced get_generate_kwargs function to centralize generation parameters, allowing top_p for OpenAI-compatible backend.
    • Updated call_generate calls to use get_generate_kwargs for temperature, max_tokens, and stop.
    • Increased default max_tokens from 256 to 512 in get_generate_kwargs.
    • Added --top-p command-line argument.
    • Added 'openai-compatible' to the list of supported backends.
    • Added --openai-base-url, --openai-model, and --openai-api-key arguments.
    • Configured the call_generate function to use call_generate_openai_compatible when the backend is 'openai-compatible', handling URL, model, and API key resolution.
  • python/sglang/test/test_utils.py
    • Implemented call_generate_openai_compatible function to interact with OpenAI-compatible chat completion APIs.
    • Added logic to construct HTTP headers (including Authorization if api_key is provided) and JSON payload for the API request.
    • Included handling for stop sequences and top_p parameter in the payload.
    • Provided text extraction logic from the API response, supporting both single and multiple choices.
    • Added 'openai-compatible' as a valid backend option in add_common_other_args_and_parse.
    • Assigned a default port (30000) for the 'openai-compatible' backend.
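
The get_generate_kwargs helper mentioned in the changelog can be sketched as below. This is an assumption-laden illustration: the stop sequences and exact defaults are placeholders, and only the behaviors the changelog states (512-token default, top_p passed through for the openai-compatible backend) are taken from the summary.

```python
from argparse import Namespace

def get_generate_kwargs(args):
    """Centralize generation parameters for bench_other.py call_generate.

    Defaults mirror the changelog: max_tokens raised from 256 to 512,
    and top_p forwarded only for the openai-compatible backend.
    The stop value here is a placeholder, not the real benchmark's list.
    """
    kwargs = {
        "temperature": 0,
        "max_tokens": 512,
        "stop": None,  # placeholder; the benchmark defines its own stops
    }
    if args.backend == "openai-compatible" and getattr(args, "top_p", None) is not None:
        kwargs["top_p"] = args.top_p
    return kwargs

# Example: kwargs for an openai-compatible run with --top-p 0.9
kw = get_generate_kwargs(Namespace(backend="openai-compatible", top_p=0.9))
```

Centralizing the kwargs this way lets backend-specific parameters such as top_p be added without touching every call_generate call site.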

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces support for an OpenAI-compatible backend for benchmarking, which is a valuable addition for testing in diverse environments. The implementation is well-structured and the changes are clear. I have provided a couple of suggestions for the python/sglang/test/test_utils.py file to enhance code conciseness and robustness.

vladnosiv and others added 2 commits February 24, 2026 11:03
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@ishandhanani
Copy link
Copy Markdown
Collaborator

/tag-and-rerun-ci

Signed-off-by: Vladislav Nosivskoy <vladnosiv@gmail.com>
@vladnosiv
Copy link
Copy Markdown
Contributor Author

sorry, Gemini broke the code formatting, need to restart CI


Labels

documentation Improvements or additions to documentation run-ci
