
Conversation

@weedge (Collaborator) commented Aug 7, 2025


feat:

  • add a vLLM + openai_gpt_oss example to run on Modal
  • use a Modal queue for local chat input: local input --- queue --> remote chat loop (see the queue sketch after the chat_tool_stream commands below)
# download HF Transformers weights (safetensors) for vLLM to load
modal run src/download_models.py --repo-ids "openai/gpt-oss-20b"
modal run src/download_models.py --repo-ids "openai/gpt-oss-120b" --ignore-patterns "*.pt|*.bin|*original*|*metal*"

# see help
modal run src/llm/vllm/openai_gpt_oss.py --help 


# tokenizer
modal run src/llm/vllm/openai_gpt_oss.py --task tokenizer
modal run src/llm/vllm/openai_gpt_oss.py --task harmony_chat_tokenizer
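
The harmony_chat_tokenizer task renders chat messages with the o200k_harmony encoding. A minimal sketch of what that rendering looks like, assuming the openai_harmony package (the messages and instructions below are placeholders, not the ones used in openai_gpt_oss.py, whose tokenizer task is tiktoken-based and may build this differently):

# Sketch: render a harmony conversation into token ids for completion (assumes openai_harmony).
from openai_harmony import (
    Conversation,
    DeveloperContent,
    HarmonyEncodingName,
    Message,
    Role,
    load_harmony_encoding,
)

encoding = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)

convo = Conversation.from_messages([
    Message.from_role_and_content(
        Role.DEVELOPER, DeveloperContent.new().with_instructions("Reply in Chinese.")
    ),
    Message.from_role_and_content(Role.USER, "Hello, who are you?"),
])

# Token ids that prime the model to answer as the assistant.
prefill_ids = encoding.render_conversation_for_completion(convo, Role.ASSISTANT)
print(len(prefill_ids), prefill_ids[:16])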

# generate
IMAGE_GPU=A100 modal run src/llm/vllm/openai_gpt_oss.py --task generate
IMAGE_GPU=L40s modal run src/llm/vllm/openai_gpt_oss.py --task generate
IMAGE_GPU=H100 modal run src/llm/vllm/openai_gpt_oss.py --task generate

IMAGE_GPU=A100 modal run src/llm/vllm/openai_gpt_oss.py --task generate_stream
IMAGE_GPU=L40s modal run src/llm/vllm/openai_gpt_oss.py --task generate_stream
IMAGE_GPU=H100 modal run src/llm/vllm/openai_gpt_oss.py --task generate_stream
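
The generate tasks map onto vLLM's offline inference API; a rough sketch of that path (model id and sampling values are illustrative, not the script's defaults):

# Sketch: offline generation with vLLM, roughly what the generate task wraps on Modal.
from vllm import LLM, SamplingParams

llm = LLM(model="openai/gpt-oss-20b")  # weights fetched earlier via download_models.py
params = SamplingParams(temperature=1.0, top_p=1.0, max_tokens=256)

outputs = llm.generate(["Briefly explain what PagedAttention does."], params)
for out in outputs:
    print(out.outputs[0].text)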

# chat
IMAGE_GPU=A100 modal run src/llm/vllm/openai_gpt_oss.py --task chat_stream
IMAGE_GPU=L40s modal run src/llm/vllm/openai_gpt_oss.py --task chat_stream
IMAGE_GPU=H100 modal run src/llm/vllm/openai_gpt_oss.py --task chat_stream
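
The chat tasks differ from generate in that role-tagged messages go through the model's chat template; a minimal non-streaming sketch using vLLM's chat helper (message contents are illustrative):

# Sketch: chat-style inference; llm.chat applies the model's chat template to the messages.
from vllm import LLM, SamplingParams

llm = LLM(model="openai/gpt-oss-20b")
params = SamplingParams(temperature=1.0, top_p=1.0, max_tokens=512)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the harmony response format in two sentences."},
]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)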

# local input --- queue --> remote chat loop

## use the browser tool (find, open, search); requires the EXA_API_KEY env var from https://exa.ai
IMAGE_GPU=L40s modal run src/llm/vllm/openai_gpt_oss.py --task chat_tool_stream --build-in-tool browser
IMAGE_GPU=H100 modal run src/llm/vllm/openai_gpt_oss.py --task chat_tool_stream --build-in-tool browser
IMAGE_GPU=L40s modal run src/llm/vllm/openai_gpt_oss.py --task chat_tool_stream \
    --max-tokens 2048 --temperature=1.0 --top-p=1.0 \
    --build-in-tool browser --show-browser-results --model-identity "你是一名聊天助手,请用中文回复。"
IMAGE_GPU=H100 modal run src/llm/vllm/openai_gpt_oss.py --task chat_tool_stream \
    --max-tokens 2048 --temperature=1.0 --top-p=1.0 \
    --build-in-tool browser --show-browser-results --model-identity "你是一名聊天助手,请用中文回复。"
IMAGE_GPU=H100 modal run src/llm/vllm/openai_gpt_oss.py --task chat_tool_stream --build-in-tool browser --is-apply-patch --show-browser-results
IMAGE_GPU=H100 modal run src/llm/vllm/openai_gpt_oss.py --task chat_tool_stream --build-in-tool browser --raw --is-apply-patch --show-browser-results

## the python tool runs scripts in a Python Docker container, so the image needs adjusting; alternatively, change the python tool to run Python in a local sandboxed env or as a serverless function
IMAGE_GPU=H100 modal run src/llm/vllm/openai_gpt_oss.py --task chat_tool_stream --build-in-tool python
IMAGE_GPU=H100 modal run src/llm/vllm/openai_gpt_oss.py --task chat_tool_stream --build-in-tool python --is-apply-patch 
IMAGE_GPU=H100 modal run src/llm/vllm/openai_gpt_oss.py --task chat_tool_stream --build-in-tool python --raw --is-apply-patch
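
The chat_tool_stream tasks take user turns typed locally and forward them to the remote GPU function through a Modal ephemeral queue, then send replies back the same way. A rough sketch of that local input --- queue --> remote chat loop (app, function, and partition names are illustrative, not the ones in openai_gpt_oss.py):

# Sketch: local input -> ephemeral queue -> remote chat loop on Modal (names are hypothetical).
import modal

app = modal.App("queue-chat-sketch")

@app.function(gpu="L40S", timeout=600)
def remote_chat_loop(q: modal.Queue):
    while True:
        user_msg = q.get(partition="input")     # blocks until the local side puts a message
        if user_msg is None:                    # None is used here as the stop signal
            break
        reply = f"(model reply to) {user_msg}"  # the real task would run vLLM + tools here
        q.put(reply, partition="output")

@app.local_entrypoint()
def main():
    with modal.Queue.ephemeral() as q:
        call = remote_chat_loop.spawn(q)        # the queue object is passed to the remote function
        for user_msg in ["hello", "tell me a joke"]:
            q.put(user_msg, partition="input")
            print(q.get(partition="output"))
        q.put(None, partition="input")
        call.get()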

reference

@gemini-code-assist (bot) left a comment
Summary of Changes

Hello @weedge, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've added a new feature that integrates vLLM with OpenAI GPT-OSS models on Modal, enabling various text generation and chat functionalities. A key aspect is the implementation of a local-to-remote chat loop using Modal queues, facilitating interactive sessions with the deployed models and supporting built-in tools like a browser and Python interpreter.

Highlights

  • vLLM Integration on Modal: I've set up a Modal application to deploy and run vLLM with OpenAI GPT-OSS models, leveraging GPU resources for efficient inference.
  • Custom Tokenizer: A tiktoken-based custom tokenizer (o200k_harmony) is included to handle specific tokenization needs for the GPT-OSS models.
  • Diverse Generation Modes: The implementation supports various text generation tasks, including basic generation, streaming generation, and interactive chat streams.
  • Tool-Augmented Chat: The chat_tool_stream function allows the model to interact with external tools, specifically a browser tool (using ExaBackend for web search) and a Python tool for executing code.
  • Local-to-Remote Chat Loop: I've introduced a mechanism using Modal's ephemeral queues to enable seamless local input for remote chat sessions, enhancing interactivity.
  • Patch Application Capability: The chat_tool_stream also includes an apply_patch function, allowing the model to apply code patches.
  • Environment Variable Update: The .env.example file has been updated to include EXA_API_KEY, which is necessary for the browser tool's functionality.

@gemini-code-assist (bot) left a comment
Code Review

This pull request adds functionality to run vLLM with openai_gpt_oss on Modal, including an interactive chat mode using a queue. The implementation is quite comprehensive and covers various modes of generation and chat. My review focuses on improving the robustness, maintainability, and adherence to best practices. I've identified several areas for improvement, including pinning dependencies for reproducible builds, fixing a potential Unicode decoding error, improving error handling, and refactoring complex code for better clarity. Addressing these points will make the new functionality more reliable and easier to maintain.

Signed-off-by: weedge <[email protected]>