Conversation


@joshua-mo-143 joshua-mo-143 commented Jun 11, 2025

Fixes #503
Fixes #400
Fixes #227
Fixes RIG-780

A list of breaking changes:

  • Added call_id as an Option to all core tool call/result type variants to enable usage with OpenAI. If a provider doesn't specifically need it, it can be set to None.

To-do:

  • Streaming
  • Streaming with tools

linear bot commented Jun 11, 2025

@joshua-mo-143 joshua-mo-143 requested a review from cvauclair July 1, 2025 15:01

@0xMochan 0xMochan left a comment


Couple of remarks but this looks fantastic! The Responses API looked daunting, but I am curious how we can leverage more of its feature set beyond the completion compatibility layer. The API allows you to have stateful messages (server-side message history) alongside some other things, which would be neat to have a follow-up PR with more examples for.

  • The #[rig_tool] macro is borked due to `additionalProperties` being required (ofc OpenAI still makes odd API design choices).
  • Since this completely changes the OpenAI client, code that was using the older completion endpoint will break. I see that it is still preserved, but an example that showcases how to use the legacy client with other providers would be helpful.
    • Ofc the built-in providers have "no changes", but I'm sure someone will be using legacy clients.
  • Couple of other remarks on todos.

@joshua-mo-143 joshua-mo-143 removed the request for review from cvauclair July 3, 2025 21:59
@joshua-mo-143 joshua-mo-143 requested a review from 0xMochan July 3, 2025 23:59

@0xMochan 0xMochan left a comment


Still getting a

```
thread 'main' panicked at rig-core/rig-core-derive/examples/rig_tool/async_tool.rs:51:64:
called `Result::unwrap()` on an `Err` value: CompletionError(ProviderError("{\n  \"error\": {\n    \"message\": \"Invalid schema for function 'async_operation': In context=(), 'required' is required to be supplied and to be an array including every key in properties. Missing 'delay_ms'.\",\n    \"type\": \"invalid_request_error\",\n    \"param\": \"tools[0].parameters\",\n    \"code\": \"invalid_function_parameters\"\n  }\n}"))
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```

on `cargo run --example async_tool`

@joshua-mo-143
Contributor Author

> Still getting a
>
> ```
> thread 'main' panicked at rig-core/rig-core-derive/examples/rig_tool/async_tool.rs:51:64:
> called `Result::unwrap()` on an `Err` value: CompletionError(ProviderError("{\n  \"error\": {\n    \"message\": \"Invalid schema for function 'async_operation': In context=(), 'required' is required to be supplied and to be an array including every key in properties. Missing 'delay_ms'.\",\n    \"type\": \"invalid_request_error\",\n    \"param\": \"tools[0].parameters\",\n    \"code\": \"invalid_function_parameters\"\n  }\n}"))
> note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
> ```
>
> on `cargo run --example async_tool`

Ended up just adding a `required` field to the tool macro. I think this is probably the best way forward, as otherwise there may be a significant amount of code duplication.
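For reference, the error above is about OpenAI's function-calling validation: `required` must be an array listing every key in `properties`. A parameters schema that would pass for the `async_operation` tool might look like this (the field name `delay_ms` is taken from the error message; the type and description are illustrative guesses):

```json
{
  "type": "object",
  "properties": {
    "delay_ms": {
      "type": "integer",
      "description": "Delay in milliseconds (illustrative description)"
    }
  },
  "required": ["delay_ms"],
  "additionalProperties": false
}
```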

@joshua-mo-143 joshua-mo-143 requested a review from 0xMochan July 4, 2025 13:05

0xMochan commented Jul 4, 2025

> Since this completely changes the OpenAI client, code that was using the older completion endpoint will break. I see that it is still preserved, but an example that showcases how to use the legacy client with other providers would be helpful.
>
>   • Ofc the built-in providers have "no changes", but I'm sure someone will be using legacy clients.

Was this addressed? It's fairly common to use the OpenAI client with custom providers, so an example that showcases the legacy client would be suitable.

Otherwise, I think I can approve.

@joshua-mo-143
Contributor Author

> Since this completely changes the OpenAI client, code that was using the older completion endpoint will break. I see that it is still preserved, but an example that showcases how to use the legacy client with other providers would be helpful.
>
>   • Ofc the built-in providers have "no changes", but I'm sure someone will be using legacy clients.
>
> Was this addressed? It's fairly common to use the OpenAI client with custom providers, so an example that showcases the legacy client would be suitable.
>
> Otherwise, I think I can approve.

I'll add a relevant example and make sure it compiles and passes CI, then we'll get this merged in.

@joshua-mo-143 joshua-mo-143 merged commit ab46a47 into main Jul 4, 2025
5 checks passed
@github-actions github-actions bot mentioned this pull request Jul 4, 2025

Development

Successfully merging this pull request may close these issues.

  • Integrate OpenAI Responses API
  • feat: openai responses api
  • feat: strict parameter in ToolDefinition

3 participants