Sync spec + structured outputs example (#257)
* CreateChatCompletionRequest updated

* CreateChatCompletionResponse updated

* ChatCompletionStreamResponseDelta updated

* CreateChatCompletionStreamResponse updated

* CreateFineTuningJobRequest updated

* ResponseFormat and ImageResponseFormat

* update examples with ImageResponseFormat

* AssistantToolsFileSearch updated with FileSearchRankingOptions

* updated MessageContent and MessageDeltaContent to include refusal variant

* update examples with message refusal variant

* updated RunStepDetailsToolCallsFileSearchObject

* updated VectorStoreFileObject last_error enum variant

* updated step-object link

* updated ChatCompletionRequestMessage

* updated FunctionObject to include strict

* update example for FunctionObject strict param

* helper From traits for chat message types

* Add structured-outputs example (see the sketch below)

* update readme

* updated readme

* add comment
64bit authored Aug 29, 2024
1 parent 650281d commit 577c27f
Showing 21 changed files with 506 additions and 96 deletions.
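
To ground the list above, here is a minimal sketch of what the synced types enable, in the spirit of the new structured-outputs example. It is not copied from this commit: the `gpt-4o-2024-08-06` model id, the `ResponseFormatJsonSchema` field names, the string `From` helpers on the message types, and the `refusal` field on the response message are assumptions inferred from the bullets above, and the snippet expects `tokio` and `serde_json` alongside `async-openai`.

```rust
use async_openai::{
    types::{
        ChatCompletionRequestSystemMessage, ChatCompletionRequestUserMessage,
        CreateChatCompletionRequestArgs, ResponseFormat, ResponseFormatJsonSchema,
    },
    Client,
};
use serde_json::json;
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let client = Client::new();

    // JSON Schema the model output must conform to (illustrative shape).
    let schema = json!({
        "type": "object",
        "properties": {
            "steps": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "explanation": { "type": "string" },
                        "output": { "type": "string" }
                    },
                    "required": ["explanation", "output"],
                    "additionalProperties": false
                }
            },
            "final_answer": { "type": "string" }
        },
        "required": ["steps", "final_answer"],
        "additionalProperties": false
    });

    let request = CreateChatCompletionRequestArgs::default()
        .model("gpt-4o-2024-08-06") // assumed structured-outputs-capable model id
        .max_tokens(512u32)
        // The new `From` helpers let plain strings become typed request messages.
        .messages([
            ChatCompletionRequestSystemMessage::from(
                "You are a helpful math tutor. Guide the user through the solution step by step.",
            )
            .into(),
            ChatCompletionRequestUserMessage::from("how can I solve 8x + 7 = -23").into(),
        ])
        // Structured Outputs: `strict: Some(true)` asks the API to guarantee schema conformance.
        .response_format(ResponseFormat::JsonSchema {
            json_schema: ResponseFormatJsonSchema {
                name: "math_reasoning".into(),
                description: None,
                schema: Some(schema),
                strict: Some(true),
            },
        })
        .build()?;

    let response = client.chat().create(request).await?;
    for choice in response.choices {
        // `refusal` is one of the newly synced fields on the response message.
        if let Some(refusal) = choice.message.refusal {
            eprintln!("refusal: {refusal}");
        } else if let Some(content) = choice.message.content {
            println!("{content}");
        }
    }

    Ok(())
}
```

Reusing one `ResponseFormat` across chat completions and assistants (see the assistant.rs diff below) keeps the builder surface aligned with the upstream spec.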
13 changes: 8 additions & 5 deletions async-openai/README.md
@@ -34,11 +34,12 @@
- [x] Images
- [x] Models
- [x] Moderations
- [ ] Organizations | Administration
- [ ] Uploads
- SSE streaming on all available APIs
- SSE streaming on available APIs
- Requests (except SSE streaming) including form submissions are retried with exponential backoff when [rate limited](https://platform.openai.com/docs/guides/rate-limits).
- Ergonomic builder pattern for all request objects.
- Microsoft Azure OpenAI Service (only APIs matching OpenAI spec)
- Microsoft Azure OpenAI Service (only for APIs matching OpenAI spec)

## Usage

@@ -61,7 +62,7 @@ $Env:OPENAI_API_KEY='sk-...'

```rust
use async_openai::{
types::{CreateImageRequestArgs, ImageSize, ResponseFormat},
types::{CreateImageRequestArgs, ImageSize, ImageResponseFormat},
Client,
};
use std::error::Error;
@@ -74,7 +75,7 @@ async fn main() -> Result<(), Box<dyn Error>> {
let request = CreateImageRequestArgs::default()
.prompt("cats on sofa and carpet in living room")
.n(2)
.response_format(ResponseFormat::Url)
.response_format(ImageResponseFormat::Url)
.size(ImageSize::S256x256)
.user("async-openai")
.build()?;
@@ -110,14 +111,16 @@ All forms of contributions, such as new features requests, bug fixes, issues, do
A good starting point would be to look at existing [open issues](https://github.com/64bit/async-openai/issues).

To maintain the quality of the project, the following is the minimum required for code contributions:

- **Names & Documentation**: All struct names, field names and doc comments come from the OpenAPI spec. Nested objects that the spec leaves unnamed leave room for choosing an appropriate name.
- **Tested**: Changes should come with supporting test(s) and/or an example. Existing examples, doc tests, unit tests, and integration tests should be made to work with the changes if applicable.
- **Scope**: Keep scope limited to APIs available in official documents such as [API Reference](https://platform.openai.com/docs/api-reference) or [OpenAPI spec](https://github.com/openai/openai-openapi/). Other LLMs or AI Providers offer OpenAI-compatible APIs, yet they may not always have full parity. In such cases, the OpenAI spec takes precedence.
- **Consistency**: Keep code style consistent across all the "APIs" that library exposes; it creates a great developer experience.

This project adheres to the [Rust Code of Conduct](https://www.rust-lang.org/policies/code-of-conduct).

## Complimentary Crates

- [openai-func-enums](https://github.com/frankfralick/openai-func-enums) provides procedural macros that make it easier to use this library with the OpenAI API's tool calling feature. It also provides derive macros you can add to existing [clap](https://github.com/clap-rs/clap) application subcommands for natural language use of command line tools. It supports OpenAI's [parallel tool calls](https://platform.openai.com/docs/guides/function-calling/parallel-function-calling) and allows you to choose between running multiple tool calls concurrently or on their own OS threads.
- [async-openai-wasm](https://github.com/ifsheldon/async-openai-wasm) provides WASM support.

47 changes: 26 additions & 21 deletions async-openai/src/types/assistant.rs
@@ -5,7 +5,7 @@ use serde::{Deserialize, Serialize};

use crate::error::OpenAIError;

use super::{FunctionName, FunctionObject};
use super::{FunctionName, FunctionObject, ResponseFormat};

#[derive(Clone, Serialize, Debug, Deserialize, PartialEq, Default)]
pub struct AssistantToolCodeInterpreterResources {
@@ -112,6 +112,8 @@ pub struct AssistantObject {

/// Specifies the format that the model must output. Compatible with [GPT-4o](https://platform.openai.com/docs/models/gpt-4o), [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
///
/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
///
/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
///
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
@@ -120,25 +122,8 @@ pub enum AssistantsApiResponseFormatOption {
#[default]
#[serde(rename = "auto")]
Auto,
#[serde(rename = "none")]
None,
#[serde(untagged)]
Format(AssistantsApiResponseFormat),
}

/// An object describing the expected output of the model. If `json_object` only `function` type `tools` are allowed to be passed to the Run. If `text` the model can return text or any value needed.
#[derive(Clone, Serialize, Debug, Deserialize, PartialEq, Default)]
pub struct AssistantsApiResponseFormat {
/// Must be one of `text` or `json_object`.
pub r#type: AssistantsApiResponseFormatType,
}

#[derive(Clone, Serialize, Debug, Deserialize, PartialEq, Default)]
#[serde(rename_all = "snake_case")]
pub enum AssistantsApiResponseFormatType {
#[default]
Text,
JsonObject,
Format(ResponseFormat),
}

/// Retrieval tool
@@ -153,8 +138,28 @@ pub struct AssistantToolsFileSearch {
pub struct AssistantToolsFileSearchOverrides {
/// The maximum number of results the file search tool should output. The default is 20 for gpt-4* models and 5 for gpt-3.5-turbo. This number should be between 1 and 50 inclusive.
///
//// Note that the file search tool may output fewer than `max_num_results` results. See the [file search tool documentation](https://platform.openai.com/docs/assistants/tools/file-search/number-of-chunks-returned) for more information.
pub max_num_results: u8,
/// Note that the file search tool may output fewer than `max_num_results` results. See the [file search tool documentation](https://platform.openai.com/docs/assistants/tools/file-search/customizing-file-search-settings) for more information.
pub max_num_results: Option<u8>,
pub ranking_options: Option<FileSearchRankingOptions>,
}

#[derive(Clone, Serialize, Debug, Deserialize, PartialEq)]
pub enum FileSearchRanker {
#[serde(rename = "auto")]
Auto,
#[serde(rename = "default_2024_08_21")]
Default2024_08_21,
}

/// The ranking options for the file search.
///
/// See the [file search tool documentation](/docs/assistants/tools/file-search/customizing-file-search-settings) for more information.
#[derive(Clone, Serialize, Debug, Deserialize, PartialEq)]
pub struct FileSearchRankingOptions {
/// The ranker to use for the file search. If not specified will use the `auto` ranker.
pub ranker: Option<FileSearchRanker>,
/// The score threshold for the file search. All values must be a floating point number between 0 and 1.
pub score_threshold: Option<f32>,
}

/// Function tool
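
As a quick, hedged sketch of the assistant-side changes above: assistants now reuse the shared `ResponseFormat`, and the file search overrides carry the new ranking options. Only the struct and enum shapes visible in this diff are used; the `Debug` derives and the `async_openai::types` re-export path are assumptions.

```rust
use async_openai::types::{
    AssistantToolsFileSearchOverrides, AssistantsApiResponseFormatOption, FileSearchRanker,
    FileSearchRankingOptions, ResponseFormat,
};

fn main() {
    // Assistants now take the shared `ResponseFormat` in place of the removed
    // `AssistantsApiResponseFormat` wrapper; JSON mode looks like this.
    let response_format = AssistantsApiResponseFormatOption::Format(ResponseFormat::JsonObject);

    // File search overrides pick up the new optional ranking options.
    let file_search = AssistantToolsFileSearchOverrides {
        max_num_results: Some(10),
        ranking_options: Some(FileSearchRankingOptions {
            ranker: Some(FileSearchRanker::Auto),
            // Results scoring below this threshold are filtered out (0.0..=1.0).
            score_threshold: Some(0.5),
        }),
    };

    // In practice these would be set on a CreateAssistantRequest and on the
    // file search tool; printing them just keeps the sketch self-contained.
    println!("{response_format:?}");
    println!("{file_search:?}");
}
```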
14 changes: 7 additions & 7 deletions async-openai/src/types/assistant_stream.rs
@@ -66,25 +66,25 @@ pub enum AssistantStreamEvent {
/// Occurs when a [run](https://platform.openai.com/docs/api-reference/runs/object) expires.
#[serde(rename = "thread.run.expired")]
ThreadRunExpired(RunObject),
/// Occurs when a [run step](https://platform.openai.com/docs/api-reference/runs/step-object) is created.
/// Occurs when a [run step](https://platform.openai.com/docs/api-reference/run-steps/step-object) is created.
#[serde(rename = "thread.run.step.created")]
ThreadRunStepCreated(RunStepObject),
/// Occurs when a [run step](https://platform.openai.com/docs/api-reference/runs/step-object) moves to an `in_progress` state.
/// Occurs when a [run step](https://platform.openai.com/docs/api-reference/run-steps/step-object) moves to an `in_progress` state.
#[serde(rename = "thread.run.step.in_progress")]
ThreadRunStepInProgress(RunStepObject),
/// Occurs when parts of a [run step](https://platform.openai.com/docs/api-reference/runs/step-object) are being streamed.
/// Occurs when parts of a [run step](https://platform.openai.com/docs/api-reference/run-steps/step-object) are being streamed.
#[serde(rename = "thread.run.step.delta")]
ThreadRunStepDelta(RunStepDeltaObject),
/// Occurs when a [run step](https://platform.openai.com/docs/api-reference/runs/step-object) is completed.
/// Occurs when a [run step](https://platform.openai.com/docs/api-reference/run-steps/step-object) is completed.
#[serde(rename = "thread.run.step.completed")]
ThreadRunStepCompleted(RunStepObject),
/// Occurs when a [run step](https://platform.openai.com/docs/api-reference/runs/step-object) fails.
/// Occurs when a [run step](https://platform.openai.com/docs/api-reference/run-steps/step-object) fails.
#[serde(rename = "thread.run.step.failed")]
ThreadRunStepFailed(RunStepObject),
/// Occurs when a [run step](https://platform.openai.com/docs/api-reference/runs/step-object) is cancelled.
/// Occurs when a [run step](https://platform.openai.com/docs/api-reference/run-steps/step-object) is cancelled.
#[serde(rename = "thread.run.step.cancelled")]
ThreadRunStepCancelled(RunStepObject),
/// Occurs when a [run step](https://platform.openai.com/docs/api-reference/runs/step-object) expires.
/// Occurs when a [run step](https://platform.openai.com/docs/api-reference/run-steps/step-object) expires.
#[serde(rename = "thread.run.step.expired")]
ThreadRunStepExpired(RunStepObject),
/// Occurs when a [message](https://platform.openai.com/docs/api-reference/messages/object) is created.
(Diffs for the remaining 18 changed files are not shown.)
