- [How to use chat completions with tools and function calling](#how-to-use-chat-completions-with-tools-and-function-calling)
- [How to use chat completions with structured outputs](#how-to-use-chat-completions-with-structured-outputs)
- [How to use chat completions with audio](#how-to-use-chat-completions-with-audio)
- [How to use responses with streaming and reasoning](#how-to-use-responses-with-streaming-and-reasoning)
- [How to use responses with file search](#how-to-use-responses-with-file-search)
- [How to use responses with web search](#how-to-use-responses-with-web-search)
- [How to generate text embeddings](#how-to-generate-text-embeddings)
- [How to generate images](#how-to-generate-images)
- [How to transcribe audio](#how-to-transcribe-audio)
The library is organized into namespaces by feature areas in the OpenAI REST API:
| Namespace | Client class | Notes |
| --- | --- | --- |
| `OpenAI.Images` | `ImageClient` | |
| `OpenAI.Models` | `OpenAIModelClient` | |
| `OpenAI.Moderations` | `ModerationClient` | |
| `OpenAI.Responses` | `OpenAIResponseClient` | |
| `OpenAI.VectorStores` | `VectorStoreClient` | ![Experimental](https://img.shields.io/badge/experimental-purple) |

### Using the async API
The `StreamingOutputAudioUpdate` instances received while streaming can contain any of:
- The `ExpiresAt` value that describes when the `Id` will no longer be valid for use with `ChatAudioReference` in subsequent requests; this typically appears once and only once, in the final `StreamingOutputAudioUpdate`
- Incremental `TranscriptUpdate` and/or `AudioBytesUpdate` values, which can be incrementally consumed and, when concatenated, form the complete audio transcript and audio output for the overall response; many of these typically appear

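The bullet points above can be sketched as a small accumulation loop. This is a rough illustration rather than verbatim library usage: it assumes `audioUpdates` is a sequence of `StreamingOutputAudioUpdate` values already drained from the streaming response (obtaining the stream itself is shown earlier in this section), and it assumes `AudioBytesUpdate` exposes its payload as `BinaryData`.

```csharp
using System.Text;

// Hypothetical: audioUpdates is an IEnumerable<StreamingOutputAudioUpdate>
// collected from the streaming response shown earlier in this section.
StringBuilder transcript = new();
using MemoryStream audio = new();
string audioId = null;

foreach (StreamingOutputAudioUpdate audioUpdate in audioUpdates)
{
    // The Id arrives early and can later be supplied via ChatAudioReference.
    audioId ??= audioUpdate.Id;

    // Concatenating the incremental values yields the complete transcript
    // and audio output for the overall response.
    if (audioUpdate.TranscriptUpdate is not null)
    {
        transcript.Append(audioUpdate.TranscriptUpdate);
    }
    if (audioUpdate.AudioBytesUpdate is not null)
    {
        audio.Write(audioUpdate.AudioBytesUpdate.ToArray());
    }
}
```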
## How to use responses with streaming and reasoning

```csharp
OpenAIResponseClient client = new(
    model: "o3-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

OpenAIResponse response = await client.CreateResponseAsync(
    userInputText: "What's the optimal strategy to win at poker?",
    new ResponseCreationOptions()
    {
        ReasoningOptions = new ResponseReasoningOptions()
        {
            ReasoningEffortLevel = ResponseReasoningEffortLevel.High,
        },
    });

await foreach (StreamingResponseUpdate update
    in client.CreateResponseStreamingAsync(
        userInputText: "What's the optimal strategy to win at poker?",
        new ResponseCreationOptions()
        {
            ReasoningOptions = new ResponseReasoningOptions()
            {
                ReasoningEffortLevel = ResponseReasoningEffortLevel.High,
            },
        }))
{
    if (update is StreamingResponseItemUpdate itemUpdate
        && itemUpdate.Item is ReasoningResponseItem reasoningItem)
    {
        Console.WriteLine($"[Reasoning] ({reasoningItem.Status})");
    }
    else if (update is StreamingResponseContentPartDeltaUpdate deltaUpdate)
    {
        Console.Write(deltaUpdate.Text);
    }
}
```

## How to use responses with file search

```csharp
OpenAIResponseClient client = new(
    model: "gpt-4o-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

ResponseTool fileSearchTool
    = ResponseTool.CreateFileSearchTool(
        vectorStoreIds: [ExistingVectorStoreForTest.Id]);
OpenAIResponse response = await client.CreateResponseAsync(
    userInputText: "According to available files, what's the secret number?",
    new ResponseCreationOptions()
    {
        Tools = { fileSearchTool },
    });

foreach (ResponseItem outputItem in response.OutputItems)
{
    if (outputItem is FileSearchCallResponseItem fileSearchCall)
    {
        Console.WriteLine($"[file_search] ({fileSearchCall.Status}): {fileSearchCall.Id}");
        foreach (string query in fileSearchCall.Queries)
        {
            Console.WriteLine($"  - {query}");
        }
    }
    else if (outputItem is MessageResponseItem message)
    {
        Console.WriteLine($"[{message.Role}] {message.Content.FirstOrDefault()?.Text}");
    }
}
```

## How to use responses with web search

```csharp
OpenAIResponseClient client = new(
    model: "gpt-4o-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

OpenAIResponse response = await client.CreateResponseAsync(
    userInputText: "What's a happy news headline from today?",
    new ResponseCreationOptions()
    {
        Tools = { ResponseTool.CreateWebSearchTool() },
    });

foreach (ResponseItem item in response.OutputItems)
{
    if (item is WebSearchCallResponseItem webSearchCall)
    {
        Console.WriteLine($"[Web search invoked] ({webSearchCall.Status}) {webSearchCall.Id}");
    }
    else if (item is MessageResponseItem message)
    {
        Console.WriteLine($"[{message.Role}] {message.Content?.FirstOrDefault()?.Text}");
    }
}
```

## How to generate text embeddings

In this example, you want to create a trip-planning website that allows customers to write a prompt describing the kind of hotel that they are looking for and then offers hotel recommendations that closely match this description. To achieve this, it is possible to use text embeddings to measure the relatedness of text strings. In summary, you can get embeddings of the hotel descriptions, store them in a vector database, and use them to build a search index that you can query using the embedding of a given customer's prompt.
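As a minimal sketch of the first step of that pipeline, the snippet below embeds a single hotel description and defines a cosine-similarity helper for ranking stored vectors against a customer prompt's vector. The model name and sample text are illustrative, and the snippet assumes the same environment-variable pattern as the examples above.

```csharp
EmbeddingClient client = new(
    model: "text-embedding-3-small",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

// Illustrative hotel description; in practice, embed every description
// in your catalog and store the vectors in a vector database.
string description = "Family-friendly beachfront resort with a pool and sea views.";
OpenAIEmbedding embedding = client.GenerateEmbedding(description);
ReadOnlyMemory<float> vector = embedding.ToFloats();

// Cosine similarity between a stored hotel vector and the embedding of the
// customer's prompt measures how closely the two texts are related.
static double CosineSimilarity(ReadOnlySpan<float> a, ReadOnlySpan<float> b)
{
    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
}
```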