remove legacy completions endpoint #160

Open
wants to merge 2 commits into
base: main
112 changes: 0 additions & 112 deletions README.md
@@ -15,8 +15,6 @@ This repository contains Swift community-maintained implementation over [OpenAI]
- [Installation](#installation)
- [Usage](#usage)
- [Initialization](#initialization)
- [Completions](#completions)
- [Completions Streaming](#completions-streaming)
- [Chats](#chats)
- [Chats Streaming](#chats-streaming)
- [Images](#images)
@@ -85,115 +83,6 @@ let openAI = OpenAI(configuration: configuration)

Once you have the token and the instance is initialized, you are ready to make requests.

### Completions

Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.

**Request**

```swift
struct CompletionsQuery: Codable {
/// ID of the model to use.
public let model: Model
/// The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
public let prompt: String
/// What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
public let temperature: Double?
/// The maximum number of tokens to generate in the completion.
public let maxTokens: Int?
/// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
public let topP: Double?
/// Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
public let frequencyPenalty: Double?
/// Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
public let presencePenalty: Double?
/// Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
public let stop: [String]?
/// A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
public let user: String?
}
```

**Response**

```swift
struct CompletionsResult: Codable, Equatable {
public struct Choice: Codable, Equatable {
public let text: String
public let index: Int
}

public let id: String
public let object: String
public let created: TimeInterval
public let model: Model
public let choices: [Choice]
public let usage: Usage
}
```
**Example**

```swift
let query = CompletionsQuery(model: .textDavinci_003, prompt: "What is 42?", temperature: 0, maxTokens: 100, topP: 1, frequencyPenalty: 0, presencePenalty: 0, stop: ["\\n"])
openAI.completions(query: query) { result in
//Handle result here
}
//or
let result = try await openAI.completions(query: query)
```

```
(lldb) po result
▿ CompletionsResult
- id : "cmpl-6P9be2p2fQlwB7zTOl0NxCOetGmX3"
- object : "text_completion"
- created : 1671453146.0
- model : OpenAI.Model.textDavinci_003
▿ choices : 1 element
▿ 0 : Choice
- text : "\n\n42 is the answer to the ultimate question of life, the universe, and everything, according to the book The Hitchhiker\'s Guide to the Galaxy."
- index : 0
```

#### Completions Streaming

Completions streaming is available via the `completionsStream` function. Tokens are sent one by one.

**Closures**
```swift
openAI.completionsStream(query: query) { partialResult in
switch partialResult {
case .success(let result):
print(result.choices)
case .failure(let error):
//Handle chunk error here
}
} completion: { error in
//Handle streaming error here
}
```

**Combine**

```swift
openAI
.completionsStream(query: query)
.sink { completion in
//Handle completion result here
} receiveValue: { result in
//Handle chunk here
}.store(in: &cancellables)
```

**Structured concurrency**
```swift
for try await result in openAI.completionsStream(query: query) {
//Handle result here
}
```

Review [Completions Documentation](https://platform.openai.com/docs/api-reference/completions) for more info.
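Since this PR removes the completions endpoint, callers of the section above would migrate to the chat API that remains in the library. A minimal sketch of that migration; the `ChatQuery` and `Chat` shapes shown here are assumptions about the library's chat API, not verified signatures:

```swift
// Hypothetical migration of the removed completions call to the chat
// endpoint. ChatQuery/Chat initializer shapes are illustrative.
let query = ChatQuery(model: .gpt3_5Turbo, messages: [Chat(role: .user, content: "What is 42?")])
openAI.chats(query: query) { result in
    //Handle result here
}
//or
let result = try await openAI.chats(query: query)
```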

### Chats

Using the OpenAI Chat API, you can build your own applications with `gpt-3.5-turbo` to do things like:
@@ -999,7 +888,6 @@ Read more about Cosine Similarity [here](https://en.wikipedia.org/wiki/Cosine_si
The library contains built-in [Combine](https://developer.apple.com/documentation/combine) extensions.

```swift
func completions(query: CompletionsQuery) -> AnyPublisher<CompletionsResult, Error>
func images(query: ImagesQuery) -> AnyPublisher<ImagesResult, Error>
func embeddings(query: EmbeddingsQuery) -> AnyPublisher<EmbeddingsResult, Error>
func chats(query: ChatQuery) -> AnyPublisher<ChatResult, Error>
9 changes: 0 additions & 9 deletions Sources/OpenAI/OpenAI.swift
@@ -56,14 +56,6 @@ final public class OpenAI: OpenAIProtocol {
self.init(configuration: configuration, session: session as URLSessionProtocol)
}

public func completions(query: CompletionsQuery, completion: @escaping (Result<CompletionsResult, Error>) -> Void) {
performRequest(request: JSONRequest<CompletionsResult>(body: query, url: buildURL(path: .completions)), completion: completion)
}

public func completionsStream(query: CompletionsQuery, onResult: @escaping (Result<CompletionsResult, Error>) -> Void, completion: ((Error?) -> Void)?) {
performStreamingRequest(request: JSONRequest<CompletionsResult>(body: query.makeStreamable(), url: buildURL(path: .completions)), onResult: onResult, completion: completion)
}

public func images(query: ImagesQuery, completion: @escaping (Result<ImagesResult, Error>) -> Void) {
performRequest(request: JSONRequest<ImagesResult>(body: query, url: buildURL(path: .images)), completion: completion)
}
@@ -205,7 +197,6 @@ extension OpenAI {
typealias APIPath = String
extension APIPath {

static let completions = "/v1/completions"
static let embeddings = "/v1/embeddings"
static let chats = "/v1/chat/completions"
static let edits = "/v1/edits"
56 changes: 0 additions & 56 deletions Sources/OpenAI/Public/Models/CompletionsQuery.swift

This file was deleted.

42 changes: 0 additions & 42 deletions Sources/OpenAI/Public/Models/CompletionsResult.swift

This file was deleted.

13 changes: 0 additions & 13 deletions Sources/OpenAI/Public/Models/Models/Models.swift
@@ -70,19 +70,6 @@ public extension Model {

/// Snapshot of `gpt-3.5-turbo-16k` from June 13th 2023. Unlike `gpt-3.5-turbo-16k`, this model will not receive updates, and will be deprecated 3 months after a new version is released.
static let gpt3_5Turbo_16k_0613 = "gpt-3.5-turbo-16k-0613"

// Completions

/// Can do any language task with better quality, longer output, and consistent instruction-following than the curie, babbage, or ada models. Also supports inserting completions within text.
static let textDavinci_003 = "text-davinci-003"
/// Similar capabilities to text-davinci-003 but trained with supervised fine-tuning instead of reinforcement learning.
static let textDavinci_002 = "text-davinci-002"
/// Very capable, faster and lower cost than Davinci.
static let textCurie = "text-curie-001"
/// Capable of straightforward tasks, very fast, and lower cost.
static let textBabbage = "text-babbage-001"
/// Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost.
static let textAda = "text-ada-001"

// Edits

26 changes: 0 additions & 26 deletions Sources/OpenAI/Public/Protocols/OpenAIProtocol+Async.swift
@@ -12,32 +12,6 @@ import Foundation
@available(tvOS 13.0, *)
@available(watchOS 6.0, *)
public extension OpenAIProtocol {
func completions(
query: CompletionsQuery
) async throws -> CompletionsResult {
try await withCheckedThrowingContinuation { continuation in
completions(query: query) { result in
switch result {
case let .success(success):
return continuation.resume(returning: success)
case let .failure(failure):
return continuation.resume(throwing: failure)
}
}
}
}

func completionsStream(
query: CompletionsQuery
) -> AsyncThrowingStream<CompletionsResult, Error> {
return AsyncThrowingStream { continuation in
return completionsStream(query: query) { result in
continuation.yield(with: result)
} completion: { error in
continuation.finish(throwing: error)
}
}
}

func images(
query: ImagesQuery
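The deleted wrappers above follow the standard pattern for bridging a callback-based API into Swift concurrency: `withCheckedThrowingContinuation` for one-shot calls, `AsyncThrowingStream` for streams. A self-contained sketch of the one-shot case, with a hypothetical `fetchAnswer` callback standing in for the network call:

```swift
import Foundation

// Hypothetical callback-based API standing in for the deleted
// completions(query:completion:) network call.
func fetchAnswer(completion: @escaping (Result<Int, Error>) -> Void) {
    completion(.success(42))
}

// Bridged to async/await the same way the deleted wrapper was:
// resume the continuation exactly once with the callback's result.
func fetchAnswer() async throws -> Int {
    try await withCheckedThrowingContinuation { continuation in
        fetchAnswer { result in
            continuation.resume(with: result)
        }
    }
}
```

`continuation.resume(with:)` accepts a `Result` directly, which collapses the explicit `switch` in the deleted code into one line with the same behavior.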
21 changes: 0 additions & 21 deletions Sources/OpenAI/Public/Protocols/OpenAIProtocol+Combine.swift
@@ -15,27 +15,6 @@ import Combine
@available(watchOS 6.0, *)
public extension OpenAIProtocol {

func completions(query: CompletionsQuery) -> AnyPublisher<CompletionsResult, Error> {
Future<CompletionsResult, Error> {
completions(query: query, completion: $0)
}
.eraseToAnyPublisher()
}

func completionsStream(query: CompletionsQuery) -> AnyPublisher<Result<CompletionsResult, Error>, Error> {
let progress = PassthroughSubject<Result<CompletionsResult, Error>, Error>()
completionsStream(query: query) { result in
progress.send(result)
} completion: { error in
if let error {
progress.send(completion: .failure(error))
} else {
progress.send(completion: .finished)
}
}
return progress.eraseToAnyPublisher()
}

func images(query: ImagesQuery) -> AnyPublisher<ImagesResult, Error> {
Future<ImagesResult, Error> {
images(query: query, completion: $0)
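The deleted publisher above bridges a streaming callback into Combine through a `PassthroughSubject`. A self-contained sketch of the same pattern with a hypothetical asynchronous source; note the source must emit only after the subscriber attaches (the dispatch below guarantees this), since `PassthroughSubject` drops values sent before subscription:

```swift
import Combine
import Foundation

// Hypothetical streaming callback API standing in for completionsStream.
// Values are delivered asynchronously, after any subscriber has attached.
func streamValues(onValue: @escaping (Int) -> Void, completion: @escaping (Error?) -> Void) {
    DispatchQueue.main.async {
        (1...3).forEach(onValue)
        completion(nil)
    }
}

// Same shape as the deleted publisher: forward each chunk through a
// subject, then finish or fail the stream when the source completes.
func valuesPublisher() -> AnyPublisher<Int, Error> {
    let subject = PassthroughSubject<Int, Error>()
    streamValues { value in
        subject.send(value)
    } completion: { error in
        if let error {
            subject.send(completion: .failure(error))
        } else {
            subject.send(completion: .finished)
        }
    }
    return subject.eraseToAnyPublisher()
}
```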
35 changes: 0 additions & 35 deletions Sources/OpenAI/Public/Protocols/OpenAIProtocol.swift
@@ -9,41 +9,6 @@ import Foundation

public protocol OpenAIProtocol {

/**
This function sends a completions query to the OpenAI API and retrieves generated completions in response. The Completions API enables you to build applications using OpenAI's language models, like the powerful GPT-3.

Example:
```
let query = CompletionsQuery(model: .textDavinci_003, prompt: "What is 42?")
openAI.completions(query: query) { result in
//Handle result here
}
```

- Parameters:
- query: A `CompletionsQuery` object containing the input parameters for the API request. This includes the prompt, model, temperature, max tokens, and other settings.
- completion: A closure which receives the result when the API request finishes. The closure's parameter, `Result<CompletionsResult, Error>`, will contain either the `CompletionsResult` object with the generated completions, or an error if the request failed.
**/
func completions(query: CompletionsQuery, completion: @escaping (Result<CompletionsResult, Error>) -> Void)

/**
This function sends a completions query to the OpenAI API and retrieves generated completions in response. The Completions API enables you to build applications using OpenAI's language models, like the powerful GPT-3. The result is returned by chunks.

Example:
```
let query = CompletionsQuery(model: .textDavinci_003, prompt: "What is 42?")
openAI.completionsStream(query: query) { result in
//Handle result here
} completion: { error in
//Handle error here
}
```

- Parameters:
- query: A `CompletionsQuery` object containing the input parameters for the API request. This includes the prompt, model, temperature, max tokens, and other settings.
- onResult: A closure which receives the result when the API request finishes. The closure's parameter, `Result<CompletionsResult, Error>`, will contain either the `CompletionsResult` object with the generated completions, or an error if the request failed.
- completion: A closure that is called when all chunks have been delivered or an unrecoverable error has occurred
**/
func completionsStream(query: CompletionsQuery, onResult: @escaping (Result<CompletionsResult, Error>) -> Void, completion: ((Error?) -> Void)?)

/**
This function sends an images query to the OpenAI API and retrieves generated images in response. The Images Generation API enables you to create various images or graphics using OpenAI's powerful deep learning models.
