
Commit c48e96c

Merge pull request #459 from alexrudall/7.0.0
7.0.0
2 parents 6640dd0 + bffcaad commit c48e96c

15 files changed: +1710 −2414 lines

.circleci/config.yml

+2 −1

@@ -8,7 +8,7 @@ jobs:
   rubocop:
     parallelism: 1
     docker:
-      - image: cimg/ruby:3.1-node
+      - image: cimg/ruby:3.2-node
     steps:
       - checkout
       - ruby/install-deps
@@ -43,3 +43,4 @@ workflows:
       - cimg/ruby:3.0-node
       - cimg/ruby:3.1-node
       - cimg/ruby:3.2-node
+      - cimg/ruby:3.3-node

CHANGELOG.md

+31 −2

@@ -5,6 +5,35 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [7.0.0] - 2024-04-27
+
+### Added
+
+- Add support for Batches, thanks to [@simonx1](https://github.com/simonx1) for the PR!
+- Allow use of local LLMs like Ollama! Thanks to [@ThomasSevestre](https://github.com/ThomasSevestre)
+- Update to v2 of the Assistants beta & add documentation on streaming from an Assistant.
+- Add Assistants endpoint to create and run a thread in one go, thank you [@quocphien90](https://github.com/quocphien90)
+- Add missing parameters (order, limit, etc) to Runs, RunSteps and Messages - thanks to [@shalecraig](https://github.com/shalecraig) and [@coezbek](https://github.com/coezbek)
+- Add missing Messages#list spec - thanks [@adammeghji](https://github.com/adammeghji)
+- Add Messages#modify to README - thanks to [@nas887](https://github.com/nas887)
+- Don't add the api_version (`/v1/`) to base_uris that already include it - thanks to [@kaiwren](https://github.com/kaiwren) for raising this issue
+- Allow passing a `StringIO` to Files#upload - thanks again to [@simonx1](https://github.com/simonx1)
+- Add Ruby 3.3 to CI
+
+### Security
+
+- [BREAKING] ruby-openai will no longer log out API errors by default - you can reenable by passing `log_errors: true` to your client. This will help to prevent leaking secrets to logs. Thanks to [@lalunamel](https://github.com/lalunamel) for this PR.
+
+### Removed
+
+- [BREAKING] Remove deprecated edits endpoint.
+
+### Fixed
+
+- Fix README DALL·E 3 error - thanks to [@clayton](https://github.com/clayton)
+- Fix README tool_calls error and add missing tool_choice info - thanks to [@Jbrito6492](https://github.com/Jbrito6492)
+
 ## [6.5.0] - 2024-03-31
 
 ### Added
@@ -67,13 +96,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - [BREAKING] Switch from legacy Finetunes to the new Fine-tune-jobs endpoints. Implemented by [@lancecarlson](https://github.com/lancecarlson)
 - [BREAKING] Remove deprecated Completions endpoints - use Chat instead.
 
-### Fix
+### Fixed
 
 - [BREAKING] Fix issue where :stream parameters were replaced by a boolean in the client application. Thanks to [@martinjaimem](https://github.com/martinjaimem), [@vickymadrid03](https://github.com/vickymadrid03) and [@nicastelo](https://github.com/nicastelo) for spotting and fixing this issue.
 
 ## [5.2.0] - 2023-10-30
 
-### Fix
+### Fixed
 
 - Added more spec-compliant SSE parsing: see here https://html.spec.whatwg.org/multipage/server-sent-events.html#event-stream-interpretation
 - Fixes issue where OpenAI or an intermediary returns only partial JSON per chunk of streamed data
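
The two changelog entries above most likely to affect existing apps are the `log_errors` default flip and the new local-LLM support. A minimal sketch of how both might be used with the 7.0.0 client - hedged, since it assumes the constructor's `log_errors:` and `uri_base:` options and an Ollama server running on its default port; the model name is whatever you have pulled locally:

```ruby
require "openai"

# Re-enable API error logging, which 7.0.0 now disables by default
# to avoid leaking secrets into logs.
openai_client = OpenAI::Client.new(
  access_token: ENV["OPENAI_ACCESS_TOKEN"],
  log_errors: true
)

# Point the same gem at a local Ollama server instead of OpenAI.
# 11434 is Ollama's default port; no access token is required.
ollama_client = OpenAI::Client.new(uri_base: "http://localhost:11434")

response = ollama_client.chat(
  parameters: {
    model: "llama3", # assumption: a model you have pulled locally
    messages: [{ role: "user", content: "Hello!" }]
  }
)
puts response.dig("choices", 0, "message", "content")
```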

Gemfile.lock

+1 −1

@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby-openai (6.5.0)
+    ruby-openai (7.0.0)
       event_stream_parser (>= 0.3.0, < 2.0.0)
       faraday (>= 1)
       faraday-multipart (>= 1)

README.md

+6 −38

@@ -29,7 +29,6 @@ Stream text with GPT-4, transcribe and translate audio with Whisper, or create i
   - [Ollama](#ollama)
   - [Counting Tokens](#counting-tokens)
   - [Models](#models)
-    - [Examples](#examples)
   - [Chat](#chat)
     - [Streaming Chat](#streaming-chat)
     - [Vision](#vision)
@@ -258,24 +257,9 @@ There are different models that can be used to generate text. For a full list an
 
 ```ruby
 client.models.list
-client.models.retrieve(id: "text-ada-001")
+client.models.retrieve(id: "gpt-3.5-turbo")
 ```
 
-#### Examples
-
-- [GPT-4 (limited beta)](https://platform.openai.com/docs/models/gpt-4)
-  - gpt-4 (uses current version)
-  - gpt-4-0314
-  - gpt-4-32k
-- [GPT-3.5](https://platform.openai.com/docs/models/gpt-3-5)
-  - gpt-3.5-turbo
-  - gpt-3.5-turbo-0301
-  - text-davinci-003
-- [GPT-3](https://platform.openai.com/docs/models/gpt-3)
-  - text-ada-001
-  - text-babbage-001
-  - text-curie-001
-
 ### Chat
 
 GPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
@@ -387,7 +371,7 @@ You can stream it as well!
 
 ### Functions
 
-You can describe and pass in functions and the model will intelligently choose to output a JSON object containing arguments to call those them. For example, if you want the model to use your method `get_current_weather` to get the current weather in a given location, see the example below. Note that tool_choice is optional, but if you exclude it, the model will choose whether to use the function or not ([see this for more details](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_call_functions_with_chat_models.ipynb)).
+You can describe and pass in functions and the model will intelligently choose to output a JSON object containing arguments to call them - eg., to use your method `get_current_weather` to get the weather in a given location. Note that tool_choice is optional, but if you exclude it, the model will choose whether to use the function or not ([see here](https://platform.openai.com/docs/api-reference/chat/create#chat-create-tool_choice)).
 
 ```ruby
 
@@ -398,7 +382,7 @@ end
 response =
   client.chat(
     parameters: {
-      model: "gpt-3.5-turbo-0613",
+      model: "gpt-3.5-turbo",
       messages: [
         {
           "role": "user",
@@ -462,30 +446,14 @@ Hit the OpenAI API for a completion using other GPT-3 models:
 ```ruby
 response = client.completions(
   parameters: {
-    model: "text-davinci-001",
+    model: "gpt-3.5-turbo",
     prompt: "Once upon a time",
     max_tokens: 5
   })
 puts response["choices"].map { |c| c["text"] }
 # => [", there lived a great"]
 ```
 
-### Edits
-
-Send a string and some instructions for what to do to the string:
-
-```ruby
-response = client.edits(
-  parameters: {
-    model: "text-davinci-edit-001",
-    input: "What day of the wek is it?",
-    instruction: "Fix the spelling mistakes"
-  }
-)
-puts response.dig("choices", 0, "text")
-# => What day of the week is it?
-```
-
 ### Embeddings
 
 You can use the embeddings endpoint to get a vector of numbers representing an input. You can then compare these vectors for different inputs to efficiently check how similar the inputs are.
@@ -624,7 +592,7 @@ You can then use this file ID to create a fine tuning job:
 response = client.finetunes.create(
   parameters: {
     training_file: file_id,
-    model: "gpt-3.5-turbo-0613"
+    model: "gpt-3.5-turbo"
   })
 fine_tune_id = response["id"]
 ```
@@ -1030,7 +998,7 @@ HTTP errors can be caught like this:
 
 ```
 begin
-  OpenAI::Client.new.models.retrieve(id: "text-ada-001")
+  OpenAI::Client.new.models.retrieve(id: "gpt-3.5-turbo")
 rescue Faraday::Error => e
   raise "Got a Faraday error: #{e}"
 end
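
As a companion to the reworded Functions paragraph in the hunks above, a request that declares a tool and forces it via `tool_choice` might look roughly like this. This is a sketch, not the README's own example: the tool schema follows the OpenAI chat-completions format that the gem passes through, and `get_current_weather` is a hypothetical method in your application.

```ruby
response = client.chat(
  parameters: {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "What is the weather like in Paris?" }],
    tools: [
      {
        type: "function",
        function: {
          name: "get_current_weather", # hypothetical app-side method
          description: "Get the current weather in a given location",
          parameters: {
            type: "object",
            properties: {
              location: { type: "string", description: "City name, e.g. Paris" }
            },
            required: ["location"]
          }
        }
      }
    ],
    # Optional: omit tool_choice and the model decides whether to call the tool.
    tool_choice: { type: "function", function: { name: "get_current_weather" } }
  }
)

tool_call = response.dig("choices", 0, "message", "tool_calls", 0)
puts tool_call&.dig("function", "arguments") # JSON string of arguments, if a call was made
```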

lib/openai/client.rb

−4

@@ -30,10 +30,6 @@ def chat(parameters: {})
       json_post(path: "/chat/completions", parameters: parameters)
     end
 
-    def edits(parameters: {})
-      json_post(path: "/edits", parameters: parameters)
-    end
-
     def embeddings(parameters: {})
       json_post(path: "/embeddings", parameters: parameters)
     end
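
With `Client#edits` removed here (the upstream `/edits` endpoint is deprecated), the closest replacement is a plain chat request. A rough migration sketch, not taken from the repo, reusing the spelling-fix example from the deleted README section:

```ruby
# Before 7.0.0 this went through client.edits with an input/instruction pair.
# Now, express the instruction and the input as chat messages instead.
response = client.chat(
  parameters: {
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "Fix the spelling mistakes in the user's text." },
      { role: "user", content: "What day of the wek is it?" }
    ]
  }
)
puts response.dig("choices", 0, "message", "content")
# e.g. "What day of the week is it?"
```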

lib/openai/version.rb

+1 −1

@@ -1,3 +1,3 @@
 module OpenAI
-  VERSION = "6.5.0".freeze
+  VERSION = "7.0.0".freeze
 end
