# CHANGELOG.md

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [7.0.0] - 2024-04-27

### Added

- Add support for Batches, thanks to [@simonx1](https://github.com/simonx1) for the PR!
- Allow use of local LLMs like Ollama! Thanks to [@ThomasSevestre](https://github.com/ThomasSevestre)
- Update to v2 of the Assistants beta & add documentation on streaming from an Assistant.
- Add Assistants endpoint to create and run a thread in one go, thank you [@quocphien90](https://github.com/quocphien90)
- Add missing parameters (order, limit, etc.) to Runs, RunSteps and Messages - thanks to [@shalecraig](https://github.com/shalecraig) and [@coezbek](https://github.com/coezbek)
- Add Messages#modify to README - thanks to [@nas887](https://github.com/nas887)
- Don't add the api_version (`/v1/`) to base_uris that already include it - thanks to [@kaiwren](https://github.com/kaiwren) for raising this issue
- Allow passing a `StringIO` to Files#upload - thanks again to [@simonx1](https://github.com/simonx1)
- Add Ruby 3.3 to CI

### Security

- [BREAKING] ruby-openai will no longer log API errors by default - you can re-enable logging by passing `log_errors: true` to your client. This helps prevent leaking secrets to logs. Thanks to [@lalunamel](https://github.com/lalunamel) for this PR.

### Removed

- [BREAKING] Remove the deprecated edits endpoint.

### Fixed

- Fix README DALL·E 3 error - thanks to [@clayton](https://github.com/clayton)
- Fix README tool_calls error and add missing tool_choice info - thanks to [@Jbrito6492](https://github.com/Jbrito6492)
## [6.5.0] - 2024-03-31
### Added
- [BREAKING] Switch from legacy Finetunes to the new Fine-tune-jobs endpoints. Implemented by [@lancecarlson](https://github.com/lancecarlson)
- [BREAKING] Remove deprecated Completions endpoints - use Chat instead.

### Fixed

- [BREAKING] Fix issue where :stream parameters were replaced by a boolean in the client application. Thanks to [@martinjaimem](https://github.com/martinjaimem), [@vickymadrid03](https://github.com/vickymadrid03) and [@nicastelo](https://github.com/nicastelo) for spotting and fixing this issue.
## [5.2.0] - 2023-10-30

### Fixed

- Added more spec-compliant SSE parsing - see https://html.spec.whatwg.org/multipage/server-sent-events.html#event-stream-interpretation
- Fixed issue where OpenAI or an intermediary returns only partial JSON per chunk of streamed data
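The partial-JSON fix above is about buffering: a streamed response may split a JSON object across chunks, so a consumer has to accumulate text until the buffer parses. A minimal illustration of that buffering idea (not the gem's actual parser):

```ruby
require "json"

# Accumulate streamed fragments until the buffer parses as complete JSON;
# a parse failure just means more data is still on the way.
def consume_chunks(chunks)
  buffer = +""
  objects = []
  chunks.each do |chunk|
    buffer << chunk
    begin
      objects << JSON.parse(buffer)
      buffer = +""
    rescue JSON::ParserError
      # Incomplete JSON so far - keep buffering.
    end
  end
  objects
end

# A complete object split across two chunks:
puts consume_chunks(['{"content": "Hel', 'lo"}']).inspect
# => [{"content"=>"Hello"}]
```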
### Chat

GPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
You can stream it as well!
### Functions
You can describe and pass in functions and the model will intelligently choose to output a JSON object containing arguments to call them - e.g., to use your method `get_current_weather` to get the weather in a given location. Note that `tool_choice` is optional, but if you exclude it, the model will choose whether to use the function or not ([see here](https://platform.openai.com/docs/api-reference/chat/create#chat-create-tool_choice)).
391
375
392
376
```ruby
393
377
@@ -398,7 +382,7 @@ end
398
382
response =
399
383
client.chat(
400
384
parameters: {
401
-
model:"gpt-3.5-turbo-0613",
385
+
model:"gpt-3.5-turbo",
402
386
messages: [
403
387
{
404
388
"role": "user",
### Completions

Hit the OpenAI API for a completion using other GPT-3 models:
```ruby
response = client.completions(
  parameters: {
    model: "gpt-3.5-turbo",
    prompt: "Once upon a time",
    max_tokens: 5
  })
puts response["choices"].map { |c| c["text"] }
# => [", there lived a great"]
```
### Embeddings
You can use the embeddings endpoint to get a vector of numbers representing an input. You can then compare these vectors for different inputs to efficiently check how similar the inputs are.
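The comparison step can be sketched in plain Ruby with cosine similarity (the short vectors below are illustrative stand-ins for real embedding vectors, which are much longer):

```ruby
# Cosine similarity: dot(a, b) / (|a| * |b|). Embedding vectors that point
# in similar directions score close to 1.0.
def cosine_similarity(a, b)
  dot  = a.zip(b).sum { |x, y| x * y }
  norm = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  dot / (norm.call(a) * norm.call(b))
end

a = [0.1, 0.2, 0.3]
b = [0.1, 0.2, 0.25]
puts cosine_similarity(a, a).round(3) # => 1.0
puts cosine_similarity(a, b).round(3)
```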
You can then use this file ID to create a fine-tuning job:

```ruby
response = client.finetunes.create(
  parameters: {
    training_file: file_id,
    model: "gpt-3.5-turbo"
  })
fine_tune_id = response["id"]
```
### Errors

HTTP errors can be caught like this:
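The example body is elided at this point in the diff; as a self-contained sketch of the pattern, `fake_api_call` below stands in for a real client call (which, in ruby-openai, surfaces HTTP failures as Faraday errors):

```ruby
# Stand-in for a client call like client.chat(...); a real failure would be
# a Faraday::Error subclass carrying the HTTP status and response body.
def fake_api_call
  raise StandardError, "the server responded with status 401"
end

begin
  fake_api_call
rescue StandardError => e
  puts "API request failed: #{e.message}"
end
# => API request failed: the server responded with status 401
```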