
Conversation

@iktakahiro (Contributor) commented Apr 15, 2025

I've added the new GPT-4.1 model series.

@codecov bot commented Apr 15, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 85.41%. Comparing base (774fc9d) to head (97761ca).
Report is 89 commits behind head on master.

Additional details and impacted files
@@             Coverage Diff             @@
##           master     #966       +/-   ##
===========================================
- Coverage   98.46%   85.41%   -13.05%     
===========================================
  Files          24       43       +19     
  Lines        1364     2263      +899     
===========================================
+ Hits         1343     1933      +590     
- Misses         15      308      +293     
- Partials        6       22       +16     


@zarifaziz commented

@sashabaranov please review 🙏

@sashabaranov (Owner) commented

Thank you!

GPT4Dot1Mini: true,
GPT4Dot1Mini20250414: true,
GPT4Dot1Nano: true,
GPT4Dot1Nano20250414: true,
@sashabaranov (Owner) commented on the lines above:

we should flip this mapping to an inverse one; most of the models are not enabled for the completion endpoint!

@sashabaranov sashabaranov merged commit d68a683 into sashabaranov:master Apr 23, 2025
@sashabaranov sashabaranov requested a review from Copilot April 23, 2025 22:02
Copilot AI (Contributor) left a comment

Pull Request Overview

This pull request adds support for the new GPT-4.1 series model variants. It declares new model constants and updates the disabled-models map in completion.go, and adds corresponding tests in completion_test.go to ensure the completion endpoint rejects these models.

  • Added new GPT-4.1 series model constants in completion.go.
  • Updated the disabledModelsForEndpoints map to disable the new GPT-4.1 models.
  • Introduced tests in completion_test.go to verify that requests using GPT-4.1 variants return the expected error.

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File                 Description
completion.go        New model constants and updates to the disabled models map.
completion_test.go   Added tests to verify that GPT-4.1 series and related models are unsupported.

Comment on lines +221 to +230
t.Run(model, func(t *testing.T) {
    _, err := client.CreateCompletion(
        context.Background(),
        openai.CompletionRequest{
            MaxTokens: 5,
            Model:     model,
        },
    )
    if !errors.Is(err, openai.ErrCompletionUnsupportedModel) {
        t.Fatalf("CreateCompletion should return ErrCompletionUnsupportedModel for %s model, but returned: %v", model, err)
Copilot AI commented Apr 23, 2025

When using t.Run inside a loop, the loop variable 'model' may be captured by reference, which could lead to unexpected behavior. Consider assigning the loop variable to a local variable (e.g., 'm := model') before invoking t.Run.

Suggested change

Original:

t.Run(model, func(t *testing.T) {
    _, err := client.CreateCompletion(
        context.Background(),
        openai.CompletionRequest{
            MaxTokens: 5,
            Model:     model,
        },
    )
    if !errors.Is(err, openai.ErrCompletionUnsupportedModel) {
        t.Fatalf("CreateCompletion should return ErrCompletionUnsupportedModel for %s model, but returned: %v", model, err)

Suggested:

m := model // Create a new local variable to capture the current value of model
t.Run(m, func(t *testing.T) {
    _, err := client.CreateCompletion(
        context.Background(),
        openai.CompletionRequest{
            MaxTokens: 5,
            Model:     m,
        },
    )
    if !errors.Is(err, openai.ErrCompletionUnsupportedModel) {
        t.Fatalf("CreateCompletion should return ErrCompletionUnsupportedModel for %s model, but returned: %v", m, err)

Comment on lines +253 to +262
t.Run(model, func(t *testing.T) {
    _, err := client.CreateCompletion(
        context.Background(),
        openai.CompletionRequest{
            MaxTokens: 5,
            Model:     model,
        },
    )
    if !errors.Is(err, openai.ErrCompletionUnsupportedModel) {
        t.Fatalf("CreateCompletion should return ErrCompletionUnsupportedModel for %s model, but returned: %v", model, err)
Copilot AI commented Apr 23, 2025

When using t.Run inside a loop, the loop variable 'model' may be captured by reference, which could lead to unexpected behavior. Consider assigning the loop variable to a local variable (e.g., 'm := model') before invoking t.Run.

Suggested change

Original:

t.Run(model, func(t *testing.T) {
    _, err := client.CreateCompletion(
        context.Background(),
        openai.CompletionRequest{
            MaxTokens: 5,
            Model:     model,
        },
    )
    if !errors.Is(err, openai.ErrCompletionUnsupportedModel) {
        t.Fatalf("CreateCompletion should return ErrCompletionUnsupportedModel for %s model, but returned: %v", model, err)

Suggested:

m := model // Assign loop variable to a local variable
t.Run(m, func(t *testing.T) {
    _, err := client.CreateCompletion(
        context.Background(),
        openai.CompletionRequest{
            MaxTokens: 5,
            Model:     m,
        },
    )
    if !errors.Is(err, openai.ErrCompletionUnsupportedModel) {
        t.Fatalf("CreateCompletion should return ErrCompletionUnsupportedModel for %s model, but returned: %v", m, err)

bububa pushed a commit to bububa/go-openai that referenced this pull request Apr 28, 2025
* feat: add new GPT-4.1 model variants to completion.go

* feat: add tests for unsupported models in completion endpoint

* fix: add missing periods to test function comments in completion_test.go
icowan pushed a commit to icowan/go-openai that referenced this pull request May 8, 2025
* feat: add new GPT-4.1 model variants to completion.go

* feat: add tests for unsupported models in completion endpoint

* fix: add missing periods to test function comments in completion_test.go

3 participants