
Feature/onfinish #5759

Merged Β· 2 commits merged into ChatGPTNextWeb:main Nov 4, 2024

Conversation

@Dogtiti (Member) commented Nov 4, 2024

πŸ’» ε˜ζ›΄η±»εž‹ | Change Type

  • feat
  • fix
  • refactor
  • perf
  • style
  • test
  • docs
  • ci
  • chore
  • build

πŸ”€ ε˜ζ›΄θ―΄ζ˜Ž | Description of Change

πŸ“ θ‘₯充俑息 | Additional Information

Summary by CodeRabbit

Release Notes

  • New Features

    • Enhanced chat functionality across multiple platforms by including the complete response object in the onFinish callback, allowing for more detailed handling of responses.
  • Bug Fixes

    • Improved error handling in the chat methods to provide clearer feedback when requests fail or are aborted.
  • Documentation

    • Updated method signatures to reflect the changes in parameters and return types for better clarity and usability.

vercel bot commented Nov 4, 2024

@Dogtiti is attempting to deploy a commit to the NextChat Team on Vercel.

A member of the Team first needs to authorize it.

coderabbitai bot (Contributor) commented Nov 4, 2024

Walkthrough

The changes in this pull request primarily focus on enhancing the onFinish method's signature across various API classes, allowing it to accept an additional Response parameter. This modification enables better handling of the response object in chat interactions. Additionally, several classes have been updated to capture and manage the response from fetch requests, improving error handling and response management in both streaming and non-streaming scenarios.
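
For orientation, here is a trimmed sketch of the updated contract and a typical consumer, based on the `onFinish: (message: string, responseRes: Response) => void` signature quoted later in this review (the surrounding interface fields are abridged):

```ts
// Abridged view of the updated ChatOptions contract; only the callbacks
// relevant to this PR are shown.
interface ChatOptions {
  onUpdate?: (message: string, chunk: string) => void;
  // New: the raw Response is forwarded alongside the final message.
  onFinish: (message: string, responseRes: Response) => void;
  onError?: (error: Error) => void;
}

// A consumer can now branch on the HTTP status instead of inferring
// success from the message text alone.
const options: Pick<ChatOptions, "onFinish"> = {
  onFinish(message, responseRes) {
    if (responseRes.status === 200) {
      console.log("[Chat] finished:", message);
    } else {
      console.error("[Chat] request failed with status", responseRes.status);
    }
  },
};
```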

Changes

File Path Change Summary
app/client/api.ts Updated onFinish method signature in ChatOptions interface to include responseRes: Response.
app/client/platforms/alibaba.ts Modified chat method in QwenApi to capture Response object and pass it to onFinish callback.
app/client/platforms/anthropic.ts Updated error handling in chat method of ClaudeApi to pass a Response object on abort.
app/client/platforms/baidu.ts Enhanced chat method in ErnieApi to include Response object in onFinish callback.
app/client/platforms/bytedance.ts Updated chat method in DoubaoApi to include Response object in onFinish callback.
app/client/platforms/glm.ts Modified chat method in ChatGLMApi to pass Response object to onFinish.
app/client/platforms/google.ts Updated chat method in GeminiProApi to pass Response object to onFinish.
app/client/platforms/iflytek.ts Enhanced chat method in SparkApi to include Response object in onFinish callback.
app/client/platforms/moonshot.ts Modified chat method in MoonshotApi to pass Response object to onFinish.
app/client/platforms/openai.ts Updated chat method in ChatGPTApi to include Response object in onFinish.
app/client/platforms/tencent.ts Enhanced chat method in HunyuanApi to include Response object in onFinish.
app/client/platforms/xai.ts Updated chat method in XAIApi to pass Response object to onFinish.
app/store/chat.ts Modified onFinish callback in summarizeSession to accept responseRes for error handling.
app/utils.ts Updated isVisionModel function to include a check for DALL-E 3 models.
app/utils/chat.ts Enhanced stream function to include responseRes for better error handling.
app/utils/stream.ts Updated return type of fetch function from Promise<any> to Promise<Response>.

Possibly related PRs

Suggested labels

enhancement, planned

Suggested reviewers

  • lloydzhou

Poem

πŸ‡ In the code where rabbits play,
New changes hop in bright array.
With responses now in hand,
We chat and stream, oh how grand!
Error handling, clear and bright,
Making our API just right! 🌟



github-actions bot (Contributor) commented Nov 4, 2024

Your build has completed!

Preview deployment

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 4

🧹 Outside diff range and nitpick comments (26)
app/utils/stream.ts (4)

Line range hint 7-17: Add JSDoc comments to type definitions.

The types ResponseEvent and StreamResponse would benefit from documentation explaining their purpose and the meaning of their fields, especially since they're crucial for the Tauri integration.

+/**
+ * Represents an event received from Tauri's stream-response channel
+ * @property id - The event identifier
+ * @property payload - The event payload containing request details and data chunks
+ */
 type ResponseEvent = {
   id: number;
   payload: {
     request_id: number;
     status?: number;
     chunk?: number[];
   };
 };

+/**
+ * Represents the initial response from Tauri's stream_fetch command
+ * @property request_id - The unique identifier for the request
+ * @property status - HTTP status code
+ * @property status_text - HTTP status message
+ * @property headers - Response headers
+ */
 type StreamResponse = {
   request_id: number;
   status: number;
   status_text: string;
   headers: Record<string, string>;
 };

Line range hint 31-44: Enhance error handling in stream operations.

While the stream handling is generally well-implemented, the writer operations could benefit from more robust error handling.

 const close = () => {
   if (closed) return;
   closed = true;
   unlisten && unlisten();
-  writer.ready.then(() => {
-    writer.close().catch((e) => console.error(e));
-  });
+  writer.ready
+    .then(() => writer.close())
+    .catch((e) => {
+      console.error('Failed to close writer:', e);
+      // Optionally trigger an error callback or cleanup here
+    });
 };

Line range hint 45-63: Add defensive checks for chunk processing.

The chunk processing logic could be more resilient to malformed payloads and edge cases.

 window.__TAURI__.event
   .listen("stream-response", (e: ResponseEvent) =>
     requestIdPromise.then((request_id) => {
       const { request_id: rid, chunk, status } = e?.payload || {};
-      if (request_id != rid) {
+      if (request_id != rid || !e?.payload) {
         return;
       }
-      if (chunk) {
+      if (Array.isArray(chunk) && chunk.length > 0) {
         writer.ready.then(() => {
-          writer.write(new Uint8Array(chunk));
+          writer.write(new Uint8Array(chunk)).catch((error) => {
+            console.error('Failed to write chunk:', error);
+            close();
+          });
         });
       } else if (status === 0) {
         // end of body
         close();
       }
     }),
   )

Line range hint 95-102: Enhance error reporting and handling.

The current error handling could be more informative and provide better context for debugging.

-      .catch((e) => {
-        console.error("stream error", e);
-        // throw e;
-        return new Response("", { status: 599 });
+      .catch((error: Error) => {
+        const errorMessage = {
+          message: error.message,
+          type: error.name,
+          timestamp: new Date().toISOString(),
+        };
+        console.error("Stream error:", errorMessage);
+        return new Response(JSON.stringify(errorMessage), {
+          status: 599,
+          headers: { 'Content-Type': 'application/json' }
+        });
       });
app/client/platforms/glm.ts (2)

Line range hint 182-185: Consider calling onFinish in error cases.

The error handling only calls onError but doesn't call onFinish. For consistency with other implementations and proper cleanup, consider calling onFinish with the error response.

    } catch (e) {
      console.log("[Request] failed to make a chat request", e);
      options.onError?.(e as Error);
+     options.onFinish("", new Response(null, { status: 500 }));
    }

Line range hint 89-106: Consider improving type safety for plugin tools.

The code uses type assertions (as any) when handling plugin tools. Consider defining proper types to improve type safety and maintainability.

- tools as any,
+ tools as PluginTool[],  // Define appropriate interface
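
A hypothetical shape for such an interface, purely illustrative (the field names below follow a common OpenAI-style tool schema and are not taken from this codebase):

```ts
// Illustrative only: one possible PluginTool shape to replace the `as any` cast.
interface PluginTool {
  type: "function";
  function: {
    name: string;
    description?: string;
    parameters?: Record<string, unknown>; // JSON-schema-like parameter spec
  };
}
```
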
app/client/platforms/xai.ts (2)

Line range hint 89-176: Ensure consistent response handling between streaming and non-streaming paths.

The streaming implementation doesn't pass the response object to onFinish, making it inconsistent with the non-streaming path.

Consider updating the stream utility to pass the response object:

// In the stream utility (not shown in this file)
- options.onFinish(message);
+ options.onFinish(message, response);

Additionally, consider standardizing error handling between both paths. The non-streaming path only logs the error, while the streaming path's error handling isn't visible in this code.


Line range hint 1-176: Consider implementing request interceptors for consistent request/response handling.

To ensure consistent handling of requests and responses across streaming and non-streaming paths, consider implementing request/response interceptors.

This would allow:

  • Centralized error handling
  • Consistent response transformation
  • Unified logging
  • Easier testing and debugging

Example structure:

interface RequestInterceptor {
  onRequest?: (config: RequestConfig) => Promise<RequestConfig>;
  onRequestError?: (error: Error) => Promise<Error>;
}

interface ResponseInterceptor {
  onResponse?: (response: Response) => Promise<Response>;
  onResponseError?: (error: Error) => Promise<Error>;
}

class XAIApi implements LLMApi {
  private requestInterceptors: RequestInterceptor[] = [];
  private responseInterceptors: ResponseInterceptor[] = [];

  addRequestInterceptor(interceptor: RequestInterceptor) {
    this.requestInterceptors.push(interceptor);
  }

  addResponseInterceptor(interceptor: ResponseInterceptor) {
    this.responseInterceptors.push(interceptor);
  }
}
app/client/platforms/moonshot.ts (2)

Line range hint 176-184: Consider enhancing error handling for non-streaming responses.

While the change to onFinish is good, the non-streaming path could benefit from additional error handling:

  1. Response status validation
  2. Error response parsing

Consider applying this improvement:

        const res = await fetch(chatPath, chatPayload);
        clearTimeout(requestTimeoutId);

+       if (!res.ok) {
+         throw new Error(`HTTP error! status: ${res.status}`);
+       }
        const resJson = await res.json();
+       if (resJson.error) {
+         throw new Error(resJson.error.message || 'Unknown error');
+       }
        const message = this.extractMessage(resJson);
        options.onFinish(message, res);

Add error handling for HTTP response status and parsing errors

Based on the analysis of error handling patterns across the codebase:

  • Several implementations (Tencent, Bytedance, iFlytek, Baidu, Alibaba) check res.ok for HTTP status
  • Most implementations use options.onError for error reporting
  • The current implementation in moonshot.ts lacks HTTP status checks and response parsing error handling

Add these error checks before parsing the response:

if (!res.ok) {
  const errorText = await res.text();
  options.onError?.(new Error(`HTTP error ${res.status}: ${errorText}`));
  return;
}

try {
  const resJson = await res.json();
  const message = this.extractMessage(resJson);
  options.onFinish(message, res);
} catch (e) {
  options.onError?.(new Error(`Failed to parse response: ${e.message}`));
}
πŸ”— Analysis chain

Line range hint 176-184: Verify error handling consistency across API implementations.

Let's check if other API implementations have similar error handling patterns that we should align with.


The previous results show inconsistent error handling across implementations. Some files check res.ok, while others don't. Let's gather more information about error handling patterns, including try-catch blocks and error callbacks, to make a comprehensive assessment.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for error handling patterns in other API implementations
echo "Searching for error handling patterns in API classes..."
rg -A 5 "const res = await fetch" "app/client/platforms/"

echo "Searching for HTTP status checks..."
rg "!res.ok" "app/client/platforms/"

echo "Searching for error response parsing..."
rg "resJson.error" "app/client/platforms/"

Length of output: 6128


Script:

#!/bin/bash
# Let's check for error handling patterns in try-catch blocks and error callbacks
echo "Searching for try-catch blocks around fetch..."
rg -B 2 -A 10 "try \{.*?const res = await fetch" -U --multiline "app/client/platforms/"

echo "Searching for onError callback usage..."
rg "options\.onError" "app/client/platforms/"

echo "Searching for error message extraction..."
rg "error.*message|message.*error" "app/client/platforms/"

Length of output: 2824

app/client/platforms/bytedance.ts (2)

179-179: Consider initializing responseRes to prevent undefined access.

The responseRes is assigned in the onopen handler, but it could potentially be accessed before this if the finish callback is triggered early (e.g., by controller abort). Consider initializing it with a default value.

-let responseRes: Response;
+let responseRes: Response | undefined;

Line range hint 133-245: Consider standardizing error handling with the enhanced response object.

Now that the response object is available in the onFinish callback, consider standardizing error handling across the codebase to leverage this additional context. This could include:

  1. Creating common error handling utilities that can extract detailed error information from the response
  2. Implementing consistent error reporting patterns
  3. Adding response status and headers to error logs

This would improve debugging capabilities and provide better error context to users.
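
As a rough illustration of the first point, a shared helper could fold status, headers, and body into one log-friendly string (the function name and format below are assumptions, not existing code):

```ts
// Sketch of a shared error-description helper; clone() keeps the original
// Response readable for the caller.
async function describeErrorResponse(res: Response): Promise<string> {
  const body = await res.clone().text().catch(() => "");
  return [
    `status=${res.status} ${res.statusText}`,
    `content-type=${res.headers.get("content-type") ?? "unknown"}`,
    body ? `body=${body.slice(0, 200)}` : "body=<empty>",
  ].join(" ");
}

// Possible usage in a platform client:
//   if (!res.ok) options.onError?.(new Error(await describeErrorResponse(res)));
```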

app/client/platforms/iflytek.ts (2)

160-160: Consider enhancing error logging

The response capture is well-placed and used effectively in error handling. Consider adding response headers to error logs for better debugging context.

- console.log("[Spark] request response content type: ", contentType);
+ console.log("[Spark] request response details: ", {
+   contentType,
+   status: res.status,
+   headers: Object.fromEntries(res.headers.entries())
+ });

235-235: Consider type consistency for response parameter

The response object is correctly passed to onFinish. For better type safety, consider using the same Response type annotation in both streaming and non-streaming paths.

- options.onFinish(message, res);
+ options.onFinish(message, res as Response);
app/client/platforms/alibaba.ts (2)

192-192: Consider enhancing error logging for debugging.

The response object capture is well-placed. To improve debugging capabilities, consider adding structured logging for error cases.

- console.log("[Alibaba] request response content type: ", contentType);
+ console.log("[Alibaba] request response:", {
+   status: res.status,
+   contentType,
+   headers: Object.fromEntries(res.headers.entries())
+ });

259-259: Consider standardizing error handling between streaming and non-streaming modes.

The addition of the response object to onFinish is correct. However, the error handling differs between streaming and non-streaming modes.

 const resJson = await res.json();
 const message = this.extractMessage(resJson);
+if (message.length === 0) {
+  throw new Error("empty response from server");
+}
 options.onFinish(message, res);
app/client/platforms/tencent.ts (1)

257-257: Consider adding response status check

While the response object is now correctly passed to onFinish, consider adding a status check before processing the response to ensure consistent error handling with the streaming path.

-        options.onFinish(message, res);
+        if (!res.ok) {
+          const errorText = await res.clone().text();
+          throw new Error(`HTTP ${res.status}: ${errorText}`);
+        }
+        options.onFinish(message, res);
app/client/platforms/baidu.ts (2)

208-208: Consider enhancing error handling with response status

While the response capture is correct, consider adding specific error handling for different response status codes to provide more detailed error messages.

 responseRes = res;
+if (!res.ok) {
+  const errorMessage = `HTTP error! status: ${res.status}`;
+  console.error("[Baidu API]", errorMessage);
+  options.onError?.(new Error(errorMessage));
+}

Line range hint 165-195: Consider cleanup for animation frame

The animation frame callback should be properly cleaned up to prevent potential memory leaks.

 let finished = false;
+let animationFrameId: number;
 
 function animateResponseText() {
   if (finished || controller.signal.aborted) {
     responseText += remainText;
     console.log("[Response Animation] finished");
     if (responseText?.length === 0) {
       options.onError?.(new Error("empty response from server"));
     }
+    if (animationFrameId) {
+      cancelAnimationFrame(animationFrameId);
+    }
     return;
   }
 
   // ... existing animation code ...
 
-  requestAnimationFrame(animateResponseText);
+  animationFrameId = requestAnimationFrame(animateResponseText);
 }
app/client/platforms/google.ts (1)

276-278: Consider enhancing error handling with response metadata.

Since we now have access to the raw Response object, consider checking the response status and headers before calling onFinish. This could help catch and handle HTTP-level errors more gracefully.

        const resJson = await res.json();
+       if (!res.ok) {
+         throw new Error(`HTTP error! status: ${res.status}, message: ${resJson?.error?.message || 'Unknown error'}`);
+       }
        if (resJson?.promptFeedback?.blockReason) {
          // being blocked
          options.onError?.(
            new Error(
              "Message is being blocked for reason: " +
                resJson.promptFeedback.blockReason,
            ),
          );
        }
        const message = apiClient.extractMessage(resJson);
        options.onFinish(message, res);
app/client/api.ts (1)

73-73: Document the Response parameter usage

Consider adding JSDoc comments to explain:

  • The purpose of the Response parameter
  • Expected handling of different response statuses
  • Common usage patterns

Example documentation:

 export interface ChatOptions {
   messages: RequestMessage[];
   config: LLMConfig;
 
   onUpdate?: (message: string, chunk: string) => void;
+  /**
+   * Callback invoked when the chat request completes
+   * @param message The final message content
+   * @param responseRes The raw Response object for status/header access
+   */
   onFinish: (message: string, responseRes: Response) => void;
app/utils.ts (1)

269-271: Consider moving visionKeywords array outside the function.

To optimize performance, consider moving the visionKeywords array outside the function to prevent recreation on each call.

+const VISION_KEYWORDS = [
+  "vision",
+  "claude-3",
+  "gemini-1.5-pro",
+  "gemini-1.5-flash",
+  "gpt-4o",
+  "gpt-4o-mini",
+];

export function isVisionModel(model: string) {
  // Note: This is a better way using the TypeScript feature instead of `&&` or `||` (ts v5.5.0-dev.20240314 I've been using)

-  const visionKeywords = [
-    "vision",
-    "claude-3",
-    "gemini-1.5-pro",
-    "gemini-1.5-flash",
-    "gpt-4o",
-    "gpt-4o-mini",
-  ];
  const isGpt4Turbo =
    model.includes("gpt-4-turbo") && !model.includes("preview");

  return (
-    visionKeywords.some((keyword) => model.includes(keyword)) ||
+    VISION_KEYWORDS.some((keyword) => model.includes(keyword)) ||
    isGpt4Turbo ||
    isDalle3(model)
  );
}
app/client/platforms/anthropic.ts (1)

320-321: Use a more semantically correct HTTP status code for aborted requests.

The current implementation uses status code 400 (Bad Request) for aborted requests, which doesn't accurately represent the nature of the cancellation. Consider using either:

  • 499 (Client Closed Request) - More specific to client-side cancellations
  • 408 (Request Timeout) - Suitable for timeout-based cancellations
- options.onFinish("", new Response(null, { status: 400 }));
+ options.onFinish("", new Response(null, { status: 499 }));
app/client/platforms/openai.ts (2)

Line range hint 352-365: Consider standardizing response handling between streaming and non-streaming paths.

While the non-streaming path now provides access to the raw response via onFinish(message, res), the streaming path still uses the old signature. Consider standardizing the response handling to maintain consistency.

This could be achieved by:

  1. Collecting the complete streamed response
  2. Passing both the final message and the collected response to onFinish
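
A minimal sketch of that idea using plain fetch streaming (not the project's actual stream utility): collect the chunks, then hand both the final message and the original Response to onFinish, matching the two-argument shape used on the non-streaming path:

```ts
// Assumes a standard fetch Response with a readable body; error handling omitted.
async function streamAndFinish(
  url: string,
  init: RequestInit,
  onFinish: (message: string, responseRes: Response) => void,
) {
  const res = await fetch(url, init);
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let message = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    message += decoder.decode(value, { stream: true });
  }
  onFinish(message, res); // same signature as the non-streaming path
}
```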

363-365: Enhance error handling for response processing.

Consider adding more robust error handling around the response processing:

  1. Type checking for the response structure
  2. Handling edge cases where extractMessage might fail

Example implementation:

-        const resJson = await res.json();
-        const message = await this.extractMessage(resJson);
-        options.onFinish(message, res);
+        try {
+          const resJson = await res.json();
+          if (!resJson) {
+            throw new Error('Empty response received');
+          }
+          const message = await this.extractMessage(resJson);
+          options.onFinish(message, res);
+        } catch (e) {
+          console.error('[Response] failed to process chat response', e);
+          options.onError?.(e as Error);
+        }
app/store/chat.ts (1)

673-673: Extract magic number to a constant.

The fallback value of 4000 should be defined as a named constant for better maintainability and documentation.

+const DEFAULT_MAX_TOKENS = 4000;
+
-        if (historyMsgLength > (modelConfig?.max_tokens || 4000)) {
+        if (historyMsgLength > (modelConfig?.max_tokens || DEFAULT_MAX_TOKENS)) {
πŸ“œ Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between d66bfc6 and 44fc5b5.

πŸ“’ Files selected for processing (16)
  • app/client/api.ts (1 hunks)
  • app/client/platforms/alibaba.ts (4 hunks)
  • app/client/platforms/anthropic.ts (1 hunks)
  • app/client/platforms/baidu.ts (4 hunks)
  • app/client/platforms/bytedance.ts (4 hunks)
  • app/client/platforms/glm.ts (1 hunks)
  • app/client/platforms/google.ts (1 hunks)
  • app/client/platforms/iflytek.ts (4 hunks)
  • app/client/platforms/moonshot.ts (1 hunks)
  • app/client/platforms/openai.ts (1 hunks)
  • app/client/platforms/tencent.ts (4 hunks)
  • app/client/platforms/xai.ts (1 hunks)
  • app/store/chat.ts (3 hunks)
  • app/utils.ts (1 hunks)
  • app/utils/chat.ts (3 hunks)
  • app/utils/stream.ts (1 hunks)
🧰 Additional context used
πŸͺ› Biome
app/store/chat.ts

[error] 656-658: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)

πŸ”‡ Additional comments (24)
app/utils/stream.ts (2)

22-22: Great improvement in type safety!

The change from Promise<any> to Promise<Response> is a significant improvement that:

  • Provides better type safety
  • Matches the Web Fetch API's return type
  • Makes the function more predictable and easier to use

Line range hint 22-102: Verify error handling across different scenarios.

Please ensure that error handling works correctly in the following scenarios:

  1. Network failures
  2. Malformed responses from Tauri
  3. Aborted requests
  4. Large payloads
βœ… Verification successful

Based on the search results, I can now generate the final response as there is sufficient evidence to verify the error handling implementation:

Error handling implementation is robust and follows established patterns

The error handling in the stream fetch implementation aligns with the codebase's error handling patterns:

  • Network failures: Returns a Response with status 599 when the stream fetch fails
  • Malformed responses: Properly closes the writer and handles cleanup via the close() function
  • Aborted requests: Implements proper signal handling with event listener cleanup
  • Large payloads: Uses TransformStream for efficient streaming and handles write errors

The implementation matches error handling patterns seen across the codebase, including:

  • Consistent error status code usage (599 for failures)
  • Proper cleanup of resources and event listeners
  • Error propagation through Promise chains
  • Console error logging for debugging
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for tests covering error scenarios
rg -l "describe|test|it.*fetch.*error" --type ts

# Look for error handling patterns in related files
rg "catch|throw|Error|status.*599" --type ts

Length of output: 16456

app/client/platforms/glm.ts (1)

180-180: LGTM! Verify streaming flow consistency.

The updated onFinish callback correctly passes both the message and response object. However, we should ensure consistent behavior in the streaming flow.

Let's verify the streaming implementation:

βœ… Verification successful

Streaming implementation is consistent with non-streaming flow

The verification shows that both streaming and non-streaming implementations correctly pass the message and response object to onFinish:

  • Non-streaming: options.onFinish(message, res)
  • Streaming: options.onFinish(responseText + remainText, responseRes)
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if the streaming flow handles the response object consistently
rg -A 10 "onFinish.*Response" "app/utils/chat.ts"

Length of output: 49


Script:

#!/bin/bash
# Let's check the streaming implementation in the chat utility and GLM client
rg -A 10 "onFinish" "app/utils/chat.ts" "app/client/platforms/glm.ts"

Length of output: 1049

app/client/platforms/moonshot.ts (1)

183-183: LGTM! The onFinish callback update aligns with the PR objectives.

The addition of the response object to the onFinish callback provides better response handling capabilities, consistent with the changes across other API classes.

app/client/platforms/bytedance.ts (3)

133-133: LGTM: Well-scoped response variable declaration.

The variable is correctly typed and appropriately scoped for use across the streaming handlers.


245-245: LGTM: Consistent response handling in non-streaming mode.

The implementation maintains parity between streaming and non-streaming modes by passing the response object to onFinish in both cases.


163-163: Verify error handling for the enhanced onFinish callback.

The response object is correctly passed to onFinish, enhancing error handling capabilities. However, we should verify that all error paths properly handle the response object.

βœ… Verification successful

Let me gather more context about the error handling implementations.


Let me check how the error handling is implemented in the store where the onFinish callback is consumed.


Based on the comprehensive analysis of the codebase, I can now provide the final response:

Error handling for the enhanced onFinish callback is properly implemented

The response object is consistently handled across the codebase:

  • All platform implementations properly pass the response object to onFinish
  • Error status codes (especially 401) are properly handled with appropriate error messages
  • The chat store correctly checks response status (200) before processing the message
  • Error paths include both streaming and non-streaming scenarios
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other onFinish implementations to ensure consistent error handling
rg "onFinish.*Response" --type ts

Length of output: 218


Script:

#!/bin/bash
# Search for onFinish usage patterns across the codebase
rg "onFinish" -A 3 --type ts

Length of output: 5502


Script:

#!/bin/bash
# Search for error handling in chat store
rg "status.*=== (?:4|5)" -A 3 --type ts

Length of output: 1683

app/client/platforms/iflytek.ts (2)

120-120: LGTM: Response variable declaration

The responseRes variable is properly declared with the correct type and scope.


147-147: Verify onFinish handler compatibility

The response object is now correctly passed to onFinish. This change aligns with the standardization of onFinish signatures across the codebase.

βœ… Verification successful

The previous ast-grep pattern didn't yield results, so let's try with ripgrep to get a broader view of how onFinish is used and defined across the codebase. This will help us verify the compatibility of the handler signatures.


Verified: onFinish handler signature is consistent across the codebase

The ripgrep results show that the onFinish handler consistently accepts two parameters across all implementations:

  • The type definition in app/client/api.ts explicitly declares onFinish: (message: string, responseRes: Response) => void
  • All platform implementations (iflytek, openai, anthropic, etc.) follow this signature by passing both the message and response object
  • Both streaming and non-streaming code paths maintain this contract

The change in iflytek.ts correctly aligns with the standardized signature used throughout the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify that all implementations of onFinish in the codebase accept the Response parameter
ast-grep --pattern 'onFinish($_, $_)'

Length of output: 39


Script:

#!/bin/bash
# Search for onFinish usage patterns with ripgrep for better context
rg "onFinish" -A 2 -B 2

# Also search for type definitions or interfaces that might define onFinish
rg "onFinish.*:" -A 2 -B 2

Length of output: 7443

app/client/platforms/alibaba.ts (2)

146-146: LGTM: Response object tracking added.

The addition of responseRes enables proper response object tracking throughout the streaming process.


176-176: Verify onFinish callback implementation in consumers.

The modification to include responseRes in the onFinish callback is correct. However, ensure all consumers of this API are updated to handle the additional parameter.

βœ… Verification successful

The previous command failed because of incorrect file type specification. Let's try with the correct file type flags and also look for type definitions to understand the callback interface.


The onFinish callback modification is consistent across the codebase

The verification shows that:

  • The type definition in app/client/api.ts correctly declares onFinish with both message and Response parameters
  • All platform implementations (alibaba, baidu, tencent, etc.) consistently use the callback with both parameters
  • The consumers in app/store/chat.ts properly handle both parameters, using the Response status for additional logic
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for onFinish implementations to verify they handle the Response parameter
rg -t typescript -t javascript "onFinish.*=.*=>" --no-heading

Length of output: 98


Script:

#!/bin/bash
# Search for onFinish usage patterns with ripgrep
rg "onFinish" -t ts -t js --no-heading -A 2

# Also search for type definitions of onFinish
rg "onFinish.*:" -t ts -t js --no-heading -A 1

Length of output: 4290

app/client/platforms/tencent.ts (3)

145-145: LGTM: Response storage enhancement

The addition of responseRes variable improves response handling by preserving the full Response object for later use.


175-175: LGTM: Enhanced streaming response handling

The streaming path now correctly captures and forwards the Response object while maintaining existing error handling capabilities.

Also applies to: 191-192


Line range hint 145-257: Verify consistency with other platform implementations

Let's ensure that the response handling pattern is consistent across other platform implementations.

βœ… Verification successful

Let me analyze the response handling patterns more deeply by checking the streaming implementations.


Response handling pattern is consistent across platform implementations

The verification shows that the response handling pattern in the Tencent platform implementation is consistent with other platforms that support streaming (Alibaba, Baidu, ByteDance, iFlytek). All these implementations:

  • Use the same fetchEventSource for streaming responses
  • Share similar onopen, onmessage handlers
  • Follow the same pattern for handling text responses and error cases
  • Use consistent animation and text accumulation approach
  • Call onFinish with the same parameter structure (message and response object)

The changes are well-aligned with the established patterns across the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for onFinish implementations in other platform files
echo "Checking onFinish implementations in other platform files:"
rg -A 2 "onFinish\(" "app/client/platforms/"

Length of output: 3183


Script:

#!/bin/bash
# Search for streaming response handling patterns
echo "Checking streaming implementations:"
rg -B 2 -A 5 "fetchEventSource|onmessage|onopen" "app/client/platforms/"

Length of output: 9100

app/client/platforms/baidu.ts (3)

165-165: LGTM: Response variable declaration

The new variable is properly typed and well-placed to capture the response object.


271-271: LGTM: Consistent response handling

The non-streaming mode now correctly passes the response object to onFinish, maintaining consistency with the streaming implementation.


195-195: Verify the onFinish callback implementation

The enhanced onFinish call now correctly passes both the response text and response object, improving error handling capabilities.

app/client/platforms/google.ts (1)

277-277: Verify type compatibility with ChatOptions interface.

The addition of the res parameter to onFinish needs to be type-safe. Ensure that the ChatOptions interface in app/client/api.ts has been updated to include the Response parameter in its onFinish callback definition.

βœ… Verification successful

Type compatibility verified for onFinish callback

The ChatOptions interface in app/client/api.ts has been correctly updated to include the Response parameter: onFinish: (message: string, responseRes: Response) => void. This matches the usage in the Google API client implementation.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if ChatOptions interface has been updated to include Response parameter
# Expected: Should find the updated interface definition

# Search for ChatOptions interface definition
ast-grep --pattern 'interface ChatOptions {
  $$$
  onFinish($$$): void;
  $$$
}'

# Alternatively, search for the specific parameter
rg -t ts "onFinish.*Response.*\)" app/client/api.ts

Length of output: 199

app/client/api.ts (1)

73-73: Breaking Change: Updated onFinish callback signature

The addition of the responseRes parameter to onFinish improves error handling capabilities but constitutes a breaking change that requires updates to all implementations.

Let's verify that all implementations have been updated:

βœ… Verification successful

Breaking change has been properly implemented across the codebase

The verification shows that all implementations have been updated to use the new onFinish(message, responseRes) signature:

  • All platform implementations (OpenAI, Google, Anthropic, etc.) correctly pass both message and response parameters
  • Stream handlers in various platforms properly handle the responseRes parameter
  • Error cases (like in Anthropic platform) correctly create and pass Response objects
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for onFinish implementations that might need updating
rg "onFinish.*=.*\(" --type ts

# Search for chat method implementations that should pass the Response
rg "onFinish\(((?!Response).)*\)" --type ts

Length of output: 321


Script:

#!/bin/bash
# Search for onFinish implementations and their usage
rg "onFinish" --type ts -A 2

# Also search for specific class implementations that might use this interface
ast-grep --pattern 'class $_ implements ChatOptions {
  $$$
}'

# And search for direct assignments to onFinish
ast-grep --pattern 'onFinish = $_'

Length of output: 4055

app/utils.ts (2)

269-271: LGTM! The DALL-E 3 check is a logical addition.

The addition of the DALL-E 3 check to the vision model detection is appropriate since DALL-E 3 is indeed a vision-capable model. The code maintains good readability with clear conditions.


269-271: Verify the relationship between vision model changes and onFinish functionality.

The changes to the vision model detection logic appear unrelated to the PR's main focus on enhancing the onFinish method's signature. Could you clarify how these changes support the PR's objectives?

app/client/platforms/anthropic.ts (2)

327-327: LGTM! Enhanced response handling.

The modification to pass the raw response object to onFinish is a good improvement, allowing consumers to access response metadata when needed while maintaining backward compatibility.


320-327: Verify consistent response handling across API implementations.

These changes enhance error handling by providing access to the full response object. To maintain architectural consistency, ensure that other API implementations (if any) follow the same pattern of passing both the message and response object to onFinish.

βœ… Verification successful

Let me gather more information about how other implementations handle their responses.


Response handling is consistent across all API implementations

The verification shows that all API implementations in the codebase follow the same pattern of passing both the message and response object to onFinish. This includes both streaming and non-streaming implementations:

  • Non-streaming APIs (XAI, GLM, Moonshot, etc.) use: options.onFinish(message, res)
  • Streaming APIs (Tencent, Baidu, Alibaba, etc.) use: options.onFinish(responseText + remainText, responseRes)
  • Error handling in Anthropic's implementation matches the pattern: options.onFinish("", new Response(null, { status: 400 }))

The changes maintain architectural consistency across all LLM API implementations.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if other API implementations follow the same pattern

# Search for other API implementations and their onFinish calls
rg -A 2 "implements LLMApi" app/client/platforms/
rg "onFinish.*Response" app/client/platforms/

Length of output: 2429


Script:

#!/bin/bash
# Search for onFinish calls in chat/completion implementations across all API files
rg -B 2 -A 2 "onFinish\(" app/client/platforms/ --type ts

# Search for response handling patterns
rg -B 2 -A 2 "const res = await fetch" app/client/platforms/ --type ts

Length of output: 9287

app/client/platforms/openai.ts (1)

364-364: Verify type safety for the updated onFinish callback.

The change to pass both message and res to onFinish looks good, as it provides access to the raw response for advanced use cases. However, we should ensure type safety.

Let's verify the interface definition and usage:

βœ… Verification successful

Let me try a different search strategy to verify the type safety.


Type safety for onFinish callback is verified and consistent

The implementation is type-safe and consistent across the codebase:

  • The ChatOptions interface in app/client/api.ts correctly defines onFinish with signature: onFinish: (message: string, responseRes: Response) => void
  • All LLM implementations (ChatGPTApi, ClaudeApi, GeminiProApi, etc.) consistently pass both the message and response object to onFinish
  • The usage in chat store and utilities also aligns with this type signature
  • The change in openai.ts matches the interface contract and follows the same pattern as other implementations
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check the LLMApi interface definition and other implementations
# to ensure consistent typing of the onFinish callback

# Search for the ChatOptions interface definition
echo "Searching for ChatOptions interface definition..."
rg --type typescript "interface ChatOptions" -A 10

# Search for other implementations of the chat method
echo "Searching for other chat method implementations..."
ast-grep --pattern 'interface LLMApi {
  $$$
  chat(options: ChatOptions): $$$
  $$$
}'

Length of output: 397


Script:

#!/bin/bash
# Search for ChatOptions and LLMApi definitions
echo "Searching for ChatOptions and LLMApi..."
rg "ChatOptions|LLMApi" -A 5

# Search for onFinish usage in the codebase
echo "Searching for onFinish usage..."
rg "onFinish" -A 2

Length of output: 20372

@@ -173,7 +173,7 @@ export class XAIApi implements LLMApi {

       const resJson = await res.json();
       const message = this.extractMessage(resJson);
-      options.onFinish(message);
+      options.onFinish(message, res);
coderabbitai bot (Contributor) commented:

⚠️ Potential issue

Caution: Response object might be consumed.

The Response object being passed to onFinish has already been consumed by res.json(). This could lead to issues if the caller tries to read the response body again.

Consider one of these approaches:

- const resJson = await res.json();
- const message = this.extractMessage(resJson);
- options.onFinish(message, res);
+ const resClone = res.clone();
+ const resJson = await res.json();
+ const message = this.extractMessage(resJson);
+ options.onFinish(message, resClone);

Or:

- const resJson = await res.json();
- const message = this.extractMessage(resJson);
- options.onFinish(message, res);
+ const resJson = await res.json();
+ const message = this.extractMessage(resJson);
+ options.onFinish(message, { status: res.status, statusText: res.statusText, headers: res.headers });
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
options.onFinish(message, res);
const resClone = res.clone();
const resJson = await res.json();
const message = this.extractMessage(resJson);
options.onFinish(message, resClone);
```
Solution 2 (using response metadata):
```suggestion
const resJson = await res.json();
const message = this.extractMessage(resJson);
options.onFinish(message, { status: res.status, statusText: res.statusText, headers: res.headers });

Comment on lines +719 to +726
onFinish(message, responseRes) {
  if (responseRes?.status === 200) {
    console.log("[Memory] ", message);
    get().updateCurrentSession((session) => {
      session.lastSummarizeIndex = lastSummarizeIndex;
      session.memoryPrompt = message; // Update the memory prompt for stored it in local storage
    });
  }
coderabbitai bot (Contributor) commented:

πŸ› οΈ Refactor suggestion

Improve error handling and remove redundant comment.

Consider these improvements:

  1. Handle non-200 status codes explicitly
  2. Remove the redundant comment about local storage
-            onFinish(message, responseRes) {
-              if (responseRes?.status === 200) {
-                console.log("[Memory] ", message);
-                get().updateCurrentSession((session) => {
-                  session.lastSummarizeIndex = lastSummarizeIndex;
-                  session.memoryPrompt = message; // Update the memory prompt for stored it in local storage
-                });
-              }
+            onFinish(message, responseRes) {
+              if (responseRes?.status !== 200) {
+                console.error("[Memory] Failed to update memory prompt:", responseRes?.status);
+                return;
+              }
+              console.log("[Memory] ", message);
+              get().updateCurrentSession((session) => {
+                session.lastSummarizeIndex = lastSummarizeIndex;
+                session.memoryPrompt = message;
+              });
             },
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
onFinish(message, responseRes) {
if (responseRes?.status === 200) {
console.log("[Memory] ", message);
get().updateCurrentSession((session) => {
session.lastSummarizeIndex = lastSummarizeIndex;
session.memoryPrompt = message; // Update the memory prompt for stored it in local storage
});
}
onFinish(message, responseRes) {
if (responseRes?.status !== 200) {
console.error("[Memory] Failed to update memory prompt:", responseRes?.status);
return;
}
console.log("[Memory] ", message);
get().updateCurrentSession((session) => {
session.lastSummarizeIndex = lastSummarizeIndex;
session.memoryPrompt = message;
});
},

Comment on lines +652 to +659
onFinish(message, responseRes) {
  if (responseRes?.status === 200) {
    get().updateCurrentSession(
      (session) =>
        (session.topic =
          message.length > 0 ? trimTopic(message) : DEFAULT_TOPIC),
    );
  }
coderabbitai bot (Contributor) commented:

πŸ› οΈ Refactor suggestion

Improve error handling and code clarity.

While the status check is a good addition, consider these improvements:

  1. Handle non-200 status codes explicitly
  2. Restructure the assignment for better readability
-            onFinish(message, responseRes) {
-              if (responseRes?.status === 200) {
-                get().updateCurrentSession(
-                  (session) =>
-                    (session.topic =
-                      message.length > 0 ? trimTopic(message) : DEFAULT_TOPIC),
-                );
-              }
+            onFinish(message, responseRes) {
+              if (responseRes?.status !== 200) {
+                console.error("[Topic] Failed to update topic:", responseRes?.status);
+                return;
+              }
+              get().updateCurrentSession((session) => {
+                const newTopic = message.length > 0 ? trimTopic(message) : DEFAULT_TOPIC;
+                session.topic = newTopic;
+              });
             },
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
onFinish(message, responseRes) {
if (responseRes?.status === 200) {
get().updateCurrentSession(
(session) =>
(session.topic =
message.length > 0 ? trimTopic(message) : DEFAULT_TOPIC),
);
}
onFinish(message, responseRes) {
if (responseRes?.status !== 200) {
console.error("[Topic] Failed to update topic:", responseRes?.status);
return;
}
get().updateCurrentSession((session) => {
const newTopic = message.length > 0 ? trimTopic(message) : DEFAULT_TOPIC;
session.topic = newTopic;
});
},
🧰 Tools
πŸͺ› Biome

[error] 656-658: The assignment should not be in an expression.

The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.

(lint/suspicious/noAssignInExpressions)

@@ -174,6 +174,7 @@ export function stream(
   let finished = false;
   let running = false;
   let runTools: any[] = [];
+  let responseRes: Response;
coderabbitai bot (Contributor) commented:

⚠️ Potential issue

Potential undefined responseRes in options.onFinish

There's a risk that responseRes may be undefined when options.onFinish is called. Since responseRes is assigned within the onopen callback, if finish() is invoked before onopen is triggered (e.g., due to an early error or abort), responseRes will remain undefined. This could lead to runtime errors if onFinish expects a defined Response object.

Consider updating the variable declaration to allow for undefined and ensure that onFinish can handle this case. Apply this diff to adjust the variable declaration:

- let responseRes: Response;
+ let responseRes: Response | undefined;

Additionally, ensure that any usage of responseRes in onFinish accounts for the possibility of it being undefined.

Also applies to: 276-276
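
A small sketch of one way to make finish() robust under the suggested `Response | undefined` declaration (the synthetic fallback status mirrors the 599 already used for stream errors elsewhere; it is an assumption, not existing code):

```ts
// Sketch only: guard against responseRes being unset when finish() runs early.
let responseRes: Response | undefined;
let finished = false;

function finish(text: string, onFinish: (message: string, res: Response) => void) {
  if (finished) return;
  finished = true;
  // Fall back to a synthetic Response so onFinish always receives a defined object.
  onFinish(text, responseRes ?? new Response(null, { status: 599 }));
}
```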

@Dogtiti merged commit 5733e3c into ChatGPTNextWeb:main Nov 4, 2024
2 of 3 checks passed
@Dogtiti mentioned this pull request Nov 4, 2024
frostime added a commit to frostime/ChatGPT-Next-Web that referenced this pull request Nov 12, 2024
commit 38fa305
Author: lloydzhou <[email protected]>
Date:   Mon Nov 11 13:26:08 2024 +0800

    update version

commit 289aeec
Merge: f8f6954 7d71da9
Author: Lloyd Zhou <[email protected]>
Date:   Mon Nov 11 13:19:26 2024 +0800

    Merge pull request ChatGPTNextWeb#5786 from ConnectAI-E/feature/realtime-chat

    Feature/realtime chat

commit 7d71da9
Author: lloydzhou <[email protected]>
Date:   Mon Nov 11 13:15:09 2024 +0800

    remove close-24 svg

commit f8f6954
Merge: 6e03f32 64aa760
Author: Lloyd Zhou <[email protected]>
Date:   Mon Nov 11 13:13:09 2024 +0800

    Merge pull request ChatGPTNextWeb#5779 from ConnectAI-E/feature/model/claude35haiku

    add claude35haiku & not support vision

commit 6e03f32
Merge: 108069a 18a6571
Author: Lloyd Zhou <[email protected]>
Date:   Mon Nov 11 13:10:00 2024 +0800

    Merge pull request ChatGPTNextWeb#5795 from JingSyue/main

    fix: built-in plugin dalle3 error ChatGPTNextWeb#5787

commit 18a6571
Author: JingSyue <[email protected]>
Date:   Mon Nov 11 12:59:29 2024 +0800

    Update proxy.ts

    Update proxy.ts

commit 14f444e
Author: Dogtiti <[email protected]>
Date:   Mon Nov 11 11:47:41 2024 +0800

    doc: realtime chat

commit 2b0f2e5
Author: JingSyue <[email protected]>
Date:   Sun Nov 10 10:28:25 2024 +0800

    fix: built-in plugin dalle3 error ChatGPTNextWeb#5787

commit 4629b39
Author: Dogtiti <[email protected]>
Date:   Sat Nov 9 16:22:01 2024 +0800

    chore: comment context history

commit d33e772
Author: Dogtiti <[email protected]>
Date:   Fri Nov 8 22:39:17 2024 +0800

    feat: voice print

commit 89136fb
Author: Dogtiti <[email protected]>
Date:   Fri Nov 8 22:18:39 2024 +0800

    feat: voice print

commit 8b4ca13
Author: Dogtiti <[email protected]>
Date:   Fri Nov 8 22:02:31 2024 +0800

    feat: voice print

commit a4c9eaf
Author: lloydzhou <[email protected]>
Date:   Fri Nov 8 13:43:13 2024 +0800

    do not save empty audio file

commit 50e6310
Author: lloydzhou <[email protected]>
Date:   Fri Nov 8 13:21:40 2024 +0800

    merge code and get analyser data

commit 48a1e8a
Author: Dogtiti <[email protected]>
Date:   Thu Nov 7 21:32:47 2024 +0800

    chore: i18n

commit e44ebe3
Author: Dogtiti <[email protected]>
Date:   Thu Nov 7 21:28:23 2024 +0800

    feat: realtime config

commit 108069a
Merge: fbb9385 d5bda29
Author: Lloyd Zhou <[email protected]>
Date:   Thu Nov 7 20:06:30 2024 +0800

    Merge pull request ChatGPTNextWeb#5788 from ConnectAI-E/fix-o1-maxtokens

    chore: use max_completion_tokens for o1 models

commit d5bda29
Author: DDMeaqua <[email protected]>
Date:   Thu Nov 7 19:45:27 2024 +0800

    chore: use max_completion_tokens for o1 models

commit 283caba
Author: lloydzhou <[email protected]>
Date:   Thu Nov 7 18:57:57 2024 +0800

    stop streaming play after get input audio.

commit b78e5db
Author: lloydzhou <[email protected]>
Date:   Thu Nov 7 17:55:51 2024 +0800

    add temperature config

commit 46c469b
Author: lloydzhou <[email protected]>
Date:   Thu Nov 7 17:47:55 2024 +0800

    add voice config

commit c00ebbe
Author: lloydzhou <[email protected]>
Date:   Thu Nov 7 17:40:03 2024 +0800

    update

commit c526ff8
Author: lloydzhou <[email protected]>
Date:   Thu Nov 7 17:23:20 2024 +0800

    update

commit 0037b0c
Author: lloydzhou <[email protected]>
Date:   Thu Nov 7 17:03:04 2024 +0800

    ts error

commit 6f81bb3
Author: lloydzhou <[email protected]>
Date:   Thu Nov 7 16:56:15 2024 +0800

    add context after connected

commit 7bdc45e
Author: lloydzhou <[email protected]>
Date:   Thu Nov 7 16:41:24 2024 +0800

    connect realtime model when open panel

commit 88cd3ac
Author: Dogtiti <[email protected]>
Date:   Thu Nov 7 12:16:11 2024 +0800

    fix: ts error

commit 4988d2e
Author: Dogtiti <[email protected]>
Date:   Thu Nov 7 11:56:58 2024 +0800

    fix: ts error

commit 8deb7a9
Author: lloydzhou <[email protected]>
Date:   Thu Nov 7 11:53:01 2024 +0800

    hotfix for update target session

commit db060d7
Author: lloydzhou <[email protected]>
Date:   Thu Nov 7 11:45:38 2024 +0800

    upload save record wav file

commit 5226278
Author: lloydzhou <[email protected]>
Date:   Thu Nov 7 09:36:22 2024 +0800

    upload save wav file logic

commit cf46d5a
Author: lloydzhou <[email protected]>
Date:   Thu Nov 7 01:12:08 2024 +0800

    upload response audio, and update audio_url to session message

commit a494152
Author: Dogtiti <[email protected]>
Date:   Wed Nov 6 22:30:02 2024 +0800

    feat: audio to message

commit f6e1f83
Author: Dogtiti <[email protected]>
Date:   Wed Nov 6 22:07:33 2024 +0800

    wip

commit d544eea
Author: Dogtiti <[email protected]>
Date:   Wed Nov 6 21:14:45 2024 +0800

    feat: realtime chat ui

commit fbb9385
Merge: 6ded4e9 18144c3
Author: Lloyd Zhou <[email protected]>
Date:   Wed Nov 6 20:33:51 2024 +0800

    Merge pull request ChatGPTNextWeb#5782 from ConnectAI-E/style/classname

    style: improve classname by clsx

commit 18144c3
Author: Dogtiti <[email protected]>
Date:   Wed Nov 6 20:16:38 2024 +0800

    chore: clsx

commit 64aa760
Author: opchips <[email protected]>
Date:   Wed Nov 6 19:18:05 2024 +0800

    update claude rank

commit e0bbb8b
Author: Dogtiti <[email protected]>
Date:   Wed Nov 6 16:58:26 2024 +0800

    style: improve classname by clsx

commit 6667ee1
Merge: 3086a2f 6ded4e9
Author: opchips <[email protected]>
Date:   Wed Nov 6 15:08:18 2024 +0800

    merge main

commit 6ded4e9
Merge: f4c9410 85cdcab
Author: Lloyd Zhou <[email protected]>
Date:   Wed Nov 6 15:04:46 2024 +0800

    Merge pull request ChatGPTNextWeb#5778 from ConnectAI-E/fix/5436

    fix: botMessage reply date

commit 85cdcab
Author: Dogtiti <[email protected]>
Date:   Wed Nov 6 14:53:08 2024 +0800

    fix: botMessage reply date

commit f4c9410
Merge: f526d6f adf7d82
Author: Lloyd Zhou <[email protected]>
Date:   Wed Nov 6 14:02:20 2024 +0800

    Merge pull request ChatGPTNextWeb#5776 from ConnectAI-E/feat-glm

    fix: glm chatpath

commit adf7d82
Author: DDMeaqua <[email protected]>
Date:   Wed Nov 6 13:55:57 2024 +0800

    fix: glm chatpath

commit 3086a2f
Author: opchips <[email protected]>
Date:   Wed Nov 6 12:56:24 2024 +0800

    add claude35haiku not vision

commit f526d6f
Merge: f3603e5 106461a
Author: Lloyd Zhou <[email protected]>
Date:   Wed Nov 6 11:16:33 2024 +0800

    Merge pull request ChatGPTNextWeb#5774 from ConnectAI-E/feature/update-target-session

    fix: updateCurrentSession => updateTargetSession

commit 106461a
Merge: c4e19db f3603e5
Author: Dogtiti <[email protected]>
Date:   Wed Nov 6 11:08:41 2024 +0800

    Merge branch 'main' of https://github.com/ConnectAI-E/ChatGPT-Next-Web into feature/update-target-session

commit c4e19db
Author: Dogtiti <[email protected]>
Date:   Wed Nov 6 11:06:18 2024 +0800

    fix: updateCurrentSession => updateTargetSession

commit f3603e5
Merge: 00d6cb2 8e2484f
Author: Dogtiti <[email protected]>
Date:   Wed Nov 6 10:49:28 2024 +0800

    Merge pull request ChatGPTNextWeb#5769 from ryanhex53/fix-model-multi@

    Custom model names can include the `@` symbol by itself.

commit 8e2484f
Author: ryanhex53 <[email protected]>
Date:   Tue Nov 5 13:52:54 2024 +0000

    Refactor: Replace all provider split occurrences with getModelProvider utility method

commit 00d6cb2
Author: lloydzhou <[email protected]>
Date:   Tue Nov 5 17:42:55 2024 +0800

    update version

commit b844045
Author: ryanhex53 <[email protected]>
Date:   Tue Nov 5 07:44:12 2024 +0000

    Custom model names can include the `@` symbol by itself.

    To specify the model's provider, append it after the model name using `@` as before.

    This format supports cases like `google vertex ai` with a model name like `claude-3-5-sonnet@20240620`.

    For instance, `claude-3-5-sonnet@20240620@vertex-ai` will be split by `split(/@(?!.*@)/)` into:

    `[ 'claude-3-5-sonnet@20240620', 'vertex-ai' ]`, where the former is the model name and the latter is the custom provider.
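
    As a concrete illustration of the split described in this commit, here is a minimal TypeScript sketch. The `getModelProvider` name comes from the refactor commit above; its exact signature and return shape here are assumptions, only the regex and the example strings are taken from the commit message.

```ts
// Split a custom model name on its LAST "@" only, so "@" characters
// inside the model name itself are preserved.
export function getModelProvider(fullName: string): [string, string?] {
  // /@(?!.*@)/ matches an "@" with no further "@" after it,
  // i.e. the last "@" in the string.
  const [model, provider] = fullName.split(/@(?!.*@)/);
  return [model, provider];
}

// Examples:
// getModelProvider("claude-3-5-sonnet@20240620@vertex-ai")
//   => ["claude-3-5-sonnet@20240620", "vertex-ai"]
// getModelProvider("gpt-4o")
//   => ["gpt-4o", undefined]  (no "@" means no custom provider)
```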

commit e49fe97
Merge: 14f7519 e49466f
Author: Lloyd Zhou <[email protected]>
Date:   Tue Nov 5 15:07:52 2024 +0800

    Merge pull request ChatGPTNextWeb#5765 from ConnectAI-E/feature/onfinish

    feat: update real 'currentSession'

commit 14f7519
Merge: 820ab54 0ec4233
Author: Dogtiti <[email protected]>
Date:   Tue Nov 5 11:07:52 2024 +0800

    Merge pull request ChatGPTNextWeb#5767 from ConnectAI-E/feat-glm

    chore: update readme

commit 0ec4233
Author: DDMeaqua <[email protected]>
Date:   Tue Nov 5 11:06:20 2024 +0800

    chore: update readme

commit 820ab54
Merge: 0dc4071 a6c1eb2
Author: Dogtiti <[email protected]>
Date:   Tue Nov 5 10:54:52 2024 +0800

    Merge pull request ChatGPTNextWeb#5766 from ConnectAI-E/feature/add-claude-haiku3.5

    Feature/add claude haiku3.5

commit a6c1eb2
Merge: 801dc41 0dc4071
Author: lloydzhou <[email protected]>
Date:   Tue Nov 5 10:23:15 2024 +0800

    add claude 3.5 haiku

commit 0dc4071
Merge: aef535f 4d39497
Author: Lloyd Zhou <[email protected]>
Date:   Tue Nov 5 01:10:06 2024 +0800

    Merge pull request ChatGPTNextWeb#5464 from endless-learner/main

    Added 1-click deployment link for Alibaba Cloud.

commit 4d39497
Author: Lloyd Zhou <[email protected]>
Date:   Tue Nov 5 01:09:27 2024 +0800

    merge main

commit aef535f
Merge: 686a80e fbb7a1e
Author: Dogtiti <[email protected]>
Date:   Mon Nov 4 21:41:11 2024 +0800

    Merge pull request ChatGPTNextWeb#5753 from ChatGPTNextWeb/feat-bt-doc

    Feat bt doc

commit 686a80e
Merge: 5733e3c 4b93370
Author: Dogtiti <[email protected]>
Date:   Mon Nov 4 21:37:34 2024 +0800

    Merge pull request ChatGPTNextWeb#5764 from ChatGPTNextWeb/dependabot/npm_and_yarn/testing-library/react-16.0.1

    chore(deps-dev): bump @testing-library/react from 16.0.0 to 16.0.1

commit e49466f
Author: Dogtiti <[email protected]>
Date:   Mon Nov 4 21:25:56 2024 +0800

    feat: update real 'currentSession'

commit 4b93370
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date:   Mon Nov 4 10:24:30 2024 +0000

    chore(deps-dev): bump @testing-library/react from 16.0.0 to 16.0.1

    Bumps [@testing-library/react](https://github.com/testing-library/react-testing-library) from 16.0.0 to 16.0.1.
    - [Release notes](https://github.com/testing-library/react-testing-library/releases)
    - [Changelog](https://github.com/testing-library/react-testing-library/blob/main/CHANGELOG.md)
    - [Commits](testing-library/react-testing-library@v16.0.0...v16.0.1)

    ---
    updated-dependencies:
    - dependency-name: "@testing-library/react"
      dependency-type: direct:development
      update-type: version-update:semver-patch
    ...

    Signed-off-by: dependabot[bot] <[email protected]>

commit 5733e3c
Merge: d66bfc6 44fc5b5
Author: Dogtiti <[email protected]>
Date:   Mon Nov 4 17:16:44 2024 +0800

    Merge pull request ChatGPTNextWeb#5759 from ConnectAI-E/feature/onfinish

    Feature/onfinish

commit 44fc5b5
Author: Dogtiti <[email protected]>
Date:   Mon Nov 4 17:00:45 2024 +0800

    fix: onfinish responseRes

commit 2d3f7c9
Author: Dogtiti <[email protected]>
Date:   Wed Oct 16 15:17:08 2024 +0800

    fix: vision model dalle3

commit fe8cca3
Merge: adf97c6 d66bfc6
Author: GH Action - Upstream Sync <[email protected]>
Date:   Sat Nov 2 01:12:09 2024 +0000

    Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web

commit fbb7a1e
Author: weige <[email protected]>
Date:   Fri Nov 1 18:20:16 2024 +0800

    fix

commit fb2c155
Author: weige <[email protected]>
Date:   Fri Nov 1 17:45:50 2024 +0800

    fix

commit c2c52a1
Author: weige <[email protected]>
Date:   Fri Nov 1 17:35:34 2024 +0800

    fix

commit 106ddc1
Author: weige <[email protected]>
Date:   Fri Nov 1 17:35:09 2024 +0800

    fix

commit 17d5209
Author: weige <[email protected]>
Date:   Fri Nov 1 17:28:20 2024 +0800

    add bt install doc

commit adf97c6
Merge: 7c466c9 0581e37
Author: GH Action - Upstream Sync <[email protected]>
Date:   Fri Nov 1 01:18:59 2024 +0000

    Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web

commit 7c466c9
Merge: b0d28eb a0fa4d7
Author: GH Action - Upstream Sync <[email protected]>
Date:   Thu Oct 31 01:14:28 2024 +0000

    Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web

commit b0d28eb
Merge: 064e964 613d67e
Author: endless-learner <[email protected]>
Date:   Tue Oct 29 14:38:49 2024 -0700

    Merge branch 'main' into main

commit 801dc41
Author: lloydzhou <[email protected]>
Date:   Thu Oct 24 15:28:05 2024 +0800

    add claude-3.5-haiku

commit 064e964
Author: endless-learner <[email protected]>
Date:   Tue Sep 24 23:05:32 2024 -0700

    Updated the link to deploy on Alibaba Cloud: it is now readable when not logged in and allows choosing a region.

commit 47fb40d
Merge: 9e18cc2 4c84182
Author: endless-learner <[email protected]>
Date:   Tue Sep 24 23:03:03 2024 -0700

    Merge branch 'ChatGPTNextWeb:main' into main

commit 9e18cc2
Author: endless-learner <[email protected]>
Date:   Tue Sep 24 13:55:00 2024 -0700

    Update README.md

    Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

commit 03268ce
Author: endless-learner <[email protected]>
Date:   Wed Sep 18 20:38:20 2024 -0700

    Added 1-click deployment link for Alibaba Cloud.
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.

2 participants