feat: [#5714] Support GLM #5741
Conversation
@DDMeaqua is attempting to deploy a commit to the NextChat Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough
This pull request introduces support for the ChatGLM model provider across multiple components of the application. Key changes include the addition of a handler for ChatGLM requests in the API routes, updates to the authentication logic to accommodate the ChatGLM API key, and the creation of a dedicated ChatGLM API handler. Additionally, modifications to the settings component allow users to configure ChatGLM-specific parameters, and localization files are updated to include ChatGLM settings. The changes ensure that the application can effectively interact with the ChatGLM model alongside existing providers.
Your build has completed!
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (14)
app/api/auth.ts (1)
98-100
: Consider refactoring provider configuration management.
As the number of model providers grows, the switch statement becomes increasingly complex. Consider extracting provider-specific configurations into a separate module or using a configuration map for better maintainability.
Example approach:
// providers/config.ts
const PROVIDER_CONFIGS = {
[ModelProvider.GLM]: {
getApiKey: (config) => config.glmApiKey,
},
// ... other providers
} as const;
// Then in auth.ts
systemApiKey = PROVIDER_CONFIGS[modelProvider]?.getApiKey(serverConfig);
This would:
- Improve maintainability by isolating provider-specific logic
- Make it easier to add new providers
- Reduce the complexity of the auth function
app/store/access.ts (1)
Line range hint 251-266
: Consider documenting GLM fields in migration logic.
While no migration is needed for the new GLM fields (they'll default to empty strings), it might be helpful to document this in the migration logic for future reference.
Add a comment in the migration logic:
migrate(persistedState, version) {
if (version < 2) {
const state = persistedState as {
token: string;
openaiApiKey: string;
azureApiVersion: string;
googleApiKey: string;
};
state.openaiApiKey = state.token;
state.azureApiVersion = "2023-08-01-preview";
}
+ // Version 2 added GLM support with optional glmUrl and glmApiKey fields
return persistedState as any;
},
app/config/server.ts (1)
Line range hint 78-228
: Consider refactoring provider configurations for better maintainability.
As the number of supported providers grows, this configuration file is becoming quite large. Consider splitting provider-specific configurations into separate modules that follow a common interface, making it easier to maintain and add new providers.
Example structure:
// types/provider-config.ts
interface ProviderConfig {
isEnabled: boolean;
getConfig(): Record<string, unknown>;
}
// config/providers/glm.ts
class GLMConfig implements ProviderConfig {
isEnabled = !!process.env.GLM_API_KEY;
getConfig() {
return {
glmUrl: process.env.GLM_URL,
glmApiKey: getApiKey(process.env.GLM_API_KEY),
};
}
}
app/client/api.ts (2)
349-350
: Consider refactoring provider mapping
While the implementation is correct, consider refactoring the provider mapping to reduce duplication:
Consider creating a mapping object:
const PROVIDER_MODEL_MAP = {
[ServiceProvider.Google]: ModelProvider.GeminiPro,
[ServiceProvider.GLM]: ModelProvider.GLM,
// ... other mappings
} as const;
export function getClientApi(provider: ServiceProvider): ClientApi {
return new ClientApi(PROVIDER_MODEL_MAP[provider] ?? ModelProvider.GPT);
}
This would:
- Reduce switch statement complexity
- Make provider-to-model mapping more maintainable
- Provide better type safety
Line range hint 24-350
: Consider enhancing provider abstraction
While the GLM integration is solid, consider these architectural improvements:
- Create an abstract factory for provider instantiation
- Define a common interface for provider configurations
- Implement a provider registry pattern
This would make adding new providers like GLM more streamlined and reduce the need for switch statements throughout the codebase.
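As a rough illustration of the registry idea (all names here are hypothetical stand-ins, not the codebase's actual ModelProvider enum or ClientApi class), the switch statements could collapse into a lookup:

```typescript
// Hypothetical provider-registry sketch: provider names and the factory
// return type are stand-ins for the real ModelProvider enum and ClientApi.
type ClientFactory = () => { providerName: string };

const providerRegistry = new Map<string, ClientFactory>();

function registerProvider(name: string, factory: ClientFactory): void {
  providerRegistry.set(name, factory);
}

// Unknown providers fall back to the default ("GPT") entry, mirroring the
// default branch of the existing switch statements.
function createClient(name: string): { providerName: string } {
  const factory = providerRegistry.get(name) ?? providerRegistry.get("GPT");
  if (!factory) throw new Error(`no provider registered for ${name}`);
  return factory();
}

registerProvider("GPT", () => ({ providerName: "GPT" }));
registerProvider("GLM", () => ({ providerName: "GLM" })); // adding GLM is one line
```

With this shape, introducing a provider touches only its own module plus one registration call, instead of every switch in the codebase.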
app/constant.ts (2)
233-236
: LGTM: GLM configuration is properly structured.
The GLM configuration follows the established pattern for provider configurations.
Consider adding knowledge cutoff dates for GLM models in the KnowledgeCutOffDate record to maintain consistency with other providers:
export const KnowledgeCutOffDate: Record<string, string> = {
default: "2021-09",
+ "glm-4": "2024-03",
+ "glm-4-plus": "2024-03",
// ... existing entries
};
389-398
: Consider sorting GLM models by capability/version.
While the models list is comprehensive, consider sorting them in a logical order (e.g., by capability or version) to improve maintainability and readability.
Suggested order:
const glmModels = [
"glm-4-plus",
- "glm-4-0520",
"glm-4",
+ "glm-4-0520",
"glm-4-air",
"glm-4-airx",
"glm-4-long",
- "glm-4-flashx",
"glm-4-flash",
+ "glm-4-flashx",
];
app/locales/cn.ts (1)
482-485
: Consider adding an example endpoint URL.
The SubTitle for the endpoint configuration only shows "样例:" without providing an actual example URL. This differs from other providers that show concrete examples to guide users.
Consider updating the SubTitle to include an example endpoint URL:
- SubTitle: "样例:",
+ SubTitle: "样例:https://api.zhipuai.cn/v1",
app/locales/en.ts (1)
467-468
: Add an example endpoint URL in the SubTitle.
The SubTitle for the Endpoint configuration only shows "Example: " without providing an actual example URL. This differs from other providers that include example endpoints to guide users.
Consider adding a representative endpoint URL, similar to how it's done for other providers. For example:
Endpoint: {
Title: "Endpoint Address",
- SubTitle: "Example: ",
+ SubTitle: "Example: https://api.zhipuai.cn/v1",
},
app/components/settings.tsx (1)
1238-1275
: LGTM! Consider adding input validation.
The GLM configuration component follows the established patterns and best practices, including proper state management, accessibility, and security considerations for sensitive data.
Consider adding input validation for the endpoint URL to ensure it's a valid URL format before saving:
- onChange={(e) =>
+ onChange={(e) => {
+   const url = e.currentTarget.value;
+   if (url && !url.startsWith("http")) {
+     showToast(Locale.Settings.Access.GLM.Endpoint.Invalid);
+     return;
+   }
    accessStore.update(
-     (access) => (access.glmUrl = e.currentTarget.value),
+     (access) => (access.glmUrl = url),
-   )
- }
+   );
+ }}
🧰 Tools
🪛 Biome
[error] 1253-1253: The assignment should not be in an expression.
The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.
(lint/suspicious/noAssignInExpressions)
[error] 1269-1269: The assignment should not be in an expression.
The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.
(lint/suspicious/noAssignInExpressions)
app/api/glm.ts (2)
19-19
: Avoid Logging Potentially Sensitive Information
Logging request parameters and paths can lead to exposure of sensitive information. Consider removing or sanitizing these log statements to prevent any unintended data leakage.
Apply this diff to remove or modify the log statements:
- console.log("[GLM Route] params ", params);
...
- console.log("[Proxy] ", path);
- console.log("[Base Url]", baseUrl);
+ console.log("[GLM Route] Request received");
+ // Consider logging only non-sensitive information or using debug levels
Also applies to: 57-58
77-78
: Avoid using @ts-ignore; resolve TypeScript errors properly.
Using @ts-ignore suppresses TypeScript's type checking, which can hide potential bugs. The duplex property is not standard in the RequestInit interface. Ensure that this property is necessary and consider updating TypeScript definitions or handling the request differently.
If duplex is required for streaming, you might need to adjust your TypeScript configuration or cast fetchOptions to any, though casting should be used cautiously.
- // @ts-ignore
duplex: "half",
app/client/platforms/glm.ts (2)
57-59
: Handle the unimplemented speech method appropriately.
The speech method currently throws an error and is unimplemented. If the GLM API does not support speech synthesis, consider removing this method to avoid potential runtime errors.
You can remove the unimplemented method:
- speech(options: SpeechOptions): Promise<ArrayBuffer> {
-   throw new Error("Method not implemented.");
- }
87-87
: Replace console.log statements with a proper logging mechanism.
Using console.log is not recommended in production code. Consider utilizing a dedicated logging library or mechanism that can be configured for different environments, ensuring better control over log levels and outputs.
Also applies to: 178-178
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (11)
- app/api/[provider]/[...path]/route.ts (2 hunks)
- app/api/auth.ts (1 hunks)
- app/api/glm.ts (1 hunks)
- app/client/api.ts (6 hunks)
- app/client/platforms/glm.ts (1 hunks)
- app/components/settings.tsx (3 hunks)
- app/config/server.ts (3 hunks)
- app/constant.ts (7 hunks)
- app/locales/cn.ts (1 hunks)
- app/locales/en.ts (1 hunks)
- app/store/access.ts (5 hunks)
🧰 Additional context used
🪛 Biome
app/components/settings.tsx
[error] 1253-1253: The assignment should not be in an expression.
The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.
(lint/suspicious/noAssignInExpressions)
[error] 1269-1269: The assignment should not be in an expression.
The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.
(lint/suspicious/noAssignInExpressions)
🔇 Additional comments (26)
app/api/[provider]/[...path]/route.ts (2)
45-46
: LGTM! GLM case implementation is consistent.
The GLM case follows the established pattern of other providers in the switch statement, maintaining consistency in:
- Using the ApiPath constant
- Handler invocation with req and params
- Error handling (inherited from the handler)
14-14
: Verify GLM handler implementation.
The import statement follows the consistent pattern, but let's verify the GLM handler implementation.
✅ Verification successful
Let me gather more information about the GLM handler implementation since the previous results were incomplete.
GLM handler implementation is properly structured and complete
The GLM handler implementation in app/api/glm.ts
follows the expected patterns and includes:
- Proper error handling with try-catch blocks
- Authentication checks using the auth middleware
- Consistent request handling with timeout controls
- Model availability verification
- Appropriate header management
The import statement in the route file correctly references this implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify GLM handler implementation and structure
# Test 1: Check if glm.ts exists and contains handle export
echo "Checking GLM handler implementation..."
fd --type f "glm.ts" app/api -x cat {} | grep -A 10 "export const handle"
# Test 2: Verify ApiPath.GLM constant exists
echo "Verifying GLM API path constant..."
rg "GLM\s*=" app/constant.ts
Length of output: 343
Script:
#!/bin/bash
# Let's try a different approach to find and verify the GLM handler
# Check for glm.ts file existence and location
echo "Locating glm.ts file..."
fd "glm.ts" app/api
# Get full content of glm.ts if it exists
echo -e "\nChecking glm.ts content..."
fd "glm.ts" app/api -x cat {}
# Search for any GLM-related handler implementations
echo -e "\nSearching for GLM handler patterns..."
ast-grep --pattern 'export const handle = async $_'
Length of output: 3534
app/store/access.ts (4)
17-17
: LGTM: GLM URL configuration follows established patterns.
The GLM URL configuration follows the same pattern as other providers, correctly handling both app and non-app build modes.
Also applies to: 51-51
114-117
: LGTM: GLM state properties are well-structured.
The GLM-related state properties are properly organized and follow the established pattern of other providers.
210-210
: LGTM: Authorization check properly includes GLM validation.
The GLM validation is correctly integrated into the authorization chain.
190-192
: Verify if GLM requires additional validation fields.
While the implementation is consistent with the pattern, some other providers require multiple fields for validation. Please verify if GLM authentication requires only the API key.
app/config/server.ts (3)
78-81
: LGTM! Environment variable declarations are well-structured.
The GLM environment variable declarations follow the established pattern and are properly typed as optional strings.
158-158
: LGTM! GLM provider flag is correctly implemented.
The isGLM flag follows the established pattern of checking for the presence of the API key.
225-228
: Verify if GLM requires additional security parameters.
The implementation follows the established pattern, but please verify if GLM requires additional security parameters (like a secret key) similar to other providers such as Baidu.
app/client/api.ts (5)
24-24
: LGTM: Import follows established pattern
The GLM import follows the same structure as other provider imports.
284-284
: LGTM: Config export follows pattern
The GLM flag export maintains consistency with other provider configurations.
267-268
: Add type safety for GLM API key
While the API key handling is correctly placed, consider adding type safety:
- Verify glmApiKey exists in AccessStore
- Consider adding runtime validation for the API key format
#!/bin/bash
# Verify glmApiKey is defined in AccessStore
rg "glmApiKey.*:" app/store/access.ts
251-251
: Verify ServiceProvider.GLM enum existence
The GLM provider check follows the established pattern.
#!/bin/bash
# Verify ServiceProvider.GLM is defined
rg -A 1 "ServiceProvider = {"
160-162
: Verify ModelProvider.GLM enum existence
The GLM case implementation looks correct, following the same pattern as other providers.
✅ Verification successful
ModelProvider.GLM enum exists and is properly defined
The GLM case implementation is correct and matches the codebase. The ModelProvider.GLM enum is properly defined in app/constant.ts along with other model providers.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify ModelProvider.GLM is defined in constants
rg -A 1 "ModelProvider = {"
Length of output: 234
Script:
#!/bin/bash
# Search for ModelProvider enum definition with different patterns
rg -l "enum ModelProvider"
# Also try searching for object literal definition
rg "ModelProvider\s*=\s*[{]" --type ts
# Search for any mentions of ModelProvider and GLM together
rg "ModelProvider.*GLM|GLM.*ModelProvider" --type ts
Length of output: 365
Script:
#!/bin/bash
# Check the ModelProvider enum definition in constant.ts
cat app/constant.ts
Length of output: 13757
app/constant.ts (3)
33-34
: LGTM: Base URL and API path are correctly defined.
The GLM base URL and API path definitions follow the established pattern and correctly point to the official GLM API endpoint.
Also applies to: 67-67
121-121
: LGTM: Provider enums are properly updated.
The GLM provider is correctly added to both ServiceProvider and ModelProvider enums, maintaining consistency with other providers.
Also applies to: 145-145
523-533
: LGTM: DEFAULT_MODELS extension is properly implemented.
The GLM models are correctly integrated into the DEFAULT_MODELS array, maintaining consistency with other providers and following the established sorting pattern.
app/locales/cn.ts (1)
476-486
: LGTM! The GLM localization entries are well-structured.
The implementation follows the established pattern for model provider configurations and maintains consistency with other providers in the codebase.
app/locales/en.ts (1)
460-470
: LGTM! Structure and format are consistent with other providers.
The GLM configuration section follows the established pattern and maintains consistency with other provider configurations.
app/components/settings.tsx (2)
1736-1736
: LGTM! GLM component integration is correct.
The GLM configuration component is properly integrated into the provider configuration section, maintaining consistency with other providers.
1253-1253
: Static analysis warnings can be ignored.
The assignments in expressions are part of React's state management pattern and are commonly used throughout the codebase.
Also applies to: 1269-1269
🧰 Tools
🪛 Biome
[error] 1253-1253: The assignment should not be in an expression.
The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.
(lint/suspicious/noAssignInExpressions)
app/api/glm.ts (1)
116-119
: Preserve Important Response Headers
When modifying response headers, ensure that essential headers like CORS headers are preserved. Deleting or not including headers like Access-Control-Allow-Origin may prevent clients from accessing the resource due to CORS restrictions.
Please verify if CORS headers need to be added or preserved in the response. You can check for the existence of CORS headers in the original response and include them as needed.
If the headers are missing, consider adding them to newHeaders:
// to prevent browser prompt for credentials
const newHeaders = new Headers(res.headers);
newHeaders.delete("www-authenticate");
+ newHeaders.set('Access-Control-Allow-Origin', '*'); // Adjust as necessary
app/client/platforms/glm.ts (4)
81-85
: Confirm usage of presence_penalty and frequency_penalty parameters.
The presence_penalty and frequency_penalty parameters are included in the request payload. Verify whether the GLM API supports these parameters, as they might be specific to the OpenAI API. Including unsupported parameters could lead to API errors.
If these parameters are not supported by GLM, consider removing them:
model: modelConfig.model,
temperature: modelConfig.temperature,
- presence_penalty: modelConfig.presence_penalty,
- frequency_penalty: modelConfig.frequency_penalty,
top_p: modelConfig.top_p,
94-100
: Ensure correct headers are included in the API request.
When making requests to the GLM API, verify that all necessary headers are provided, such as authentication tokens, content types, and any other required metadata. This ensures successful communication with the API.
If authentication is needed, include it in the headers:
const chatPayload = {
method: "POST",
body: JSON.stringify(requestPayload),
signal: controller.signal,
- headers: getHeaders(),
+ headers: {
+ ...getHeaders(),
+ Authorization: `Bearer ${accessStore.glmApiKey}`,
+ },
};
29-51
: Review base URL construction logic for accuracy.
The logic for constructing the baseUrl may not handle all possible user configurations correctly. Ensure that the conditions cover scenarios where the user provides custom URLs, and that the resulting baseUrl is accurate and properly formatted.
Consider adding unit tests to validate different configurations of baseUrl.
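A sketch of what such a test could exercise, assuming the construction logic were extracted into a pure helper (buildBaseUrl here is hypothetical; the real logic lives inside the ChatGLM client's path method, and its exact rules should be checked against that implementation):

```typescript
// Hypothetical pure extraction of the base-URL rules under review:
// prefer the user's custom URL, strip a trailing slash, and prefix a
// scheme when neither "http" nor the "/api" proxy prefix is present.
function buildBaseUrl(customUrl: string | undefined, defaultUrl: string): string {
  let baseUrl = customUrl && customUrl.length > 0 ? customUrl : defaultUrl;
  if (baseUrl.endsWith("/")) {
    baseUrl = baseUrl.slice(0, baseUrl.length - 1);
  }
  if (!baseUrl.startsWith("http") && !baseUrl.startsWith("/api")) {
    baseUrl = "https://" + baseUrl;
  }
  return baseUrl;
}

// The configurations a unit test should cover:
console.log(buildBaseUrl(undefined, "https://example-default.invalid")); // falls back to default
console.log(buildBaseUrl("https://my-proxy.example.com/", "unused")); // trailing slash removed
console.log(buildBaseUrl("my-proxy.example.com", "unused")); // scheme prepended
```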
20-20
: Verify the compatibility of RequestPayload with the GLM API.
The RequestPayload type is imported from ./openai, which may be specific to the OpenAI API. Ensure that this payload structure aligns with the GLM API requirements to prevent potential data mismatches or API errors.
Consider defining a new RequestPayload type specific to GLM or adapting the existing one to meet GLM's specifications.
Also applies to: 77-85
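One possible shape for a GLM-specific payload, sketched under the assumption that the ChatGLM chat endpoint accepts the OpenAI-compatible core fields (model, messages, stream, temperature, top_p); this assumption should be verified against the official API documentation:

```typescript
// Assumed request shape for GLM; deliberately omits presence_penalty and
// frequency_penalty, which may be OpenAI-specific (see the comment above).
interface GLMRequestPayload {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  stream: boolean;
  temperature?: number;
  top_p?: number;
}

function buildGlmPayload(
  model: string,
  messages: GLMRequestPayload["messages"],
  opts: { stream: boolean; temperature?: number; top_p?: number },
): GLMRequestPayload {
  return { model, messages, ...opts };
}

const payload = buildGlmPayload(
  "glm-4",
  [{ role: "user", content: "hello" }],
  { stream: true, temperature: 0.7 },
);
```

A dedicated type like this makes it a compile error to pass OpenAI-only fields to the GLM client, instead of a silent API-side failure.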
app/api/auth.ts (outdated)
case ModelProvider.GLM:
  systemApiKey = serverConfig.glmApiKey;
  break;
💡 Codebase verification
GLM configuration is properly implemented but needs documentation
The implementation of GLM API key is correctly structured in the codebase:
- Proper environment variable handling in
server.ts
- Type definitions and validation through
isGLM
check - Consistent integration with the authentication system
However, the GLM configuration is missing from the documentation. The developer should:
- Add GLM_API_KEY to the environment variables section in README.md
- Include GLM_URL configuration instructions
- Document the GLM provider setup process
🔗 Analysis chain
LGTM! Verify server configuration for GLM.
The implementation follows the established pattern for model providers and correctly assigns the GLM API key.
Let's verify the server configuration includes the GLM API key definition:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify GLM API key configuration in server config
# Expected: Find the definition of glmApiKey in server configuration
# Search for GLM API key configuration
rg -l 'glmApiKey'
# Check the type definition to ensure GLM API key is properly typed
ast-grep --pattern 'interface $_ {
$$$
glmApiKey?: $_
$$$
}'
Length of output: 189
Script:
#!/bin/bash
# Let's check the server config implementation and type definitions
rg -A 5 -B 5 'glmApiKey' app/config/server.ts
# Also check the interface/type definitions that might contain glmApiKey
ast-grep --pattern 'type $_ = {
$$$
glmApiKey?: $_
$$$
}'
# Check if there are any validation or requirement checks for glmApiKey
rg -A 3 'glmApiKey.*required|required.*glmApiKey|validate.*glmApiKey'
Length of output: 615
Script:
#!/bin/bash
# Let's check the isGLM configuration and usage
rg -A 2 -B 2 'isGLM'
# Check if there's any validation or initialization of GLM_API_KEY env var
rg -A 2 'GLM_API_KEY'
# Look for any GLM-related environment variable documentation
rg -l '\.env|environment|config' --type md
Length of output: 2251
Script:
#!/bin/bash
# Check README and documentation for GLM configuration instructions
rg -A 5 -B 5 'GLM|Environment' README.md
# Check environment variable declarations in any example env files
fd -e example -e sample -e template | grep -i env | xargs cat
# Look for any GLM-related configuration in the docs
rg -A 3 'GLM' docs/
Length of output: 368
body: req.body,
redirect: "manual",
Ensure Request Body Consistency When Reading Streams
The request body (req.body) is a stream and can only be read once. Assigning it directly to fetchOptions.body and then reading it again with await req.text() can cause the stream to be empty or lead to runtime errors.
To fix this, read the body once, before the fetch options object is constructed, and reuse it:
+ const clonedBody = await req.text();
  ...
-   body: req.body,
+   body: clonedBody,
  ...
  if (serverConfig.customModels && clonedBody) {
    try {
-     const clonedBody = await req.text();
-     fetchOptions.body = clonedBody;
+     fetchOptions.body = clonedBody;
Also applies to: 85-86
console.error("[GLM] ", e); | ||
return NextResponse.json(prettyObject(e)); | ||
} |
Avoid Exposing Internal Error Details to Clients
Returning the full error object in the API response may reveal sensitive internal information. It's better to return a generic error message to the client and log the detailed error on the server side.
Apply this diff to return a generic error response:
console.error("[GLM] ", e);
- return NextResponse.json(prettyObject(e));
+ return NextResponse.json({ error: "Internal Server Error" }, { status: 500 });
if (req.method === "OPTIONS") {
  return NextResponse.json({ body: "OK" }, { status: 200 });
}
Set CORS Headers for Preflight Requests
When handling OPTIONS requests for CORS preflight, it's important to include the appropriate headers to allow cross-origin requests. Currently, the response lacks CORS headers, which may cause client requests to fail due to CORS policy restrictions.
Apply this diff to include the necessary CORS headers:
return NextResponse.json({ body: "OK" }, {
status: 200,
+ headers: {
+ 'Access-Control-Allow-Origin': '*', // Adjust as needed
+ 'Access-Control-Allow-Methods': 'GET,POST,OPTIONS',
+ 'Access-Control-Allow-Headers': 'Content-Type, Authorization',
+ },
});
async usage() {
  return {
    used: 0,
    total: 0,
  };
}

async models(): Promise<LLMModel[]> {
  return [];
}
🛠️ Refactor suggestion
Implement the usage and models methods or handle them appropriately.
The usage and models methods currently return default values or empty arrays. If the GLM API provides usage statistics or model listings, consider implementing these methods to reflect accurate information. Otherwise, handle these methods appropriately to avoid confusion.
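For instance, models() could surface the statically known GLM model list instead of an empty array. A sketch with simplified stand-ins for the real LLMModel type and glmModels constant (both of which actually live in app/constant.ts and app/client/api.ts):

```typescript
// Simplified stand-ins; the real definitions live in app/constant.ts and
// app/client/api.ts.
interface LLMModel {
  name: string;
  available: boolean;
  provider: { id: string; providerName: string; providerType: string };
}

const glmModels = ["glm-4-plus", "glm-4", "glm-4-air", "glm-4-flash"];

function listGlmModels(): LLMModel[] {
  return glmModels.map((name) => ({
    name,
    available: true,
    provider: { id: "glm", providerName: "ChatGLM", providerType: "glm" },
  }));
}
```

Usage could stay at zeros until the API exposes a usage endpoint, but returning real model names keeps the model picker accurate.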
It is recommended to rename GLM to ChatGLM (this name is shown in brackets in the model selection list; a bare GLM may not be recognized).
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
app/utils.ts (1)
281-282
: LGTM! Consider renaming GLM to ChatGLM.
The implementation correctly adds GLM support for plugins, following the established pattern. However, based on the PR comments, consider using "ChatGLM" instead of "GLM" in the model selection list for better user recognition.
app/constant.ts (outdated)
@@ -118,7 +118,7 @@ export enum ServiceProvider {
   Stability = "Stability",
   Iflytek = "Iflytek",
   XAI = "XAI",
-  GLM = "GLM",
+  GLM = "ChatGLM",
Shouldn't this be changed here as well?
Actionable comments posted: 11
🧹 Outside diff range and nitpick comments (3)
app/client/platforms/glm.ts (3)
53-53
: Consider using debug logging.
Replace console.log with a proper logging utility that can be disabled in production.
- console.log("[Proxy Endpoint] ", baseUrl, path);
+ logger.debug("[Proxy Endpoint]", { baseUrl, path });
62-64
: Enhance error message for unsupported feature.
The error message could be more informative about the unsupported feature.
- throw new Error("Method not implemented.");
+ throw new Error("Speech synthesis is not supported by the ChatGLM API.");
92-92
: Consider using debug logging.
Replace console.log with a proper logging utility.
- console.log("[Request] glm payload: ", requestPayload);
+ logger.debug("[Request] glm payload:", requestPayload);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (12)
app/api/[provider]/[...path]/route.ts
(2 hunks)app/api/auth.ts
(1 hunks)app/api/glm.ts
(1 hunks)app/client/api.ts
(6 hunks)app/client/platforms/glm.ts
(1 hunks)app/components/settings.tsx
(3 hunks)app/config/server.ts
(3 hunks)app/constant.ts
(7 hunks)app/locales/cn.ts
(1 hunks)app/locales/en.ts
(1 hunks)app/store/access.ts
(5 hunks)app/utils.ts
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- app/api/[provider]/[...path]/route.ts
- app/api/auth.ts
- app/config/server.ts
- app/locales/cn.ts
- app/locales/en.ts
- app/utils.ts
🧰 Additional context used
🪛 Biome
app/components/settings.tsx
[error] 1255-1255: The assignment should not be in an expression.
The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.
(lint/suspicious/noAssignInExpressions)
[error] 1271-1271: The assignment should not be in an expression.
The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.
(lint/suspicious/noAssignInExpressions)
🔇 Additional comments (20)
app/api/glm.ts (2)
1-13
: LGTM: Imports and configuration are well-structured.
The necessary dependencies, configurations, and types are properly imported and initialized.
60-65
: Consider reducing the timeout duration.
A 10-minute timeout is quite long and could lead to resource exhaustion under load. Consider reducing it to a more reasonable duration (e.g., 2-3 minutes) unless there's a specific requirement for long-running requests.
Let's check if there are any comments or discussions about the timeout duration:
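A minimal sketch of the timeout wiring with a shorter duration (the constant name and helper are illustrative; the handler presumably already uses AbortController in a similar way):

```typescript
// Illustrative: 3 minutes instead of 10. Clearing the timer in finally
// keeps it from firing, or keeping the event loop busy, after the
// request settles.
const REQUEST_TIMEOUT_MS = 3 * 60 * 1000;

async function fetchWithTimeout(url: string, init: object = {}) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), REQUEST_TIMEOUT_MS);
  try {
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timeoutId);
  }
}
```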
app/client/platforms/glm.ts (1)
58-60
: LGTM!
The message extraction is implemented correctly with proper null handling.
app/store/access.ts (4)
17-17
: LGTM: URL configuration follows established patterns
The ChatGLM URL configuration is properly implemented, following the same pattern as other providers with appropriate conditional logic based on the build mode.
Also applies to: 51-51
114-117
: LGTM: Default state configuration is consistent
The ChatGLM configuration in DEFAULT_ACCESS_STATE follows the established pattern with proper initialization of URL and API key properties.
210-210
: LGTM: Authorization check properly updated
The ChatGLM validation is correctly integrated into the authorization chain, maintaining consistency with other providers.
Lines 114-117: Verify property name consistency across components
Let's ensure that the property names (`chatglmUrl` and `chatglmApiKey`) are consistently used across all related components.
Also applies to: 190-192
✅ Verification successful
Property names are consistently used across the codebase
The verification shows that `chatglmUrl` and `chatglmApiKey` are used consistently across all relevant components:
- Configuration (app/config/server.ts)
- Store (app/store/access.ts)
- API clients (app/api/glm.ts, app/client/platforms/glm.ts)
- UI components (app/components/settings.tsx)
- Authentication (app/api/auth.ts)
No inconsistencies or variations in property naming were found.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify consistent usage of ChatGLM property names across the codebase

# Test 1: Check for chatglmUrl usage
echo "Checking chatglmUrl usage:"
rg "chatglmUrl" --type ts

# Test 2: Check for chatglmApiKey usage
echo "Checking chatglmApiKey usage:"
rg "chatglmApiKey" --type ts

# Test 3: Check for any variations that might indicate inconsistency
echo "Checking for potential inconsistent naming:"
rg -i "glm.*(url|key|token)" --type ts
```
Length of output: 3428
app/client/api.ts (4)
Line 24: LGTM: Import follows established patterns
The ChatGLMApi import follows the existing pattern and is properly organized with other platform-specific imports.
Lines 160-162: LGTM: ChatGLM case properly integrated
The ChatGLM case is correctly implemented in the ClientApi constructor, following the established pattern of other providers.
Lines 349-350: LGTM: Proper provider mapping
The ChatGLM case in getClientApi correctly maps the service provider to the model provider, maintaining consistency with other implementations.
Line 24: Verify ChatGLMApi implementation
Let's verify the ChatGLMApi implementation to ensure it properly implements the LLMApi interface.
✅ Verification successful
ChatGLMApi implementation verified successfully
The ChatGLMApi class correctly implements the LLMApi interface with all required methods:
- `chat()`: Implements chat functionality with proper streaming and tool support
- `speech()`: Implements with a "not implemented" error as expected
- `usage()`: Returns basic usage stats
- `models()`: Returns an empty model list (matches `disableListModels = true`)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify ChatGLMApi implementation exists and implements required methods

# Check if ChatGLMApi file exists and contains required method implementations
ast-grep --pattern 'class ChatGLMApi {
  $$$
  chat($$$) {
    $$$
  }
  $$$
  speech($$$) {
    $$$
  }
  $$$
  usage($$$) {
    $$$
  }
  $$$
  models($$$) {
    $$$
  }
  $$$
}'
```
Length of output: 182
Script:
```shell
#!/bin/bash
# Let's first locate the ChatGLMApi file
fd "glm.ts"
# Then examine its content
rg "class ChatGLMApi" -A 5
# Also check for any interface implementations
rg "implements.*Api" -A 2
# Look for the base interface definition
rg "interface.*Api" -A 10
```
Length of output: 2698
Script:
```shell
#!/bin/bash
# Let's examine the ChatGLMApi implementation in detail
cat app/client/platforms/glm.ts
# Also look for the LLMApi interface definition
rg "interface LLMApi" -A 15
```
Length of output: 5501
Script:
```shell
#!/bin/bash
# Let's find the LLMApi interface definition in the api.ts file
cat app/client/api.ts
```
Length of output: 9501
app/constant.ts (5)
Line 33: LGTM: Base URL constant follows conventions
The `CHATGLM_BASE_URL` constant is correctly defined and follows the same pattern as other provider base URLs in the file.
Line 67: LGTM: API path follows consistent pattern
The `ChatGLM` API path follows the same naming convention as other providers in the enum.
Line 121: LGTM: Provider enums use the recommended "ChatGLM" name
The enums correctly use "ChatGLM" instead of "GLM" as suggested in the PR comments, maintaining consistency with the provider's recognized name.
Also applies to: 145-145
Lines 233-236: LGTM: ChatGLM configuration follows standard structure
The configuration object correctly defines the endpoint and chat path according to ChatGLM's API specifications.
Lines 523-533: LGTM: Model configuration follows standard pattern
The ChatGLM models are correctly mapped to the standard configuration structure, with appropriate provider details and sorting.
app/components/settings.tsx (4)
Line 75: LGTM: Import follows established pattern.
The ChatGLM import is correctly placed with other provider imports.
Lines 1238-1277: LGTM: ChatGLM configuration follows established patterns.
The implementation:
- Maintains consistency with other provider configurations
- Properly handles sensitive information using PasswordInput
- Includes appropriate accessibility attributes
- Uses "ChatGLM" branding as suggested in PR comments
Line 1738: LGTM: ChatGLM component properly integrated.
The ChatGLM configuration is correctly integrated into the settings UI alongside other providers.
Line 75: Verify complete ChatGLM integration.
The settings UI implementation looks good, but let's verify the complete ChatGLM integration:
Also applies to: 1238-1277, 1738-1738
✅ Verification successful
ChatGLM integration appears complete and well-structured
The verification shows a complete ChatGLM integration with all necessary components in place:
- Constants and endpoints defined in `app/constant.ts`
- API implementation in `app/client/platforms/glm.ts`
- Store configuration in `app/store/access.ts` with URL handling and validation
- UI integration in the settings component
- Proper API client initialization in `app/client/api.ts`
The implementation follows the same pattern as other model integrations in the codebase, with all required components properly connected.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify ChatGLM integration across the codebase
# Test 1: Check for ChatGLM constant definition
echo "Checking ChatGLM constant..."
rg -p "ChatGLM.*ExampleEndpoint"
# Test 2: Check for ChatGLM API handler
echo "Checking ChatGLM API handler..."
rg -p "ChatGLMApi"
# Test 3: Check for ChatGLM localization
echo "Checking ChatGLM localization..."
rg -p "ChatGLM.*Title|ChatGLM.*SubTitle" "app/locales/"
Length of output: 628
Script:
```shell
#!/bin/bash
# Let's verify additional ChatGLM integration aspects

# Check for ChatGLM model configuration and constants
echo "Checking ChatGLM model configuration..."
rg -p "export const ChatGLM" -A 5
# Check for ChatGLM implementation details
echo "Checking ChatGLM implementation..."
rg -p "class ChatGLMApi" -A 10
# Check for ChatGLM in language files
echo "Checking ChatGLM translations..."
fd -e json . app/locales/ -x grep -l "ChatGLM"
# Check for any ChatGLM-related types
echo "Checking ChatGLM types..."
rg -p "type.*ChatGLM|interface.*ChatGLM"
# Check for ChatGLM in store/config
echo "Checking ChatGLM in store/config..."
rg -p "ChatGLM" "app/store/"
```
Length of output: 1295
```ts
  } catch (e) {
    console.error(`[GLM] filter`, e);
  }
}
```
Improve error handling in model validation.
The catch block silently logs errors from JSON parsing or model validation. These should be properly handled as they indicate invalid requests.
Apply this diff:
```diff
   } catch (e) {
     console.error(`[GLM] filter`, e);
+    return NextResponse.json(
+      {
+        error: true,
+        message: "Invalid request format",
+      },
+      {
+        status: 400,
+      }
+    );
   }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
  } catch (e) {
    console.error(`[GLM] filter`, e);
    return NextResponse.json(
      {
        error: true,
        message: "Invalid request format",
      },
      {
        status: 400,
      },
    );
  }
}
```
```ts
try {
  const response = await request(req);
  return response;
} catch (e) {
```
Add error cleanup in catch block.
The try-catch block should clean up any resources (like response streams) in case of an error to prevent memory leaks.
Apply this diff:
```diff
   try {
     const response = await request(req);
     return response;
   } catch (e) {
+    if (response?.body) {
+      try {
+        await response.body.cancel();
+      } catch {} // Ignore cleanup errors
+    }
     console.error("[GLM] ", e);
     return NextResponse.json(prettyObject(e));
   }
```
Committable suggestion skipped: line range outside the PR's diff.
```ts
let baseUrl = serverConfig.chatglmUrl || CHATGLM_BASE_URL;

if (!baseUrl.startsWith("http")) {
  baseUrl = `https://${baseUrl}`;
}
```
Add URL validation for security.
The base URL construction should validate the URL to prevent potential SSRF attacks.
Apply this diff:
```diff
+function isValidUrl(url: string): boolean {
+  try {
+    const parsedUrl = new URL(url);
+    return ['http:', 'https:'].includes(parsedUrl.protocol);
+  } catch {
+    return false;
+  }
+}
 let baseUrl = serverConfig.chatglmUrl || CHATGLM_BASE_URL;
 if (!baseUrl.startsWith("http")) {
   baseUrl = `https://${baseUrl}`;
 }
+if (!isValidUrl(baseUrl)) {
+  throw new Error("Invalid base URL configuration");
+}
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
function isValidUrl(url: string): boolean {
  try {
    const parsedUrl = new URL(url);
    return ["http:", "https:"].includes(parsedUrl.protocol);
  } catch {
    return false;
  }
}

let baseUrl = serverConfig.chatglmUrl || CHATGLM_BASE_URL;
if (!baseUrl.startsWith("http")) {
  baseUrl = `https://${baseUrl}`;
}
if (!isValidUrl(baseUrl)) {
  throw new Error("Invalid base URL configuration");
}
```
```ts
if (
  isModelAvailableInServer(
    serverConfig.customModels,
    jsonBody?.model as string,
    ServiceProvider.ChatGLM as string,
  )
) {
  return NextResponse.json(
    {
      error: true,
      message: `you are not allowed to use ${jsonBody?.model} model`,
    },
    {
      status: 403,
    },
  );
}
```
Fix inverted logic in model availability check.
The condition for returning a 403 response is incorrect. It returns 403 when the model IS available, which is the opposite of what's intended.
Apply this diff:
```diff
   if (
-    isModelAvailableInServer(
+    !isModelAvailableInServer(
       serverConfig.customModels,
       jsonBody?.model as string,
       ServiceProvider.ChatGLM as string,
     )
   ) {
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (
  !isModelAvailableInServer(
    serverConfig.customModels,
    jsonBody?.model as string,
    ServiceProvider.ChatGLM as string,
  )
) {
  return NextResponse.json(
    {
      error: true,
      message: `you are not allowed to use ${jsonBody?.model} model`,
    },
    {
      status: 403,
    },
  );
}
```
```ts
if (shouldStream) {
  const [tools, funcs] = usePluginStore
    .getState()
    .getAsTools(
      useChatStore.getState().currentSession().mask?.plugin || [],
    );
  return stream(
    chatPath,
    requestPayload,
    getHeaders(),
    tools as any,
    funcs,
    controller,
    // parseSSE
    (text: string, runTools: ChatMessageTool[]) => {
      // console.log("parseSSE", text, runTools);
      const json = JSON.parse(text);
      const choices = json.choices as Array<{
        delta: {
          content: string;
          tool_calls: ChatMessageTool[];
        };
      }>;
      const tool_calls = choices[0]?.delta?.tool_calls;
      if (tool_calls?.length > 0) {
        const index = tool_calls[0]?.index;
        const id = tool_calls[0]?.id;
        const args = tool_calls[0]?.function?.arguments;
        if (id) {
          runTools.push({
            id,
            type: tool_calls[0]?.type,
            function: {
              name: tool_calls[0]?.function?.name as string,
              arguments: args,
            },
          });
        } else {
          // @ts-ignore
          runTools[index]["function"]["arguments"] += args;
        }
      }
      return choices[0]?.delta?.content;
    },
```
🛠️ Refactor suggestion
Consider extracting stream handling logic.
The stream handling logic is complex and could be extracted into a separate method for better maintainability.
Consider creating a separate method like `handleStreamResponse` to encapsulate this logic.
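As a sketch of that extraction, the parsing portion of the callback can become a standalone function. The name `parseSseChunk` and the simplified `ToolCall` type below are illustrative stand-ins, not code from the PR:

```typescript
// Simplified stand-in for the ChatMessageTool type used in the PR.
interface ToolCall {
  index?: number;
  id?: string;
  type?: string;
  function?: { name?: string; arguments?: string };
}

// Parses one SSE data chunk: pushes/extends tool calls in `runTools`
// and returns this chunk's text delta, if any.
function parseSseChunk(
  text: string,
  runTools: ToolCall[],
): string | undefined {
  const json = JSON.parse(text);
  const choices = json.choices as Array<{
    delta: { content?: string; tool_calls?: ToolCall[] };
  }>;
  const toolCalls = choices[0]?.delta?.tool_calls;
  if (toolCalls && toolCalls.length > 0) {
    const first = toolCalls[0];
    const args = first.function?.arguments ?? "";
    if (first.id) {
      // First chunk of a tool call carries the id, type, and name.
      runTools.push({
        id: first.id,
        type: first.type,
        function: { name: first.function?.name, arguments: args },
      });
    } else if (first.index !== undefined && runTools[first.index]?.function) {
      // Later chunks stream argument fragments; append them.
      const fn = runTools[first.index].function!;
      fn.arguments = (fn.arguments ?? "") + args;
    }
  }
  return choices[0]?.delta?.content;
}
```

A named function like this can be unit-tested in isolation and reused by other OpenAI-compatible providers that share the same delta format.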
```ts
if (!baseUrl.startsWith("http") && !baseUrl.startsWith(ApiPath.ChatGLM)) {
  baseUrl = "https://" + baseUrl;
}
```
Enhance URL validation.
The current URL validation is basic. Consider using a URL validation utility to handle edge cases and ensure security.
```diff
-if (!baseUrl.startsWith("http") && !baseUrl.startsWith(ApiPath.ChatGLM)) {
-  baseUrl = "https://" + baseUrl;
-}
+try {
+  const url = new URL(baseUrl);
+  if (!['http:', 'https:'].includes(url.protocol)) {
+    baseUrl = `https://${baseUrl}`;
+  }
+} catch {
+  if (!baseUrl.startsWith(ApiPath.ChatGLM)) {
+    baseUrl = `https://${baseUrl}`;
+  }
+}
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
try {
  const url = new URL(baseUrl);
  if (!["http:", "https:"].includes(url.protocol)) {
    baseUrl = `https://${baseUrl}`;
  }
} catch {
  if (!baseUrl.startsWith(ApiPath.ChatGLM)) {
    baseUrl = `https://${baseUrl}`;
  }
}
```
```ts
const isApp = !!getClientConfig()?.isApp;
const apiPath = ApiPath.ChatGLM;
baseUrl = isApp ? CHATGLM_BASE_URL : apiPath;
```
🛠️ Refactor suggestion
Consider extracting app detection logic.
The app detection logic could be moved to a utility function for better reusability and testability.
```diff
-const isApp = !!getClientConfig()?.isApp;
-const apiPath = ApiPath.ChatGLM;
-baseUrl = isApp ? CHATGLM_BASE_URL : apiPath;
+const isApp = isAppEnvironment(); // New utility function
+baseUrl = isApp ? CHATGLM_BASE_URL : ApiPath.ChatGLM;
```
Committable suggestion skipped: line range outside the PR's diff.
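A minimal sketch of the suggested utility follows. `isAppEnvironment` is a name proposed in the review, not an existing NextChat function, and the config accessor is injected here purely to keep the helper testable:

```typescript
// Hypothetical shape of the client config; only the flag we need.
interface ClientConfig {
  isApp?: boolean;
}

// Mirrors the original `!!getClientConfig()?.isApp` expression as a
// named, reusable predicate. Injecting the accessor keeps it testable.
function isAppEnvironment(
  getConfig: () => ClientConfig | undefined,
): boolean {
  return !!getConfig()?.isApp;
}
```

Call sites would pass the real accessor, e.g. `isAppEnvironment(getClientConfig)`, so every platform file shares one definition of "running as an app".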
```ts
isValidChatGLM() {
  return ensure(get(), ["chatglmApiKey"]);
},
```
🛠️ Refactor suggestion
Consider validating the ChatGLM URL
While the current implementation follows the pattern of other providers, consider whether the URL should also be validated to ensure proper configuration, especially in custom deployment scenarios.
```diff
 isValidChatGLM() {
-  return ensure(get(), ["chatglmApiKey"]);
+  return ensure(get(), ["chatglmApiKey", "chatglmUrl"]);
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
isValidChatGLM() {
  return ensure(get(), ["chatglmApiKey", "chatglmUrl"]);
},
```
```diff
@@ -244,6 +248,7 @@ export function getHeaders(ignoreHeaders: boolean = false) {
   const isMoonshot = modelConfig.providerName === ServiceProvider.Moonshot;
   const isIflytek = modelConfig.providerName === ServiceProvider.Iflytek;
   const isXAI = modelConfig.providerName === ServiceProvider.XAI;
+  const isChatGLM = modelConfig.providerName === ServiceProvider.ChatGLM;
```
🛠️ Refactor suggestion
Consider refactoring the nested ternary operators
While the ChatGLM integration is functionally correct, the nested ternary operators for API key selection are becoming increasingly complex and harder to maintain. Consider refactoring this logic to improve readability.
Here's a suggested refactor using a more maintainable approach:
```ts
function getApiKeyForProvider(
  modelConfig: ModelConfig,
  accessStore: AccessStore,
): string {
  const providerMap = {
    [ServiceProvider.Google]: accessStore.googleApiKey,
    [ServiceProvider.Azure]: accessStore.azureApiKey,
    [ServiceProvider.Anthropic]: accessStore.anthropicApiKey,
    [ServiceProvider.ByteDance]: accessStore.bytedanceApiKey,
    [ServiceProvider.Alibaba]: accessStore.alibabaApiKey,
    [ServiceProvider.Moonshot]: accessStore.moonshotApiKey,
    [ServiceProvider.XAI]: accessStore.xaiApiKey,
    [ServiceProvider.ChatGLM]: accessStore.chatglmApiKey,
    [ServiceProvider.Iflytek]:
      accessStore.iflytekApiKey && accessStore.iflytekApiSecret
        ? `${accessStore.iflytekApiKey}:${accessStore.iflytekApiSecret}`
        : "",
    // Default to OpenAI
    default: accessStore.openaiApiKey,
  };
  return providerMap[modelConfig.providerName] ?? providerMap.default;
}
```
This approach:
- Improves readability and maintainability
- Makes it easier to add new providers
- Reduces the complexity of nested ternaries
- Makes the provider-key mapping explicit
Also applies to: 267-268, 284-284
```ts
const chatglmModels = [
  "glm-4-plus",
  "glm-4-0520",
  "glm-4",
  "glm-4-air",
  "glm-4-airx",
  "glm-4-long",
  "glm-4-flashx",
  "glm-4-flash",
];
```
Add knowledge cutoff dates for ChatGLM models
The ChatGLM models are missing from the `KnowledgeCutOffDate` record. This information is needed to display the correct knowledge cutoff date in the chat interface.
Add the following entries to the `KnowledgeCutOffDate` record:
```diff
 export const KnowledgeCutOffDate: Record<string, string> = {
   default: "2021-09",
+  "glm-4": "2023-12",
+  "glm-4-plus": "2023-12",
+  "glm-4-0520": "2023-12",
+  "glm-4-air": "2023-12",
+  "glm-4-airx": "2023-12",
+  "glm-4-long": "2023-12",
+  "glm-4-flashx": "2023-12",
+  "glm-4-flash": "2023-12",
   // ... existing entries
 };
```
Committable suggestion skipped: line range outside the PR's diff.
💻 变更类型 | Change Type
🔀 变更说明 | Description of Change
Support GLM models #5714
📝 补充信息 | Additional Information
Summary by CodeRabbit
Release Notes
New Features
Documentation
Bug Fixes
These updates enhance the application's capabilities, providing users with new options for utilizing the ChatGLM model effectively.