Feature/chatglm #5995

Closed · wants to merge 8 commits
4 changes: 2 additions & 2 deletions README.md
@@ -32,7 +32,7 @@ One-Click to get a well-designed cross-platform ChatGPT web UI, with Claude, GPT
[MacOS-image]: https://img.shields.io/badge/-MacOS-black?logo=apple
[Linux-image]: https://img.shields.io/badge/-Linux-333?logo=ubuntu

-[<img src="https://vercel.com/button" alt="Deploy on Vercel" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) [<img src="https://img.shields.io/badge/BT_Deploy-Install-20a53a" alt="BT Deply Install" height="30">](https://www.bt.cn/new/download.html)
+[<img src="https://vercel.com/button" alt="Deploy on Vercel" height="30">](https://vercel.com/new/clone?repository-url=https://github.com/Dogtiti/ChatGPT-Next-Web-EarlyBird&env=OPENAI_API_KEY&env=CODE&project-name=nextchat-earlyBird&repository-name=NextChat-EarlyBird) [<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web) [<img src="https://img.shields.io/badge/BT_Deploy-Install-20a53a" alt="BT Deply Install" height="30">](https://www.bt.cn/new/download.html)

[<img src="https://github.com/user-attachments/assets/903482d4-3e87-4134-9af1-f2588fa90659" height="60" width="288" >](https://monica.im/?utm=nxcrp)

@@ -477,7 +477,7 @@ If you want to add a new translation, read this [document](./docs/translation.md

## Donation

-[Buy Me a Coffee](https://www.buymeacoffee.com/yidadaa)
+[Buy Me a Coffee](https://1kafei.com/dogtiti)

## Special Thanks

2 changes: 1 addition & 1 deletion README_CN.md
@@ -33,7 +33,7 @@

1. Prepare your [OpenAI API Key](https://platform.openai.com/account/api-keys);
2. Click the button on the right to start deployment:
-[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&env=GOOGLE_API_KEY&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web), then sign in with your GitHub account and remember to fill in the API Key and the [page access password](#配置页面访问密码) CODE on the environment variables page;
+[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https://github.com/Dogtiti/ChatGPT-Next-Web-EarlyBird&env=OPENAI_API_KEY&env=CODE&project-name=nextchat-earlyBird&repository-name=NextChat-EarlyBird), then sign in with your GitHub account and remember to fill in the API Key and the [page access password](#配置页面访问密码) CODE on the environment variables page;
3. After deployment completes, you can start using it;
4. (Optional) [Bind a custom domain](https://vercel.com/docs/concepts/projects/domains/add-a-domain): the DNS of the domain assigned by Vercel is polluted in some regions; binding a custom domain enables a direct connection.

2 changes: 1 addition & 1 deletion README_JA.md
@@ -30,7 +30,7 @@

1. Prepare your [OpenAI API Key](https://platform.openai.com/account/api-keys);
2. Click the button on the right to start deployment:
-[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&env=GOOGLE_API_KEY&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web), sign in directly with your GitHub account, then enter the API Key and the [page access password](#設定ページアクセスパスワード) CODE on the environment variables page;
+[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https://github.com/Dogtiti/ChatGPT-Next-Web-EarlyBird&env=OPENAI_API_KEY&env=CODE&project-name=nextchat-earlyBird&repository-name=NextChat-EarlyBird), sign in directly with your GitHub account, then enter the API Key and the [page access password](#設定ページアクセスパスワード) CODE on the environment variables page;
3. Once deployment is complete, you can start using it;
4. (Optional) [Bind a custom domain](https://vercel.com/docs/concepts/projects/domains/add-a-domain): the DNS of the domain assigned by Vercel is polluted in some regions; binding a custom domain allows a direct connection.

131 changes: 112 additions & 19 deletions app/client/platforms/glm.ts
@@ -25,12 +25,103 @@ import { getMessageTextContent } from "@/app/utils";
import { RequestPayload } from "./openai";
import { fetch } from "@/app/utils/stream";

+interface BasePayload {
+  model: string;
+}
+
+interface ChatPayload extends BasePayload {
+  messages: ChatOptions["messages"];
+  stream?: boolean;
+  temperature?: number;
+  presence_penalty?: number;
+  frequency_penalty?: number;
+  top_p?: number;
+}
+
+interface ImageGenerationPayload extends BasePayload {
+  prompt: string;
+  size?: string;
+  user_id?: string;
+}
+
+interface VideoGenerationPayload extends BasePayload {
+  prompt: string;
+  duration?: number;
+  resolution?: string;
+  user_id?: string;
+}
+
+type ModelType = "chat" | "image" | "video";
+
export class ChatGLMApi implements LLMApi {
  private disableListModels = true;

+  private getModelType(model: string): ModelType {
+    if (model.startsWith("cogview-")) return "image";
+    if (model.startsWith("cogvideo-")) return "video";
+    return "chat";
+  }
+
+  private getModelPath(type: ModelType): string {
+    switch (type) {
+      case "image":
+        return ChatGLM.ImagePath;
+      case "video":
+        return ChatGLM.VideoPath;
+      default:
+        return ChatGLM.ChatPath;
+    }
+  }
+
+  private createPayload(
+    messages: ChatOptions["messages"],
+    modelConfig: any,
+    options: ChatOptions,
+  ): BasePayload {
+    const modelType = this.getModelType(modelConfig.model);
+    const lastMessage = messages[messages.length - 1];
+    const prompt =
+      typeof lastMessage.content === "string"
+        ? lastMessage.content
+        : lastMessage.content.map((c) => c.text).join("\n");
+
+    switch (modelType) {
+      case "image":
+        return {
+          model: modelConfig.model,
+          prompt,
+          size: options.config.size,
+        } as ImageGenerationPayload;
+      default:
+        return {
+          messages,
+          stream: options.config.stream,
+          model: modelConfig.model,
+          temperature: modelConfig.temperature,
+          presence_penalty: modelConfig.presence_penalty,
+          frequency_penalty: modelConfig.frequency_penalty,
+          top_p: modelConfig.top_p,
+        } as ChatPayload;
+    }
+  }
+
+  private parseResponse(modelType: ModelType, json: any): string {
+    switch (modelType) {
+      case "image": {
+        const imageUrl = json.data?.[0]?.url;
+        return imageUrl ? `![Generated Image](${imageUrl})` : "";
+      }
+      case "video": {
+        const videoUrl = json.data?.[0]?.url;
+        return videoUrl ? `<video controls src="${videoUrl}"></video>` : "";
+      }
+      default:
+        return this.extractMessage(json);
+    }
+  }

  path(path: string): string {
    const accessStore = useAccessStore.getState();

    let baseUrl = "";

    if (accessStore.useCustomConfig) {
@@ -51,7 +142,6 @@ export class ChatGLMApi implements LLMApi {
    }

    console.log("[Proxy Endpoint] ", baseUrl, path);
-
    return [baseUrl, path].join("/");
  }

@@ -79,53 +169,55 @@
      },
    };

-    const requestPayload: RequestPayload = {
-      messages,
-      stream: options.config.stream,
-      model: modelConfig.model,
-      temperature: modelConfig.temperature,
-      presence_penalty: modelConfig.presence_penalty,
-      frequency_penalty: modelConfig.frequency_penalty,
-      top_p: modelConfig.top_p,
-    };
+    const modelType = this.getModelType(modelConfig.model);
+    const requestPayload = this.createPayload(messages, modelConfig, options);
+    const path = this.path(this.getModelPath(modelType));

-    console.log("[Request] glm payload: ", requestPayload);
+    console.log(`[Request] glm ${modelType} payload: `, requestPayload);

-    const shouldStream = !!options.config.stream;
    const controller = new AbortController();
    options.onController?.(controller);

    try {
-      const chatPath = this.path(ChatGLM.ChatPath);
      const chatPayload = {
        method: "POST",
        body: JSON.stringify(requestPayload),
        signal: controller.signal,
        headers: getHeaders(),
      };

-      // make a fetch request
      const requestTimeoutId = setTimeout(
        () => controller.abort(),
        REQUEST_TIMEOUT_MS,
      );

+      if (modelType === "image" || modelType === "video") {
+        const res = await fetch(path, chatPayload);
+        clearTimeout(requestTimeoutId);
+
+        const resJson = await res.json();
+        console.log(`[Response] glm ${modelType}:`, resJson);
+        const message = this.parseResponse(modelType, resJson);
+        options.onFinish(message, res);
+        return;
+      }
+
+      const shouldStream = !!options.config.stream;
      if (shouldStream) {
        const [tools, funcs] = usePluginStore
          .getState()
          .getAsTools(
            useChatStore.getState().currentSession().mask?.plugin || [],
          );
        return stream(
-          chatPath,
+          path,
          requestPayload,
          getHeaders(),
          tools as any,
          funcs,
          controller,
          // parseSSE
          (text: string, runTools: ChatMessageTool[]) => {
-            // console.log("parseSSE", text, runTools);
            const json = JSON.parse(text);
            const choices = json.choices as Array<{
              delta: {
@@ -154,7 +246,7 @@
            }
            return choices[0]?.delta?.content;
          },
-          // processToolMessage, include tool_calls message and tool call results
+          // processToolMessage
          (
            requestPayload: RequestPayload,
            toolCallMessage: any,
@@ -172,7 +264,7 @@
            options,
          );
        } else {
-        const res = await fetch(chatPath, chatPayload);
+        const res = await fetch(path, chatPayload);
        clearTimeout(requestTimeoutId);

        const resJson = await res.json();
@@ -184,6 +276,7 @@
      options.onError?.(e as Error);
    }
  }
+
  async usage() {
    return {
      used: 0,
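The new routing in glm.ts keys everything off the model-name prefix: `cogview-*` models go to the image endpoint, `cogvideo-*` to the video endpoint, and everything else to chat completions; image and video requests skip the SSE streaming path and resolve in a single fetch whose JSON result is converted to markdown before `onFinish` fires. A minimal standalone sketch of that dispatch (the hardcoded base URL is an illustrative assumption; the app resolves it from the access store at runtime):

```ts
// Sketch of the prefix-based endpoint dispatch used by ChatGLMApi above.
type ModelType = "chat" | "image" | "video";

// Paths mirror the ChatGLM constants added in app/constant.ts.
const MODEL_PATHS: Record<ModelType, string> = {
  chat: "api/paas/v4/chat/completions",
  image: "api/paas/v4/images/generations",
  video: "api/paas/v4/videos/generations",
};

function getModelType(model: string): ModelType {
  if (model.startsWith("cogview-")) return "image";
  if (model.startsWith("cogvideo-")) return "video";
  return "chat";
}

// The default base URL here is assumed for illustration only.
function endpointFor(
  model: string,
  baseUrl = "https://open.bigmodel.cn",
): string {
  return [baseUrl, MODEL_PATHS[getModelType(model)]].join("/");
}

console.log(endpointFor("cogview-3-flash"));
// -> https://open.bigmodel.cn/api/paas/v4/images/generations
```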
4 changes: 2 additions & 2 deletions app/client/platforms/openai.ts
@@ -24,7 +24,7 @@ import {
stream,
} from "@/app/utils/chat";
import { cloudflareAIGatewayUrl } from "@/app/utils/cloudflare";
-import { DalleSize, DalleQuality, DalleStyle } from "@/app/typing";
+import { ModelSize, DalleQuality, DalleStyle } from "@/app/typing";

import {
ChatOptions,
@@ -73,7 +73,7 @@ export interface DalleRequestPayload {
  prompt: string;
  response_format: "url" | "b64_json";
  n: number;
-  size: DalleSize;
+  size: ModelSize;
  quality: DalleQuality;
  style: DalleStyle;
}
13 changes: 8 additions & 5 deletions app/components/chat.tsx
@@ -72,14 +72,16 @@ import {
  isDalle3,
  showPlugins,
  safeLocalStorage,
+  getModelSizes,
+  supportsCustomSize,
} from "../utils";

import { uploadImage as uploadImageRemote } from "@/app/utils/chat";

import dynamic from "next/dynamic";

import { ChatControllerPool } from "../client/controller";
-import { DalleSize, DalleQuality, DalleStyle } from "../typing";
+import { DalleQuality, DalleStyle, ModelSize } from "../typing";
import { Prompt, usePromptStore } from "../store/prompt";
import Locale from "../locales";

@@ -519,10 +521,11 @@ export function ChatActions(props: {
  const [showSizeSelector, setShowSizeSelector] = useState(false);
  const [showQualitySelector, setShowQualitySelector] = useState(false);
  const [showStyleSelector, setShowStyleSelector] = useState(false);
-  const dalle3Sizes: DalleSize[] = ["1024x1024", "1792x1024", "1024x1792"];
+  const modelSizes = getModelSizes(currentModel);
  const dalle3Qualitys: DalleQuality[] = ["standard", "hd"];
  const dalle3Styles: DalleStyle[] = ["vivid", "natural"];
-  const currentSize = session.mask.modelConfig?.size ?? "1024x1024";
+  const currentSize =
+    session.mask.modelConfig?.size ?? ("1024x1024" as ModelSize);
  const currentQuality = session.mask.modelConfig?.quality ?? "standard";
  const currentStyle = session.mask.modelConfig?.style ?? "vivid";

@@ -673,7 +676,7 @@ export function ChatActions(props: {
        />
      )}

-      {isDalle3(currentModel) && (
+      {supportsCustomSize(currentModel) && (
        <ChatAction
          onClick={() => setShowSizeSelector(true)}
          text={currentSize}
@@ -684,7 +687,7 @@
      {showSizeSelector && (
        <Selector
          defaultSelectedValue={currentSize}
-          items={dalle3Sizes.map((m) => ({
+          items={modelSizes.map((m) => ({
            title: m,
            value: m,
          }))}
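The size selector now depends on two helpers imported from `../utils`, whose implementation is not shown in this diff. A plausible sketch consistent with their usage above (the exact bodies in `app/utils.ts` may differ):

```ts
import { ModelSize } from "./typing"; // import path as seen from app/utils.ts

// Assumed shape of the helpers imported by chat.tsx above.
const DALLE3_SIZES: ModelSize[] = ["1024x1024", "1792x1024", "1024x1792"];
const COGVIEW_SIZES: ModelSize[] = [
  "1024x1024",
  "768x1344",
  "864x1152",
  "1344x768",
  "1152x864",
  "1440x720",
  "720x1440",
];

export function getModelSizes(model: string): ModelSize[] {
  if (model === "dall-e-3") return DALLE3_SIZES;
  if (model.startsWith("cogview-")) return COGVIEW_SIZES;
  return [];
}

// Show the size picker only for models that declare at least one size.
export function supportsCustomSize(model: string): boolean {
  return getModelSizes(model).length > 0;
}
```

With this shape, DALL·E 3 keeps its three sizes, CogView models expose their own resolution list, and every other model hides the selector because `modelSizes` is empty.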
11 changes: 11 additions & 0 deletions app/constant.ts
@@ -233,6 +233,8 @@ export const XAI = {
export const ChatGLM = {
  ExampleEndpoint: CHATGLM_BASE_URL,
  ChatPath: "api/paas/v4/chat/completions",
+  ImagePath: "api/paas/v4/images/generations",
+  VideoPath: "api/paas/v4/videos/generations",
};

export const DEFAULT_INPUT_TEMPLATE = `{{input}}`; // input / time / model / lang
@@ -431,6 +433,15 @@ const chatglmModels = [
"glm-4-long",
"glm-4-flashx",
"glm-4-flash",
"glm-4v-plus",
"glm-4v",
"glm-4v-flash", // free
"cogview-3-plus",
"cogview-3",
"cogview-3-flash", // free
// 目前无法适配轮询任务
// "cogvideox",
// "cogvideox-flash", // free
];

let seq = 1000; // the built-in model sequence number generator starts at 1000
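The `cogvideox` entries remain commented out because ZhipuAI runs video generation as an asynchronous task: the create call returns a task id that must be polled until the result is ready, which the single request/response flow in glm.ts cannot yet express. A hedged sketch of what such a poll loop might look like; the endpoint and response field names follow ZhipuAI's async-result convention and are assumptions here, not part of this PR:

```ts
// Hypothetical polling loop for async video tasks; NOT implemented by this PR.
async function pollVideoResult(
  baseUrl: string,
  taskId: string,
  headers: Record<string, string>,
): Promise<string | undefined> {
  for (let attempt = 0; attempt < 60; attempt++) {
    // Assumed async-result endpoint and task_status/video_result fields.
    const res = await fetch(`${baseUrl}/api/paas/v4/async-result/${taskId}`, {
      headers,
    });
    const json = await res.json();
    if (json.task_status === "SUCCESS") return json.video_result?.[0]?.url;
    if (json.task_status === "FAIL") throw new Error("video generation failed");
    await new Promise((r) => setTimeout(r, 5000)); // wait between polls
  }
  throw new Error("timed out waiting for video result");
}
```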
4 changes: 2 additions & 2 deletions app/store/config.ts
@@ -1,5 +1,5 @@
import { LLMModel } from "../client/api";
-import { DalleSize, DalleQuality, DalleStyle } from "../typing";
+import { DalleQuality, DalleStyle, ModelSize } from "../typing";
import { getClientConfig } from "../config/client";
import {
DEFAULT_INPUT_TEMPLATE,
@@ -78,7 +78,7 @@ export const DEFAULT_CONFIG = {
    compressProviderName: "",
    enableInjectSystemPrompts: true,
    template: config?.template ?? DEFAULT_INPUT_TEMPLATE,
-    size: "1024x1024" as DalleSize,
+    size: "1024x1024" as ModelSize,
    quality: "standard" as DalleQuality,
    style: "vivid" as DalleStyle,
  },
11 changes: 11 additions & 0 deletions app/typing.ts
@@ -11,3 +11,14 @@ export interface RequestMessage {
export type DalleSize = "1024x1024" | "1792x1024" | "1024x1792";
export type DalleQuality = "standard" | "hd";
export type DalleStyle = "vivid" | "natural";

+export type ModelSize =
+  | "1024x1024"
+  | "1792x1024"
+  | "1024x1792"
+  | "768x1344"
+  | "864x1152"
+  | "1344x768"
+  | "1152x864"
+  | "1440x720"
+  | "720x1440";