
Commit
Merge pull request #847 from fuergaosi233/chatgpt-api
Merge chatgpt-api
RealTong authored Apr 2, 2023
2 parents 8cad1bf + 36849e3 commit 1ee9bb5
Showing 15 changed files with 121 additions and 41 deletions.
5 changes: 3 additions & 2 deletions .env.example
@@ -1,7 +1,8 @@
OPENAI_API_KEY=""
ENDPOINT="https://API.EXAMPLE.COM/v1"
OPENAI_API_KEY="sk-XXXXXXXXXXXXXXXXX"
MODEL="gpt-3.5-turbo"
CHAT_PRIVATE_TRIGGER_KEYWORD=
TEMPERATURE=
TEMPERATURE=0.6
BLOCK_WORDS="VPN"
CHATGPT_BLOCK_WORDS="VPN"
WECHATY_PUPPET=wechaty-puppet-wechat
22 changes: 11 additions & 11 deletions README.md
@@ -137,17 +137,17 @@ npm run dev

## 📝 Environment Variables

-| name | default | example | description |
-|------|---------|---------|-------------|
-| ~~API~~ | https://api.openai.com | | ~~API endpoint of ChatGPT~~ |
-| OPENAI_API_KEY | 123456789 | sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX | [create new secret key](https://platform.openai.com/account/api-keys) |
-| MODEL | gpt-3.5-turbo | | ID of the model to use. Currently, only gpt-3.5-turbo and gpt-3.5-turbo-0301 are supported. |
-| TEMPERATURE | 0.6 | | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. |
-| CHAT_TRIGGER_RULE | | | Private chat triggering rules. |
-| DISABLE_GROUP_MESSAGE | true | | Prohibited to use ChatGPT in group chat. |
-| CHAT_PRIVATE_TRIGGER_KEYWORD | | | Keyword to trigger ChatGPT reply in WeChat private chat |
-| BLOCK_WORDS | "VPN" | "WORD1,WORD2,WORD3" | Chat blocker words, (works for both private and group chats, Use, Split) |
-| CHATGPT_BLOCK_WORDS | "VPN" | "WORD1,WORD2,WORD3" | The blocked words returned by ChatGPT(works for both private and group chats, Use, Split) |
+| name | description |
+|------|-------------|
+| API | API endpoint of ChatGPT |
+| OPENAI_API_KEY | [create new secret key](https://platform.openai.com/account/api-keys) |
+| MODEL | ID of the model to use. Currently, only gpt-3.5-turbo and gpt-3.5-turbo-0301 are supported. |
+| TEMPERATURE | Sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. |
+| CHAT_TRIGGER_RULE | Private chat triggering rules. |
+| DISABLE_GROUP_MESSAGE | Disable ChatGPT in group chats. |
+| CHAT_PRIVATE_TRIGGER_KEYWORD | Keyword that triggers a ChatGPT reply in WeChat private chats |
+| BLOCK_WORDS | Chat blocking words (apply to both private and group chats; separate multiple words with `,`) |
+| CHATGPT_BLOCK_WORDS | Blocked words for ChatGPT replies (apply to both private and group chats; separate multiple words with `,`) |
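
The block-word variables above are plain comma-separated strings. A minimal TypeScript sketch of how such a value might be split and checked against an incoming message (the variable and function names here are illustrative, not part of this repository):

```ts
import "dotenv/config"; // load .env so process.env is populated

// Split a comma-separated value such as BLOCK_WORDS="WORD1,WORD2,WORD3" into a list.
const blockWords: string[] = (process.env.BLOCK_WORDS ?? "")
  .split(",")
  .map((word) => word.trim())
  .filter((word) => word.length > 0);

// True if the message contains any blocked word.
function containsBlockWord(message: string): boolean {
  return blockWords.some((word) => message.includes(word));
}

// Example: with BLOCK_WORDS="VPN", containsBlockWord("how do I set up a VPN") === true
```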

## 📝 Using Custom ChatGPT API

20 changes: 10 additions & 10 deletions README_ZH.md
@@ -142,17 +142,17 @@ npm run dev

## 📝 Environment Variables

-| name | default | example | description |
-|------|---------|---------|-------------|
-| ~~API~~ | https://api.openai.com | | ~~ChatGPT API endpoint~~ |
-| OPENAI_API_KEY | 123456789 | sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX | [create your API key](https://platform.openai.com/account/api-keys) |
-| MODEL | gpt-3.5-turbo | | ID of the model to use; currently only `gpt-3.5-turbo` and `gpt-3.5-turbo-0301` are supported |
-| TEMPERATURE | 0.6 | | Between 0 and 2. Higher values such as 0.8 make ChatGPT's output more random; lower values such as 0.2 make it more stable. |
-| CHAT_TRIGGER_RULE | | | Private chat trigger rule |
-| DISABLE_GROUP_MESSAGE | true | | Disable ChatGPT in group chats |
+| name | default | example | description |
+|------|---------|---------|-------------|
+| API | https://api.openai.com | | Custom ChatGPT API endpoint |
+| OPENAI_API_KEY | 123456789 | sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX | [create your API key](https://platform.openai.com/account/api-keys) |
+| MODEL | gpt-3.5-turbo | | ID of the model to use; currently only `gpt-3.5-turbo` and `gpt-3.5-turbo-0301` are supported |
+| TEMPERATURE | 0.6 | | Between 0 and 2. Higher values such as 0.8 make ChatGPT's output more random; lower values such as 0.2 make it more stable. |
+| CHAT_TRIGGER_RULE | | | Private chat trigger rule |
+| DISABLE_GROUP_MESSAGE | true | | Disable ChatGPT in group chats |
| CHAT_PRIVATE_TRIGGER_KEYWORD | | | Keyword that triggers ChatGPT in private chat; by default no keyword is needed |
-| BLOCK_WORDS | "VPN" | "WORD1,WORD2,WORD3" | Chat blocking keywords (apply to both group and private chats; help keep malicious prompts from getting the bot account banned) |
-| CHATGPT_BLOCK_WORDS | "VPN" | "WORD1,WORD2,WORD3" | Blocked words for ChatGPT replies; if a reply contains a blocked word it is not sent |
+| BLOCK_WORDS | "VPN" | "WORD1,WORD2,WORD3" | Chat blocking keywords (apply to both group and private chats; help keep malicious prompts from getting the bot account banned) |
+| CHATGPT_BLOCK_WORDS | "VPN" | "WORD1,WORD2,WORD3" | Blocked words for ChatGPT replies; if a reply contains a blocked word it is not sent |

## 📝 Using a Custom ChatGPT API
> https://github.com/fuergaosi233/openai-proxy
Binary file removed docs/images/cloudflare-token.png
Binary file removed docs/images/railway-deployment.png
Binary file removed docs/images/session-token.png
Binary file removed docs/images/user-agent.png
30 changes: 30 additions & 0 deletions package-lock.json


1 change: 1 addition & 0 deletions package.json
@@ -14,6 +14,7 @@
"async-retry": "^1.3.3",
"dotenv": "^16.0.3",
"execa": "^6.1.0",
"gpt3-tokenizer": "^1.1.5",
"openai": "^3.2.1",
"qrcode": "^1.5.1",
"uuid": "^9.0.0",
7 changes: 5 additions & 2 deletions src/bot.ts
@@ -131,8 +131,11 @@ export class ChatGPTBot {
  }
  async getGPTMessage(talkerName: string, text: string): Promise<string> {
    let gptMessage = await chatgpt(talkerName, text);
-    DBUtils.addAssistantMessage(talkerName, gptMessage);
-    return gptMessage;
+    if (gptMessage !== "") {
+      DBUtils.addAssistantMessage(talkerName, gptMessage);
+      return gptMessage;
+    }
+    return "Sorry, please try again later. 😔";
  }
  // Check if the message returned by ChatGPT contains blocked words
  checkChatGPTBlockWords(message: string): boolean {
2 changes: 1 addition & 1 deletion src/config.ts
@@ -3,7 +3,7 @@ dotenv.config();
import { IConfig } from "./interface";

export const config: IConfig = {
-  api: process.env.API || "https://api.openai.com",
+  api: process.env.API,
   openai_api_key: process.env.OPENAI_API_KEY || "123456789",
   model: process.env.MODEL || "gpt-3.5-turbo",
   chatPrivateTriggerKeyword: process.env.CHAT_PRIVATE_TRIGGER_KEYWORD || "",
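
The functional change here is dropping the hard-coded default endpoint. A small sketch of how the two variants behave when `API` is not set in the environment (illustrative only, not part of the diff):

```ts
// Before: an unset API env var still produced a concrete endpoint string.
const apiBefore = process.env.API || "https://api.openai.com"; // always a string

// After: an unset API env var leaves the value undefined, so the OpenAI client
// can fall back to its own default base path (assumption about the intent here).
const apiAfter: string | undefined = process.env.API; // may be undefined
```
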
15 changes: 9 additions & 6 deletions src/data.ts
@@ -1,15 +1,10 @@
import {ChatCompletionRequestMessage, ChatCompletionRequestMessageRoleEnum} from "openai";
import {User} from "./interface";
+import {isTokenOverLimit} from "./utils.js";

/**
 * Use in-memory storage as the database
 */
-export const initState: Array<ChatCompletionRequestMessage> = new Array(
-  {
-    "role": ChatCompletionRequestMessageRoleEnum.System,
-    "content": "You are a helpful assistant."
-  }
-)

class DB {
  private static data: User[] = [];
@@ -75,6 +70,10 @@ class DB {
  public addUserMessage(username: string, message: string): void {
    const user = this.getUserByUsername(username);
    if (user) {
+      while (isTokenOverLimit(user.chatMessage)) {
+        // Drop messages starting from the second entry (the first one is the system prompt)
+        user.chatMessage.splice(1, 1);
+      }
      user.chatMessage.push({
        role: ChatCompletionRequestMessageRoleEnum.User,
        content: message,
@@ -90,6 +89,10 @@
  public addAssistantMessage(username: string, message: string): void {
    const user = this.getUserByUsername(username);
    if (user) {
+      while (isTokenOverLimit(user.chatMessage)) {
+        // Drop messages starting from the second entry (the first one is the system prompt)
+        user.chatMessage.splice(1, 1);
+      }
      user.chatMessage.push({
        role: ChatCompletionRequestMessageRoleEnum.Assistant,
        content: message,
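
Both `while` loops added above follow the same pattern: evict the oldest non-prompt message until the history fits the token budget again. A self-contained sketch of that trimming step, assuming `isTokenOverLimit` behaves as defined in `src/utils.ts` below (the extra length guard is a defensive addition, not part of the diff):

```ts
import { ChatCompletionRequestMessage } from "openai";
import { isTokenOverLimit } from "./utils.js";

// Trim a chat history in place: index 0 holds the system prompt and is never removed;
// the oldest user/assistant messages (from index 1 on) are dropped first.
function trimHistory(chatMessage: ChatCompletionRequestMessage[]): void {
  while (isTokenOverLimit(chatMessage) && chatMessage.length > 1) {
    chatMessage.splice(1, 1);
  }
}
```
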
2 changes: 1 addition & 1 deletion src/interface.ts
@@ -1,7 +1,7 @@
import {ChatCompletionRequestMessage} from "openai";

export interface IConfig {
-  api: string;
+  api?: string;
  openai_api_key: string;
  model: string;
  chatTriggerRule: string;
26 changes: 18 additions & 8 deletions src/openai.ts
@@ -4,11 +4,13 @@ import {
  CreateImageRequestSizeEnum,
  OpenAIApi
} from "openai";
-import DBUtils from "./data.js";
import fs from "fs";
+import DBUtils from "./data.js";
+import {config} from "./config.js";

const configuration = new Configuration({
-  apiKey: process.env.OPENAI_API_KEY,
+  apiKey: config.openai_api_key,
+  basePath: config.api,
});
const openai = new OpenAIApi(configuration);

@@ -24,13 +26,21 @@ async function chatgpt(username: string, message: string): Promise<string> {
  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: messages,
-    temperature: 0.6
-  }).then((res) => res.data).catch((err) => console.log(err));
-  if (response) {
-    return (response.choices[0].message as any).content.replace(/^\n+|\n+$/g, "");
-  } else {
-    return "Something went wrong"
+    temperature: config.temperature,
+  });
+  let assistantMessage = "";
+  try {
+    if (response.status === 200) {
+      assistantMessage = response.data.choices[0].message?.content.replace(/^\n+|\n+$/g, "") as string;
+    } else {
+      console.log(`Something went wrong, Code: ${response.status}, ${response.statusText}`)
+    }
+  } catch (e: any) {
+    if (e.request) {
+      console.log("Request error");
+    }
+  }
+  return assistantMessage;
  }

/**
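
With the new error handling, `chatgpt()` resolves to an empty string on a non-200 response or a request error instead of rejecting. A minimal usage sketch under that assumption, mirroring the fallback already shown in `src/bot.ts` (the caller name is illustrative):

```ts
import { chatgpt } from "./openai.js"; // assumes chatgpt is exported, as used in src/bot.ts

async function replyTo(talkerName: string, text: string): Promise<string> {
  const answer = await chatgpt(talkerName, text);
  if (answer === "") {
    // Empty string signals that the completion call failed (see try/catch above).
    return "Sorry, please try again later. 😔";
  }
  return answer;
}
```
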
32 changes: 32 additions & 0 deletions src/utils.ts
@@ -1 +1,33 @@
+import {ChatCompletionRequestMessage} from "openai";
+
+import GPT3TokenizerImport from 'gpt3-tokenizer';
+import {config} from "./config.js";
+
export const regexpEncode = (str: string) => str.replace(/[-/\\^$*+?.()|[\]{}]/g, '\\$&');
+
+const GPT3Tokenizer: typeof GPT3TokenizerImport =
+  typeof GPT3TokenizerImport === 'function'
+    ? GPT3TokenizerImport
+    : (GPT3TokenizerImport as any).default;
+// https://github.com/chathub-dev/chathub/blob/main/src/app/bots/chatgpt-api/usage.ts
+const tokenizer = new GPT3Tokenizer({ type: 'gpt3' })
+function calTokens(chatMessage: ChatCompletionRequestMessage[]): number {
+  let count = 0
+  for (const msg of chatMessage) {
+    count += countTokens(msg.content)
+    count += countTokens(msg.role)
+  }
+  return count + 2
+}
+
+function countTokens(str: string): number {
+  const encoded = tokenizer.encode(str)
+  return encoded.bpe.length
+}
+export function isTokenOverLimit(chatMessage: ChatCompletionRequestMessage[]): boolean {
+  let limit = 4096;
+  if (config.model === "gpt-3.5-turbo" || config.model === "gpt-3.5-turbo-0301") {
+    limit = 4096;
+  }
+  return calTokens(chatMessage) > limit;
+}
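
For reference, a short sketch of exercising the exported `isTokenOverLimit` check; the sample messages are made up, and the exact token counts depend on the gpt3-tokenizer encoding:

```ts
import { ChatCompletionRequestMessageRoleEnum } from "openai";
import { isTokenOverLimit } from "./utils.js";

const history = [
  { role: ChatCompletionRequestMessageRoleEnum.System, content: "You are a helpful assistant." },
  { role: ChatCompletionRequestMessageRoleEnum.User, content: "Hello!" },
];

// calTokens sums the BPE tokens of every content and role string, plus 2,
// and the total is compared against the 4096-token limit used for gpt-3.5-turbo.
console.log(isTokenOverLimit(history)); // false for a short history like this
```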
