[Bug] The model used to generate titles is always gpt-4o-mini #5603
📦 Deployment method
Docker
📌 Software version
2.15.3
💻 System environment
macOS
📌 System version
15.0
🌐 Browser
Chrome
📌 Browser version
129.0.6668.60
🐛 Problem description
Using a local ollama service with CUSTOM_MODELS=-all,qwen2.5:7b-instruct and DEFAULT_MODEL=qwen2.5:7b-instruct, the model used when summarizing the conversation to generate a title is still gpt-4o-mini, and the generated title ends up being the backend error message as a JSON string.
📷 Steps to reproduce
As above
🚦 Expected result
The title is generated with the currently selected model
📝 Supplementary information
None
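For reference, a minimal sketch of the Docker deployment described above. Only CUSTOM_MODELS and DEFAULT_MODEL are taken from the report; the image name, port mapping, and the BASE_URL pointing at ollama's OpenAI-compatible endpoint are illustrative assumptions, not details from the issue:

```shell
# Hypothetical docker run invocation matching the reported configuration.
# CUSTOM_MODELS="-all,..." hides every built-in model and exposes only the
# local ollama model; DEFAULT_MODEL should then apply to every request,
# including the internal title-summarization call that this bug is about.
docker run -d -p 3000:3000 \
  -e BASE_URL="http://host.docker.internal:11434" \
  -e CUSTOM_MODELS="-all,qwen2.5:7b-instruct" \
  -e DEFAULT_MODEL="qwen2.5:7b-instruct" \
  yidadaa/chatgpt-next-web
```

With this setup, the reported behavior is that chat completions correctly use qwen2.5:7b-instruct, but the summarization request is still sent with model gpt-4o-mini, which the ollama backend rejects, and the rejection JSON is then displayed as the conversation title.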