feat: support vision models from xinference #4094
Conversation
LGTM
@Minamiyama xorbitsai/inference#1425 Should I update xinference first? After updating only dify, I still can't see the "eye" icon on "qwen-chat-vl". If I use the OpenAI-compatible API, it reports that streaming is not supported. Updating xinference from source does not go smoothly because of some recent changes.
Remove the old qwen-chat-vl from the xinference provider and add it back as a completely new model; you should then see the eye mark. And try creating a new workflow, just in case.
Co-authored-by: Yeuoly <[email protected]>
I have found the cause: the docker image was out of date.
Description
Fixes # (issue)
Type of Change
How Has This Been Tested?