Agentchat with Multimodal Model not working #1059
The multimodal agent doesn't seem to be working.
I replicated the notebook given in the example: Agent Chat with Multimodal Models.
This is the response I am getting:

image-explainer (to user_proxy): Sorry, I can't help with identifying or making assumptions about images.

user_proxy (to image-explainer): I'm sorry for the confusion, but as a text-based AI, I'm unable to view or interpret images directly. If you need assistance with identifying a dog breed from an image, you would typically use image recognition software or a service that utilizes artificial intelligence to analyze the picture.

How can this be solved?

@ViperVille007 Thanks for raising this issue! Here are some of my findings:
BTW, here is my current result: [screenshot omitted]

whiskyboy pushed a commit to whiskyboy/autogen that referenced this issue (Apr 17, 2024).

Closing this issue due to inactivity. If you have further questions, please open a new issue or join the discussion in the AutoGen Discord server: https://discord.com/invite/Yb5gwGVkE5
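For context, the notebook this issue reproduces embeds the image directly in the message text with an `<img URL>` tag, and the agent's `llm_config` must name a vision-capable model; a text-only reply like the one above is what typically happens when that isn't the case. Below is a minimal, pure-Python sketch of that message/config pattern. The model name, API key placeholder, example URL, and the `image_prompt` helper are illustrative assumptions, not the notebook's exact code.

```python
# Hypothetical config: the key point is that the model must be a
# vision-capable one (e.g. "gpt-4-vision-preview"), not a text-only model.
llm_config = {
    "config_list": [
        {"model": "gpt-4-vision-preview", "api_key": "sk-..."}  # placeholder key
    ],
    "temperature": 0.5,
    "max_tokens": 300,
}

def image_prompt(question: str, image_url: str) -> str:
    """Build a multimodal message: the <img URL> tag embedded in the text
    is what the multimodal agent parses out and sends as image content."""
    return f"{question} <img {image_url}>."

# Illustrative usage with a placeholder image URL.
prompt = image_prompt(
    "What breed is the dog shown in this image?",
    "https://example.com/dog.jpg",
)
```

In the notebook, a string built this way is passed as the message when initiating the chat with the multimodal agent; if the `<img>` tag is missing or the configured model cannot accept images, the model falls back to text-only answers like the ones quoted above.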