Change default model in LMM #1032
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1032       +/-   ##
===========================================
+ Coverage   26.54%   37.87%   +11.33%
===========================================
  Files          28       28
  Lines        3805     3807        +2
  Branches      865      907       +42
===========================================
+ Hits         1010     1442      +432
+ Misses       2724     2239      -485
- Partials       71      126       +55
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
My understanding is that this is to make sure the model used supports multi-modal input, correct?
Yes, correct. ConversableAgent defaults to "gpt-4", and we should default to a multimodal model. When the config_list is not set, the default model is used.
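For context, here is a minimal sketch of that fallback behavior. The `resolve_model` helper and the `gpt-4-vision-preview` default are illustrative assumptions, not the actual autogen implementation; in the real code the default lives in the agent class and is wired into its constructor.

```python
# Sketch of the "default model when config_list is absent" pattern discussed above.
# DEFAULT_LMM_MODEL and resolve_model are hypothetical names for illustration.
from typing import Optional

DEFAULT_LMM_MODEL = "gpt-4-vision-preview"  # assumed multimodal default


def resolve_model(llm_config: Optional[dict]) -> str:
    """Pick the model: an explicit config_list entry wins, otherwise fall back to the default."""
    if llm_config and llm_config.get("config_list"):
        # Use the first configured endpoint's model if one is provided.
        return llm_config["config_list"][0].get("model", DEFAULT_LMM_MODEL)
    return DEFAULT_LMM_MODEL


print(resolve_model(None))                                    # -> gpt-4-vision-preview (default used)
print(resolve_model({"config_list": [{"model": "gpt-4"}]}))   # -> gpt-4 (explicit config wins)
```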
Why are these changes needed?
Addresses issue #965.
Related issue number
Checks