Adding a test function for OpenAI completion in flaml #951
Conversation
Co-authored-by: Li Jiang <[email protected]>
flaml/integrations/oai/completion.py
Outdated
result_agg, responses_list, result_list = {}, [], []
metric_keys = None
with diskcache.Cache(cls.cache_path) as cls._cache:
    for _, data_i in enumerate(data):
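The snippet above initializes metric accumulators and loops over the test data inside a diskcache context. A minimal sketch of the aggregation pattern this sets up, assuming each example yields a dict of numeric metrics (the helper name `aggregate_metrics` is hypothetical, not part of flaml):

```python
def aggregate_metrics(per_example_metrics):
    """Average a list of {metric_name: value} dicts over all examples."""
    result_agg = {}
    metric_keys = None
    for metrics_i in per_example_metrics:
        if metric_keys is None:
            # Discover the metric names from the first example.
            metric_keys = list(metrics_i.keys())
        for key in metric_keys:
            result_agg[key] = result_agg.get(key, 0) + metrics_i[key]
    n = len(per_example_metrics)
    return {key: value / n for key, value in result_agg.items()}

# Two examples: one success at cost 0.2, one failure at cost 0.4.
avg = aggregate_metrics([{"success": 1, "cost": 0.2},
                         {"success": 0, "cost": 0.4}])
```

The real loop additionally caches each completion response on disk via `diskcache.Cache`, so repeated evaluations of the same configuration avoid redundant API calls.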
Why is it resolved?
Finished a round of review.
notebook/integrate_chatgpt.ipynb
Outdated
"result = oai.Completion.test(test_data, config, success_metrics)\n",
"print('performance on test data with the tuned config:', result)"
Comment them?
Use " for consistency.
Why are these changes needed?
This PR adds a test function to the OpenAI completion integration so that the performance of an OpenAI model under a particular configuration can be evaluated directly.
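A hedged sketch of what such a test function does: loop over the test data, generate a completion for each example under the given configuration, score it with a user-supplied metric function, and average the scores. All names here (`run_test`, `fake_completion`, the metric lambda) are illustrative stand-ins, not flaml's actual API:

```python
def run_test(data, config, eval_func, completion_func):
    """Evaluate completion_func with config on data; return averaged metrics."""
    agg = {}
    for example in data:
        response = completion_func(example["prompt"], **config)
        metrics = eval_func(response, example)  # e.g. {"success": 0 or 1}
        for key, value in metrics.items():
            agg[key] = agg.get(key, 0) + value
    return {key: value / len(data) for key, value in agg.items()}

def fake_completion(prompt, **config):
    # Stand-in for an actual OpenAI API call, so the sketch runs offline.
    return prompt.upper()

data = [{"prompt": "a", "expected": "A"},
        {"prompt": "b", "expected": "C"}]
result = run_test(
    data,
    {"temperature": 0},  # hypothetical tuned configuration
    lambda resp, ex: {"success": int(resp == ex["expected"])},
    fake_completion,
)
# result == {"success": 0.5}: the first example matches, the second does not
```

In flaml itself, the equivalent entry point shown in the notebook diff above is `oai.Completion.test(test_data, config, success_metrics)`.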
Related issue number
Checks