Fix(tests): restore config values after changing them in tests #2904
Conversation
Codecov Report
Patch coverage has no change; project coverage change: -0.08%

@@            Coverage Diff             @@
##           master    #2904      +/-   ##
==========================================
- Coverage   36.29%   36.22%    -0.08%
==========================================
  Files          60       60
  Lines        2849     2849
  Branches      471      471
==========================================
- Hits         1034     1032        -2
- Misses       1753     1755        +2
  Partials       62       62

See 1 file with indirect coverage changes. View full report in Codecov by Sentry.
# Store debug mode to reset it after the test
debug_mode = self.config.debug_mode

self.config.set_debug_mode(True)
self.assertTrue(self.config.debug_mode)

# Reset debug mode
self.config.set_debug_mode(debug_mode)
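The save/restore pattern in the diff above can also be done with `unittest`'s `addCleanup`, which restores the original value even when the test body raises, something the manual reset at the end of the test does not guarantee. A minimal sketch, using a hypothetical stand-in `Config` class rather than the project's real one:

```python
import unittest


class Config:
    """Hypothetical stand-in for the project's Config class."""

    def __init__(self):
        self.debug_mode = False

    def set_debug_mode(self, value: bool) -> None:
        self.debug_mode = value


class TestConfig(unittest.TestCase):
    def setUp(self):
        self.config = Config()
        # Register the restore up front: addCleanup runs after the test,
        # even if an assertion fails or an exception is raised mid-test.
        original = self.config.debug_mode
        self.addCleanup(self.config.set_debug_mode, original)

    def test_set_debug_mode(self):
        self.config.set_debug_mode(True)
        self.assertTrue(self.config.debug_mode)
```

Registering the cleanup in `setUp` also avoids repeating the save/restore boilerplate in every test method that touches the flag.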
This seems to reduce coverage for some debug statements in json_fix_general, suggesting tests aren't run in debug mode. Should they be?
Weird.
That's definitely something to look into then, especially as this flag was being set halfway through the set of tests.
@ntindle
IMO we should have each test able to specify a config and a reasonable set of params as defaults. Test plans for this should be considered a priority.
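The suggestion above, each test building its own config from shared defaults, could look like the following sketch. The `Config` fields here are hypothetical placeholders, not the project's actual settings:

```python
from dataclasses import dataclass, replace


@dataclass
class Config:
    # Hypothetical defaults standing in for the project's real settings.
    debug_mode: bool = False
    smart_llm_model: str = "gpt-4"
    fast_llm_model: str = "gpt-3.5-turbo"


DEFAULTS = Config()


def make_config(**overrides) -> Config:
    """Return a fresh Config for one test, applying per-test overrides.

    dataclasses.replace copies DEFAULTS, so mutating the returned
    instance cannot leak into other tests.
    """
    return replace(DEFAULTS, **overrides)


# Each test asks for exactly the config it needs:
cfg = make_config(debug_mode=True)
```

With this shape there is nothing to reset: every test gets an isolated instance, and the defaults live in one place.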
Background
Config values are currently being changed during testing, which affects later tests in the queue.
Example error this can cause:
openai.error.InvalidRequestError: The model gpt-3.5-turbo-test does not exist
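To illustrate how that error arises: when tests share one config object, a test that sets a dummy model name and never restores it leaves the value behind for the next test. A small sketch of the leak and one way to prevent it with a restoring context manager; `Config`, `CFG`, and `smart_llm_model` are hypothetical stand-ins for the project's shared config:

```python
from contextlib import contextmanager


class Config:
    """Hypothetical shared config object, mutated by tests."""

    def __init__(self):
        self.smart_llm_model = "gpt-3.5-turbo"


CFG = Config()  # one instance shared across the whole test run


@contextmanager
def override(attr: str, value):
    """Temporarily set CFG.<attr>, restoring it even if the body raises."""
    original = getattr(CFG, attr)
    setattr(CFG, attr, value)
    try:
        yield CFG
    finally:
        setattr(CFG, attr, original)


# Without the context manager, "gpt-3.5-turbo-test" would still be set
# when the next test calls the API, producing the error shown above.
with override("smart_llm_model", "gpt-3.5-turbo-test"):
    assert CFG.smart_llm_model == "gpt-3.5-turbo-test"
assert CFG.smart_llm_model == "gpt-3.5-turbo"
```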
Changes
This PR ensures all values changed during testing are reset after the test.
Documentation
All code commented.
Test Plan
Tests on this file still pass.
PR Quality Checklist