
avoid setting debug property if not needed #18872

Merged (8 commits), Jun 20, 2024
Conversation

@ZeeD (Contributor) commented Jun 6, 2024

This PR adds an optional boolean flag to enable, disable, or leave unchanged the debug property.
Fixes #18871

@cbornet @tomplus @krjakbrjak @fa0311 @multani
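The pattern under discussion can be sketched as a minimal `Configuration` class. This is an illustrative reduction, not the generated `petstore_api` code: only the `debug`/`__debug` names come from the conversation, the rest is assumed for the sketch.

```python
from typing import Optional


class Configuration:
    """Minimal sketch of the pattern this PR introduces: debug is
    Optional[bool], and None means 'leave the debug setting as it is'."""

    def __init__(self, debug: Optional[bool] = None) -> None:
        self.__debug = False          # default, so the getter never raises AttributeError
        if debug is not None:
            self.debug = debug        # only override when explicitly requested

    @property
    def debug(self) -> bool:
        return self.__debug

    @debug.setter
    def debug(self, value: bool) -> None:
        # the generated client also reconfigures logger levels and
        # http.client debug output here; omitted in this sketch
        self.__debug = value
```

With this shape, constructing `Configuration()` no longer forcefully turns debug logging off; it simply leaves it at the default.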

PR checklist

  • Read the contribution guidelines.
  • Pull Request title clearly describes the work in the pull request and Pull Request description provides details about how to validate the work. Missing information here may result in delayed response from the community.
  • Run the following to build the project and update samples:
    ./mvnw clean package 
    ./bin/generate-samples.sh ./bin/configs/*.yaml
    ./bin/utils/export_docs_generators.sh
    
    (For Windows users, please run the script in Git BASH)
    Commit all changed files.
    This is important, as CI jobs will verify all generator outputs of your HEAD commit as it would merge with master.
    These must match the expectations made by your contribution.
    You may regenerate an individual generator by passing the relevant config(s) as an argument to the script, for example ./bin/generate-samples.sh bin/configs/java*.
    IMPORTANT: Do NOT purge/delete any folders/files (e.g. tests) when regenerating the samples as manually written tests may be removed.
  • File the PR against the correct branch: master (upcoming 7.6.0 minor release - breaking changes with fallbacks), 8.0.x (breaking changes without fallbacks)
  • If your PR is targeting a particular programming language, @mention the technical committee members, so they are more likely to review the pull request.

@wing328 (Member) commented Jun 6, 2024

Thanks for the PR. Please follow step 3 to update the samples so that the CI can verify the change.

@ZeeD (Contributor, Author) commented Jun 6, 2024

updated the samples

@ZeeD (Contributor, Author) commented Jun 12, 2024

Hi, could someone have a look at the changes?

@wing328 (Member) commented Jun 12, 2024

Can you please review the test failures when you have time?

@ZeeD (Contributor, Author) commented Jun 12, 2024

Oh, it was because bool | None is not supported in Python 3.7; I switched to Optional[bool].
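For context on why that substitution works: `bool | None` in an annotation is PEP 604 union syntax, which is evaluated at definition time and only exists from Python 3.10 (unless `from __future__ import annotations` defers evaluation). `typing.Optional` spells the same type and works back to Python 3.7. The helper below is hypothetical, used only to exercise the annotation:

```python
from typing import Optional


# `def make_config(debug: bool | None = None)` would raise TypeError at
# import time on Python < 3.10 without the __future__ import, because the
# annotation `bool | None` is evaluated eagerly. Optional[bool] avoids that.
def make_config(debug: Optional[bool] = None) -> dict:
    # hypothetical helper; returns the value so the annotation is observable
    return {"debug": debug}
```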

@wing328 (Member) commented Jun 13, 2024

That's OK.

All tests passed. Thanks again for the PR.

If no one has further feedback on this PR, I'll merge it later this week.

@multani (Contributor) left a comment

Not setting self.debug to a specific value means self.__debug is never set, which may raise AttributeError because the Configuration object is not fully initialized.

I would also suggest adding a few tests to actually show this behavior:

  1. Configuration created without passing debug
  2. Configuration created with debug=False
  3. Configuration created with debug=True

In all three cases, configuration.debug should be read to verify the expected behavior. With this change, the first test would fail.
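The three cases above can be sketched as unittest tests. `StubConfiguration` is a hypothetical stand-in for `petstore_api.Configuration`, reduced to the debug flag so the sketch stays self-contained:

```python
import unittest
from typing import Optional


class StubConfiguration:
    """Hypothetical stand-in for petstore_api.Configuration."""

    def __init__(self, debug: Optional[bool] = None) -> None:
        self._debug = False           # without this default, case 1 would raise AttributeError
        if debug is not None:
            self._debug = debug

    @property
    def debug(self) -> bool:
        return self._debug


class TestConfigurationDebug(unittest.TestCase):
    def test_debug_unset(self):       # case 1: no debug argument
        self.assertFalse(StubConfiguration().debug)

    def test_debug_false(self):       # case 2: debug=False
        self.assertFalse(StubConfiguration(debug=False).debug)

    def test_debug_true(self):        # case 3: debug=True
        self.assertTrue(StubConfiguration(debug=True).debug)
```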

Comment on lines +217 to +218
if debug is not None:
self.debug = debug
Contributor:

If this self.debug property is not always set, then the underlying __debug attribute will also not always be set, which may produce AttributeError.

I would suggest at least setting self.__debug to False by default, so that the debug setter/getter cannot fail and code using this attribute does not fail either.
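The failure mode being described can be reproduced with a minimal sketch (the class name is hypothetical; only the shape of the bug comes from the review comment):

```python
class BrokenConfiguration:
    """Sketch of the problem: __debug is only assigned when debug is passed."""

    def __init__(self, debug=None):
        if debug is not None:
            self.__debug = debug      # never runs when debug is None

    @property
    def debug(self):
        return self.__debug           # AttributeError if __init__ skipped the assignment


try:
    BrokenConfiguration().debug
except AttributeError as exc:
    print(f"getter fails: {exc}")
```

Assigning `self.__debug = False` unconditionally before the `if` removes this failure path.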

Contributor (Author):

I can easily add a default value for self.__debug, but I'm not sure how to write the tests.
Can you point me to a sample I can adapt?

Contributor:

You can check here for instance: https://github.com/OpenAPITools/openapi-generator/blob/master/samples/openapi3/client/petstore/python/tests/test_configuration.py

There are several tests around this class, you should be able to create a test that fails with your current change and make it pass by setting an initial value for __debug.

There may be other places with this kind of test (they are written manually), but I didn't check.

Contributor (Author):

Uhm... I was starting to write some tests, but it seems there are some unrelated failures.

$ cd samples/openapi3/client/petstore/python
$ poetry install
[...]
$ poetry run pytest
[...]
================================================= short test summary info ==================================================
FAILED tests/test_model.py::ModelTests::test_valdiator - AttributeError: 'ModelTests' object has no attribute 'assertEquals'. Did you mean: 'assertEqual'?
FAILED tests/test_pet_api.py::PetApiTests::test_add_pet_and_get_pet_by_id - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_add_pet_and_get_pet_by_id_with_http_info - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_add_pet_and_get_pet_by_id_without_preload_content_flag - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_delete_pet - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_find_pets_by_status - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_find_pets_by_tags - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_get_pet_by_id_serialize - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_update_pet - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_update_pet_with_form - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
FAILED tests/test_pet_api.py::PetApiTests::test_upload_file - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /v2/pet...
================================== 11 failed, 264 passed, 3 skipped, 9 warnings in 14.25s ==================================

I normally use Python 3.12, where assertEquals was removed.
Should I also update the unrelated tests?

Contributor:

  • For assertEquals, this should be migrated in another PR, I think. Can you test with 3.11 instead?
  • For the URL failures, there's an explanation of how to start a test container to support the tests at the top of tests/test_pet_api.py

Contributor (Author):

(Oh, and for the various failed PetApiTests: while it's easy to follow the instructions in the tests,

Run the tests:
$ docker pull swaggerapi/petstore
$ docker run -d -e SWAGGER_HOST=http://petstore.swagger.io -e SWAGGER_BASE_PATH=/v2 -p 80:8080 swaggerapi/petstore
$ pip install -U pytest
$ cd petstore_api-python
$ pytest

I would suggest enhancing them with testcontainers, so you can avoid launching the Docker image by hand.)
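A testcontainers-based setup might look like the sketch below. It assumes the testcontainers-python package is installed; the image name and environment variables mirror the manual `docker run` above, and the helper function is hypothetical. The library import is done lazily so the URL helper remains usable without Docker present:

```python
def petstore_base_url(host: str, port: int) -> str:
    # mirrors the SWAGGER_BASE_PATH=/v2 setting from the manual docker run
    return f"http://{host}:{port}/v2"


def start_petstore():
    """Start the petstore image via testcontainers-python (assumed installed).

    Returns (container, base_url); the caller is responsible for
    calling container.stop() when done.
    """
    from testcontainers.core.container import DockerContainer

    container = (
        DockerContainer("swaggerapi/petstore")
        .with_env("SWAGGER_HOST", "http://petstore.swagger.io")
        .with_env("SWAGGER_BASE_PATH", "/v2")
        .with_exposed_ports(8080)
    )
    container.start()
    host = container.get_container_host_ip()
    port = int(container.get_exposed_port(8080))
    return container, petstore_base_url(host, port)
```

The advantage over the manual instructions is that the container lifecycle is owned by the test run itself, and the dynamically mapped port avoids colliding with anything already listening on port 80.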

@ZeeD (Contributor, Author) commented Jun 17, 2024

Updated the PR to add explicit checks of the behavior for the various values you can pass to the constructor.

@@ -137,6 +137,7 @@ conf = {{{packageName}}}.Configuration(
"""

_default = None
__debug = False
Contributor:

There's no reason to make it a class attribute; can you move this to an instance attribute at line 217?

Contributor (Author):

done

@@ -76,5 +76,14 @@ def test_get_host_from_settings(self):
self.assertEqual("http://petstore.swagger.io:8080/v2", self.config.get_host_from_settings(0, {'port': '8080'}))
self.assertEqual("http://dev-petstore.swagger.io:8080/v2", self.config.get_host_from_settings(0, {'server': 'dev-petstore', 'port': '8080'}))

def testConfigurationDebug(self):
for debug, expected in [(True, True), (False, False), (None, False)]:
with self.subTest(debug=debug, expected=expected):
Contributor:

TIL about unittest's subTest 🙇
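For readers discovering it along with the reviewer: `subTest` lets one test method report each parameterization independently instead of stopping at the first failure. A tiny standalone demonstration (names are illustrative):

```python
import unittest


class ParityTests(unittest.TestCase):
    def test_parity(self):
        # on failure, each (n, expected) pair is reported as its own sub-test,
        # and the loop keeps running through the remaining pairs
        for n, expected in [(2, True), (3, False), (4, True)]:
            with self.subTest(n=n, expected=expected):
                self.assertEqual(n % 2 == 0, expected)
```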

Comment on lines 84 to 86
with self.subTest(expected=False):
c = petstore_api.Configuration()
self.assertEqual(expected, c.debug)
Contributor:

I think this works a bit by chance, since the expected in the assert is the one left over from the for loop at line 80.

Can you make the test a bit more explicit?

Suggested change:
- with self.subTest(expected=False):
-     c = petstore_api.Configuration()
-     self.assertEqual(expected, c.debug)
+ expected = False
+ with self.subTest(debug="unset", expected=expected):
+     c = petstore_api.Configuration()
+     self.assertEqual(expected, c.debug)

Contributor (Author):

D'oh, you're right!
Rewrote them and added an explicit message.

@multani (Contributor) left a comment

Thanks for the tests and fixes!

@ZeeD (Contributor, Author) commented Jun 20, 2024

I was just wondering if there is anything else to do on my side.

@multani (Contributor) commented Jun 20, 2024

@wing328 can you check this pull request?
It's good to merge for me 👍

@wing328 (Member) commented Jun 20, 2024

@multani thanks for reviewing the change.

@ZeeD thanks for the PR.

Will merge after all tests pass.

@wing328 merged commit e5ae07c into OpenAPITools:master Jun 20, 2024
35 checks passed
@ZeeD (Contributor, Author) commented Jun 20, 2024

Thanks!

@ZeeD deleted the patch-1 branch June 20, 2024 10:35
Successfully merging this pull request may close these issues.

[BUG] [python] the client should not forcefully turn off httplib debug