Run pytest on multiple files the pytest-way #14941
Also feeling the pain with session-wide fixtures.
Related comment: #15197 (comment)
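For illustration, a minimal, self-contained sketch (the fixture and schema here are hypothetical): a session-scoped fixture like the one below is built once per `pytest` process, so per-file runs pay its cost once per file instead of once per suite.

```python
# conftest.py (hypothetical): an expensive session-wide fixture.
import sqlite3

import pytest


@pytest.fixture(scope="session")
def db():
    # "Session" here means one pytest *process*. When each test file runs
    # in its own pytest process, this setup/teardown repeats for every
    # file rather than once for the whole run.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    yield conn
    conn.close()
```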
We are also feeling the pain of per-file (vs. per-directory) `pytest` runs.
More thoughts: a solution for this would (unfortunately) need to be more complex than a global config toggle, since not every group of tests is safe to run in a single process. Thinking out loud: could we have a new field on `python_test` for declaring which tests may be batched together?
I really like an idea like that. What makes me nervous is when Pants tries to get too clever. @danxmoran with your specific use case, would you still want caching to be at the individual file level? That is, could we run 5 test files in a single batch and still cache at the individual file level? If that is not essential for you, this actually becomes pretty easy to implement!
@Eric-Arellano per-file caching would be nice, but I think not essential for us in the short term... all the code in this tree is so intertwined that a change to any one file would invalidate most of the tests anyway.
How would you feel about a global "batch size" setting instead? The idea would be for Pants to automatically group compatible tests into batches up to that size, instead of requiring manual bookkeeping.
More magic / less boilerplate sounds good to me 😄 though I'm not fully understanding how I could manually enforce that two groups of tests never run in the same batch with just a batch-size parameter. For example, if I have two separate trees of tests where everything under one tree needs different session-level setup than everything under the other, the two groups must never share a `pytest` process no matter how the batch size is set.
I'm not sure that you need to enforce that manually. But automatic batching seems likely to win back the performance difference by virtue of not needing manual adjustment.
Ah, the need to keep the batches separate comes from Django, not from `pytest` itself.
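As a hedged sketch of why that is (module paths are hypothetical): Django settings can only be configured once per process, so two test trees that need different settings modules cannot share a single `pytest` session.

```python
# src/app_a/tests/conftest.py (hypothetical): pins this tree to app_a's settings.
import os

# Django reads DJANGO_SETTINGS_MODULE once, at setup time; it cannot be
# re-pointed at a different settings module later in the same process.
# A sibling tree whose conftest.py sets "app_b.settings" therefore must
# run in a separate pytest process.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app_a.settings")
```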
@stuhood @Eric-Arellano is there anything I can do to help push this one forward while you're busy working on the docker-environment support? My first thought is to work through all the test-related types and replace single addresses with collections of addresses.
I didn't quite understand this. Ignoring Pants, if you run `pytest` directly on all your tests (across multiple conftests), does the right thing happen, somehow? Or would you have to manually batch the `pytest` invocations in that case as well?
Yes, before Pants we had separate CI jobs/workflows for the different Django projects.
Hi, I've got similar problems. I'm using Pants inside a monorepo with hundreds of tests. When running `pants test` I noticed that each test takes about 7 seconds; when run under plain `pytest` it takes only 200ms. I saw someone else had similar problems, so I tried the following steps: https://app.slack.com/client/T046T6T8L/C046T6T9U/thread/C046T6T9U-1658183838.164319
`pants.toml`
Ouch! OK, so that's a wrinkle to take care of.
@danxmoran's design for this is over here: https://docs.google.com/document/d/1U0Q43bRod_EeVP4eQpcN36NMlxZ4CpRzTl2gMQ5HHvg/edit#
Update: I'm working on translating @thejcannon's recently-added partitioning pattern (from the `lint`/`fmt` goals) over to `test`.
Closes #14941

It's _sometimes_ safe to run multiple `python_test`s within a single `pytest` process, and doing so can give significant wins by allowing reuse of expensive test setup/teardown logic. To allow users to opt into this behavior we introduce a new `batch_compatibility_tag` field on `python_test`, with semantics:

* If unset, the test is assumed to be incompatible with all others and will run in a dedicated `pytest` process
* If set and != the value on some other `python_test`, the tests are explicitly incompatible and are guaranteed not to run in the same `pytest` process
* If set and == the value on some other `python_test`, the tests are explicitly compatible and _may_ run in the same `pytest` process

Compatible tests may not end up in the same `pytest` batch if:

* There are "too many" compatible tests in a partition, as determined by `[test].batch_size`
* Compatible tests have some incompatibility in Pants metadata (i.e. different `resolve` or `extra_env_vars`)

When tests with the same `batch_compatibility_tag` have incompatibilities in some other Pants metadata, the custom partitioning rule will split them into separate partitions. We'd previously discussed raising an error in this case when calling the field `batch_key`/`batch_tag`, but IMO that approach would have a worse UX: this way users can set a high-level `batch_compatibility_tag` using `__defaults__` and then have things continue to Just Work as they add/remove `extra_env_vars` and parameterize `resolve`s on lower-level targets.
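As a usage sketch under the semantics above (directory names and tag values are hypothetical), the tag can be set once via `__defaults__` and narrowed where two groups must never share a process, with `[test].batch_size` in `pants.toml` capping how many compatible tests land in one `pytest` run:

```python
# src/BUILD (hypothetical): most tests in this subtree may share a process.
__defaults__({python_tests: dict(batch_compatibility_tag="shared")})

# src/app_a/tests/BUILD (hypothetical): app_a's Django tests batch only
# with each other, never with app_b's differently-configured tests.
python_tests(
    name="tests",
    batch_compatibility_tag="django-app-a",
)
```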
`pants test` triggers `pytest` on a per-file basis. This has different behavior compared to running `pytest <foldername>`, because each `pytest` session only knows about the single test file. Among other things, session-scoped fixtures and other expensive setup/teardown run once per file rather than once per run. I'm just speaking for myself, but I got so used to these points just working when using `pytest` that running Python tests in Pants as it is now feels like a downgrade to me. It would be great if there were a way to have the "vanilla" `pytest` experience also when using Pants.