WIP: Randomized and parallel tests #3151
Conversation
The docs describe the test runner as not thread-safe, so I am interested in what you find: "These tools should only be used for testing since they change the entire interpreter state for simplicity. They are not thread-safe!"
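For context, a minimal sketch (standard click.testing API, not code from this PR) of the pattern the docs warn about: invoke() swaps out the interpreter-wide standard streams while the command runs, which is exactly the global state that makes concurrent invocations unsafe.

```python
import click
from click.testing import CliRunner

@click.command()
def hello():
    click.echo("hello")

def test_hello():
    runner = CliRunner()
    # invoke() redirects the process-wide stdin/stdout/stderr for the
    # duration of the call, so two threads invoking at once would see
    # each other's redirection.
    result = runner.invoke(hello)
    assert result.exit_code == 0
    assert result.output == "hello\n"
```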
Maybe #1572 fixes the issue.
I don't think you're doing this for performance, but I investigated pytest-xdist in a different context a couple of years ago, and at the time it was difficult to get a speedup because the startup time was linear in the number of workers. The startup time still seems to be linear in the number of workers, but the constant is much better now; here is a log with timing for […]. As a comparison, […].
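For anyone who wants to repeat the measurement, a rough timing harness along these lines should do (the worker counts and the tests/ path are placeholders; -n is pytest-xdist's worker-count option):

```python
import subprocess
import time

# Run the suite with increasing xdist worker counts to see how the
# roughly linear startup overhead scales with the number of workers.
for workers in ("1", "2", "4", "8"):
    start = time.perf_counter()
    subprocess.run(["pytest", "-n", workers, "tests/"], check=False)
    print(f"-n {workers}: {time.perf_counter() - start:.1f}s")
```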
No, not in the results I got. I was under the impression that running tests in parallel would uncover the issues you were discussing in #3139 and #3140 with @getzze and @neutrinoceros, but it looks completely unrelated. Here the issues are with the pager: […] The failing tests are completely random depending on the order in which they're executed (from 6 to 47 failing tests on my machine). That's why I was thinking that a solution might lie in the direction of #1572. So I guess you can continue working on #3139 and #3140 independently of this PR.
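As a toy illustration (not Click code) of that kind of order dependence, two tests sharing module-level state pass in one order and fail in the other:

```python
# test_b passes when it runs before test_a and fails when a randomized
# order puts it after, which is how leaky tests turn into flaky ones.
_leaked = []

def test_a():
    _leaked.append("pager")  # state escapes the test
    assert _leaked == ["pager"]

def test_b():
    assert _leaked == []  # order-dependent: fails after test_a
```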
Well, #3139 does contain a test that I took from one of the referenced tickets, and that test triggers […]. Don't take my comment as discouragement from what you're trying to achieve here; I'm just stating that my goal with #3139 was the incredibly narrowly scoped "fix the immediate Click issue I'm suffering from". I would guess that not having multiple objects trying to close the same buffers in their finalizers would also be beneficial here, but I don't know the Click code base, so take my guesses for what they are worth.
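A toy sketch (again, not Click's actual code) of that finalizer hazard: an object that closes a shared buffer in its __del__ breaks any later user of the buffer, and the unpredictable timing of finalizers is what makes the resulting failures look random.

```python
import io

class Finalizer:
    def __init__(self, stream):
        self.stream = stream

    def __del__(self):
        self.stream.close()  # runs whenever the object is collected

buf = io.StringIO()
holder = Finalizer(buf)
del holder  # the finalizer closes the shared buffer...
try:
    buf.write("late")  # ...so later writers hit a closed stream
except ValueError as exc:
    print(exc)  # "I/O operation on closed file"
```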
sounds like a job for https://pypi.org/project/detect-test-pollution/ (just passing by, ignore me if that's not actually relevant) |
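For reference, its bisection mode is invoked roughly like this (flags as documented in the project's README at the time of writing; the failing test id and path are placeholders):

```python
import subprocess

# Bisects the rest of the suite to find which earlier test causes the
# given test to fail.
subprocess.run([
    "detect-test-pollution",
    "--failing-test", "tests/test_termui.py::test_pager",
    "--tests", "tests/",
])
```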
I'd prefer not to add these as regular tests; they introduce a lot of complexity and uncertainty. Maybe just use them to clean up the tests here and then remove them again. The tests already run very fast, and I'd prefer to speed them up directly if possible rather than by using multiprocessing.
Oh, that's definitely not a tool you want to run regularly in CI. It's really meant to help with local debugging only. Edit: hidden as off-topic; I thought David was talking about detect-test-pollution, but on second thought that's probably not what he meant.
No, that's fine; I was referring to all three plugins.
Help detection of leaks and flaky tests.
This PR is a WIP to check the effects on different platforms and contexts.
Preliminary results:
Multiple cores are properly detected and tests are run in parallel (see the reproduction sketch below).
Unit tests fail because some are leaky.
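A minimal sketch of reproducing this setup locally, assuming pytest-randomly and pytest-xdist are installed (tests/ is a placeholder for the suite's path):

```python
import pytest

# pytest-randomly shuffles the test order automatically once installed;
# pytest-xdist's "-n auto" starts one worker per detected CPU core.
raise SystemExit(pytest.main(["-n", "auto", "tests/"]))
```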
Relates to:
- click.testing.StreamMixer's finalization #2993
- click.testing.StreamMixer's finalization, seen in a multi-threaded context #2991