Conversation

@user27182 (Contributor) commented Apr 6, 2025

Purpose

The duration extension is great but not configurable: it currently shows only the 5 slowest durations and only prints them to the console.

This PR adds the following configuration options (a conf.py sketch follows the list):

  1. duration_n_slowest to print any number of durations to the console (not just 5)
  2. duration_print_slowest to enable or disable printing to the console altogether
  3. duration_write_json to write all durations to a JSON file
  4. duration_print_total to print the total read time to the console
  5. duration_limit to emit a warning for any duration above the given limit
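
As a sketch only (the values, the type of duration_write_json, and the unit of duration_limit are assumptions for illustration, not anything specified in this PR):

```python
# conf.py -- sketch; option names as listed above, values illustrative
extensions = ["sphinx.ext.duration"]

duration_print_slowest = True  # keep the console report enabled
duration_n_slowest = 25        # report the 25 slowest documents instead of 5
duration_print_total = True    # also print the total read time
duration_write_json = True     # write all durations to a JSON file (type assumed)
duration_limit = 2.0           # warn for documents slower than this (seconds assumed)
```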

No tests are included yet. I am a first-time contributor and not familiar with the dev tools and conventions here, so I may need some extra help.

@user27182 user27182 marked this pull request as draft April 6, 2025 00:56
@user27182 user27182 marked this pull request as ready for review April 6, 2025 02:12
@user27182 (Contributor Author)

This is ready, but I don't think I have the tests configured properly. When I run each test individually locally, each passes on its own, but running the whole suite causes some to fail. Is there some kind of interaction between tests?
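
For reference, one common cause of this kind of interaction is shared module-level state that one test mutates and the next observes. Purely as a hypothetical sketch (the attribute name here is assumed; the real extension may store its state on the build environment instead), an autouse pytest fixture could reset such state between tests:

```python
# conftest.py -- hypothetical sketch, not taken from the Sphinx test suite.
import pytest

import sphinx.ext.duration as duration


@pytest.fixture(autouse=True)
def _fresh_duration_state(monkeypatch):
    # Replace the (assumed) module-level dict with a fresh one for each test;
    # raising=False makes this a no-op if no such attribute actually exists.
    monkeypatch.setattr(duration, "_DURATIONS", {}, raising=False)
```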

@user27182 user27182 changed the title Extend duration extension with configuration options Add configuration options to duration extension Apr 6, 2025
@AA-Turner AA-Turner added the type:proposal a feature suggestion label Apr 9, 2025
@AA-Turner (Member)

Please add yourself to AUTHORS too. I think it's better to add each option as a new top-level option rather than as a dictionary.

@user27182 user27182 marked this pull request as draft April 10, 2025 20:44
@user27182 (Contributor Author) commented Apr 10, 2025

I'm going to modify the API a bit to better match pytest. pytest uses --durations=0 to show all durations, whereas this extension currently uses n_slowest=-1 to do the same, so I will rename n_slowest to durations and change -1 to 0 (sketched below).
I will also rework the console formatting to match pytest.
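
Sketched as conf.py settings (the final option name is an assumption, not confirmed by this comment):

```python
# Before (as first proposed in this PR):
duration_n_slowest = -1    # -1 meant "show all durations"

# After the pytest-style change (exact option name assumed):
duration_n_durations = 0   # 0 means "show all", matching pytest --durations=0
```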

@user27182 user27182 marked this pull request as ready for review April 11, 2025 01:47
@user27182 (Contributor Author)

I've also added a new option to emit a warning when a duration exceeds a given limit. This should be ready now.

CI is green except for some seemingly unrelated autodoc failures on Python 3.14...

@sphinx-doc sphinx-doc deleted a comment from octocat2244 Jul 7, 2025
@user27182 (Contributor Author)

If there's any chance of this being approved, please let me know and I'll work to make CI green again.

@AA-Turner AA-Turner merged commit 187a2d2 into sphinx-doc:master Nov 27, 2025
51 of 52 checks passed
@AA-Turner AA-Turner added this to the 9.0.0 milestone Nov 27, 2025
@jayaddison (Contributor)

@user27182 please be mindful of test failures in GitHub's continuous integration, even when they appear as the result of an update-merge -- this one seemed actionable, for example: https://github.com/sphinx-doc/sphinx/actions/runs/18664715659/job/53213261683

The issue there seems to have been #14109, and I've opened a pull request to resolve that.

@user27182 (Contributor Author)

Thanks for the fix, and apologies for any problems caused by this PR.

While this failure does appear actionable in hindsight, many other aspects of the CI were also causing intermittent failures unrelated to this PR (e.g. see #13469 (comment), #13469 (comment)), in addition to the current issues Sphinx is having with a separate flaky test (e.g. see this other recent merge-with-master commit, where the failure is not caused by this PR: https://github.com/sphinx-doc/sphinx/actions/runs/19314188434/job/55241608094?pr=13469). This led me to (incorrectly) assume the failures were caused by some other Sphinx CI issue.
I am also a new contributor unfamiliar with CI maintenance here, and I have no ability to retry CI steps when there is a failure, so I had limited means to assess the flakiness myself.

Even the most recent commit had a flaky failure caused by this PR (see https://github.com/sphinx-doc/sphinx/actions/runs/19723500767/job/56510331350?pr=13469), but it was quickly ignored/retried and the PR was merged anyway. So it seems there were multiple missed opportunities to address this.

Again, thank you for the fix. I will be sure to look at CI failures more closely in future contributions.

@jayaddison (Contributor)

Thanks for understanding and responding, @user27182 - we're trying to resolve one or two other flaky tests at the moment (including that unrelated one on the mainline branch), and yes, it does become much more difficult to catch genuine test failures when even a small number of tests are sufficiently unreliable. If you're interested, overall progress on test reliability in the codebase is tracked at #12191.
