Conversation

@Maegereg (Contributor) commented Nov 10, 2025

Covers addition, multiplication, and equality (contains, for arbs) for acb, arb, fmpq, and fmpz.

The primary goal is to use these to measure the performance effect of using the stable API (#338), but they could be useful for other things in the future.

I'm particularly looking for feedback on whether this should include additional types or operations.
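For concreteness, here is a minimal sketch (not the actual benchmark code in this PR) of the kind of measurement described above, using python-flint and the standard library's `timeit`; the operand values and iteration counts are arbitrary illustrative choices, and the `contains` call follows the description above:

```python
# Rough sketch, not this PR's benchmark code: time a few of the operations
# listed above on python-flint types using only the standard library.
import timeit
from flint import fmpz, fmpq, arb

a, b = fmpz(3) ** 200, fmpz(5) ** 200   # large integers
p, q = fmpq(1, 3), fmpq(22, 7)          # rationals
x, y = arb(1.5), arb(1.5)               # real balls

print("fmpz add:     ", timeit.timeit(lambda: a + b, number=100_000))
print("fmpz mul:     ", timeit.timeit(lambda: a * b, number=100_000))
print("fmpq add:     ", timeit.timeit(lambda: p + q, number=100_000))
print("arb contains: ", timeit.timeit(lambda: x.contains(y), number=100_000))
```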

@oscarbenjamin (Collaborator)

Is there some package that could be used for benchmarking here?

Ideally, you want to be able to compare two different versions and see whether any differences are statistically significant.

@oscarbenjamin (Collaborator)

The failed CI job is possibly due to the Cython constraint and might be fixed after gh-350.

@Maegereg (Contributor, PR author)

Is there some package that could be used for benchmarking here?

I was initially assuming that we'd want to follow the philosophy of the tests and keep things pretty minimal. But I've done a bit of research now, and it looks like pyperf could be useful here: it has good support for running a suite of benchmarks and for comparing multiple runs, which would let us compare different builds of the library. We'd still need either some manual effort to set up the different builds in separate environments, or some scripting on top of pyperf to automate that a little (I was planning to do that anyway if we weren't using pyperf).

If that sounds reasonable to you, I can rewrite these benchmarks to use pyperf. I plan to leave the scaffolding for handling multiple builds to a future PR, so that for now we can focus on whether these are the right things to measure.
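To make that concrete, here is a rough sketch of what one of these benchmarks could look like under pyperf; the benchmark name, operand sizes, and file names are illustrative assumptions rather than code from this PR:

```python
# Sketch of a pyperf benchmark for fmpz addition in python-flint.
# pyperf spawns worker processes and handles calibration and statistics.
import pyperf
from flint import fmpz

a, b = fmpz(3) ** 200, fmpz(5) ** 200   # illustrative large operands

def fmpz_add():
    a + b

runner = pyperf.Runner()
runner.bench_func("fmpz_add", fmpz_add)
```

Each build's results can then be written to a file (e.g. running the script with `-o old.json` against one build and `-o new.json` against the other), and `python -m pyperf compare_to old.json new.json` reports whether the differences are statistically significant, which is the comparison asked about above.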
