Replies: 3 comments 7 replies
I don't know if such benchmarks necessarily need to run on GHA or even live in the main repository (I tentatively made https://github.com/form-dev/form-bench a while ago), but having a standard set of benchmarks which people can at least run on their own machines while making code alterations would indeed be useful. Currently I use something which is essentially a bash script.
The tests take between 1 s and 2 min (tform, 12 cores), and I run them multiple times with hyperfine to get a run time plus standard deviation. I have the tests configured to use little or no disk. Maybe this set of tests is a good starting point?
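A hyperfine-driven run of the kind described above might look roughly like the sketch below. This is not the author's actual script; the input file name `bench.frm` is assumed, while `tform -w12` (12 worker threads) and the hyperfine flags are standard usage:

```shell
# Sketch of a hyperfine-driven FORM benchmark run (bench.frm is an
# assumed file name). --warmup discards an initial run, --runs sets
# the sample size; hyperfine then reports mean run time and standard
# deviation, and --export-json saves the raw numbers for later plotting.
hyperfine --warmup 1 --runs 5 \
    --export-json bench-results.json \
    'tform -w12 bench.frm'
```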
Of course the mincer code is public. Only unfinished code is not public (yet).
On 25 Aug 2025, at 11:21, jodavies wrote:
The status is:
- trace: already public (fu)
- mincer: depends on mincer2c.h (not public, I think; an optimised version of the public mincer) and treatgzgz.prc
- mincerex: already public, https://www.nikhef.nl/~form/maindir/packages/mincer/mincerex.tgz
- mass-factorisation: private code from Andreas Vogt, which he used to compare the performance of machines; I can ask about making it public
- forcer + forcer-exp: public, https://github.com/form-dev/form/blob/master/check/extra/forcer.frm
- mbox1l: my code, already public, https://github.com/form-dev/form/blob/master/check/user.frm
- color: public, https://www.nikhef.nl/~form/maindir/packages/color/color.html
- chromatic poly: as I understand https://arxiv.org/pdf/hep-ph/0702279, it was once distributed with FORM
So mincer needs @vermaseren's "OK", and I will ask Andreas about mass-factorisation.
I can already put the rest in form-bench.
Here is a prototype of a benchmarking setup on GitHub Actions: https://tueda.github.io/form-bench-results-wip1/dev/bench/.
This has come up a few times, most recently in PR701, and it makes sense to have a standard set of FORM benchmark programs in the repository. We already have the FORM unit benchmark, which computes the Dirac trace of 14 gamma matrices. It might be helpful to discuss what minimal (and/or extended) set of benchmarks should be included, and optionally reconsider the directory structure for organising them.
It may also be useful to set up GitHub Actions to run benchmarks, using something like the GitHub Action for Continuous Benchmarking. Even just running the FORM unit benchmark could be useful. Although results would certainly fluctuate due to a noisy host environment, it should still be possible to detect major performance regressions (e.g., a slowdown of about 100%) and to track trends in execution time. To detect environmental changes, we can also include a baseline benchmark unrelated to FORM, such as `openssl speed`, or, to mitigate the effects of such changes, we can use the ratio relative to the baseline (FORM benchmark / baseline). Any thoughts?
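The ratio idea can be sketched in a few lines of shell. The timing values here are hypothetical placeholders; in practice `form_time` would come from the FORM benchmark and `baseline_time` from a fixed baseline such as `openssl speed` run on the same host:

```shell
# Hypothetical timings in seconds (placeholders, not measured values):
# form_time from the FORM benchmark, baseline_time from a FORM-independent
# baseline such as 'openssl speed' on the same runner.
form_time=12.5
baseline_time=2.5

# Normalised score: a slow or busy runner inflates both numbers, so the
# ratio is less sensitive to host noise than the raw wall-clock time.
ratio=$(awk -v f="$form_time" -v b="$baseline_time" 'BEGIN { printf "%.2f", f / b }')
echo "ratio = $ratio"
```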