Feature description
We have been using benchmarks extensively to validate and steer development. Alas, we have no way to include them in our CI pipelines. Never mind that the code has not fully settled in place (at least as of this writing): the benchmarks take an inordinate amount of time to run, and a developer cannot wait a day for a CI check to come back. We need shorter yet meaningful benchmarks, and suitable infrastructure for them to run repeatably on every code submission.
Implementation considerations
LAMBKIN's black-box techniques may be reused on shorter datasets, but there may be other tools and approaches (e.g. SLAMBench) worth drawing ideas from, considering that in the performance regression testing case we would be benchmarking Beluga against a modified version of itself.
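To make the regression-testing idea concrete, a minimal sketch of what a CI gate comparing a candidate build against a baseline could look like. Everything here is an assumption for illustration: the metric file format, the metric names, and the 10% tolerance are hypothetical, not part of Beluga's or LAMBKIN's actual tooling.

```python
# Hypothetical CI performance-regression gate. Assumes each benchmark run
# produces a JSON file mapping metric names to values where lower is
# better (e.g. trajectory error, CPU time). Not Beluga's real tooling.
import json
import sys

TOLERANCE = 0.10  # assumed: fail if the candidate is >10% worse


def load_metrics(path):
    """Read a {metric_name: value} mapping from a JSON file."""
    with open(path) as f:
        return json.load(f)


def regressions(baseline, candidate, tolerance=TOLERANCE):
    """Return {name: (baseline, candidate)} for metrics past tolerance."""
    bad = {}
    for name, base in baseline.items():
        cand = candidate.get(name)
        if cand is not None and cand > base * (1.0 + tolerance):
            bad[name] = (base, cand)
    return bad


if __name__ == "__main__":
    base = load_metrics(sys.argv[1])   # e.g. metrics from the main branch
    cand = load_metrics(sys.argv[2])   # e.g. metrics from the PR branch
    failed = regressions(base, cand)
    for name, (b, c) in sorted(failed.items()):
        print(f"REGRESSION {name}: {b:.3f} -> {c:.3f}")
    sys.exit(1 if failed else 0)
```

A fixed relative tolerance is the simplest possible policy; in practice benchmark noise would likely call for repeated runs and a statistical test instead of a single-sample threshold.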