
making ongoing evaluation more streamlined #266

Open
ingberam (Collaborator) opened this issue Jan 14, 2024 · 0 comments
Currently, when a new algorithm is submitted, only @harsha-simhadri can evaluate it, since he has the existing results files for all other algorithms on the standard Azure machine (which are needed to produce the updated plots). This is not scalable.

I opened this issue to collect ideas for letting others evaluate algorithms as well, and to consider fully automatic evaluation.

Idea 1: Let others evaluate new algorithms on the standard hardware and update the ongoing leaderboard. This may require simplifying the mechanism for generating plots, which currently requires the full results files for all algorithms (large HDF5 files).
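
For Idea 1, one direction could be exporting only the scalar summary metrics from each results file into a small CSV that can be committed or shared, so plots can be regenerated without passing around the large HDF5 files. A minimal sketch below; the attribute names (`algo`, `dataset`, `recall`, `qps`) and directory layout are hypothetical and do not reflect the repo's actual format:

```python
# Rough sketch (not the project's actual API): walk a results directory of
# HDF5 files and dump only scalar summary attributes to a compact CSV, so
# plots could be regenerated without the full raw result files.
# Attribute names ("algo", "dataset", "recall", "qps") are assumptions.
import csv
import pathlib

import h5py


def export_summaries(results_dir: str, out_csv: str) -> None:
    rows = []
    for path in pathlib.Path(results_dir).rglob("*.hdf5"):
        with h5py.File(path, "r") as f:
            attrs = dict(f.attrs)
            rows.append({
                "file": str(path),
                "algo": attrs.get("algo", ""),
                "dataset": attrs.get("dataset", ""),
                "recall": attrs.get("recall", ""),
                "qps": attrs.get("qps", ""),
            })
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["file", "algo", "dataset", "recall", "qps"]
        )
        writer.writeheader()
        writer.writerows(rows)


if __name__ == "__main__":
    export_summaries("results", "summaries.csv")
```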

Idea 2: Fully automatic evaluation: when someone submits a PR, it is evaluated automatically using the CI or some other method. This does not work today (even the small unit tests are flaky because the type of machine running the CI varies).
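
For Idea 2, one way to make CI checks less sensitive to machine variability could be to gate only on accuracy metrics (which do not depend on the CI machine) against committed baselines with a small tolerance, and treat throughput as informational. A sketch, assuming a hypothetical JSON results/baseline format rather than the repo's current test setup:

```python
# Sketch of a machine-agnostic CI check (an assumption, not the repo's
# current tests): fail only if recall drops below a committed baseline
# minus a tolerance; QPS is printed but never asserted on, since CI
# machine types vary.
import json

RECALL_TOLERANCE = 0.01  # hypothetical slack for nondeterministic builds


def check_results(results_path: str, baseline_path: str) -> None:
    with open(results_path) as f:
        results = json.load(f)   # e.g. {"recall": 0.91, "qps": 1234.5}
    with open(baseline_path) as f:
        baseline = json.load(f)  # e.g. {"recall": 0.90}

    if results["recall"] < baseline["recall"] - RECALL_TOLERANCE:
        raise SystemExit(
            f"recall {results['recall']:.3f} below baseline "
            f"{baseline['recall']:.3f} (tolerance {RECALL_TOLERANCE})"
        )
    # Throughput is informational only: CI hardware differs too much to gate on it.
    print(f"recall OK ({results['recall']:.3f}); qps={results.get('qps')}")


if __name__ == "__main__":
    check_results("new_algo_results.json", "baselines/new_algo.json")
```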

@maumueller @sourcesync I know you have also thought about this. Any ideas?
