Discrepancy between the train time and evaluation time #193
The training times shown on RAMP don't seem to be consistent with the evaluation time.
For instance, for the solar_wind problem, the starting kit says that the train time is 0.54 s (and the test time is 0.17 s), while evaluating that same submission takes ~25 min on the same server.
The evaluation time is consistent with what I get by running ramp_test_submission locally, and the reported train/test time is faster by 3000x, which doesn't make sense even allowing for multiple CV folds.
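For reference, a minimal sketch of how the ~25 min figure can be reproduced by wall-clocking the local run. It assumes ramp-workflow is installed and that `ramp_test_submission` (the command named above) is launched from the kit's root directory; CLI flags vary across ramp-workflow versions:

```python
import subprocess
import time

# Wall-clock the full local evaluation of the current submission.
start = time.perf_counter()
subprocess.run(["ramp_test_submission"], check=True)  # run from the kit directory
elapsed = time.perf_counter() - start
print(f"total wall-clock time: {elapsed:.1f} s")
```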
Comments
As a side note, @glemaitre, don't you think that having "Frontend" etc. labels would be easier to manage than renaming each issue? :)
I feel stupid now :) It seems I only use 10% of GitHub ;)
Be aware that the reported training time is only for a single fold, so you need to multiply it by the number of folds. However, x3000 seems kind of unlikely :)
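A back-of-the-envelope check of that argument, using the numbers from this thread; the fold count below is a hypothetical placeholder, not the solar_wind kit's actual CV setup:

```python
# Per-fold times as reported on RAMP for the solar_wind starting kit.
reported_train_time = 0.54   # seconds, per fold
reported_test_time = 0.17    # seconds, per fold
n_folds = 8                  # hypothetical number of CV folds

expected = n_folds * (reported_train_time + reported_test_time)
observed = 25 * 60           # ~25 min observed evaluation time, in seconds
print(f"expected ≈ {expected:.1f} s, observed ≈ {observed} s "
      f"(ratio ≈ {observed / expected:.0f}x)")
```

Even granting multiple folds, the discrepancy stays in the hundreds, which supports the point that fold counting alone cannot explain a 3000x gap.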
You also need to score, which is sometimes slow.
Uhm, we solved this issue. We were reporting the validation time instead of the training time.
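For illustration, a minimal sketch of the bug class being described, not the actual RAMP code: keeping separate, clearly named timers around fitting and validation makes it hard to report one as the other. The sklearn-style `scorer(model, X, y)` signature is an assumption:

```python
import time

def fit_and_score(model, X_train, y_train, X_valid, y_valid, scorer):
    # Time the training phase on its own.
    t0 = time.perf_counter()
    model.fit(X_train, y_train)
    train_time = time.perf_counter() - t0   # what should be shown as "train time"

    # Time the validation/scoring phase separately.
    t0 = time.perf_counter()
    score = scorer(model, X_valid, y_valid)
    valid_time = time.perf_counter() - t0   # the value that was mistakenly displayed

    return score, train_time, valid_time
```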