Models missing after leaderboard code refresh #2096
Apologies for the delayed response, I didn't see your reply until today!
This page is missing our models, despite them showing up on the old MTEB leaderboard. Results for our models also appear to be missing from the BEIR page, even though MTEB Retrieval was adapted from BEIR and the datasets overlap (as seen in the figure from the MTEB paper).
They do appear if you press "Allow All": they are simply being filtered out now because they are not zero-shot on the leaderboard. As an important provider in the embedding-model space, we would love to hear your opinion on how we should set the default over on #2119 (the discussion is currently ongoing, but we will probably make a decision soon-ish).
I believe this resolves the issue, but do add to the discussion over on #2119.
Ah, I think I have found the source of the confusion:
This description is misleading, because the page has changed which models are displayed. I will chime in separately on #2119, but the choice of showing all models that do not report their training data while disqualifying those that do report it gives a very misleading view!
For some reason the new leaderboard only displays Snowflake's Arctic Embed v1.5 model, dropping both our v1 and v2 models from the listings. The actual MTEB eval results are still present in the model cards of all v1 and v2 models, so it seems that something went awry in the leaderboard code changes themselves.