Results for BarExamQA PR #240
Conversation
umarbutler left a comment
I can confirm that I have reviewed and approve this PR on behalf of Isaacus.
I think you've run the model incorrectly. You should use the model returned by mteb.get_model.
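For reference, a minimal sketch of that workflow, assuming "BarExamQA" is the registered task name from the companion PR and a current mteb release that exposes mteb.get_model and mteb.get_tasks:

```python
import mteb

# Load the model through mteb's registry so the correct ModelMeta is attached
# to the produced results, rather than instantiating SentenceTransformer directly.
model = mteb.get_model("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# "BarExamQA" is assumed to be the task name registered by the companion PR.
tasks = mteb.get_tasks(tasks=["BarExamQA"])

evaluation = mteb.MTEB(tasks=tasks)
evaluation.run(model, output_folder="results")
```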
Hey @Samoed,
These results were generated by following the instructions for adding a dataset to MTEB: https://github.com/embeddings-benchmark/mteb/blob/main/docs/adding_a_dataset.md#submit-a-pr
The exact same code was used:
```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Define the sentence-transformers model name
model_name = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"

model = SentenceTransformer(model_name)
evaluation = MTEB(tasks=[YourNewTask()])

# Run the evaluation and write the results to disk
evaluation.run(model, output_folder=f"results/{model_name}")
```
I see. I think it's a bit outdated. Thanks for pointing that out!
Can you rerun this model using the updated instructions in embeddings-benchmark/mteb#2922?
Yep, so I reran the model using mteb.get_model and it seems to produce the right model metadata. I believe the Sentence Transformers version of the model is newer, which led to the different metadata.
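A quick way to see which metadata mteb will attach is to inspect the registry entry directly. A minimal sketch, assuming mteb.get_model_meta is exported by the installed mteb version:

```python
import mteb

model_name = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"

# Metadata as registered in mteb's model registry
meta = mteb.get_model_meta(model_name)
print(meta.name, meta.revision)

# Loading via get_model attaches this registered metadata to the results;
# wrapping SentenceTransformer directly may record different (newer) fields.
model = mteb.get_model(model_name)
```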
Hi,
I’m submitting this pull request to push the results of intfloat/multilingual-e5-small and sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 on BarExamQA. This PR is connected to embeddings-benchmark/mteb#2916, which adds BarExamQA to MTEB.
This pull request is being submitted courtesy of Isaacus, a legal AI research company.
Checklist
- I have added the model to mteb/models/ (this can be as an API). Instructions on how to add a model can be found here