model: add kalm_models ModelMeta #2775
Conversation
Signed-off-by: xinshuohu <xinshuohu@tencent.com>
```python
    "MTOPIntentClassification": ["train"],
}

KaLM_Embedding_X_0605 = ModelMeta(
```
Can you add the implementation of your model? If it is similar to the original KaLM, I can push work on that PR.
> If it similar to original KALM, I can push work on that PR
Yes, it is almost the same implementation as HIT_TMG__KaLM_embedding_multilingual_mini_instruct_v1.
Perhaps the entire set of KaLM-related models should be moved to kalm_models.py.
Yes, I will try to finish work on #2478 over the weekend then.
@YanshekWoo can you try to run your models with the implementation from #2478? It was merged into main.
@Samoed OK, I will try to test it. Thanks.
@Samoed
I have tested the latest version of MTEB (1.38.30), and its results look completely fine now.
Some of the results (from different task types) for HIT-TMG/KaLM-embedding-multilingual-mini-instruct-v1.5 are as follows:
| Task | Reported | Reproduced |
|---|---|---|
| EmotionClassification | 0.86900 | 0.86885 |
| FiQA2018 | 0.44741 | 0.44072 |
| SprintDuplicateQuestions | 0.93057 | 0.930568 |
| STS12 | 0.80167 | 0.801666 |
The evaluation code is as follows:

```python
import mteb

# Specify the model that we want to evaluate
model = mteb.get_model("HIT-TMG/KaLM-embedding-multilingual-mini-instruct-v1.5")

# Specify what you want to evaluate it on
tasks = mteb.get_tasks(tasks=["EmotionClassification", "FiQA2018", "STS12", "SprintDuplicateQuestions"])

# Run the evaluation
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(
    model,
    encode_kwargs={"batch_size": 256},
    verbosity=2,
)
```
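To quantify how closely the reproduced numbers track the reported ones, the table above can be checked with a small helper (a hypothetical plain-Python sketch, not part of mteb; the score pairs are copied from the table):

```python
# Reported vs. reproduced scores for
# HIT-TMG/KaLM-embedding-multilingual-mini-instruct-v1.5
# (values copied from the table above).
scores = {
    "EmotionClassification": (0.86900, 0.86885),
    "FiQA2018": (0.44741, 0.44072),
    "SprintDuplicateQuestions": (0.93057, 0.930568),
    "STS12": (0.80167, 0.801666),
}

def relative_deviation(reported: float, reproduced: float) -> float:
    """Absolute relative difference between a reported and a reproduced score."""
    return abs(reported - reproduced) / reported

for task, (reported, reproduced) in scores.items():
    print(f"{task}: {relative_deviation(reported, reproduced):.4%}")
```

The largest gap is on FiQA2018 (about 1.5% relative), which is within normal run-to-run variation for retrieval tasks; the other three agree to well under 0.1%.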
I have created a new PR via the latest mteb.
Checklist
Model Checklist
`mteb.get_model(model_name, revision)` and `mteb.get_model_meta(model_name, revision)`