diff --git a/docs/adding_a_model.md b/docs/adding_a_model.md
index c2976e1b4b..0e22e1ea3a 100644
--- a/docs/adding_a_model.md
+++ b/docs/adding_a_model.md
@@ -30,10 +30,10 @@ These will save the results in a folder called `results/{model_name}/{model_revi
 
 2. **Push Results to the Leaderboard**
 
-To add results to the public leaderboard you can push your results to the [results repository](https://github.com/embeddings-benchmark/results) afterwards they will appear on the leaderboard after a day.
+To add results to the public leaderboard, you can push your results to the [results repository](https://github.com/embeddings-benchmark/results) via a PR. Once merged, they will appear on the leaderboard after a day.
 
-3. (Optional) **Add the results using to the model card:**
+3. (Optional) **Add results to the model card:**
 
 `mteb` implements a cli for adding results to the model card:
 
@@ -49,7 +49,7 @@ If the readme already exists:
 mteb create_meta --results_folder results/{model_name}/{model_revision} --output_path model_card.md --from_existing your_existing_readme.md
 ```
 
-Note that if you can run the model on many tasks, this can lead to an excessively large readme frontmatter.
+Note that running the model on many tasks may lead to a very large readme front matter.
 
 4. **Wait for a refresh the leaderboard:**
 
@@ -70,4 +70,4 @@ The leaderboard [automatically refreshes daily](https://github.com/embeddings-be
 
 ###### Instantiating the Model with Prompts
 
- If you are unable to directly add the prompts in the model configuration, you can instantiate the model using the `sentence_transformers_loader` and pass `prompts` as an argument. For more details, see the `mteb/models/bge_models.py` file.
\ No newline at end of file
+ If you are unable to directly add the prompts in the model configuration, you can instantiate the model using the `sentence_transformers_loader` and pass `prompts` as an argument. For more details, see the `mteb/models/bge_models.py` file.
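
For context when reviewing this patch, the `create_meta` workflow the doc describes can be sketched as below. This is an illustrative sketch, not part of the patch: the `{model_name}`/`{model_revision}` placeholders are taken from the doc, the commands assume `mteb` is installed, and the first (no `--from_existing`) variant is inferred from the surrounding doc text rather than shown in this hunk.

```bash
# Generate model-card front matter from a local results folder
# (placeholders as used in docs/adding_a_model.md)
mteb create_meta --results_folder results/{model_name}/{model_revision} --output_path model_card.md

# If the model already has a README, merge the results into it instead
mteb create_meta --results_folder results/{model_name}/{model_revision} --output_path model_card.md --from_existing your_existing_readme.md
```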