Updated wavlm and whisper models to fit maeb structure #2572

Merged

Conversation

alisartazkhan

Following the structure of Silky's w2v2 model file: https://github.com/embeddings-benchmark/mteb/blob/maeb/mteb/models/wav2vec2_models.py

Code Quality

  • Code Formatted: Format the code using make lint to maintain consistent style.

Documentation

  • Updated Documentation: Add or update documentation to reflect the changes introduced in this PR.

Testing

  • New Tests Added: Write tests to cover new functionality. Validate with make test-with-coverage.
  • Tests Passed: Run tests locally using make test or make test-with-coverage to ensure no existing functionality is broken.

Adding datasets checklist

Reason for dataset addition: ...

  • I have run the following models on the task (adding the results to the PR). These can be run using the mteb -m {model_name} -t {task_name} command.
    • sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
    • intfloat/multilingual-e5-small
  • I have checked that the performance is neither trivial (both models achieve close to perfect scores) nor random (both models achieve close to random scores).
  • If the dataset is too big (e.g. >2048 examples), consider using self.stratified_subsampling() under dataset_transform(); a sketch is included after this checklist.
  • I have filled out the metadata object in the dataset file (see the mteb documentation for the metadata fields).
  • Run tests locally to make sure nothing is broken using make test.
  • Run the formatter to format the code using make lint.
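
For reference, the command template in the first item expands to, e.g., mteb -m microsoft/wavlm-large -t BeijingOpera with the names used in this PR. The subsampling item could look roughly like the sketch below; the base class, task name, split, and label column are illustrative assumptions, and the exact stratified_subsampling signature may differ between mteb versions.

# Hypothetical sketch for the subsampling checklist item; not part of this PR.
from mteb.abstasks import AbsTaskClassification


class MyLargeAudioClassification(AbsTaskClassification):
    # TaskMetadata definition omitted for brevity.

    def dataset_transform(self):
        # Downsample splits larger than 2048 examples while keeping the
        # label distribution roughly intact.
        self.dataset = self.stratified_subsampling(
            self.dataset,
            seed=self.seed,
            splits=["test"],
            label="label",
            n_samples=2048,
        )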

Adding a model checklist

  • I have filled out the ModelMeta object to the extent possible
  • I have ensured that my model can be loaded using
    • mteb.get_model(model_name, revision) and
    • mteb.get_model_meta(model_name, revision) (a minimal loading check is sketched after this checklist)
  • I have tested the implementation works on a representative set of tasks.
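
A minimal loading check for the two calls above, using the model added in this PR (sketch; the optional revision argument is omitted so the latest Hub revision is used):

import mteb

# Both helpers from the checklist should resolve the newly registered model.
meta = mteb.get_model_meta("microsoft/wavlm-large")
model = mteb.get_model("microsoft/wavlm-large")
print(meta)
print(type(model))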

Run command:

import mteb

# Load the newly added model and a single audio task, then run the evaluation.
model_name = "microsoft/wavlm-large"
model = mteb.get_model(model_name=model_name)
print("model loaded")
tasks = mteb.get_tasks(tasks=["BeijingOpera"])
print("task loaded")
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model)
print("eval complete")

@KennethEnevoldsen
Contributor

We need some linting and there seems to be a failing test

@KennethEnevoldsen KennethEnevoldsen removed the request for review from isaac-chung April 27, 2025 10:47
@mnasser3 mnasser3 changed the title Updated wavlm models to fit maeb structure Updated wavlm and whisper models to fit maeb structure Apr 29, 2025
@KennethEnevoldsen KennethEnevoldsen merged commit 040d5c6 into embeddings-benchmark:maeb May 2, 2025