maestro is a tool designed to streamline and accelerate the fine-tuning process for multimodal models. It provides ready-to-use recipes for fine-tuning popular vision-language models (VLMs) such as Florence-2, PaliGemma, and Qwen2-VL on downstream vision-language tasks.
Pip install the maestro package in a Python>=3.8 environment.

```shell
pip install maestro
```
VLMs can be fine-tuned on downstream tasks directly from the command line with the `maestro` command:

```shell
maestro florence2 train --dataset='<DATASET_PATH>' --epochs=10 --batch-size=8
```
Alternatively, you can fine-tune VLMs using the Python SDK, which accepts the same arguments as the CLI example above:
```python
from maestro.trainer.common import MeanAveragePrecisionMetric
from maestro.trainer.models.florence_2 import train, Configuration

config = Configuration(
    dataset='<DATASET_PATH>',
    epochs=10,
    batch_size=8,
    metrics=[MeanAveragePrecisionMetric()]
)

train(config)
```
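The `epochs` and `batch_size` settings behave as they do in any standard training loop: each epoch iterates over the dataset once, in batches of the given size. As a rough sanity check when budgeting a run (plain Python arithmetic, not part of the maestro API), the total number of optimizer steps scales as:

```python
import math

def total_optimizer_steps(dataset_size: int, epochs: int, batch_size: int) -> int:
    """Steps a standard training loop performs: one optimizer step per batch,
    counting the final partial batch of each epoch."""
    return epochs * math.ceil(dataset_size / batch_size)

# With the settings above (epochs=10, batch_size=8) and, say, 1000 training images:
print(total_optimizer_steps(1000, epochs=10, batch_size=8))  # 1250 steps
```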
Explore our collection of notebooks that demonstrate how to fine-tune various vision-language models using maestro. Each notebook provides step-by-step instructions and code examples to help you get started quickly.
| model and task | colab | video |
|---|---|---|
| Fine-tune Florence-2 for object detection | | |
| Fine-tune Florence-2 for visual question answering (VQA) | | |
We would love your help in making this repository even better! We are especially looking for contributors with experience in fine-tuning vision-language models (VLMs). If you notice any bugs or have suggestions for improvement, feel free to open an issue or submit a pull request.