# You Actually Look Twice At it (YALTAi)
YALTAi provides an adapter for Kraken to use the YOLOv8 object-detection routine (since the 1.0.0 update; use a previous version to reuse YOLOv5 models).
The tool can be used both for converting training data and for segmenting documents.
```shell
pip install YALTAi
```
## Convert (and optionally split) your data
```shell
# Keep 10% of the data in the validation set and convert all ALTO files into YOLOv5 format
# Keep the Segmonto information down to the region level
yaltai convert alto-to-yolo PATH/TO/ALTOorPAGE/*.xml my-dataset --shuffle .1 --segmonto region
```
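Under the hood, YOLO-format labels store one object per line as `class x_center y_center width height`, with all coordinates normalized to `[0, 1]`. A minimal sketch of converting a pixel-space bounding box (as found in ALTO/PAGE zones) to such a line (the helper `to_yolo_line` is illustrative, not YALTAi's actual API):

```python
def to_yolo_line(class_id: int, x0: int, y0: int, x1: int, y1: int,
                 img_w: int, img_h: int) -> str:
    """Turn a pixel-space box (x0, y0, x1, y1) into a YOLO label line."""
    # YOLO wants the box center and size, normalized by the image dimensions
    xc = (x0 + x1) / 2 / img_w
    yc = (y0 + y1) / 2 / img_h
    w = (x1 - x0) / img_w
    h = (y1 - y0) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# A 200x200 px region on a 1000x2000 px page
print(to_yolo_line(0, 100, 200, 300, 400, 1000, 2000))
# → 0 0.200000 0.150000 0.200000 0.100000
```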
Then train YOLO:
```shell
yolo task=detect mode=train model=yolov8n.pt data=my-dataset/config.yml epochs=100 plots=True device=0 batch=8 imgsz=960
```
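The `data=` argument points YOLO at the dataset config that the conversion step generates. For orientation, a YOLO data config typically looks like the sketch below; the paths, class count, and class names here are illustrative assumptions, not YALTAi's exact output:

```yaml
# Illustrative YOLO dataset config (the real file is generated by yaltai)
train: ./train/images   # training images; labels are looked up in a sibling folder
val: ./val/images       # validation images (the 10% held out by --shuffle .1)
nc: 2                   # number of region classes (assumption)
names: ["MainZone", "MarginTextZone"]  # Segmonto region names (assumption)
```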
YALTAi has the same CLI interface as Kraken, so you can:

- use the base BLLA model for lines or provide your own with `-i model.mlmodel`
- use a GPU (`--device cuda:0`) or a CPU (`--device cpu`)
- apply it to a batch of images (`*.jpg`)
```shell
# Retrieve the best.pt file after training;
# it should be in runs/train/exp[NUMBER]/weights/best.pt.
# Then annotate your new data with the same CLI API as Kraken!
yaltai kraken --device cuda:0 -I "*.jpg" --suffix ".xml" segment --yolo runs/train/exp5/weights/best.pt
```
The metrics produced by different libraries never yield the same mAP or precision. I tried `object-detection-metrics==0.4`, `mapCalc`, and `mean-average-precision`, which ended up being the chosen one (cleanest in terms of how the information can be accessed), and of course I compared the results with raw YOLOv5 output. No two agreed. And the library from which YOLOv5 derives its metrics cannot be installed through pip.
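Part of the discrepancy likely comes from small implementation choices: how IoU handles box edges (some toolkits add +1 pixel to widths and heights), the IoU threshold used for matching, and whether precision is interpolated at 11 recall points (PASCAL VOC 2007 style) or over all recall points (COCO style). The IoU computation at the core of every mAP implementation is only a few lines; here is a generic sketch (not the code of any of the libraries above):

```python
def iou(box_a: tuple, box_b: tuple) -> float:
    """Intersection over union of two boxes given as (x0, y0, x1, y1)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    ix0 = max(box_a[0], box_b[0])
    iy0 = max(box_a[1], box_b[1])
    ix1 = min(box_a[2], box_b[2])
    iy1 = min(box_a[3], box_b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partial overlap: 25 / 175
```

Note that a "+1 pixel" convention on the widths above would already shift every IoU slightly, which is enough to move boxes across the matching threshold and change the reported mAP.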