[EVAL] #265

@barneylogo

Evaluation short description

I've run the nanotron training script on my own fineweb dataset, so all I have as a result are the model checkpoints.
[screenshot of the checkpoint directory]
I'd like to evaluate these checkpoints using lighteval.
Could you please help with this?
I've tried the following command:

accelerate launch --multi_gpu --num_processes=8 \
    -m lighteval accelerate \
    --model_args "pretrained=./checkpoints/10" \
    --tasks "lighteval|truthfulqa:mc|0|0" \
    --override_batch_size 1 \
    --output_dir="./evals/"

but I'm not sure this is the right way to point lighteval at a nanotron checkpoint. How should the checkpoints be used for evaluation?
Python version: 3.10.12
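
For reference, here is a rough sketch of what I imagine evaluating the nanotron checkpoint directly might look like, assuming the installed lighteval version ships the nanotron backend and that the checkpoint folder contains a config.yaml; the subcommand and option names are assumptions to verify against `lighteval nanotron --help`, and lighteval_config.yaml is a hypothetical config file listing the tasks to run:

# Hypothetical invocation of lighteval's nanotron backend (not verified on this setup).
# --checkpoint-config-path: the config.yaml saved inside the nanotron checkpoint dir (assumed).
# --lighteval-config-path: a lighteval config listing tasks, batch size, parallelism (hypothetical file).
torchrun --standalone --nproc_per_node=8 -m \
    lighteval nanotron \
    --checkpoint-config-path ./checkpoints/10/config.yaml \
    --lighteval-config-path ./lighteval_config.yaml

Alternatively, if the nanotron checkpoint were first converted to Hugging Face format, the accelerate command above should work as-is with pretrained= pointing at the converted model directory.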
