```yaml
### model
model_name_or_path: meta-llama/Llama-3.2-11B-Vision-Instruct
trust_remote_code: true

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all

### dataset
dataset: mllm_demo
template: mllama
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama_vision/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
```
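As a quick sanity check on the schedule implied by the config above, the effective batch size per optimizer step is the product of the per-device batch size, the gradient-accumulation steps, and the number of devices. A minimal sketch (the `num_devices` value is an assumption, since the issue does not state how many GPUs are used):

```python
# Values taken from the ### train section of the config above.
per_device_train_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 1  # assumption: single-GPU run; not stated in the issue

# Effective number of training examples consumed per optimizer step.
effective_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_devices
)
print(effective_batch_size)  # → 8
```

With `per_device_train_batch_size: 1` this keeps per-GPU memory low (important for an 11B vision model) while still averaging gradients over 8 examples per update.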
### Reminder

### System Info

None

### Reproduction

None

### Expected behavior

No response

### Others

No response