update yaml with lora_finetune_fsdp2
Summary:

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:
weifengpy committed Jun 3, 2024
1 parent 1a692b3 commit 8fbbc4b
Showing 3 changed files with 8 additions and 8 deletions.
6 changes: 3 additions & 3 deletions recipes/configs/dev/llama2/13B_lora_fsdp2.yaml
@@ -1,17 +1,17 @@
-# Config for multi-device LoRA in lora_finetune_distributed.py
+# Config for multi-device LoRA with FSDP2 in lora_finetune_fsdp2.py
 # using a Llama2 13B model
 #
 # This config assumes that you've run the following command before launching
 # this run:
 # tune download meta-llama/Llama-2-13b-hf --output-dir /tmp/Llama-2-13b-hf --hf-token <HF_TOKEN>
 #
 # To launch on 4 devices, run the following command from root:
-# tune run --nnodes 1 --nproc_per_node 4 lora_finetune_distributed --config llama2/13B_lora
+# tune run --nnodes 1 --nproc_per_node 4 lora_finetune_fsdp2 --config llama2/13B_lora
 #
 # You can add specific overrides through the command line. For example
 # to override the checkpointer directory while launching training
 # you can run:
-# tune run --nnodes 1 --nproc_per_node 4 lora_finetune_distributed --config llama2/13B_lora checkpointer.checkpoint_dir=<YOUR_CHECKPOINT_DIR>
+# tune run --nnodes 1 --nproc_per_node 4 lora_finetune_fsdp2 --config llama2/13B_lora checkpointer.checkpoint_dir=<YOUR_CHECKPOINT_DIR>
 #
 # This config works best when the model is being fine-tuned on 2+ GPUs.
 # For single device LoRA finetuning please use 7B_lora_single_device.yaml
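For reference, a minimal shell sketch of the workflow described in the updated 13B config header, assuming torchtune's tune CLI is installed and that lora_finetune_fsdp2 resolves to the new recipe (both taken from the diff above); /tmp/Llama-2-13b-hf is just the output directory used in the example:

# Download the base model weights (requires a Hugging Face token)
tune download meta-llama/Llama-2-13b-hf --output-dir /tmp/Llama-2-13b-hf --hf-token <HF_TOKEN>

# Launch the FSDP2 LoRA recipe on a single node with 4 GPUs
tune run --nnodes 1 --nproc_per_node 4 lora_finetune_fsdp2 --config llama2/13B_lora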
4 changes: 2 additions & 2 deletions recipes/configs/dev/llama2/70B_lora_fsdp2.yaml
@@ -1,12 +1,12 @@
-# Config for multi-device LoRA in lora_finetune_distributed.py
+# Config for multi-device LoRA with FSDP2 in lora_finetune_fsdp2.py
 # using a Llama2 70B model
 #
 # This config assumes that you've run the following command before launching
 # this run:
 # tune download meta-llama/Llama-2-70b-hf --output-dir /tmp/Llama-2-70b-hf --hf-token <HF_TOKEN>
 #
 # This config needs 8 GPUs to run
-# # tune run --nproc_per_node 8 lora_finetune_distributed --config llama2/70B_lora
+# # tune run --nproc_per_node 8 lora_finetune_fsdp2 --config llama2/70B_lora
 #

 # Model Arguments
6 changes: 3 additions & 3 deletions recipes/configs/dev/llama2/7B_lora_fsdp2.yaml
@@ -1,17 +1,17 @@
-# Config for multi-device LoRA finetuning in lora_finetune_distributed.py
+# Config for multi-device LoRA finetuning with FSDP2 in lora_finetune_fsdp2.py
 # using a Llama2 7B model
 #
 # This config assumes that you've run the following command before launching
 # this run:
 # tune download meta-llama/Llama-2-7b-hf --output-dir /tmp/Llama-2-7b-hf --hf-token <HF_TOKEN>
 #
 # To launch on 2 devices, run the following command from root:
-# tune run --nnodes 1 --nproc_per_node 2 lora_finetune_distributed --config llama2/7B_lora
+# tune run --nnodes 1 --nproc_per_node 2 lora_finetune_fsdp2 --config llama2/7B_lora
 #
 # You can add specific overrides through the command line. For example
 # to override the checkpointer directory while launching training
 # you can run:
-# tune run --nnodes 1 --nproc_per_node 2 lora_finetune_distributed --config llama2/7B_lora checkpointer.checkpoint_dir=<YOUR_CHECKPOINT_DIR>
+# tune run --nnodes 1 --nproc_per_node 2 lora_finetune_fsdp2 --config llama2/7B_lora checkpointer.checkpoint_dir=<YOUR_CHECKPOINT_DIR>
 #
 # This config works best when the model is being fine-tuned on 2+ GPUs.
 # For single device LoRA finetuning please use 7B_lora_single_device.yaml
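Likewise, a sketch of the command-line override pattern mentioned in the 7B config header, copied from the diff above; <YOUR_CHECKPOINT_DIR> is a placeholder for an actual checkpoint path and is left unfilled here:

# Launch the 7B FSDP2 LoRA recipe on 2 devices, overriding the checkpointer directory
tune run --nnodes 1 --nproc_per_node 2 lora_finetune_fsdp2 --config llama2/7B_lora checkpointer.checkpoint_dir=<YOUR_CHECKPOINT_DIR>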
