Update defaults in FSDPStrategy for limit_all_gathers and backward_prefetch (pytorch#457)

Summary:
Pull Request resolved: pytorch#457

The FSDP defaults recently changed upstream (https://github.com/pytorch/pytorch/pull/104900/files), so this updates the corresponding defaults here as well.
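For illustration only, a minimal sketch (assuming the torchtnt import path and field names shown in the diff below) of how the new defaults surface to callers:

# Sketch: constructing FSDPStrategy with no arguments should now pick up the
# same defaults that FSDP itself uses after pytorch/pytorch#104900.
from torch.distributed.fsdp import BackwardPrefetch

from torchtnt.utils.prepare_module import FSDPStrategy

strategy = FSDPStrategy()
assert strategy.backward_prefetch == BackwardPrefetch.BACKWARD_PRE
assert strategy.limit_all_gathers is True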

TODO: is there a better way to keep these dataclasses in sync with pytorch?
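One possible answer to that TODO, sketched here as an assumption rather than anything in this PR: a unit test that compares FSDPStrategy's dataclass defaults against FSDP's own constructor defaults, so a future upstream change fails CI instead of drifting silently.

# Hypothetical sync check (not part of this change): compare matching field
# names between the torchtnt dataclass and FSDP.__init__ defaults.
import dataclasses
import inspect

from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

from torchtnt.utils.prepare_module import FSDPStrategy


def test_fsdp_strategy_defaults_in_sync() -> None:
    fsdp_defaults = {
        name: param.default
        for name, param in inspect.signature(FSDP.__init__).parameters.items()
        if param.default is not inspect.Parameter.empty
    }
    for field in dataclasses.fields(FSDPStrategy):
        if field.name in fsdp_defaults:
            assert field.default == fsdp_defaults[field.name], (
                f"{field.name}: torchtnt default {field.default!r} "
                f"differs from FSDP default {fsdp_defaults[field.name]!r}"
            )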

Reviewed By: daniellepintz

Differential Revision: D47449311

fbshipit-source-id: f2cfc152fa34dc5bf695ad941d72a3a2e0f4dfe7
rohan-varma authored and facebook-github-bot committed Jul 14, 2023
1 parent 8f6a4ad commit 3cbec3c
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions torchtnt/utils/prepare_module.py
@@ -67,11 +67,11 @@ class FSDPStrategy(Strategy):
     sharding_strategy: Optional[ShardingStrategy] = None
     cpu_offload: Optional[CPUOffload] = None
     auto_wrap_policy: Optional[Callable[[torch.nn.Module, bool, int], bool]] = None
-    backward_prefetch: Optional[BackwardPrefetch] = None
+    backward_prefetch: Optional[BackwardPrefetch] = BackwardPrefetch.BACKWARD_PRE
     ignored_modules: Optional[Iterable[torch.nn.Module]] = None
     sync_module_states: bool = False
     forward_prefetch: bool = False
-    limit_all_gathers: bool = False
+    limit_all_gathers: bool = True
     use_orig_params: bool = False

     # FSDP set_state_dict_type params: https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.FullyShardedDataParallel.set_state_dict_type
