From 3ee6cc3db46819ba587572d5f51b0d837ae19527 Mon Sep 17 00:00:00 2001
From: Jason Senthil
Date: Thu, 10 Aug 2023 14:39:26 -0700
Subject: [PATCH] update checkpoint docs to use unit instead of auto unit

Reviewed By: ananthsub

Differential Revision: D48245705

fbshipit-source-id: 815d79c2184c34198e374be6440c4c74a816bc86
---
 docs/source/checkpointing.rst | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/docs/source/checkpointing.rst b/docs/source/checkpointing.rst
index 6a7dc746d0..b1a3b65e84 100644
--- a/docs/source/checkpointing.rst
+++ b/docs/source/checkpointing.rst
@@ -6,7 +6,7 @@ TorchTNT offers checkpointing via the :class:`~torchtnt.framework.callbacks.Torc
 
 .. code-block:: python
 
     module = nn.Linear(input_dim, 1)
-    unit = MyAutoUnit(module=module)
+    unit = MyUnit(module=module)
     tss = TorchSnapshotSaver(
         dirpath=your_dirpath_here,
         save_every_n_train_steps=100,
@@ -32,7 +32,8 @@ The state dict type to be used for checkpointing FSDP modules can be specified i
         # sets state dict type of FSDP module
         state_dict_type=STATE_DICT_TYPE.SHARDED_STATE_DICT
     )
-    unit = MyAutoUnit(module=module, strategy=fsdp_strategy)
+    module = prepare_fsdp(module, strategy=fsdp_strategy)
+    unit = MyUnit(module=module)
     tss = TorchSnapshotSaver(
         dirpath=your_dirpath_here,
         save_every_n_epochs=2,
@@ -49,9 +50,9 @@ Or you can manually set this using `FSDP.set_state_dict_type
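
The docs snippet this patch edits configures a `TorchSnapshotSaver` with `save_every_n_train_steps=100`. As a rough, library-free sketch of that save cadence (the class and method names below are illustrative stand-ins, not TorchTNT APIs), the callback fires on every Nth completed train step:

```python
class ToyCheckpointSaver:
    """Toy stand-in for a checkpoint callback that 'saves' every n train steps."""

    def __init__(self, save_every_n_train_steps: int) -> None:
        self.n = save_every_n_train_steps
        self.saved_steps: list[int] = []

    def on_train_step_end(self, step: int) -> None:
        # Fire on step n, 2n, 3n, ... mirroring a save_every_n_train_steps cadence.
        if step % self.n == 0:
            self.saved_steps.append(step)


saver = ToyCheckpointSaver(save_every_n_train_steps=100)
for step in range(1, 301):  # 1-indexed completed train steps
    saver.on_train_step_end(step)
print(saver.saved_steps)  # [100, 200, 300]
```

In the real library the callback is passed to the train entry point alongside the unit, and the saving itself snapshots the unit's state to `dirpath` rather than recording step numbers.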