
Conversation

@cavdard (Contributor) commented Apr 26, 2022

What does this PR do?

  • Uses smp.rdp_rank() instead of smp.rank() for partial checkpoint saving in should_save.
  • Uses local_state_dict() with partial checkpoint saving.
  • Uses smp.save for SMP.
  • Uses smp.load for SMP. Reorders partial checkpoint loading to happen after the model is wrapped, since smp.load can only load into an SMP model (see the sketch after this list).
  • Updates checks for the existence of checkpoint files, since SMP partial checkpoints append postfixes to the filename (for example: filename_0_0 or filename_0_0_0).
  • Adds load_best_model_at_end support for SMP.
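
For context, here is a minimal sketch of the partial save/load flow described above, assuming model is already wrapped as an smp.DistributedModel; the helper names and the weights_name argument are illustrative, not code from this PR:

import os
import smdistributed.modelparallel.torch as smp

def save_partial_checkpoint(model, output_dir, weights_name):
    # rdp_rank (not rank) decides which processes write, so each model
    # partition is saved exactly once.
    if smp.rdp_rank() == 0:
        state_dict = model.local_state_dict()  # only this partition's weights
        # smp.save with partial=True appends rank postfixes, e.g. weights_name_0_0
        smp.save(state_dict, os.path.join(output_dir, weights_name), partial=True)

def load_partial_checkpoint(model, checkpoint_dir, weights_name):
    # Must run after the model has been wrapped: smp.load can only load into an SMP model.
    state_dict = smp.load(os.path.join(checkpoint_dir, weights_name), partial=True)
    model.load_state_dict(state_dict)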

Fixes # (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • [x] Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

@philschmid requested a review from sgugger on April 27, 2022 at 07:02
@sgugger (Collaborator) left a comment

Thanks for your PR. Two comments on it:

  1. This breaks the current behavior of the Trainer where each checkpoint can be loaded as a model. In particular, this will push to the Hub the partial checkpoints with no config during training when push_to_hub=True (whereas a regular training pushes models that can be used).
  2. The feature is always on. Maybe we should let the user decide if they want it or not?

Comment on lines +2246 to +2249
if is_sagemaker_mp_enabled():
    smp.save(state_dict, os.path.join(output_dir, WEIGHTS_NAME), partial=True)
else:
    self.model.save_pretrained(output_dir, state_dict=state_dict)
Collaborator


Not calling save_pretrained here means the config will not be saved, so the checkpoint won't be loadable with from_pretrained independently of the training. It's not a regular checkpoint anyway, so maybe that's okay; flagging it here anyway.

Contributor Author


SMP checkpoints are saved partially, hence we do not want to shard them. In order to use save_pretrained for SMP, we would need to skip shard_checkpoint for SMP: shard_checkpoint is called regardless of max_shard_size, and we hit errors in it because SMP checkpoints are structured differently. If shard_checkpoint could be made optional in save_pretrained, we could use save_pretrained with save_function=smp.save. In my previous PR I tried to skip shard_checkpoint for SMP, but the feedback was not to change save_pretrained.

from_pretrained won't work for SMP models. We are working on how to support fine-tuning. In this PR, I added support for partial checkpoint saving/loading during training.
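
For illustration, the hypothetical call being discussed would look roughly like the one below, and is only viable if save_pretrained could skip shard_checkpoint for SMP; unwrapped_model stands for the underlying PreTrainedModel and model for the smp.DistributedModel wrapper:

import functools

# Hypothetical usage, not what this PR does: save_pretrained currently always
# calls shard_checkpoint, which fails on SMP partial state dicts.
unwrapped_model.save_pretrained(
    output_dir,
    state_dict=model.local_state_dict(),
    save_function=functools.partial(smp.save, partial=True),
)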

Comment on lines +1198 to +1202
checkpoint_file_exists = (
    glob.glob(os.path.join(resume_from_checkpoint, WEIGHTS_NAME) + "_*")
    if is_sagemaker_mp_enabled()
    else os.path.isfile(os.path.join(resume_from_checkpoint, WEIGHTS_NAME))
)
Collaborator


This is used several times; could we refactor it into a util function that takes the filename?

Contributor Author


I will do that.
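
A minimal sketch of such a helper, assuming it would live next to the existing SageMaker utilities; the name checkpoint_file_exists mirrors the variable above and is illustrative:

import glob
import os

def checkpoint_file_exists(folder, filename):
    # SMP partial checkpoints append rank postfixes (e.g. filename_0_0), so a
    # plain isfile check would miss them; match with a glob instead.
    if is_sagemaker_mp_enabled():
        return len(glob.glob(os.path.join(folder, filename) + "_*")) > 0
    return os.path.isfile(os.path.join(folder, filename))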

@cavdard (Contributor, Author) commented Apr 27, 2022

Thanks for your PR. Two comments on it:

  1. This breaks the current behavior of the Trainer where each checkpoint can be loaded as a model. In particular, this will push to the Hub the partial checkpoints with no config during training when push_to_hub=True (whereas a regular training pushes models that can be used).
  2. The feature is always on. Maybe we should let the user decide if they want it or not?

Thanks for reviewing.

For the user to decide whether to save/load partial checkpoints, we would need new training args. In my previous PR, I got feedback not to introduce new HF training args, so we decided to support partial checkpointing by default.

@sgugger (Collaborator) commented Apr 27, 2022

There are plenty of other ways to control whether a feature is on or off. For instance, you could use the environment variable "SM_HP_MP_PARAMETERS".

Since this partial checkpointing is completely incompatible with from_pretrained, and thus won't work with the Hugging Face Hub and its inference widget, it should be turned off by default.
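
As a rough sketch of that suggestion, the flag could be read from the same env var that is_sagemaker_mp_enabled already parses; the "partial_checkpointing" key below is purely hypothetical:

import json
import os

def partial_checkpointing_enabled():
    # SM_HP_MP_PARAMETERS holds the SageMaker model-parallel config as JSON;
    # "partial_checkpointing" is a hypothetical key used only for illustration.
    try:
        smp_options = json.loads(os.getenv("SM_HP_MP_PARAMETERS", "{}"))
    except json.JSONDecodeError:
        return False
    return bool(smp_options.get("partial_checkpointing", False))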

@cavdard (Contributor, Author) commented May 12, 2022

@sgugger Thanks for your feedback. Based on your comments, we decided to enable partial checkpointing for the optimizer state only, while model weights will be saved in full using save_pretrained.

Here is the link for the new PR: #17219
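
As a rough sketch of that revised approach (full model weights via save_pretrained, partial optimizer state via smp.save), assuming model is the underlying PreTrainedModel, wrapped is the smp.DistributedModel, and optimizer is the SMP-wrapped optimizer; the helper and the OPTIMIZER_NAME constant are illustrative, see #17219 for the actual implementation:

import os
import smdistributed.modelparallel.torch as smp

OPTIMIZER_NAME = "optimizer.pt"

def save_checkpoint(model, wrapped, optimizer, output_dir):
    # state_dict() on the wrapped model is a collective call that gathers the
    # full weights, so the saved model stays loadable with from_pretrained.
    state_dict = wrapped.state_dict()
    if smp.rank() == 0:
        model.save_pretrained(output_dir, state_dict=state_dict)
    # Optimizer state stays partial: one writer per reduced-data-parallel group.
    if smp.rdp_rank() == 0:
        smp.save(optimizer.local_state_dict(), os.path.join(output_dir, OPTIMIZER_NAME), partial=True)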

@github-actions (bot) commented Jun 6, 2022

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions bot closed this on Jun 14, 2022
