Avoid the problem of gradient backpropagation being truncated when fine-tuning CLIP text encoder in diffusers #31758

Open
liming-ai wants to merge 2 commits into main
Conversation


@liming-ai liming-ai commented Jul 2, 2024


What does this PR do?

Fixes # (issue)
In many cases we may want to fine-tune the CLIP text encoder. However, naively modifying the training code, such as train_t2i_model in diffusers, to set text_encoder.requires_grad_(True) leads to gradient backpropagation being truncated and produces training errors like the following (a sketch of this kind of modification is shown after the traceback):

Traceback (most recent call last):
  File "code/diffusers/examples/instruct_pix2pix/train_instruct_pix2pix.py", line 1261, in <module>
    main()
  File "code/diffusers/examples/instruct_pix2pix/train_instruct_pix2pix.py", line 1136, in main
    accelerator.backward(loss)
  File "/usr/local/lib/python3.9/dist-packages/accelerate/accelerator.py", line 2134, in backward
    loss.backward(**kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torch/_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "/usr/local/lib/python3.9/dist-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 77]] is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
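
For context, the kind of modification that leads to this is sketched below. This is a minimal, self-contained sketch rather than the actual diffusers script: the checkpoint name, optimizer settings, prompt, and dummy loss are illustrative only, and in the real training loop the loss is the diffusion denoising loss.

import torch
from transformers import CLIPTextModel, CLIPTokenizer

# Illustrative checkpoint; the diffusers scripts load the text encoder from
# the pipeline checkpoint instead.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

# Unfreeze the text encoder so its weights receive gradients.
text_encoder.requires_grad_(True)
text_encoder.train()

optimizer = torch.optim.AdamW(text_encoder.parameters(), lr=1e-5)

inputs = tokenizer(
    ["a photo of a cat"],
    padding="max_length",
    max_length=tokenizer.model_max_length,
    return_tensors="pt",
)

# Gradients now flow back through the text encoder as well.
encoder_hidden_states = text_encoder(inputs.input_ids)[0]

# Dummy loss standing in for the diffusion denoising loss.
loss = encoder_hidden_states.pow(2).mean()
loss.backward()
optimizer.step()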

To fix this issue, we should avoid position_ids being affected by in-place modification by cloning the slice taken from the buffer:

position_ids = self.position_ids[:, :seq_length].clone()
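
To see why the clone helps, here is a small, self-contained PyTorch illustration (a toy example, not the CLIP code): an embedding lookup saves its index tensor for backward, and an in-place update of the underlying buffer invalidates that saved tensor unless the slice is cloned first.

import torch

buf = torch.arange(10)            # stands in for the registered position_ids buffer
emb = torch.nn.Embedding(10, 4)

# Without clone: the slice is a view of the buffer, and the embedding saves the
# index tensor for its backward pass.
ids = buf[:5]
out = emb(ids).sum()
buf += 1                          # in-place update bumps the shared version counter
try:
    out.backward()
except RuntimeError as err:
    print("fails:", err)          # "... modified by an inplace operation ..."

# With clone: the slice gets its own storage and version counter, so later
# in-place updates of the buffer no longer touch the tensor saved for backward.
ids = buf[:5].clone()
out = emb(ids).sum()
buf += 1
out.backward()                    # succeeds

In CLIPTextEmbeddings the sliced position_ids plays the role of ids here, so cloning it keeps backward working even if the buffer is modified in place later in the training step.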

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@liming-ai liming-ai changed the title Avoid the problem of gradient backpropagation being truncated and leads to error Avoid the problem of gradient backpropagation being truncated when fine-tuning CLIP text encoder in diffusers Jul 2, 2024
Collaborator

@amyeroberts amyeroberts left a comment


Hi @liming-ai, thanks for opening this PR and adding this fix!

Running make fix-copies and then make fixup should resolve the failing quality checks

@liming-ai
Author


Done, could you please check this PR again?

@amyeroberts
Collaborator

@liming-ai Could you rebase on main to include any upstream changes? I think this should resolve the failing tokenization tests
