
Conversation

@BlackNoodle (Contributor) commented Oct 18, 2024

What does this PR do?

In #34198, the line `loss *= self.args.gradient_accumulation_steps` was introduced ("Negate accelerate grad accum div") to correct errors encountered during gradient accumulation. However, that scaling should only happen when `compute_loss_func` is used, so this PR modifies the code accordingly.
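
As a rough illustration of the intended guard (a minimal sketch that assumes the loss variable and attribute names used in this discussion; it is not the actual Trainer source):

```python
# Illustrative sketch only, not the merged Trainer code: apply the scaling
# from #34198 only when a custom compute_loss_func is in use.
if self.compute_loss_func is not None:
    # Undo Accelerate's division by the number of gradient accumulation
    # steps; the custom loss function is assumed to normalize via
    # num_items_in_batch instead.
    loss *= self.args.gradient_accumulation_steps
```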

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@muellerzr
@ArthurZucker

@ArthurZucker (Collaborator) left a comment

cc @muellerzr compute_loss_func is never None right?

@muellerzr (Contributor) left a comment

No, it should always be used, period. The correct check is whether we passed in `num_items_in_batch` or not; if not, then we can multiply.

(There are model loss functions which accept `num_items_in_batch` as part of their forward.)

@muellerzr (Contributor) commented Oct 24, 2024

This will conflict with #34373; however, the correct version of this check is either whether `compute_loss_func` is used or whether `self.model_accepts_loss_kwargs` is True.
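
A sketch of what that condition could look like (names taken from this thread; this is an assumption about the shape of the fix, not the merged implementation):

```python
# Hedged sketch of the suggested check: rescale when either a custom loss
# function is supplied or the model's forward accepts loss kwargs such as
# num_items_in_batch.
if self.compute_loss_func is not None or self.model_accepts_loss_kwargs:
    loss *= self.args.gradient_accumulation_steps
```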
