
Tensorboard logging should use num_grad_updates not batch_idx #835

Closed
ibeltagy opened this issue Feb 14, 2020 · 2 comments
Labels
bug Something isn't working

Comments

@ibeltagy
Contributor

When accumulate_grad_batches > 1, the x-axis in tensorboard should be number of gradient updates, not number of batches that have been processed.
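The drift is easy to see in a minimal sketch (pure Python, hypothetical function name; the counters mirror Lightning's `batch_idx` and the number of `optimizer.step()` calls): logging against `batch_idx` advances the TensorBoard x-axis on every batch, while with gradient accumulation the optimizer only steps once per `accumulate_grad_batches` batches.

```python
def simulate_logging(num_batches, accumulate):
    """Compare the two candidate x-axis counters for TensorBoard.

    Returns (steps_logged_by_batch_idx, steps_logged_by_grad_updates).
    """
    by_batch_idx, by_grad_updates = [], []
    num_grad_updates = 0
    for batch_idx in range(num_batches):
        # forward/backward runs every batch, so batch_idx advances every time
        by_batch_idx.append(batch_idx)
        # but optimizer.step() only runs once per `accumulate` batches
        if (batch_idx + 1) % accumulate == 0:
            num_grad_updates += 1
            by_grad_updates.append(num_grad_updates)
    return by_batch_idx, by_grad_updates

# With accumulate_grad_batches=4 over 8 batches, batch_idx yields 8 x-axis
# points even though only 2 gradient updates actually happened.
batch_steps, update_steps = simulate_logging(num_batches=8, accumulate=4)
print(len(batch_steps), update_steps)
```

So with `accumulate_grad_batches=4`, an x-axis based on `batch_idx` runs four times faster than the true number of gradient updates, which is what this issue asks to fix.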

@ibeltagy ibeltagy added the bug Something isn't working label Feb 14, 2020
@timofeev1995

Have the authors changed this logic on the master branch? Is one TensorBoard step now equal to one optimizer step?
Thanks!

@ibeltagy
Contributor Author

ibeltagy commented Mar 3, 2020

AFAICT, this was fixed in #832

@ibeltagy ibeltagy closed this as completed Mar 3, 2020