
Why do you use a custom gradient applier? #21

Open
dm-mch opened this issue Dec 23, 2016 · 1 comment

Comments

@dm-mch

dm-mch commented Dec 23, 2016

Hello, thank you for your code!
Why don't you use the standard apply_gradients of the RMSProp optimizer? Is there a synchronization issue with multithreading? https://www.tensorflow.org/versions/master/api_docs/python/train/optimizers#Optimizer.apply_gradients

@miyosuda
Owner

miyosuda commented Dec 23, 2016

I created this class because I need to calculate the gradients from the local loss and local variables, but then apply the calculated gradients to the global variables.
I also needed to share the "rms" and "momentum" slots among threads, in order to implement shared RMSProp.

I chose to write my own class, but it might be possible to use the standard RMSPropOptimizer instead, by calling compute_gradients() with the local loss and variables, and then calling apply_gradients() with the calculated gradients paired with the global variables, as in the sketch below.
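
To illustrate that alternative, here is a minimal sketch using the standard tf.train.RMSPropOptimizer (TF 1.x API). The names local_loss, local_vars, and global_vars are hypothetical placeholders for one worker's loss, its thread-local network variables, and the shared global network variables, assumed to be in matching order; this is only a sketch of the idea, not the code used in this repository.

```python
import tensorflow as tf

def make_apply_op(local_loss, local_vars, global_vars,
                  learning_rate=7e-4, decay=0.99, epsilon=0.1):
    # One shared optimizer instance: its "rms"/"momentum" slots are created
    # for the global variables, so all worker threads update the same
    # accumulators (the shared-RMSProp idea).
    optimizer = tf.train.RMSPropOptimizer(learning_rate,
                                          decay=decay,
                                          epsilon=epsilon)

    # Gradients of the worker's loss w.r.t. its thread-local copy.
    grads_and_local_vars = optimizer.compute_gradients(local_loss,
                                                       var_list=local_vars)
    grads = [g for g, _ in grads_and_local_vars]

    # Re-pair the local gradients with the corresponding global variables
    # (assumes local_vars and global_vars are in the same order) and apply.
    return optimizer.apply_gradients(list(zip(grads, global_vars)))
```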
