I created this class because I needed to compute gradients from the local loss and local variables, while applying the resulting gradients to the global variables.
I also needed to share the "rms" and "momentum" slots among threads in order to implement Shared RMSProp.
I chose to write my own class, but it might also be possible to use the standard RMSPropOptimizer by calling compute_gradients() with the local loss and variables, and then calling apply_gradients() with the computed gradients and the global variables (see the sketch below).
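A minimal sketch of that alternative, assuming a TF1-style graph where `local_loss`, `local_vars`, and `global_vars` are already defined and the two variable lists are in matching order; note that, unlike the custom shared class, each thread's optimizer instance here would keep its own rms/momentum slots:

```python
import tensorflow as tf

# Hyperparameters are placeholders, not the values used in this repo.
optimizer = tf.train.RMSPropOptimizer(learning_rate=7e-4, decay=0.99, epsilon=0.1)

# Compute gradients against the thread-local network's loss and variables.
grads_and_vars = optimizer.compute_gradients(local_loss, var_list=local_vars)
grads = [g for g, _ in grads_and_vars]

# Apply those gradients to the shared (global) network variables instead.
apply_op = optimizer.apply_gradients(zip(grads, global_vars))
```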
Hello, thank you for your code!
Why don't you use the standard apply_gradients of the RMSProp optimizer? Is there a sync issue with multithreading? https://www.tensorflow.org/versions/master/api_docs/python/train/optimizers#Optimizer.apply_gradients