Hi all,
Thanks for your amazing project!
I have a question: if I want to add dropout to the network used for policy gradient, how can I do that?
I think that to do this I would need to change the code completely. Right now the workflow is as follows:
take the state -> do a forward pass -> get the output -> compute the gradient -> build a new <input, output> pair from it -> train the network on that <input, output> pair for one epoch -> repeat.
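Roughly, I mean something like the sketch below (the model shape, the loss, and the helper `make_training_targets` are placeholders I made up for illustration, assuming `tf.keras`):

```python
import numpy as np
from tensorflow import keras

# Small policy network; note there is no Dropout layer yet.
policy_model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    keras.layers.Dense(2, activation="softmax"),
])
policy_model.compile(optimizer="adam", loss="categorical_crossentropy")

def make_training_targets(probs, actions, advantages):
    # Fold the policy-gradient signal into pseudo-targets so that
    # fitting with cross-entropy reproduces the REINFORCE gradient.
    targets = np.zeros_like(probs)
    targets[np.arange(len(actions)), actions] = 1.0
    return targets * advantages[:, None]

# One iteration: forward pass -> build <input, output> -> fit for one epoch.
states = np.random.rand(16, 4).astype("float32")    # placeholder batch of states
actions = np.random.randint(0, 2, size=16)          # placeholder sampled actions
advantages = np.random.rand(16).astype("float32")   # placeholder returns
probs = policy_model.predict(states, verbose=0)
targets = make_training_targets(probs, actions, advantages)
policy_model.fit(states, targets, epochs=1, verbose=0)
```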
However, to add dropout, I think the workflow needs to change as follows:
take the state -> do a forward pass -> get the output -> compute the gradient -> backpropagate the gradient -> update the network parameters -> repeat.
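If I'm not mistaken, the gradient would then have to be backpropagated through the very same dropout-masked forward pass, rather than through a separate `fit` call. A minimal sketch of what I imagine, assuming TensorFlow 2-style `tf.keras` with eager execution (shapes and data are placeholders):

```python
import tensorflow as tf
from tensorflow import keras

# Same policy network, but now with a Dropout layer in the graph.
policy_model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(2, activation="softmax"),
])
optimizer = keras.optimizers.Adam()

states = tf.random.uniform((16, 4))                           # placeholder batch of states
actions = tf.random.uniform((16,), maxval=2, dtype=tf.int32)  # placeholder sampled actions
advantages = tf.random.uniform((16,))                         # placeholder returns

with tf.GradientTape() as tape:
    # training=True keeps dropout active, and the gradient below is
    # taken through this exact forward pass (same dropout mask).
    probs = policy_model(states, training=True)
    chosen = tf.reduce_sum(tf.one_hot(actions, 2) * probs, axis=1)
    loss = -tf.reduce_mean(advantages * tf.math.log(chosen + 1e-8))

grads = tape.gradient(loss, policy_model.trainable_variables)
optimizer.apply_gradients(zip(grads, policy_model.trainable_variables))
```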
I think this would really complicate things with an automatic differentiation system like Keras. Any ideas?
Thanks a lot for your help!
Best,