wgan-gp #8
Comments
wgan-gp took a strangely long time.
Thanks. In fact, wgan-lp also does not work.
I have the same problem. It seems the gradient penalty cannot be back-propagated successfully, as far as I can tell. I haven't been able to solve this. Does anyone have suggestions?
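For reference, the penalty being discussed is evaluated at random interpolates between real and fake samples. A minimal numpy sketch (a hypothetical linear critic, not the repository's code; for a linear critic the input gradient is exactly its weight vector, so the term is checkable by hand):

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(size=(4, 2))
fake = rng.normal(size=(4, 2))
eps = rng.uniform(size=(4, 1))
x_hat = eps * real + (1.0 - eps) * fake   # per-sample interpolates

w = np.array([0.6, 0.8])                  # linear critic D(x) = x @ w
# The input gradient of a linear critic at every x_hat is just w, so the
# penalty lambda * (||grad||_2 - 1)^2 reduces to a norm computation.
grad_norm = np.linalg.norm(w)             # ||w||_2 == 1 for this choice
penalty = 10.0 * (grad_norm - 1.0) ** 2   # lambda = 10, as in the WGAN-GP paper
```

With `||w||_2 = 1` the critic is exactly 1-Lipschitz along `w`, so the penalty vanishes; during training the critic is nonlinear and the gradient must come from autodiff instead.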
I wonder why there is nothing in the results folder during the training phase. I look forward to hearing from you.
I tried to use wgan-gp, and it was stuck for a long time. I even thought it wasn't working at all.
Hi, I tried this code with a small amount of data and got a ResourceExhaustedError, so I want to know how to change the setting gpu_device = '/gpu:0' in the code to use 4 GPUs. Thank you!
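A note on the usual first step (an assumption about the setup, not a full fix): CUDA decides which physical GPUs a process can see via the CUDA_VISIBLE_DEVICES environment variable. Exposing four GPUs this way is necessary but not sufficient here, because the script pins its ops to the single device named in gpu_device; actually splitting work across the four GPUs would additionally require a multi-tower rewrite with tf.device.

```shell
# Expose four GPUs to the process; indices refer to physical devices.
export CUDA_VISIBLE_DEVICES=0,1,2,3
# Hypothetical invocation (flags as used earlier in this thread):
# python main.py --dataset celebs --gan_type wgan-gp --img_size 128
```

Note also that a ResourceExhaustedError is out-of-memory on one GPU; reducing the batch size or img_size may be enough without any multi-GPU changes.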
hi |
Hi @taki0112
Thanks for your contribution. I am trying your code. Here is what I am using:
python main.py --dataset celebs --gan_type hinge --img_size 128
which works.
But when I try
python main.py --dataset celebs --gan_type wgan-gp --img_size 128 --critic_num 5
it gets stuck at
self.d_optim = tf.train.AdamOptimizer(self.d_learning_rate, beta1=self.beta1, beta2=self.beta2).minimize(self.d_loss, var_list=d_vars)
Did you test this?
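One plausible explanation for the long stall (a hypothesis, not confirmed by the maintainer): the wgan-gp loss contains the critic's input gradient, so building minimize for d_loss requires differentiating through a gradient, i.e. constructing second-order derivative ops for the whole critic graph, which can take a long time at graph-build time rather than being truly hung. A toy 1-D sketch of that second-order dependence, with a hypothetical critic D(x) = a*x**2 chosen so the arithmetic is checkable:

```python
# Toy critic D(x) = a * x**2; its input gradient is dD/dx = 2*a*x.
a, x = 1.5, 0.7
grad_x = 2.0 * a * x                       # 2.1: the quantity the penalty sees
penalty = (abs(grad_x) - 1.0) ** 2         # gradient penalty at this x

# Training needs d(penalty)/da -- a derivative OF a derivative. Here grad_x > 0,
# so the chain rule gives 2*(grad_x - 1) * d(grad_x)/da = 2*(grad_x - 1)*2*x.
dpen_da = 2.0 * (abs(grad_x) - 1.0) * 2.0 * x

# Central finite-difference check of the analytic value.
eps = 1e-6
pen = lambda a_: (abs(2.0 * a_ * x) - 1.0) ** 2
fd = (pen(a + eps) - pen(a - eps)) / (2.0 * eps)
```

In the TF1 graph, that chain rule is what tf.gradients has to unroll symbolically over every critic layer when minimize is called, which is why construction is slow for a 128x128 model but eventually completes.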