
wan-gp #8

Open
yaxingwang opened this issue May 11, 2019 · 7 comments
@yaxingwang

Hi @taki0112

Thanks for your contribution. I am trying your code. What I am running is the following:

python main.py --dataset celebs --gan_type hinge --img_size 128

which works.

But when I try
python main.py --dataset celebs --gan_type wgan-gp --img_size 128 --critic_num 5

it gets stuck at
self.d_optim = tf.train.AdamOptimizer(self.d_learning_rate, beta1=self.beta1, beta2=self.beta2).minimize(self.d_loss, var_list=d_vars)

Did you test this?

@taki0112
Owner

wgan-gp takes a strangely long time.
I haven't found the cause yet.

@yaxingwang
Author

Thanks. In fact, wgan-lp does not work either.

@syning94

syning94 commented Jul 4, 2019

I have the same problem. It seems the gradient penalty cannot be back-propagated successfully.

As far as I can tell, the code gets stuck inside tf.gradients() while building the gradient ops. But almost every WGAN-GP gradient penalty is implemented this way.

I cannot solve this. Does anyone have suggestions?
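
For reference, the pattern in question looks roughly like this (a minimal TF1 sketch; discriminator, real_images, and fake_images are illustrative names, not this repository's actual ones):

import tensorflow as tf

def gradient_penalty(discriminator, real_images, fake_images):
    # interpolate between real and fake samples
    batch_size = tf.shape(real_images)[0]
    alpha = tf.random_uniform([batch_size, 1, 1, 1], minval=0., maxval=1.)
    interpolated = alpha * real_images + (1. - alpha) * fake_images

    # discriminator score for the interpolated samples
    logits = discriminator(interpolated, reuse=True)

    # tf.gradients() inserts gradient ops into the graph here; when the
    # optimizer's minimize() later differentiates the loss, it must
    # differentiate through these ops again (second-order gradients),
    # which can make graph construction very slow for large networks
    grads = tf.gradients(logits, interpolated)[0]
    norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]) + 1e-10)
    return tf.reduce_mean(tf.square(norm - 1.))

If that is what is happening here, the hang at minimize() would be slow graph construction from the second-order gradients rather than a true deadlock, which would also match the owner's observation that wgan-gp eventually runs but takes a strangely long time.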

@xuhui1994

I wonder why there is nothing in the results folder during the training phase. I look forward to hearing from you.

@xuhui1994

I tried to use wgan-gp, and it got stuck for a long time. At the time I even thought it did not work at all.

@Orchid0714

Hi, I tried this code with a small amount of data and got a ResourceExhaustedError, so I want to know how to change the setting gpu_device = '/gpu:0' in the code to use 4 GPUs. Thank you!
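
For what it's worth, changing gpu_device = '/gpu:0' to a different string only moves the whole graph onto another single GPU. Using 4 GPUs in TF1 requires building explicit per-device towers, roughly like the sketch below (build_tower_loss is a hypothetical per-shard loss builder, not a function in this repository):

import tensorflow as tf

NUM_GPUS = 4

def build_parallel_loss(batch, build_tower_loss):
    # split the batch so each GPU processes its own shard
    shards = tf.split(batch, NUM_GPUS, axis=0)
    tower_losses = []
    for i in range(NUM_GPUS):
        with tf.device('/gpu:%d' % i):
            # build the model and its loss for this shard; variable
            # sharing across towers (tf.variable_scope reuse) is assumed
            # to be handled inside build_tower_loss
            tower_losses.append(build_tower_loss(shards[i]))
    # average the per-GPU losses before handing them to the optimizer
    return tf.reduce_mean(tf.stack(tower_losses))

A full data-parallel setup would also average gradients across towers rather than just losses; this sketch only shows the device placement.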

@manvirvirk

Hi,
I am getting a memory error:

Total size of variables: 198818145
Total bytes of variables: 795272580
[*] Reading checkpoints...
[*] Failed to find a checkpoint
[!] Load failed...

I am using an NVIDIA GeForce RTX card with 6 GB of memory and 32 GB of system RAM. Can you help me solve this?
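
A common first step for TF1 out-of-memory errors (a generic suggestion, not a setting this repository necessarily exposes) is to let the session allocate GPU memory on demand instead of reserving it all upfront:

import tensorflow as tf

# allocate GPU memory incrementally instead of grabbing it all at startup
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

If peak usage still exceeds the 6 GB card, lowering --img_size (or the batch size, if the script exposes such a flag) is the more reliable fix.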
