Commit
Patch ctgan batchsize1 (#263)
* reuse encoders

* ensure categorical encoder is trained on real and synthetic

* better transformer

* remove unnecessary imports

* better error message

* compatibility with DDIM

* Fix CTGAN for batchsize=1

A `.squeeze()` call caused errors during training whenever a batch had size 1. Squeezing only the last dimension fixes this.
bvanbreugel authored Apr 12, 2024
1 parent de3fce4 commit 44cd36d
Showing 1 changed file with 1 addition and 1 deletion.
src/synthcity/plugins/core/models/gan.py (1 addition, 1 deletion):

@@ -682,7 +682,7 @@ def _loss_gradient_penalty(
         interpolated = (
             alpha * real_samples + ((1 - alpha) * fake_samples)
         ).requires_grad_(True)
-        d_interpolated = self.discriminator(interpolated).squeeze()
+        d_interpolated = self.discriminator(interpolated).squeeze(-1)
         labels = torch.ones((len(interpolated),), device=self.device)

         # Get gradient w.r.t. interpolates
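The shape issue behind this fix can be sketched without the full model. The discriminator returns one score per sample, shape `(batch, 1)`; a bare `squeeze()` removes every singleton dimension, so for a batch of size 1 the output collapses to a 0-d scalar and no longer matches the `(batch,)`-shaped labels. Squeezing only the last axis keeps the batch dimension. A minimal NumPy illustration (not synthcity code; the array names are hypothetical):

```python
import numpy as np

# Discriminator output for a batch of size 1: shape (1, 1)
d_out = np.ones((1, 1))

# squeeze() removes ALL singleton dims -> 0-d scalar, shape ()
# This no longer lines up with labels of shape (1,)
assert d_out.squeeze().shape == ()

# squeeze(-1) removes only the last dim -> shape (1,), matching labels
assert d_out.squeeze(-1).shape == (1,)

# For batch sizes > 1 both variants happen to agree,
# which is why the bug only surfaced when a batch had size 1
d_out4 = np.ones((4, 1))
assert d_out4.squeeze().shape == (4,)
assert d_out4.squeeze(-1).shape == (4,)
```

PyTorch's `Tensor.squeeze(dim)` behaves the same way, which is why pinning the squeezed dimension to `-1` is the minimal fix.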
