This repo contains all the code necessary to train Generative Adversarial Networks (GANs) with discrete outputs.
Normally this is not possible: sampling a discrete output is not differentiable, so the discriminator's error cannot be back-propagated through the sampling step to the generator.
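A minimal illustration of the problem, assuming PyTorch (the snippet is only a sketch, not part of the repo's code): taking an `argmax` over the generator's scores returns an integer tensor that is cut off from the autograd graph.

```python
import torch

logits = torch.randn(4, 10, requires_grad=True)  # e.g. generator scores over 10 discrete symbols
tokens = torch.argmax(logits, dim=-1)            # discrete decision

print(tokens.dtype)          # torch.int64 -- an integer tensor
print(tokens.requires_grad)  # False: no gradient can flow back to `logits`
```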
To overcome this problem, two solutions are proposed in this repo.
The first solution uses reinforcement learning to estimate the generator's gradient: discrete outputs are sampled from the generator's output distribution, and the negative generator loss is treated as the reward.
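As a rough sketch of this idea (not the repo's actual implementation): the generator's logits define a categorical distribution, a discrete sample is drawn, and the log-probability of that sample is weighted by the reward. The `discriminator` callable and the function name below are assumptions for illustration.

```python
import torch

def reinforce_generator_loss(gen_logits, discriminator):
    """Surrogate loss whose gradient is a REINFORCE-style estimate.

    gen_logits: (batch, seq, vocab) unnormalised scores from the generator.
    discriminator: assumed callable that scores a batch of discrete samples,
                   returning (batch, 1); higher = more "real".
    """
    dist = torch.distributions.Categorical(logits=gen_logits)
    sample = dist.sample()                         # discrete tokens, no gradient
    log_prob = dist.log_prob(sample).sum(dim=-1)   # log-probability of the whole sample

    with torch.no_grad():
        # Reward = negative generator loss, here taken as the discriminator's score.
        reward = discriminator(sample).squeeze(-1)
        baseline = reward.mean()                   # simple variance-reduction baseline

    # Minimising this raises the log-probability of high-reward samples.
    return -((reward - baseline) * log_prob).mean()
```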
The second solution is the straight-through estimator, the same trick VQ-VAE uses to learn a discrete latent space (codebook). It adds a residual to the generator's continuous output so that the forward pass produces discrete values; because the residual is detached from the graph, the addition does not break the gradient, and the model can be trained like a normal GAN.
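A minimal sketch of the straight-through trick, again assuming PyTorch and a hypothetical helper name: the forward pass returns a hard one-hot vector, while the backward pass sees the gradient of the soft distribution.

```python
import torch
import torch.nn.functional as F

def straight_through_onehot(gen_logits):
    """Forward: hard one-hot sample. Backward: gradient of the soft softmax output."""
    soft = F.softmax(gen_logits, dim=-1)                    # continuous, differentiable
    index = soft.argmax(dim=-1, keepdim=True)
    hard = torch.zeros_like(soft).scatter_(-1, index, 1.0)  # discrete one-hot, no gradient

    # The detached residual makes the value equal to `hard`
    # while the gradient is that of `soft`.
    return soft + (hard - soft).detach()
```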