To gain insight into generative adversarial networks, I am trying to implement a GAN for the MNIST dataset myself, based on this Stanford university assignment, using TensorFlow.
I reviewed and researched my solutions to the given exercises carefully and made the tests pass. However, my generator just produces noise.
I am fairly sure I got the helper functions right: all the tests pass, and I found references online that show the exact same implementation. So the only place it can go wrong is the discriminator and generator architectures:
def discriminator(x):
    with tf.variable_scope("discriminator"):
        # Two fully connected hidden layers with leaky ReLU activations
        l_1 = leaky_relu(tf.layers.dense(x, 256, activation=None))
        l_2 = leaky_relu(tf.layers.dense(l_1, 256, activation=None))
        # Single output logit per image: real vs. fake (no sigmoid here, the loss applies it)
        logits = tf.layers.dense(l_2, 1, activation=None)
        return logits
def generator(z):
    with tf.variable_scope("generator"):
        # Two fully connected hidden layers with ReLU activations
        l_1 = tf.maximum(tf.layers.dense(z, 1024, activation=None), 0)
        l_2 = tf.maximum(tf.layers.dense(l_1, 1024, activation=None), 0)
        # Output layer maps to a flattened 28x28 image, squashed to [-1, 1] with tanh
        img = tf.tanh(tf.layers.dense(l_2, 784, activation=None))
        return img
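For reference, leaky_relu is one of the assignment's helper functions; mine boils down to the standard formulation, along these lines (the exact alpha value is set by the assignment, 0.01 here is only illustrative):

def leaky_relu(x, alpha=0.01):
    # max(x, alpha * x): positives pass through unchanged, negatives are scaled by alpha
    return tf.maximum(x, alpha * x)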
I can see that the generator and discriminator losses drop close to zero within the first few iterations.
Iter: 0, D: 1.026, G:0.6514
Iter: 50, D: 2.721e-05, G:5.066e-06
Iter: 100, D: 1.099e-05, G:3.084e-06
Iter: 150, D: 7.546e-06, G:1.946e-06
Iter: 200, D: 3.386e-06, G:1.226e-06
...
With a lower learning rate, e.g. 1e-7, the losses decay more slowly for both discriminator and generator, but they eventually drop to zero as well, and the generator still only produces noise.
Iter: 0, D: 1.722, G:0.6772
Iter: 50, D: 1.704, G:0.665
Iter: 100, D: 1.698, G:0.661
Iter: 150, D: 1.663, G:0.6594
Iter: 200, D: 1.661, G:0.6574
...
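For context, these numbers come from a training loop that follows the assignment's template: separate Adam optimizers restricted to the two variable scopes, alternating one discriminator step and one generator step per iteration. A simplified sketch of that setup (variable names and hyperparameters are illustrative and may differ from the actual notebook; mnist, sess, x, batch_size and num_iterations are assumed to be set up as in the assignment, and the noise z is sampled inside the graph):

# Each optimizer only updates the variables created inside its variable_scope
D_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='discriminator')
G_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='generator')

D_solver = tf.train.AdamOptimizer(learning_rate=1e-3, beta1=0.5)
G_solver = tf.train.AdamOptimizer(learning_rate=1e-3, beta1=0.5)
D_train_step = D_solver.minimize(D_loss, var_list=D_vars)
G_train_step = G_solver.minimize(G_loss, var_list=G_vars)

# Alternate one discriminator and one generator update per iteration
for it in range(num_iterations):
    minibatch, _ = mnist.train.next_batch(batch_size)
    _, D_loss_curr = sess.run([D_train_step, D_loss], feed_dict={x: minibatch})
    _, G_loss_curr = sess.run([G_train_step, G_loss])
    if it % 50 == 0:
        print('Iter: {}, D: {:.4}, G:{:.4}'.format(it, D_loss_curr, G_loss_curr))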
I have the TensorFlow graph up and running for my experiment, but so far I have failed to interpret anything meaningful from it. If you have any suggestions or can recommend a debugging technique, I will be happy to hear it.
As requested, here is my code for the GAN loss:
def gan_loss(logits_real, logits_fake):
    # Discriminator targets: real images are labeled 1, fake images 0
    labels_real = tf.ones_like(logits_real)
    labels_fake = tf.zeros_like(logits_fake)
    d_loss_real = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits_real, labels=labels_real)
    d_loss_fake = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits_fake, labels=labels_fake)
    D_loss = tf.reduce_mean(d_loss_real + d_loss_fake)
    # Generator loss, computed against the fake labels
    G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits_fake, labels=labels_fake))
    return D_loss, G_loss
As I understand this model, you should change this:
G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=logits_fake, labels=labels_fake))
to this:
G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=logits_fake, labels=tf.ones_like(logits_fake)))
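With labels of zero on the fake logits, the generator is optimized toward the discriminator's own objective: it learns to produce samples the discriminator classifies as fake, which is why both losses collapse to zero while the output stays noise. The generator should instead be pushed to make the discriminator classify its samples as real. A corrected version of the whole loss function would then look roughly like this (same structure as yours, only the generator labels change):

def gan_loss(logits_real, logits_fake):
    # Discriminator targets: real images should be classified as 1, fake images as 0
    d_loss_real = tf.nn.sigmoid_cross_entropy_with_logits(
        logits=logits_real, labels=tf.ones_like(logits_real))
    d_loss_fake = tf.nn.sigmoid_cross_entropy_with_logits(
        logits=logits_fake, labels=tf.zeros_like(logits_fake))
    D_loss = tf.reduce_mean(d_loss_real + d_loss_fake)
    # Generator target: fake images should be classified as real (label 1),
    # i.e. the non-saturating generator loss
    G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=logits_fake, labels=tf.ones_like(logits_fake)))
    return D_loss, G_loss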