Tags: python, tensorflow, machine-learning, neural-network, generative-adversarial-network

Loss functions in GANs


I'm trying to build a simple MNIST GAN and, needless to say, it didn't work. I've searched a lot and fixed most of my code, but I still can't really understand how the loss functions work.

This is what I did:

loss_d = -tf.reduce_mean(tf.log(discriminator(real_data))) # maximize
loss_g = -tf.reduce_mean(tf.log(discriminator(generator(noise_input), trainable=False))) # maximize, since we use log(d(g)) instead of log(1 - d(g))
loss = loss_d + loss_g

train_d = tf.train.AdamOptimizer(learning_rate).minimize(loss_d)
train_g = tf.train.AdamOptimizer(learning_rate).minimize(loss_g)

I get -0.0 as my loss value. Can you explain how to deal with loss functions in GANs?


Solution

  • It seems you are summing the generator and discriminator losses together, which is completely wrong! Since the discriminator trains on both real and generated data, you have to create two distinct losses: one for the real data and another for the noise (generated) data that you pass into the discriminator network. It is also better to use tf.nn.sigmoid_cross_entropy_with_logits than hand-written tf.log terms: once the discriminator's sigmoid output saturates at 1, tf.log returns exactly 0 (which is likely why you see -0.0), and it produces NaN when the output reaches 0. Note that with the _with_logits variant your discriminator should return raw logits, i.e. no final sigmoid activation.

    Try to change your code as follows:

    1)

    d_real = discriminator(real_data)  # raw logits on real images
    loss_d_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_real, labels=tf.ones_like(d_real)))
    

    2)

    d_fake = discriminator(generator(noise_input))  # raw logits on generated images
    loss_d_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_fake, labels=tf.zeros_like(d_fake)))
    

    The discriminator loss is then loss_d = loss_d_real + loss_d_fake (see the training sketch after this list). Now create the loss for your generator:

    3)

    d_gen = discriminator(generated_samples)  # labels are ones: the generator wants these classified as real
    loss_g = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_gen, labels=tf.ones_like(d_gen)))
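
    Finally, give each network its own optimizer and restrict it to its own variables with var_list, so a discriminator step never updates the generator's weights and vice versa. Below is a minimal end-to-end sketch; the 'discriminator'/'generator' scope names, the real_data/noise_input placeholders, and num_steps/real_batch/noise_batch are assumptions about how your graph and training loop are built, not part of the original code:

    # Assumes the networks were built inside 'discriminator' and 'generator' variable scopes.
    d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='discriminator')
    g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='generator')

    loss_d = loss_d_real + loss_d_fake  # total discriminator loss

    # Each optimizer only touches its own network's weights, so the two
    # updates do not fight each other the way a single summed loss would.
    train_d = tf.train.AdamOptimizer(learning_rate).minimize(loss_d, var_list=d_vars)
    train_g = tf.train.AdamOptimizer(learning_rate).minimize(loss_g, var_list=g_vars)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(num_steps):  # num_steps, real_batch, noise_batch come from your own input pipeline
            # Alternate updates: one discriminator step, then one generator step.
            sess.run(train_d, feed_dict={real_data: real_batch, noise_input: noise_batch})
            sess.run(train_g, feed_dict={noise_input: noise_batch})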