Tags: python, machine-learning, deep-learning, autoencoder, encoder-decoder

Encoder-Decoder noise problem after decoding


I have an array of size (12960,) and I'm using a very simple dense autoencoder architecture to reproduce the array, as shown below.

from tensorflow.keras.layers import Input, Dense

input_img = Input(shape=(12960,))

# compress the 12960-dim input down to 2000 dims, then reconstruct it
encoded = Dense(units=2000, activation='relu')(input_img)
decoded = Dense(units=12960, activation='relu')(encoded)

I'm training the model for 20 epochs with a batch size of 64.
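For completeness, the rest of the training setup looks roughly like the following sketch. The exact loss, optimizer, and data variable are not shown above, so mean squared error, Adam, and a placeholder name x_train are assumed here:

from tensorflow.keras.models import Model

# build the autoencoder from the layers defined above
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='mse')

# x_train is a placeholder for the (num_samples, 12960) training arrays
autoencoder.fit(x_train, x_train, epochs=20, batch_size=64, shuffle=True)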

But when I plot the array after decoding, I get some kind of noise (or 0 values) in many places. I have attached the original and decoded images below. Can someone explain why this is happening? I'm new to deep learning, so I don't have much idea about how it works. Is it because I'm using a very simple architecture, or because I'm compressing too much while encoding?

Original

Decoded


Solution

  • Try using LeakyReLU instead of ReLU (see the code sketch below).

    This might be happening because ReLU is defined as ReLU(x) = max(0, x), so it returns 0 for every negative input. Since your output layer also uses ReLU, any negative values in the original array cannot be reproduced and come out as 0, which shows up as the noisy/zero regions in the decoded image. LeakyReLU keeps a small slope for negative inputs instead.
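A minimal sketch of how that change could look, assuming the same Keras setup as in the question; LeakyReLU is applied as a separate layer after each Dense layer, and the alpha value here is just an example:

from tensorflow.keras.layers import Input, Dense, LeakyReLU

input_img = Input(shape=(12960,))

# Dense layers without a built-in activation, each followed by LeakyReLU
encoded = Dense(units=2000)(input_img)
encoded = LeakyReLU(alpha=0.2)(encoded)

decoded = Dense(units=12960)(encoded)
decoded = LeakyReLU(alpha=0.2)(decoded)

With LeakyReLU on the output layer, negative values in the target array are no longer clipped to 0 and can actually be reconstructed.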