machine-learning, neural-network, unsupervised-learning, rbm

Deep autoencoder using RBM


I'm implementing a deep autoencoder using RBMs. I understand that, for unfolding the network, we need to use the transposed weights of the encoder for the decoder, but I'm not sure which biases we should use for the decoder. I'd appreciate it if anyone could elaborate on this for me or send me a link to pseudocode.


Solution

  • I believe Geoff Hinton makes all of his source code available on his website. He is the go-to guy for the RBM version of this technique.

    Basically, if you have an input matrix M1 with dimension 10000 x 100, where 10000 is the number of samples you have and 100 is the number of features, and you want to transform it into a 50-dimensional space, you would train a restricted Boltzmann machine with a weight matrix of dimensionality 101 x 50, with the extra row being the bias unit that is always on. On the decoding side you would then take your 101 x 50 matrix, drop the extra row for the bias (making it a 100 x 50 matrix), transpose it to 50 x 100, and then add another row for the bias unit, making it 51 x 100; this new bias row holds the RBM's visible biases, learned during pre-training. You can then run the entire network through backpropagation to fine-tune the weights of the overall network.
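
    The bookkeeping above can be sketched in NumPy. This is a minimal illustration, not Hinton's actual code; the parameter values are placeholders (in practice `W` and both bias vectors would come from contrastive-divergence training of the RBM), but it shows which bias goes on which side when the network is unrolled:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Shapes from the answer above: 100 visible features, 50 hidden units.
    n_visible, n_hidden = 100, 50

    # Hypothetical pre-trained RBM parameters (placeholders here).
    W = rng.normal(scale=0.01, size=(n_visible, n_hidden))  # 100 x 50 weights
    b_hidden = np.zeros(n_hidden)    # hidden bias: the "extra row" on the encoder side
    b_visible = np.zeros(n_visible)  # visible bias: reused as the decoder's bias row

    def encode(v):
        # Encoder: weights W plus the RBM's hidden bias.
        return sigmoid(v @ W + b_hidden)

    def decode(h):
        # Decoder: transposed weights W.T plus the RBM's *visible* bias.
        return sigmoid(h @ W.T + b_visible)

    M1 = rng.random((10000, n_visible))  # 10000 samples x 100 features
    codes = encode(M1)                   # 10000 x 50
    recon = decode(codes)                # 10000 x 100
    print(codes.shape, recon.shape)
    ```

    After this unrolling, the whole encoder–decoder stack is treated as one feed-forward network and fine-tuned with backpropagation on the reconstruction error.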