Having read this text, I learned that I can create what people call "reconstructions" by turning on only a single hidden unit and then Gibbs sampling the visible units given the hidden units.
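To make sure I understand that idea, here is a minimal sketch of it for a plain, fully connected binary RBM (the names are mine, and I take a single mean-field top-down pass instead of repeated Gibbs steps):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruction_from_unit(W, b_vis, unit_index):
    """Turn on a single hidden unit and take one top-down pass.

    W          -- weight matrix, shape (n_visible, n_hidden)
    b_vis      -- visible biases, shape (n_visible,)
    unit_index -- index of the hidden unit to activate
    """
    h = np.zeros(W.shape[1])
    h[unit_index] = 1.0                 # only this hidden unit is active
    # P(v = 1 | h) for binary visible units; one could also sample
    # repeatedly (Gibbs) instead of taking the mean-field expectation
    return sigmoid(W @ h + b_vis)
```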
But now I am trying to implement a Convolutional Restricted Boltzmann Machine in Python. My plan is to stick to the version presented in Section 3.2 (so, note, I don't intend to implement the Convolutional Deep Belief Network yet), and to add Probabilistic Max-Pooling only once that part is working.
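Concretely, the part of Section 3.2 I am implementing is the bottom-up conditional P(h^k_ij = 1 | v) = sigmoid((W̃^k * v)_ij + b_k), where W̃^k is the filter flipped horizontally and vertically. A rough sketch of my reading of it, assuming binary units, a single input channel, and "valid" convolutions (variable names are mine):

```python
import numpy as np
from scipy.signal import correlate2d

def hidden_probs(v, W, b):
    """Bottom-up pass of a binary CRBM (as I read Section 3.2).

    v -- input image, shape (H, W)
    W -- K convolution filters, shape (K, h, w)
    b -- one shared hidden bias per filter, shape (K,)
    """
    # Convolving with the flipped filter W~ equals cross-correlating
    # with the filter itself, hence correlate2d here.
    pre = np.stack([correlate2d(v, W[k], mode='valid') + b[k]
                    for k in range(W.shape[0])])
    return 1.0 / (1.0 + np.exp(-pre))   # shape (K, H-h+1, W-w+1)
```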
To check that it is working, I wanted to create "features" like those presented in the article (e.g., Figure 3). The learned first-layer features closely resemble those learned by other types of networks, but I am not sure how the authors create those feature images. It is not clear to me whether those learned "features" are simply the weights of the filters, or whether I should somehow create a reconstruction by turning on all hidden units of a given filter. I am also not sure how relevant Section 3.6 is to my simpler version (in which I don't even have Probabilistic Max-Pooling).
(I tried both, and my results still look completely different; I am not sure whether it is a bug in my code or whether I am simply doing something wrong.)
Any help? (I found this code randomly on the internet, but I am still new to Matlab syntax and haven't yet figured out what they do to create reconstructions -- assuming they do.)
Well yes, I was already wondering why they didn't provide details in this paper on how they plot their bases for the higher layers.
For visualizing the features of the first layer (Figure 3, upper image), it is definitely sufficient to plot just the weights (i.e., the filters) of the individual hidden units. If your results look different, there can be many reasons for that: besides errors in your code, any training parameter can make the filters look different. Note that for natural images you need Gaussian visible units.
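In case it helps, a minimal plotting sketch (the helper name and the (n_filters, height, width) layout of the weight array are just assumptions about your code):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_filters(weights, n_cols=6):
    """Tile the first-layer filters, one grayscale image per hidden group.

    weights -- array of shape (n_filters, height, width)
    """
    n_filters = weights.shape[0]
    n_rows = int(np.ceil(n_filters / n_cols))
    fig, axes = plt.subplots(n_rows, n_cols, squeeze=False,
                             figsize=(n_cols, n_rows))
    for k, ax in enumerate(axes.ravel()):
        if k < n_filters:
            ax.imshow(weights[k], cmap='gray')
        ax.axis('off')                  # hide ticks on all panels
    plt.tight_layout()
    plt.show()
```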
If you want some python code to start with, you can check this framework: https://github.com/OFAI/lrn2
If at some point you would like to visualize what's going on in the higher layers, this paper might help (its Section 2.4 is also implemented in the above framework, under stacks.py/NNGenerative): http://www.dumitru.ca/files/publications/invariances_techreport.pdf
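Just to give the flavor of those methods (this is only one simple heuristic along those lines, not necessarily exactly what Section 2.4 describes): activate a single unit in a higher layer and propagate mean-field activations down through the stack, layer by layer:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def project_down(Ws, bs, layer, unit_index):
    """Activate one unit at `layer` and take mean-field top-down passes.

    Ws -- list of weight matrices; Ws[l] has shape (n_below, n_above)
    bs -- list of lower-layer biases; bs[l] has shape (n_below,)
    """
    h = np.zeros(Ws[layer].shape[1])
    h[unit_index] = 1.0                 # only this higher-layer unit is on
    for l in range(layer, -1, -1):
        h = sigmoid(Ws[l] @ h + bs[l])  # top-down pass through layer l
    return h                            # resulting pattern in the visible layer
```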
Hope that helps!